Case Examples of Data Quality Programs in Action

Site Raters Report Highly Positive Alzheimer’s Assessment Training from Cogstate Clinical Expert Coaches

Cogstate sent a survey to 261 English-speaking raters trained by Cogstate clinical expert coaches to administer complex Alzheimer’s assessments. Of these, 48 responded and gave overwhelmingly positive reviews, with questions on coaches’ competency and feedback receiving 100% positive (Strongly Agree/Agree) responses. (Wacker et al., “Rater Perspectives on Applied Training of Cognitive Clinical Outcome Assessments, Delivered by Neuropsychology Experts,” AAIC 2022)
Programmatic Rater Training Led to Efficiencies and Maintained Quality
Cogstate worked with a sponsor team to develop rater training across a program with multiple rare disease trials. Raters were trained once and could then participate across four trials that all used the same scale. This created substantial efficiencies: nearly half of the raters participated in two or more studies, reducing the individual training burden by 25%. In addition, only 7% of scale administrations had errors requiring follow-up, showing that the programmatic approach to training provided reliable data capture. (Ventola et al., “Rater academy: increasing efficiency in training clinicians within rare disease trials,” NORD, 2021)
Central Raters Optimized Data Quality via Decreased Rater Variance
The Vineland-3 is a widely used measure of adaptive functioning in rare disease clinical trials. In one clinical trial program, 53 site raters administered the assessment; in another, 7 central raters did so. An analysis of the results showed that the use of central raters offered greater efficiency and optimized data quality by decreasing rater variance, while also reducing the burden on sites. (Ventola et al., “Leveraging Remote Assessment and Central Raters to Optimize Data Quality in Rare Neurodevelopment Disorders Clinical Trials,” ECRD, 2022)
- The team of 7 central raters conducted 150 administrations (~21 per person), compared with the 53 site raters, who conducted 138 administrations (~2.6 per person).
- None of the administrations completed by the central raters had significant errors, while the site raters had 4 administrations with errors that compromised the validity of the assessment.