ADAS-Cog, ADCS-ADL, CDR, and MMSE Rater Performance Analysis Data

June 21, 2024

Because many common outcome measures in Alzheimer's disease clinical trials are long and complex, raters frequently make errors in scale administration and scoring. Assessments such as the ADAS-Cog and CDR require extensive training and experience for accurate data collection.

However, even with high-quality preparation, raters will still make errors. This reality supports the implementation of central monitoring programs to review and correct rater performance, provide feedback and recalibration, and improve data accuracy.

Lessons from the TRAILBLAZER-ALZ 2 Rater Performance Central Monitoring Program

Data shared in an AD/PD poster, co-authored by Cogstate, highlight lessons from a rater performance monitoring program used in the TRAILBLAZER-ALZ 2 Randomized Clinical Trial of Donanemab in Early Symptomatic Alzheimer Disease.

Rater performance was analyzed across four outcome measures that were part of the study and are frequently used in AD clinical trials: the ADAS-Cog 13, ADCS-ADL, CDR, and MMSE.

The individuals reviewing rater performance were selected from a team of neuropsychologists at Cogstate with deep expertise in AD assessment, and were carefully calibrated on study requirements. Reviewers evaluated scale forms completed by raters, along with audio recordings of administrations, to identify deviations from standardized testing or interviewing procedures, discrepancies between participants' actual and documented responses, and response scoring errors.

Review of assessments showed deviations from standard guidelines in 40-60% of administrations. Errors were most frequent on the ADAS-Cog, the longest and most complex scale in the battery.

Differences in performance accuracy were also noted across countries. This raises considerations around cross-cultural differences in rater qualifications and experience, as well as culturally driven differences in review approaches, insights that may help tailor rater training programs to the specific learning needs of raters in local regions.

These findings underscore the importance of centrally monitoring rater performance to ensure data integrity and consistency throughout studies, particularly those using complex and lengthy scales.

Read the poster here.
