Rater Staffing Shortages Addressed via Central Rating  

February 20, 2024

Clinical trials for rare neurodevelopmental disorders present many challenges: assessments are complex, and eligible participants are limited. One such obstacle is finding enough highly specialized, qualified, and experienced raters to administer the complex developmental and behavioral scales these populations require. Addressing this challenge can be difficult, but sponsor teams are adopting innovative approaches that can lead to more patient-centric trial design and improved data quality.

Central Raters Enable Rare Disease Study Continuation

One example of meeting a challenge with innovation occurred when critical rater staffing gaps jeopardized patient enrollment at several U.S. and European sites in a single study, placing eligible patients in the US, Italy, Spain, and the Netherlands at risk of exclusion for lack of experienced raters.

The sponsor team engaged Cogstate, which stepped in to support the study by deploying its team of specialized, multilingual Central Raters. In central rating, cognitive and clinical outcome measures are administered via telehealth (video or phone) by an independent team of highly qualified raters. The use of Central Raters in this study enabled the inclusion of all identified subjects from the US and EU.

Notably, the primary endpoint, an interview-based scale, was well suited to remote telehealth administration, and because the study's initial design accommodated this format, no protocol amendment was required.

Central Rating Optimizes Data Quality 

A common question concerns the quality of data captured via central rating. A poster presented at ECRD by Venolta et al. (2022) analyzed administrations of the Vineland-3, a widely used measure of adaptive functioning in rare disease clinical trials, comparing site raters with central raters.

In one clinical trial program, 53 site raters administered the assessment; in another, 7 central raters did. The analysis showed that central raters offered greater efficiency and optimized data quality by decreasing rater variance, while also reducing the burden on sites.

  • The team of central raters conducted 150 administrations (~21 per person), compared with 138 administrations (~2.5 per person) by the site raters.
  • None of the administrations completed by central raters contained significant errors, whereas 4 site-rater administrations contained errors that compromised the validity of the assessment.

In short, central raters can bridge gaps in rater availability and expertise and decrease site burden, while also improving the accuracy of administrations and the resulting data.
