Case Studies of Rare Disease Solutions in Action

Rare Disease Clinical Trial Approach Supports Evaluation of What Families Feel Makes a Meaningful Impact

A biotechnology company developing a compound to address symptoms of a rare seizure disorder enlisted Cogstate to support its efforts across a natural history study and multiple Phase 2 studies. Based on guidance from Cogstate scientific leadership, the sponsor selected a measure of adaptive functioning as a key endpoint in its trials. The sponsor later asked Cogstate to develop a first-of-its-kind measurement method in rare disease to determine whether the changes observed with the novel drug were having a meaningful impact on families. Based on the positive data gathered, the study team is now launching a Phase 3 trial with the key endpoint.
Programmatic Rater Training in Rare Disease Increased Efficiencies and Maintained Quality
Cogstate worked with a sponsor team to develop rater training across a program of multiple rare disease trials. Raters were trained once and then given the opportunity to participate across four trials, all leveraging the same scale. This led to tremendous efficiencies for the sponsor: nearly half of the raters participated in two or more studies, reducing the individual training burden by 25%. In addition, only 7% of scale administrations had errors requiring follow-up, showing that the programmatic approach to training provided reliable data capture. (Ventola et al., NORD, 2021)
Central Raters Increased Efficiency and Optimized Data Quality
The Vineland-3 is a widely used measure of adaptive functioning in rare disease clinical trials. In one clinical trial program, 53 site raters administered the assessment; in another, 7 central raters administered it. An analysis of these results showed that the use of central raters offered greater efficiency and optimized data quality by decreasing rater variance, while also reducing the burden on sites. (Ventola et al., ECRD, 2022)
- The team of 7 central raters conducted 150 administrations (~21 per person), compared to the 53 site raters who conducted 138 administrations (~2.6 per person).
- None of the administrations completed by central raters had significant errors, whereas site raters had 4 administrations with errors that compromised the validity of the assessment.