Computerized neurocognitive assessment tools (NCATs) are often used as screening tools to identify cognitive deficits after mild traumatic brain injury (mTBI). However, differing methodology across studies makes it difficult to reach a consensus regarding the validity of NCATs. Thus, studies in which multiple NCATs are administered to the same sample using the same methodology are warranted.
We investigated the validity of four NCATs: the ANAM4, CNS-VS, CogState, and ImPACT. Each participant was randomly assigned two NCATs and administered a battery of traditional neuropsychological (NP) tests; participants were healthy control active duty service members (n = 272) and service members within 7 days of an mTBI (n = 231). Analyses included correlations between NCAT and NP test scores to investigate convergent and discriminant validity, and regression analyses to identify the unique variance in NCAT and NP scores attributable to group status. Effect sizes (Cohen's f²) were calculated to guide interpretation of the data.
Only 37 (0.6%) of the 5,655 correlations calculated between NCATs and NP tests are large (i.e., r ≥ 0.50). The majority of correlations are small (i.e., 0.30 > r ≥ 0.10), with no clear patterns suggestive of convergent or discriminant validity between the NCATs and NP tests. Though there are statistically significant group differences across most NCAT and NP test scores, the unique variance accounted for by group status is minimal (i.e., semipartial R² ≤ 0.033, 0.024, 0.062, and 0.011 for ANAM4, CNS-VS, CogState, and ImPACT, respectively), with effect sizes indicating small to no meaningful effect.
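For readers unfamiliar with the benchmark, the relationship between explained variance and Cohen's f² can be sketched as follows. This is an illustration, not the study's analysis code: Cohen's f² for a single predictor is properly sR² / (1 − R²_full), and since the full-model R² is not reported here, the snippet uses the simplifying assumption that the grouping variable is the only model term (so R²_full ≈ sR²).

```python
# Illustrative only: converting the reported maximum semipartial R^2 per NCAT
# into an approximate Cohen's f^2, assuming group status is the sole predictor
# (so the full-model R^2 is approximated by the semipartial R^2 itself).
sr2 = {"ANAM4": 0.033, "CNS-VS": 0.024, "CogState": 0.062, "ImPACT": 0.011}

def cohens_f2(r2: float) -> float:
    """Cohen's f^2 = R^2 / (1 - R^2).

    Conventional benchmarks: 0.02 small, 0.15 medium, 0.35 large.
    """
    return r2 / (1.0 - r2)

for name, r2 in sr2.items():
    f2 = cohens_f2(r2)
    label = "small" if f2 >= 0.02 else "negligible"
    print(f"{name}: f2 = {f2:.3f} ({label})")
# ANAM4: f2 = 0.034 (small)
# CNS-VS: f2 = 0.025 (small)
# CogState: f2 = 0.066 (small)
# ImPACT: f2 = 0.011 (negligible)
```

Under this assumption, every value falls well below the 0.15 medium-effect benchmark, consistent with the interpretation of small to no meaningful effect.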
Though the results are not promising for the validity of the four NCATs we investigated, traditional methods of investigating psychometric properties may not be appropriate for computerized tests. We offer several conceptual and methodological considerations for future studies of NCAT validity.