
Training and Certifying Users of the National Institutes of Health Stroke Scale

Originally published: https://doi.org/10.1161/STROKEAHA.119.027234. Stroke. 2020;51:990–993.

Abstract

Background and Purpose—

The National Institutes of Health Stroke Scale, designed and validated for use in clinical stroke trials, is now required for all patients with stroke at hospital admission. Recertification is required annually, but no data support this frequency; the effect of mandatory training before recertification is unknown.

Methods—

To clarify the optimal recertification frequency and the effect of training, we assessed users’ mastery of the National Institutes of Health Stroke Scale over several years using correct scores (accuracy) on each item of the 15-item scale. We also constructed 9 technical errors that could result from misunderstanding the scoring rules. We measured accuracy and the frequency of these technical errors over time. Using multivariable regression, we assessed the effect of time, repeat testing, and profession on user mastery.

Results—

The final dataset included 1.3×10⁶ examinations. Data were consistent among all 3 online vendors that provide training and certification. Test accuracy showed no significant changes over time. Technical error rates were remarkably low, ranging from 0.48 to 1.36 per 90 test items. Within 2 vendors, technical error rates changed negligibly over time (P<0.05). In data from a third vendor, mandatory training before recertification improved (reduced) technical errors but not accuracy.

Conclusions—

The data suggest that mastery of National Institutes of Health Stroke Scale scoring rules is stable over time, and the recertification interval should be lengthened. Mandatory retraining may be needed after unsuccessful recertifications, but not routinely otherwise.

See related article, p 705

The National Institutes of Health Stroke Scale (NIHSS) has become the de facto standard for rating neurological deficits in patients with stroke.1–3 The NIHSS emerged during the National Institute of Neurological Disorders and Stroke trial of rt-PA for Acute Ischemic Stroke (the Trial) to standardize assessments and minimize variation across trial sites.4 During the Trial, participating investigators and coordinators were asked to train and certify at trial launch, to recertify after 6 months, and then to recertify annually.5 No data supported this arbitrary annual recertification frequency.

Now, private vendors offer online training and certification.3 Certification is required annually, but we sought data to support that frequency. We further asked whether training should be mandated before recertification.

Methods

The data that support the findings of this study are available from the corresponding author upon reasonable request. The Cedars-Sinai Institutional Review Board determined that no informed consent was necessary for this project. Online NIHSS video training and certification is managed by 3 vendors using 3 groups of patient videos, Groups A, B, and C.1 Each group contains videos of 6 patients with stroke, chosen to include a balanced sample of deficits. Vendor 1 does not require training before the user attempts certification; Vendor 2 has always required training; Vendor 3 began requiring training before certification on May 5, 2017. User identities are masked on all 3 vendor sites.

We defined 9 technical errors that would occur if a user failed to recall the NIHSS scoring rules (Table I in the online-only Data Supplement). Correct patient assessment (accuracy) was measured by comparing each user response to the correct answers. In each cohort there were 90 answers (15 items scored in 6 patients), so accuracy was averaged (±SD) per 90 items. A general linear model was used to predict error rate while holding constant Year, Exam (A, B, C), Cohort Size, and User Type (New, Repeat).
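As an illustration of this analysis, the sketch below scores a cohort and fits a general linear model of the kind described. It is a minimal reconstruction assuming a flat table with one row per user; the column names, answer key, and data layout are hypothetical and are not drawn from the vendors’ actual systems.

    import pandas as pd
    import statsmodels.formula.api as smf

    # 15 NIHSS items scored in 6 patients = 90 answers per certification group.
    ITEMS = [f"item_{i}" for i in range(1, 91)]  # hypothetical column names

    def score_users(responses: pd.DataFrame, answer_key: pd.Series) -> pd.DataFrame:
        """Return the table with per-user accuracy (correct answers out of 90)."""
        correct = responses[ITEMS].eq(answer_key[ITEMS]).sum(axis=1)
        return responses.assign(accuracy=correct)

    def fit_error_model(scored: pd.DataFrame):
        """General linear model for technical errors, holding constant Year,
        Exam group (A, B, C), cohort size, and user type (New, Repeat)."""
        return smf.ols(
            "tech_errors ~ year + C(exam_group) + cohort_size + C(user_type)",
            data=scored,
        ).fit()

    # Usage (hypothetical data):
    # scored = score_users(responses, answer_key)
    # print(fit_error_model(scored).summary())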

Results

We assessed NIHSS scoring across 1 313 733 raters from Vendor 1 (n=255 147), Vendor 2 (n=547 949), and Vendor 3 (n=510 637). Scores for each of the 90 items showed consistency across all 3 vendors (Figure in the online-only Data Supplement).

Results are detailed in Table II in the online-only Data Supplement. Accuracy ranged from 82.08 to 88.00 per 90 test items, showing remarkable consistency. Technical error rates ranged from 0.48 to 1.36 errors per 90 test items. There was no difference between repeat and new users in either testing accuracy or technical errors. Over several years, the number of technical errors in Vendor 1, which never required training before certification, increased trivially (0.13 errors per year; P<0.001; Figure 1; Table III in the online-only Data Supplement). Errors in Vendor 2, which always required training, decreased negligibly (0.014 errors per year; P<0.05; Figure 1; Table IV in the online-only Data Supplement). Within Vendor 3, there was no significant change in error rate over several years (P>0.05; Figure 2; Table V in the online-only Data Supplement). By profession, RNs had significantly lower error rates than MDs (P<0.05) when holding all else constant (Figure 2).


Figure 1. Technical errors for new and repeat users in Vendors 1 and 2. The technical error rate in Groups A, B, and C is plotted by year as mean and SD. The total number of users in each year is plotted on the right-hand vertical axis.


Figure 2. Technical errors for MD and RN users in Vendor 3. Users’ self-described profession (MD or RN) was used to assess error rates by profession over time. Users repeated certification up to 10×, and the certification groups are labeled on the x-axis. The number of users per data point is labeled. RN users made significantly fewer technical errors than MD users (P<0.05). Over the 10 y, technical error rates trended higher, reaching statistical significance for Group C (P<0.001). MD indicates Doctor of Medicine; and RN, registered nurse.

Discussion

We found little evidence of decrement in NIHSS mastery over time (Table II in the online-only Data Supplement; Figures 1 and 2), suggesting that users retain a fundamental understanding of the proper use of the scale. Because recertification is time-consuming and burdensome, because the original annual recertification frequency was arbitrary, and because the present data offer no rationale for preserving the annual requirement, we recommend lengthening the interval between recertifications (Table).

Table. Recommendations for NIHSS Training and Certification

Recommendations

1. Training should be required before first certification.
2. Recertification should occur 1 y after first certification. After that, recertification should be required 2 y after the last successful recertification. After 4 successful recertifications, the interval should increase to 3 y between recertifications (see the sketch after this table).
3. Retraining should be required after any unsuccessful recertification.
4. Vendors should attempt to identify and track users across platforms, to standardize users’ certification and reduce gaming.

Based on the data we examined, we propose these training and certification intervals. The opinions expressed here are solely those of the authors and do not represent any recommendation or endorsement by the American Heart Association/American Stroke Association. NIHSS indicates National Institutes of Health Stroke Scale.
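To make recommendation 2 concrete, the following sketch encodes the proposed schedule. The function name and its rules are an illustration only, assuming a simple running count of successful recertifications; it is not vendor software.

    def next_interval_years(successful_recerts: int) -> int:
        """Years until the next recertification is due, per the Table above.
        successful_recerts counts successful recertifications so far."""
        if successful_recerts == 0:  # just passed first certification
            return 1
        if successful_recerts < 4:   # first few successful recertifications
            return 2
        return 3                     # after 4 successful recertifications

    # Example: 0 -> 1 y; 1 through 3 -> 2 y; 4 or more -> 3 y.
    assert [next_interval_years(n) for n in (0, 1, 4)] == [1, 2, 3]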

The comparability of the error rates between the vendor that never required training and the vendor that always did supports the inference that frequent retraining may be unnecessary. Given the size of the datasets included here, and the triviality of the increases in error rates among users of Vendor 1, it is clear that users retain a fundamental understanding of the scoring rules over time; that training does not materially affect error rates; and that users tend to use the scale correctly over many years. There is a considerable opportunity cost associated with overly frequent training and certification, and we believe our data, limited though they are, afford an opportunity to reduce the recertification burden on stroke professionals.

Physicians and nurses performed comparably, but nurses exhibited fewer technical errors (Figure 2). This finding suggests that nurse scoring could substitute for physician scoring when needed during hospital admission.

The strength of this study is the large dataset from the 3 vendors that offer online NIHSS certification. Users come from several countries and include those taking NIHSS training and certification in their own languages.

This study comes with limitations. Some statistically significant findings are not clinically meaningful, owing to the large sample size. We cannot track individual users across vendors, relying instead on users to self-identify their profession and whether they are new or repeat users. We based this analysis on 9 technical errors that rely on understanding the NIHSS scoring rules; other constructed errors, or the crude pass/fail rate, could lead to different results.

In conclusion, using very large datasets from the 3 vendors offering online NIHSS certification, we confirmed that the scale performs as intended over many years of recertification. We found no decrement in users’ mastery of the scoring rules, suggesting that the interval between recertifications could be lengthened. As intended, physician and nurse performance remained comparable.

Footnotes

Guest Editor for this article was Sean I. Savitz, MD.

The online-only Data Supplement is available with this article at https://www.ahajournals.org/doi/suppl/10.1161/STROKEAHA.119.027234.

Correspondence to Patrick Lyden, MD, Department of Neurology, AHSP 8318, 127 S. San Vicente Blvd, Los Angeles, CA 90048.

References

1. Lyden P. Using the National Institutes of Health Stroke Scale: a cautionary tale. Stroke. 2017;48:513–519. doi: 10.1161/STROKEAHA.116.015434
2. Lyden P, Raman R, Liu L, Emr M, Warren M, Marler J. National Institutes of Health Stroke Scale certification is reliable across multiple venues. Stroke. 2009;40:2507–2511. doi: 10.1161/STROKEAHA.108.532069
3. Lyden P, Raman R, Liu L, Grotta J, Broderick J, Olson S, et al. NIHSS training and certification using a new digital video disk is reliable. Stroke. 2005;36:2446–2449. doi: 10.1161/01.STR.0000185725.42768.92
4. Brott T, Adams HP, Olinger CP, Marler JR, Barsan WG, Biller J, et al. Measurements of acute cerebral infarction: a clinical examination scale. Stroke. 1989;20:864–870. doi: 10.1161/01.str.20.7.864
5. The National Institute of Neurological Disorders and Stroke rt-PA Stroke Study Group. Tissue plasminogen activator for acute ischemic stroke. N Engl J Med. 1995;333:1581–1587.
