Research Article
Originally Published 11 June 2009

National Institutes of Health Stroke Scale Certification Is Reliable Across Multiple Venues

Abstract

Background and Purpose— National Institutes of Health Stroke Scale certification is required for participation in modern stroke clinical trials and as part of good clinical care in stroke centers. A new training and demonstration DVD was produced to replace existing training and certification videotapes. Previously, this DVD, with 18 patients representing all possible scores on 15 scale items, was shown to be reliable among expert users. The DVD is now the standard for National Institutes of Health Stroke Scale training, but the videos have not been validated among general (ie, nonexpert) users.
Methods— We sought to measure interrater reliability of the certification DVD among general users using methodology previously published for the DVD. All raters who used the DVD certification through the American Heart Association web site were included in this study. Each rater evaluated one of 3 certification groups.
Results— Responses were received from 8214 raters overall: 7419 raters using the Internet and 795 raters using other venues. Among raters from other venues, 33% of all responses came from registered nurses, 23% from emergency department and other physicians, and 44% from neurologists. Half (51%) of raters were previously National Institutes of Health Stroke Scale-certified and 93% were from the United States/Canada. Item responses were tabulated, scoring was performed as previously published, and agreement was measured with unweighted kappa coefficients for individual items and an intraclass correlation coefficient for the overall score. In addition, agreement in this study was compared with the agreement obtained in the original DVD validation study to determine if there were differences between novice and experienced users. Kappas ranged from 0.15 (ataxia) to 0.81 (Item 1c, Level of Consciousness-commands [LOCC]). Of 15 items, 2 showed poor, 11 moderate, and 2 excellent agreement based on kappa scores. Agreement was slightly lower than that obtained from expert users for LOCC, best gaze, visual fields, facial weakness, motor left arm, motor right arm, and sensory loss. The intraclass correlation coefficient for total score was 0.85 (95% CI, 0.72 to 0.90). Reliability scores were similar among specialists, and there were no major differences between nurses and physicians, although scores tended to be lower for neurologists and higher among raters not previously certified. Scores were similar across various certification settings.
Conclusions— The data suggest that certification using the National Institute of Neurological Disorders and Stroke DVDs is robust and surprisingly reliable for National Institutes of Health Stroke Scale certification across multiple venues.
Neurologists who care for patients with stroke are required to certify in use of the National Institutes of Health Stroke Scale (NIHSS) now that Disease-Specific Specialty Designation as a Primary Stroke Center is available from the Joint Commission.1,2 The NIHSS is a widely used stroke deficit assessment tool used in nearly all large clinical stroke trials to document baseline and outcome severity.3–5 A training and certification process exists to assure that raters use the NIHSS in a uniform manner6,7; videotapes were used for training and certification from 1988 to 2006. To update the training and certification process, the National Institute of Neurological Disorders and Stroke produced a DVD in 2006 that is distributed widely by the American Academy of Neurology, the American Heart Association, and the National Stroke Association. Originally, the DVD was validated in 3 select stroke centers to obtain a best-case impression of how the DVD patients should be scored among expert users.8 The DVD was designed, however, for a nonexpert single user to view at home or in an office, and its use among nonexperts has not been validated. In addition, DVD certification in group settings has not been validated. Also, scores may not be generally applicable when novice users view the training DVD and then attempt certification. Hence, we collected scores from single use, group use, and a web site to determine the reliability of the DVD certification outside of experienced centers and across multiple venues.

Methods

The training DVD includes 18 patients divided into 3 groups balanced for severity and stroke side. Raters were asked to certify using one of the 3 patient groups. Details on the DVD and the certification method have been described.8
We obtained certification scores from users in the following venues: single user (home or desktop), small groups, large groups, and a web site. Single users took the DVD home or to an office, watched the training video, and then watched the certification video cases. Small group certifications occurred at single sites where the training video was shown and then no more than 12 users watched the certification video and marked score sheets individually. Large group certifications occurred at meetings of trial investigators participating in a variety of clinical trials; the training video was shown and then certification patients were shown. In the large group settings, each user marked their own score sheet without discussion among other users. From all venues, score sheets were faxed to the University of California–San Diego Stroke Clinical Trial Coordinating Center for scoring using the published algorithm.7 The training/certification web site is sponsored by the American Heart Association. Users were encouraged to watch the training video over the Internet before certifying on one of the 3 certification groups; scores were recorded on the web site and then raw data were transmitted to the University of California–San Diego.
Descriptive analysis was performed on all data in the data set. The number of raters who certified using this DVD was tabulated by setting (individual, small group, investigator meeting, and web site) as well as specialty (RN, emergency department MD, neurology, other emergency department, other), prior certification status (yes, no), and country (US/Canada, others), if collected. Summaries of the individual item score as well as the total NIHSS were generated.
Reliability was assessed for the individual items of the NIHSS as well as the overall score. Scores of the individual items were tabulated. Agreement for the individual items among raters was assessed using the unweighted kappa statistic (κ) for multiple raters9 with a 95% CI obtained using the bootstrap resampling technique with 1000 replicates. The methods used here are similar to those used in the original DVD validation study to allow comparison between the 2 studies.8 In this study, the bootstrap technique was used instead of the jackknife technique because there were several instances in which the jackknife technique was not appropriate.10 Agreement between this study and the original DVD study was considered statistically different if the estimated κ in the original study did not fall within the 95% CI for κ in this study. Using similar methods, reliability of the individual items was assessed separately for the subgroups of patients by setting as well as specialty, certification status, and country, if available. Comparison of κ statistics across subgroups was done using the bootstrap technique for correlated data.11 Ninety-five percent CIs for differences in κ between 2 subgroups were calculated. The Bonferroni correction was used to adjust for multiple comparisons within each subgroup comparison. In addition, a scatterplot of the item scores for each subject was used to visually confirm the reliability and the consistency of item scores by group.
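The item-level agreement statistic described above can be sketched in code. The following is a minimal illustration (not the authors' actual analysis code), assuming each patient's ratings are summarized as counts per response category; it computes the unweighted multi-rater κ of Fleiss9 and a percentile bootstrap CI by resampling patients with replacement.

```python
import numpy as np

def fleiss_kappa(counts):
    """Unweighted kappa for multiple raters (Fleiss).

    counts: array of shape (n_patients, n_categories); entry [i, j] is the
    number of raters who assigned category j to patient i.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)                      # raters per patient
    # Observed per-patient agreement: fraction of concordant rater pairs.
    p_obs = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_obs.mean()
    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / counts.sum()
    p_exp = (p_j ** 2).sum()
    if np.isclose(p_exp, 1.0):                  # degenerate: one category only
        return 1.0
    return (p_bar - p_exp) / (1 - p_exp)

def kappa_bootstrap_ci(counts, reps=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for kappa, resampling patients with replacement."""
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    n_pat = counts.shape[0]
    stats = [fleiss_kappa(counts[rng.integers(0, n_pat, n_pat)])
             for _ in range(reps)]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))
```

Perfect agreement on every patient yields κ = 1, while ratings spread evenly across categories pull κ toward 0; resampling patients (rather than raters) mirrors the clustering of ratings within the 18 certification patients.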
Agreement on the overall total NIHSS was assessed with an intraclass correlation coefficient (ICC) obtained using a one-way random effects model for repeated measurements with continuous outcomes (with ratings nested within patients).12 The bootstrap resampling technique was used to obtain 95% CIs for the ICC. Two comparisons were of interest in this study: (1) the ICC in the current study versus that obtained in the DVD validation study; and (2) the ICC in this study among the subgroups. The first was assessed by determining whether the 95% CI for the ICC in this study contained the ICC from the DVD validation study; if so, there was no evidence to indicate a difference in ICC between the 2 studies. ICCs in the present study were compared between subgroups for setting, specialty, prior certification status, and country by calculating the 95% CI for the difference in ICC for correlated data between 2 subgroups. If zero is included in the CI, there is no evidence to indicate a difference. To compare ICC among the 3 groups of patients (A, B, and C), the Fisher Z transformation for comparison of independent ICCs was used.1 In both instances, the Bonferroni correction was applied to adjust for multiple comparisons. As with the item scores, a scatterplot of the total NIHSS for each subject was used to visualize the variability of scores by subgroup.
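As a rough sketch of the overall-score analysis, a one-way random effects ICC can be computed from the between-patient and within-patient mean squares of a one-way ANOVA. This simplified illustration assumes a balanced layout (every patient rated by the same number of raters); the study itself accommodated unequal cluster sizes.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random effects ICC(1,1) for an array of shape
    (n_patients, n_raters) of total NIHSS scores, ratings nested
    within patients."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-patient and within-patient mean squares from one-way ANOVA.
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

When all raters give identical totals for each patient, the within-patient mean square is 0 and the ICC is 1; rater disagreement inflates the within-patient mean square and pulls the ICC down.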
To assess the mean effect of the covariates on the total NIHSS, a random intercept mixed effects regression model was fit to the data.
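In standard notation (a sketch of the model form, not the authors' exact specification), such a random intercept model for the total score given by rater s to patient p can be written as:

```latex
Y_{ps} = \beta_0 + \mathbf{x}_{ps}^{\top}\boldsymbol{\beta} + b_p + \varepsilon_{ps},
\qquad b_p \sim N(0, \sigma_b^2), \qquad \varepsilon_{ps} \sim N(0, \sigma^2),
```

where the covariate vector x contains setting, specialty, prior certification status, and country, and the patient-level random intercept b_p absorbs patient-to-patient differences in true stroke severity so that the fixed effects describe mean differences across rater subgroups.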

Results

We received score sheets from 379 single users, 178 small group users, 238 large group users, and 7419 web users. Responses were received from 8214 individual raters (4796 raters scored patients in Group A, 2762 in Group B, and 656 in Group C), each of whom rated between 3 and 6 patients. Of the 49 284 expected responses (8214×6), we received 49 272 ratings (a 99.9% completion rate). As a result, each patient had between 655 and 4796 ratings (unequal cluster sizes). Among the raters who provided demographic information, 33% of all responses came from registered nurses, 23% from emergency department/other physicians, and 44% from neurologists. Most of the raters (93%) were from the United States and half of the raters (51%) were previously NIHSS-certified. Item responses were tabulated, scoring was performed as described previously, and agreement was measured with unweighted κ coefficients for individual items and an ICC for the overall score.
Table 1 indicates the range of values obtained on each item over all 18 patients. The mean NIHSS total score was 8.0±6.6 (median, 7; range, 0 to 41). The spread of responses in individual items and total scores appeared similar among the subgroups, namely, sites, specialties, and prior NIHSS certification status.
Table 1. Distribution of Responses by NIHSS Item*
Item | Total Responses on This Item | 0 | 1 | 2 | 3 | 4
1a LOC | 49 272 | 43 564 (88) | 3627 (7.4) | 1210 (2.5) | 871 (1.8) | —
1b LOC questions | 49 272 | 28 395 (58) | 12 699 (26) | 8178 (17) | — | —
1c LOC command | 49 272 | 45 815 (93) | 869 (1.8) | 2588 (5.3) | — | —
2 Gaze | 49 271 | 43 908 (89) | 3225 (6.5) | 2138 (4.3) | — | —
3 Visual fields | 49 269 | 39 378 (80) | 4845 (9.8) | 4836 (9.8) | 210 (0.4) | —
4 Facial weakness | 49 272 | 23 114 (47) | 19 263 (39) | 5439 (11) | 1456 (3) | —
5a Motor left arm | 49 258 | 34 310 (70) | 4958 (10) | 3770 (7.7) | 2695 (5.5) | 3525 (7.2)
5b Motor right arm | 49 261 | 36 828 (75) | 6574 (13) | 236 (0.5) | 699 (1.4) | 4924 (10)
6a Motor left leg | 49 261 | 27 665 (56) | 13 477 (27) | 5007 (10) | 3063 (6.2) | 49 (0.1)
6b Motor right leg | 49 264 | 27 086 (55) | 10 952 (22) | 6859 (14) | 1637 (3.3) | 2730 (5.5)
7 Ataxia | 49 243 | 29 828 (61) | 12 715 (26) | 6700 (14) | — | —
8 Sensory | 49 264 | 4645 (9.4) | 38 305 (78) | 6314 (13) | — | —
9 Aphasia | 49 264 | 27 877 (57) | 12 752 (26) | 6148 (13) | 2487 (5) | —
10 Dysarthria | 49 256 | 27 222 (55) | 17 188 (35) | 4846 (10) | — | —
11 Extinction | 49 256 | 33 012 (67) | 10 193 (21) | 6051 (12) | — | —
LOC indicates level of consciousness.
*The 15 items of the NIHSS and the level of responses to each item by 8214 raters; columns 0 to 4 give the level of responses, N (%). The unequal total responses to items are due to missing values, and the percentages do not add to 100 due to rounding. The level of response corresponds to the available responses for each item; some items have 3 and others have 4 or 5 possible responses; em dashes (—) mark responses that are not possible for that item.
Table 2 compares the agreement obtained using the unweighted κ from the current data set with that of the original DVD study.8 Agreement ranged from 0.15 (ataxia) to 0.81 (Item 1c, Level of Consciousness-commands [LOCC]) using the current data set. The agreement obtained from this group of raters was similar to that of the original DVD study on all items of the NIHSS except for 7 items with lower agreement (LOCC, best gaze, visual fields, facial weakness, motor left arm, motor right arm, and sensory loss).
Table 2. Interobserver Agreement for NIHSS Items*
Scale Item | Original DVD | Current DVD: Overall | Individual | Small Group | Investigator Meeting | Web
1a LOC | 0.46 (0.39–0.53) | 0.43 (0.01–0.51) | 0.62 (0–0.69) | 0.52 (0.31–0.70) | 0.43 (0.04–0.50) | 0.43 (0.01–0.51)
1b LOC questions | 0.77 (0.64–0.90) | 0.77 (0.66–0.84) | 0.85 (0.69–0.91) | 0.73 (0.49–0.86) | 0.70 (0.31–0.92) | 0.77 (0.66–0.84)
1c LOC command | 0.92 (0.75–1.0) | 0.81 (0–0.86) | 0.92 (0–0.98) | 0.78 (0–1.00) | 0.93 (0–0.98) | 0.79 (0–0.85)
2 Gaze | 0.70 (0.39–1.0) | 0.45 (0.03–0.63) | 0.51 (0.04–0.73) | 0.48 (0.04–0.69) | 0.72 (0.04–0.83) | 0.44 (0.03–0.63)
3 Visual fields | 0.72 (0.57–0.87) | 0.57 (0.27–0.62) | 0.71 (0.45–0.78) | 0.71 (0.28–0.90) | 0.69 (0.27–0.80) | 0.56 (0.20–0.61)
4 Facial weakness | 0.38 (0.27–0.49) | 0.25 (0.14–0.32) | 0.29 (0.16–0.38) | 0.29 (0.17–0.35) | 0.18 (0.07–0.26) | 0.25 (0.14–0.32)
5a Motor left arm | 0.65 (0.51–0.79) | 0.52 (0.21–0.62) | 0.63 (0.32–0.75) | 0.62 (0.36–0.75) | 0.69 (0.20–0.77) | 0.51 (0.18–0.61)
5b Motor right arm | 0.72 (0.54–0.90) | 0.51 (0.28–0.65) | 0.52 (0.29–0.76) | 0.52 (0.30–0.65) | 0.61 (0.23–0.74) | 0.51 (0.26–0.65)
6a Motor left leg | 0.64 (0.51–0.77) | 0.66 (0.64–0.73) | 0.70 (0.53–0.80) | 0.67 (0.51–0.79) | 0.56 (0.19–0.69) | 0.66 (0.53–0.74)
6b Motor right leg | 0.64 (0.53–0.75) | 0.59 (0.49–0.64) | 0.63 (0.51–0.70) | 0.56 (0.39–0.67) | 0.56 (0.28–0.69) | 0.59 (0.49–0.64)
7 Ataxia | 0.21 (0.12–0.30) | 0.15 (0.06–0.22) | 0.18 (0.06–0.27) | 0.32 (0.10–0.49) | 0.17 (0.06–0.25) | 0.15 (0.06–0.22)
8 Sensory | 0.73 (0.53–0.93) | 0.54 (0.17–0.68) | 0.62 (0.30–0.79) | 0.60 (0.25–0.80) | 0.65 (0.28–0.89) | 0.53 (0.16–0.69)
9 Aphasia | 0.64 (0.53–0.75) | 0.58 (0.37–0.71) | 0.58 (0.30–0.77) | 0.58 (0.32–0.74) | 0.60 (0.23–0.71) | 0.59 (0.36–0.72)
10 Dysarthria | 0.56 (0.39–0.73) | 0.46 (0.28–0.58) | 0.43 (0.28–0.54) | 0.37 (0.20–0.48) | 0.56 (0.22–0.68) | 0.46 (0.27–0.59)
11 Extinction | 0.57 (0.40–0.74) | 0.60 (0.49–0.64) | 0.56 (0.43–0.67) | 0.55 (0.38–0.66) | 0.38 (0.01–0.42) | 0.61 (0.49–0.65)
LOC indicates level of consciousness.
*The agreement (unweighted κ and 95% CI) among all raters for 15 NIHSS items using the new training and certification DVD; the Individual, Small Group, Investigator Meeting, and Web columns break down the current data set by venue. For comparison, the agreement on individual items in the original DVD validation study8 is given. To assess the effect of subgroups, we used a pairwise comparison with a Bonferroni adjustment to account for the multiple comparisons.
Among all 18 certification patients, agreement was similar across all subgroups and among all venues. Results were remarkably similar to those of the original DVD validation study except for some small, inconsistent differences across certain subgroups (data not shown). Agreement on 4 items (LOC questions [LOCQ], LOCC, visual fields, and motor left leg) was higher in other countries than in the United States/Canada. Among specialties, emergency department MDs had higher agreement on motor right leg compared with nurses; on LOCC, motor right leg, and sensory loss compared with neurologists; and on motor left leg and motor right leg compared with other specialties. Nurses showed greater agreement on dysarthria compared with neurologists and on motor left arm and motor left leg compared with other specialties. Agreement on LOCQ was higher in noncertified raters than in certified raters. Comparing venues, individual users showed higher agreement on extinction/neglect compared with the large group setting and higher agreement on visual fields and motor left arm compared with web users; the large group setting showed lower agreement on extinction/neglect compared with the web setting; and the small group setting showed higher agreement on motor left arm than web users. There was no significant difference in agreement across the 3 certification groups.
Table 3 lists the intraclass correlation coefficient for the overall total NIHSS score and total NIHSS by subgroup. There continues to be very good agreement in the total NIHSS score across all venues and subgroups (overall ICC of 0.85; 95% CI, 0.72 to 0.90). There are no statistically significant differences in mean NIHSS scores by country and prior NIHSS certification status. There was a statistically significant interaction between specialty and setting in mean NIHSS scores (P=0.046); however, there were no clinically significant differences. Although there were slight differences in ICC across covariates, in all cases, the agreement still remained very high. Agreement was lower among raters from the United States/Canada compared with the raters from other countries. The ICC was slightly lower among neurologists compared with the nurses, emergency department MDs, other MDs, and other physicians. Similarly, the raters with prior certification had slightly lower agreement than those who were not certified previously. The ICC was slightly lower in the case of small group setting as compared with individual, investigator meeting setting, or web users. The ICCs for certification Groups A and B were slightly lower than Group C.
Table 3. ICC for NIHSS Total Score*
Subgroup | No. of Raters | ICC | 95% CI
Overall | 49 200 | 0.85 | 0.72–0.90
Country: USA/Canada | 4416 | 0.86 | 0.72–0.90
Country: Others | 311 | 0.94 | 0.81–0.97
Specialty: Nurse | 1533 | 0.94 | 0.80–0.97
Specialty: ED MD | 184 | 0.96 | 0.84–0.99
Specialty: Neurology | 2085 | 0.79 | 0.63–0.89
Specialty: Other MD | 364 | 0.92 | 0.64–0.98
Specialty: Other specialties | 561 | 0.92 | 0.70–0.97
Certification: Yes | 2416 | 0.82 | 0.70–0.90
Certification: No | 1414 | 0.94 | 0.80–0.97
Setting: Individual | 2251 | 0.94 | 0.79–0.97
Setting: Small group | 1053 | 0.71 | 0.53–0.91
Setting: Investigator meeting | 1423 | 0.87 | 0.65–0.93
Setting: Web | 44 473 | 0.85 | 0.72–0.90
Group: A | 28 722 | 0.83 | 0.50–0.89
Group: B | 16 562 | 0.84 | 0.21–0.86
Group: C | 3916 | 0.93 | 0.40–0.96
*The ICCs for total score overall and by setting, specialty, certification status, country, and group. To assess the effect of subgroups, we used a pairwise comparison with a Bonferroni adjustment to account for the multiple comparisons. The ICCs by country, specialty, and certification do not include web data because that information was not available for web users.

Discussion

Our data show that NIHSS training and certification using the DVD is valid and reliable among general users. The certification process showed remarkable consistency across widely differing venues, including single users, small groups, large groups, and certification data from the American Heart Association web site. The individuals in this study included novice users—who viewed the training video and then attempted certification—as well as previously certified users. The reliability of this certification DVD among these novice users was similar to that obtained in the experienced stroke centers, indicating that the DVD is a surprisingly valid and reliable replacement for the previous videotapes. Agreement on the items was similar whether the DVD was used by a single user or in a group setting.
We found no differences in the ICC of the total NIHSS when the DVD was used by neurologists, emergency department physicians, and nurses, suggesting that the NIHSS may be appropriate for use in clinical research trials as well as in daily communication among healthcare providers. Agreement among those identifying themselves as neurologists was slightly lower than individuals identifying themselves as registered nurses, emergency department/other MDs, or other specialties, but the results were statistically similar and generally excellent. Agreement across various settings was similar and generally moderate to excellent.
The DVD format has some advantages over videotape. The digital images can be loaded onto a web site, and the American Heart Association successfully implemented a web-based training campus using our images. This web site allows raters to view the training and certification patient videos online. The DVD technology is more widely available now than videotapes, so NIHSS certification should be possible for many more years, even if videotapes become obsolete.
This study contains certain limitations, the most important of which is that most of the raters were from the United States and Canada. We were able to determine that the scoring sheet works well for novice as well as experienced users in North America. However, these scores may not be generally applicable for non-English-speakers or raters in other countries. Therefore, we continue to collect scores from the web site to determine if the same scoring sheet generally works well outside of North America. Another inherent limitation is that video technology is a poor substitute for direct examination. In the absence of widespread proctored certification, however, no other option is available. Video certification is now widely used in many disciplines with reasonable validity and reliability.2 It is likely that web-based video training and certification will become more widespread, because the cost efficiencies are significant. Finally, the web site does not require viewing of the training video before attempted certification, so an unknown number of novice users could have tried to certify without proper training.
Due to the unbalanced group sizes, small cells for item scores, and a crossed study design, we did not use weighted κ statistics. Unweighted κ scores may underestimate agreement, yet in this study, the unweighted κ scores were comparable to the unweighted scores obtained in the primary DVD study and the weighted scores obtained in previous videotape studies. Therefore, the agreement among the viewers was at least as good and likely better than that seen previously with the videotapes. Agreement using the DVD continues to be surprisingly good and consistent among experienced as well as novice users.

Acknowledgments

We acknowledge the diligent effort and expertise of Ms Alyssa Chardi and Karen Rapp, RN.
Sources of Funding
This work was supported by National Institute of Neurological Disorders and Stroke P50 NS044148 and the Veterans Affairs Medical Research Service.
Disclosures
None.

References

1.
Alberts MJ, Hademenos G, Latchaw RE, Jagoda A, Marler J, Mayberg MR, Starke RD, Todd HW, Viste KM, Girgus M, Shephard T, Emr M, Shwayder P, Walker MD. Recommendations for the establishment of primary stroke centers. Brain Attack Coalition. JAMA. 2000; 283: 3102.
2.
Mohammad YM, Divani AA, Jradi H, Hussein HM, Hoonjan A, Qureshi AI. Primary stroke center: basic components and recommendations. South Med J. 2006; 99: 749–752.
3.
Lyden P, Lu M, Jackson C, Marler J, Kothari R, Brott T, Zivin J. Underlying structure of the National Institutes of Health Stroke Scale: results of a factor analysis. NINDS tPA Stroke Trial Investigators. Stroke. 1999; 30: 2347–2354.
4.
Goldstein L, Samsa G. Reliability of the National Institutes of Health Stroke Scale. Stroke. 1997; 28: 307.
5.
Goldstein LB, Bartels C, Davis JN. Interrater reliability of the NIH Stroke Scale. Arch Neurol. 1989; 46: 660.
6.
Albanese MA, Clarke WR, Adams HP Jr, Woolson RF. Ensuring reliability of outcome measures on multicenter clinical trials of treatments for acute ischemic stroke: the program developed for the Trial of ORG 10172 in Acute Stroke treatment (TOAST). Stroke. 1994; 25: 1746.
7.
Lyden P, Brott T, Tilley B, Welch KM, Mascha EJ, Levine S, Haley HC, Grotta J, Marler J. Improved reliability of the NIH Stroke Scale using video training. NINDS tPA Stroke Study Group. Stroke. 1994; 25: 2220–2226.
8.
Lyden P, Raman R, Liu L, Grotta J, Broderick J, Olson S, Shaw S, Spilker S, Meyer B, Emr M, Warren M, Marler J. NIHSS training and certification using a new digital video disk is reliable. Stroke. 2005; 36: 2446–2449.
9.
Fleiss JL. Statistical Methods for Rates and Proportions. New York: John Wiley and Sons; 1981.
10.
Efron B, Tibshirani RJ. An Introduction to the Bootstrap. New York: Chapman & Hall/CRC; 1993: 436.
11.
McKenzie DP, Mackinnon AJ, Peladeau N, Onghena P, Bruce PC, Clarke DM, Harrigan S, McGorry PD. Comparing correlated kappas by resampling: is one level of agreement significantly different from another? J Psychiatr Res. 1996; 30: 483.
12.
Zar JH. Biostatistical Analysis. 4th ed. Princeton, NJ: Prentice Hall; 1999: 390–392.


Information & Authors


Published In


Stroke. 2009; 40(7): 2507–2511. PubMed: 19520998


History

Received: 23 July 2008
Accepted: 8 August 2008
Published online: 11 June 2009
Published in print: 1 July 2009


Keywords

  1. clinimetrics
  2. reliability
  3. scales
  4. stroke

Authors

Affiliations

Patrick Lyden, MD; Rema Raman, PhD; Lin Liu, PhD; Marian Emr; Margo Warren; John Marler, MD
From the Departments of Neurosciences (P.L., R.R.) and Family and Preventive Medicine (R.R., L.L.), University of California–San Diego School of Medicine, San Diego, Calif; the Department of Neurology (P.L.), Veterans Administration Medical Center, San Diego, Calif; and the National Institute of Neurological Disorders and Stroke (M.E., M.W., J.M.), Bethesda, Md.

Notes

Correspondence to Patrick Lyden, UCSD Stroke Center, OPC Third Floor, Suite #3, 200 W Arbor Drive, San Diego CA 92103. E-mail [email protected]

