Stroke Quality Metrics: Systematic Reviews of the Relationships to Patient-Centered Outcomes and Impact of Public Reporting
Abstract
Background and Purpose—
Stroke quality metrics play an increasingly important role in quality improvement and policies related to provider reimbursement, accreditation, and public reporting. We conducted 2 systematic reviews, the first examining the relationship between compliance with stroke quality metrics and patient-centered outcomes and the second examining the relationship between public reporting of stroke metrics and quality improvement, quality of care, or outcomes.
Methods—
MEDLINE and EMBASE databases were searched to identify studies that evaluated (1) the relationship between stroke quality metric compliance and patient-centered outcomes in acute hospital settings and (2) the relationship between public reporting of stroke quality metrics and quality improvement activities, quality of care, or patient outcomes. We specifically excluded studies that evaluated the effect of stroke units or hospital certification.
Results—
Fourteen studies met eligibility criteria for the review of stroke quality metric compliance and patient-centered outcomes; 9 found mostly positive associations, whereas 5 found no or very limited associations. Only 2 eligible studies were found that directly addressed the public reporting of stroke quality metrics.
Conclusions—
Some studies have found positive associations between stroke metric compliance and improved patient-centered outcomes. However, high-quality studies are lacking and several methodological difficulties make the interpretation of the reported associations challenging. Information on the impact of public reporting of stroke quality metric data is extremely limited. Legitimate questions remain as to whether public reporting of stroke metrics is accurate and effective, and whether it has the potential for unintended consequences. The generation of high-quality data examining quality metrics and stroke outcomes as well as the impact of public reporting should be given priority.
Introduction
Despite substantial progress since the release of the Institute of Medicine's Crossing the Quality Chasm report,1 significant work remains in terms of defining, measuring, and improving the quality of health care in the United States. We recently reported on the methodological development and implementation of stroke performance measures (or quality metrics) in the United States.2 Endorsed stroke quality metrics play central roles in ongoing stroke quality improvement (QI) programs, including the Primary Stroke Center Certification Program,3 the Paul Coverdell National Acute Stroke Registry,4 and the Get With The Guidelines (GWTG)-Stroke Program.5 Although randomized studies have yet to be conducted, there is good evidence from observational studies of QI programs that the systematic collection, evaluation, and feedback of stroke quality metric data can result in rapid improvements in the quality of care.6 For example, analysis of >322 000 ischemic stroke or transient ischemic attack patients discharged from 790 hospitals participating in the GWTG-Stroke program over a 5-year period found clinically meaningful improvements in the quality of stroke care that were independent of underlying secular trends.6 Subsequent analyses of the GWTG-Stroke data strongly suggest that the increased compliance with stroke quality metrics resulted from an increase in the proportion of eligible subjects treated (ie, improvements in the quality of care), rather than an increase in the number of patients excluded from measure denominators.7
However, several important challenges currently limit the potential for stroke quality metrics to be fully incorporated into broader programmatic and regulatory policies.2 First, the degree to which greater compliance with stroke quality metrics results in direct improvements in patient-centered outcomes remains to be proven for stroke. Second, although public reporting of performance data from health care providers resonates with many, several concerns remain, including the validity of the measures reported, whether public reporting directly improves quality of care,8 and whether it leads to inaccurate characterizations of hospitals and providers.9,10
The objectives of this article are to conduct systematic reviews of observational studies examining (1) the relationship between stroke quality metric compliance and patient-centered outcomes and (2) the relationship between public reporting of stroke quality metrics and QI activity, quality of care, or patient-centered outcomes.
Materials and Methods
Studies included in the systematic review of stroke quality metrics and patient outcomes were limited to observational studies that evaluated the relationship between compliance with ≥2 quality metrics and patient-centered outcomes in a broad population of acute stroke admissions. Additional criteria excluded studies that were conducted in a single hospital setting or that were limited to specific subpopulations of age, race, sex, or stroke subtype. Eligible quality metrics were not limited to those advocated by any specific accrediting body, government institution, or nonprofit corporation. Patient-centered outcomes included measures of mortality, disability, physical function, complications, quality of life, and patient satisfaction.11 The literature search was conducted using MEDLINE and EMBASE and included studies published before December 31, 2010. We conducted several searches by first combining the term “stroke” with the following key words or phrases: “quality of care,” “performance measure*,” “quality improvement,” “quality metric,” “quality measure,” and “process* of care.” These searches were then combined with the following outcomes-related terms: “outcome*,” “complications,” “mortality,” and “patient satisfaction.” We supplemented the search by screening the bibliographies of the articles that underwent full review. The search was limited to articles with an English language abstract.
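As an illustration of how these term lists were combined, the sketch below (Python, with the term lists transcribed from the paragraph above) simply enumerates the paired search strings; the exact MEDLINE and EMBASE query syntax is not reproduced here and would differ by database interface.

```python
# Illustrative sketch only: enumerate the Boolean combinations of the
# quality-of-care terms and outcome terms described in the Methods.
quality_terms = ["quality of care", "performance measure*", "quality improvement",
                 "quality metric", "quality measure", "process* of care"]
outcome_terms = ["outcome*", "complications", "mortality", "patient satisfaction"]

queries = [f'"stroke" AND "{q}" AND "{o}"'
           for q in quality_terms for o in outcome_terms]
for query in queries:
    print(query)  # 6 quality terms x 4 outcome terms = 24 combined searches
```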
For the relevancy screen, 2 authors (CP, MR) independently reviewed each title and abstract. Articles identified as potentially relevant by either reviewer underwent a full independent review to determine if the article met inclusion criteria. Final determination of study eligibility was performed by consensus. We excluded studies that evaluated the efficacy of stroke units because an extensive evidence base including meta-analyses of randomized and nonrandomized studies already exists.12,13 Similarly, we excluded studies that examined the impact of stroke center certification14 if they did not directly evaluate individual stroke metrics. Because of heterogeneity of the studies, we were unable to assess study quality.
A similar approach was used to identify studies that addressed the public reporting of stroke quality metrics and the relationship to QI activity, quality of care, or patient-centered outcomes. Adapting terms used in a previous systematic review,15 we conducted several searches by combining the term “stroke” with the following key words or phrases: “public report,” “public reporting,” “provider profile,” “report card,” “public information,” “consumer information,” “patient information,” “consumer report,” “public performance report,” and “public performance.” We also supplemented the search by screening the bibliographies of the articles that underwent full review.
Results
Systematic Review of Relationship Between Stroke Quality Metric Compliance and Patient-Centered Outcomes
Figure 1 displays the results of the literature search that identified 352 unique articles. After reviewing titles and abstracts for relevancy, 27 studies underwent full review and 14 were deemed eligible. The 14 studies' characteristics and major conclusions are summarized in the Table. Eligible studies incorporated a wide variety of study populations, quality metrics, and patient-centered outcomes. Of the 14 studies, 6 were undertaken in the United States,16–21 2 were performed in Denmark,22,23 and 1 each was performed in England,24 Holland,25 New Zealand,26 Italy,27 Scotland,28 and Taiwan.29 Of the 6 U.S. studies, 4 utilized the Veterans Health Administration system,16,18–20 1 used national Medicare data,21 and the other used academic medical centers.17 Whereas the majority involved relatively small patient populations selected from a limited number of acute care hospitals, 5 studies (2 from the United States,17,21 2 from Denmark,22,23 and 1 from Taiwan29) involved >10 000 patients each.
Author (Y) | Location | Study Population | N | Structure/Process Measures | Outcome Measures | Conclusions |
---|---|---|---|---|---|---|
Mostly positive relationships | ||||||
Ingeman (2011)23 | Denmark | Stroke unit admissions from 10 hospitals in 2 counties (Danish National Indicator Project 2003–2008) | 11 757 | 9 processes of acute stroke care | 7 medical complications | After risk adjustment, study found a significant inverse dose–response relationship between the number of processes of care received and the risk of 5 medical complications; 6 of the 9 processes were associated with a lower risk for >1 complication (OR range, 0.43–0.97), although not all were statistically significant |
Hsieh (2010)29 | Taiwan | Patients in 39 academic and community hospitals across Taiwan | 30 599 | 5 Get With The Guidelines-Stroke performance measures and 1 safety indicator | mRS and risk of cardiovascular events/death at 1, 3, and 6 mo after stroke | After risk adjustment, mRS scores at 1, 3, and 6 mo were better for those who received IV tPA (adjusted OR range, 0.47–0.52; P<0.0012); risk of cardiovascular events/death at 1, 3, and 6 mo was lower for antithrombotics at discharge (adjusted OR=0.19, 0.33, and 0.41, respectively; P<0.0001), and anticoagulation at discharge (OR=0.28, 0.51, and 0.59, respectively; P<0.0005), but not statins at discharge (P>0.20) |
Bravata (2010)16 | United States | Patients in 3 U.S. Department of VA Hospitals and 2 non-VA Hospitals | 1487 | 7 process measures of acute stroke care identified by a national expert stroke panel | Combination of in-hospital mortality, discharge to hospice, discharge to a SNF | After risk adjustment, the study found swallowing evaluation (OR, 0.64; 95% CI, 0.43–0.94), DVT prophylaxis (OR, 0.60; 95% CI, 0.37–0.96), and treatment of hypoxia (OR, 0.26; 95% CI, 0.09–0.73) were independently associated with improvements in the combined outcome measure |
Ingeman (2008)22 | Denmark | Nationwide sample of admissions from 40 hospitals (Danish National Indicator Project 2003–2005) | 29 573 | 7 process measures of acute stroke care identified by a national expert panel | 30-d and 90-d mortality | After adjustment, 6 of 7 measures were associated with significantly lower 30-d and 90-d mortality; strong dose–response relationship between the number of care processes received and lower mortality was identified |
Micieli (2002)27 | Italy | 4 acute care hospitals | 386 | 47 process measures derived from AHA clinical guidelines for acute stroke care | Mortality, stroke recurrence, and BI at D/C, 3-mo and 6-mo | Greater noncompliance with processes of care was associated with higher mortality (RR, 3.0; 95% CI, 1.75–5.23; for subjects with ≥ 5 missed guideline recommendations compared to <5 missed recommendations); higher noncompliance was also associated with less improvement in disability (BI) at discharge (P=0.04), but had no relationship at 3 or 6 mo |
Kahn (1990)21 | United States | National sample of Medicare patients (297 hospitals) | 14 012 | ≈100 process measures based on expert opinion of stroke care | 30-d mortality | After adjustment for sickness at admission, better care was associated with lower 30-d mortality; RR of mortality was 1.36 (P<0.01) for stroke patients in lowest quartile of care vs highest quartile; significant relationships between poorer care and higher mortality seen in 3 of 4 process subscales |
Mostly limited or no relationships | ||||||
Lingsma (2008)25 | Holland | Patients in 10 centers in the Netherlands | 579 | 17 process measures that cover acute stroke care and prevention measures | Death or disability (mRS score ≥3) at 1 y | After adjustment, the majority of variation in outcome measure was explained by patient characteristics at admission; only a small proportion was attributed to differences in quality of care |
Douglas (2005)17 | United States | Patients in 34 academic medical centers | 16 853 | 11 structural measures used to designate primary stroke centers | In-hospital mortality, discharge home | After adjustment, none of the 11 structural measures was associated with decreased in-hospital mortality or increased discharge home |
Mohammed (2005)24 | England | Patients in 8 acute care hospitals selected with either high or low 30-d mortality | 702 | 12 process measures selected from Intercollegiate Stroke Audit Package (ISAP) | 30-d mortality | After removing 1 outlier hospital, there was no association between mortality and quality of care; the only significant difference in care between hospitals with low vs high mortality was screening for dysphagia (58% of patients in low-mortality hospitals vs 41% in high-mortality hospitals; P=0.0057); case-mix adjustment could not be performed because of missing data |
McNaughton (2003)26 | New Zealand | 3 acute care hospitals | 181 | 60 process measures from RCPSAP plus 4 additional measures | FIM, mRS, SF-36 and mortality at discharge, and 1-y mortality | After case-mix adjustment, there were only weak relationships between RCPSAP process measures and discharge outcomes and no significant relationships by 12 mo |
Weir (2001)28 | Scotland | 5 acute care hospitals | 2724 | 60 process measures from RCPSAP, plus 5 measures of organized stroke care | 6-mo mortality | After adjustment for case-mix, variation in hospital mortality was reduced, although an outlier hospital was still identifiable; moderate differences in process measures among the remaining 4 hospitals were not related to differences in adjusted mortality |
Rehabilitation measures—mostly positive relationships | ||||||
Duncan (2002)18 | United States | 11 U.S. Department of VA hospitals | 288 | 19 AHCPR process measures assessing acute (n=8) and postacute (n=11) rehabilitation care | 6-mo FIM, SF-36 PFM, IADL, SIS | After adjustment, postacute care measures but not acute measures were associated with improved FIM, IADL, and SIS outcomes at 6 mo |
Hoenig (2002)19 | United States | 11 U.S. Department of VA hospitals | 128 | 10 structural measures organized into 3 domains and 11 AHCPR postacute rehabilitation process measures | 6-mo FIM | After adjustment, the structural measures were significantly and positively associated with process measures (P<0.01), but not 6-mo FIM; after adjustment, process of care measures were significantly and positively associated with improved 6-mo FIM (P<0.05) |
Reker (2002)20 | United States | 11 U.S. Department of VA hospitals | 288 | 3 structural measures and 20 acute (n=9) and postacute (n=11) AHRQ guideline dimensions | Patient satisfaction score at 6-mo | After adjustment, postacute care measures but not acute measures were associated with improved patient satisfaction at 6 mo |
AHA indicates American Heart Association; AHCPR, Agency for Health Care Policy and Research; BI, Barthel Index; CI, confidence interval; D/C, discharge; DVT, deep venous thrombosis; FIM, functional independence measure; IADL, instrumental activities of daily living; ISAP, Intercollegiate Stroke Audit Package; IV, intravenous; mRS, Modified Rankin Scale; OR, odds ratio; RCPSAP, Royal College of Physicians Stroke Audit Package; RR, relative risk; SF-36 PFM, Medical Outcomes Study Short Form (SF-36) physical function module; SIS, stroke impact scale; SNF, skilled nursing facility; tPA, tissue plasminogen activator; VA, Veterans Affairs.
Although all studies included process measures, measures varied in number and substance. Six of the 14 studies examined ≤12 measures,16,17,22–24,29 with the remaining studies using between 17 and almost 100 measures.18–21,25–28 Only 3 studies included structural measures.17,19,20 Most of the studies selected measures based on published guidelines, accreditation needs, or other QI efforts, including the Royal College of Physicians Stroke Audit Package,26,28 Agency for Health Care Policy and Research clinical guidelines,18,19 American Heart Association clinical guidelines,27 Intercollegiate Stroke Audit Package,24 and Brain Attack Coalition primary stroke center guidelines.17
All but 4 of the studies18–20,23 included mortality as an outcome measure, although the time frame varied from in-hospital mortality to 1, 3, 6, or 12 months postdischarge. Six studies included ≥1 measure of functional capacity, including the Functional Independence Measure,18–20,26 Short Form-36,18,20,26 modified Rankin Scale,26,29 Instrumental Activities of Daily Living,18,20 Stroke Impact Scale,18,20 and Barthel Index.27 Other outcome measures included medical complications,23 stroke recurrence,27 and patient satisfaction.20
Given the variation in study characteristics, quality measures, and outcomes used, it was not possible to generate summary estimates describing the relationship between quality metric compliance and patient-centered outcomes. Six studies found mostly positive relationships between increased compliance with stroke care quality metrics and improved patient-centered outcomes.16,21–23,27,29 However, 5 studies reported mostly limited or no significant relationships between quality metrics and outcomes.17,24–26,28 Three Veterans Health Administration-based studies examined the associations between rehabilitation-related structure and process measures and stroke outcomes measured at 6 months; 2 of the studies found that postacute care measures, but not acute measures, were associated with better functional outcomes18 and patient satisfaction.20 The third study found that process measures, rather than structural measures, were directly associated with improved 6-month outcomes.19 Finally, most studies noted significant limitations, including data based on observational study designs that could be influenced by selection bias, measurement error, and limited generalizability.16–19,22,26,28 Several studies also described difficulty in implementing risk-adjustment methods.17–19,22,24–26,28
Systematic Review of Relationship Between Public Reporting of Stroke Metrics and QI Activity, Quality of Care, and Patient-Centered Outcomes
Results of the literature search identified 70 unique articles, 4 of which passed the relevancy screen and underwent full review, and 2 were ultimately deemed eligible (Figure 2). In the first study, Kelly et al30 compared rankings of 157 New York state hospitals generated by 2 different organizations. Using risk-adjusted models, the 2 organizations classified hospitals as below average, average, or above average with respect to in-hospital stroke mortality. However, only 61% (n=96) of hospitals were classified as having the same level of in-hospital mortality by the 2 organizations. The authors concluded that these data illustrate the difficulties associated with the reliability of publicly reported data on quality of stroke care, and suggested that a national standard for public reporting be developed that included explicit rules to reduce bias and ensure minimum standards.30
In the second study, Hollenbeak et al31 conducted an analysis of Pennsylvania hospitals to examine the impact of variations in the intensity of public reporting in the state on in-hospital mortality for 6 acute conditions, including hemorrhagic and ischemic stroke. Using propensity score methods, they found that stroke patients treated at Pennsylvania hospitals during a period of intensive public reporting (2000–2003) had significantly lower mortality (odds ratio, 0.57 for hemorrhagic stroke; odds ratio, 0.77 for ischemic stroke) than patients treated at non-Pennsylvania hospitals with limited or no public reporting requirements during the same time period. A caveat, however, was that all 200 Pennsylvania hospitals were required to submit data, whereas the reporting for non-Pennsylvania hospitals was voluntary and based on only 34 centers.
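For readers less familiar with the analytic approach described above, the following sketch uses simulated data and illustrative variable names (not the actual Pennsylvania analysis or its covariates) to show the basic logic of a propensity-score-weighted comparison of mortality between patients treated under intensive public reporting and a comparison group.

```python
# Hypothetical sketch of a propensity-score (inverse-probability) weighted
# mortality comparison; the data, covariates, and effect sizes are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000
age = rng.normal(70, 10, n)
severity = rng.normal(0, 1, n)
# "Exposure" (treatment at an intensive-reporting hospital) depends on case mix.
reporting = rng.random(n) < 1 / (1 + np.exp(-(0.02 * (age - 70) + 0.3 * severity)))
# Outcome depends on case mix plus a modest simulated benefit of the exposure.
death = rng.random(n) < 1 / (1 + np.exp(-(-2.0 + 0.04 * (age - 70)
                                          + 0.8 * severity - 0.3 * reporting)))

X = np.column_stack([age - 70, severity])
ps = LogisticRegression().fit(X, reporting).predict_proba(X)[:, 1]
weights = np.where(reporting, 1 / ps, 1 / (1 - ps))  # inverse-probability weights

for label, group in (("intensive reporting", reporting), ("comparison", ~reporting)):
    rate = np.average(death[group], weights=weights[group])
    print(f"{label}: weighted mortality {rate:.3f}")
```

The weighting step is what attempts to balance measured patient characteristics across the 2 groups; as with any observational comparison, it cannot address unmeasured differences.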
Discussion
Our systematic review confirmed that most stroke quality metrics primarily document process-centered aspects of acute hospital-based stroke care, and that most were founded on evidence-based guidelines or accreditation needs. However, we found that there is a limited evidence base addressing the relationships between quality metric compliance and stroke-related outcomes. Of the 14 studies included in this review, 9 reported mostly positive associations between stroke quality metric compliance and stroke outcomes. Although we found only a relatively modest evidence base supporting the relationship between stroke quality metric compliance and improved patient outcomes, it could be argued that this evidence base is stronger than that present in the literature for other cardiovascular diseases (ie, acute myocardial infarction and heart failure), in which the number of significant associations indicating a positive relationship between higher quality of care and better patient outcomes is surprisingly limited.32–35
In addition to the limited number of studies identified, the authors of these studies have cautioned against overinterpreting their results because of the fundamental methodological challenges associated with determining causal relationships between stroke metrics and patient outcomes.36 There are at least 5 distinct methodological challenges associated with determining these causal associations. First, statistical risk adjustment methods for stroke outcomes are not sufficiently developed to satisfactorily account for differences in patient populations (eg, age, stroke severity, comorbidities) across health care providers and settings.28,37,38 Unfortunately, there is wide variability in the number and scope of patient-level variables included in risk adjustment models and the statistical methods used to develop and validate them. In addition, many organizations generating data for public reporting use proprietary and undisclosed methods, highlighting the need for greater oversight and transparency.39 Second, analyses of facility-level data are often hampered by small sample sizes, both in the number of subjects who are eligible for a given metric, as well as the number of subjects experiencing a given outcome, eg, mortality or complications. The resulting imprecision translates to findings that vary substantially across sites or over time because of random error alone.
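The impact of small sample sizes can be illustrated with a brief simulation (a hypothetical sketch, not drawn from any of the reviewed studies): even when every hospital has exactly the same true mortality risk, observed hospital-level rates scatter widely when each hospital contributes only a few dozen eligible patients.

```python
# Hypothetical sketch: identical true 30-day mortality (15%) at every hospital,
# yet observed rates vary widely by chance alone when denominators are small.
import random

random.seed(42)
true_mortality = 0.15
for n_patients in (30, 100, 1000):
    observed = []
    for _ in range(20):  # 20 hospitals per scenario
        deaths = sum(random.random() < true_mortality for _ in range(n_patients))
        observed.append(deaths / n_patients)
    print(f"n={n_patients:4d} per hospital: observed mortality ranges "
          f"from {min(observed):.2f} to {max(observed):.2f}")
```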
Third, confounding by indication, which describes the phenomenon whereby medical treatment is directly influenced by the patient's prognosis,40 is a fundamental limitation of observational studies examining the association between quality metrics and patient outcomes. Confounding by indication can result in paradoxical associations between quality of care and outcomes. For example, patients with severe stroke have inherently poor prognosis but may be treated more aggressively and therefore may receive higher-quality care. Similarly, better care may be delivered to patients who have a higher burden of risk factors or comorbidities,41 whereas, conversely, patients with mild disease or few comorbidities are not treated as aggressively. The net impact of these different confounding mechanisms may be impossible to discern.
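A simple simulation (again hypothetical, not based on the reviewed data) makes the direction of this bias concrete: when stroke severity increases both the probability of aggressive treatment and the probability of death, a crude comparison can make treated patients appear to fare worse even though the treatment itself has no effect.

```python
# Hypothetical sketch: severity drives both treatment and death; treatment has
# no effect, yet the crude comparison links treatment with higher mortality.
import random

random.seed(0)
n = 50_000
treated = treated_deaths = untreated = untreated_deaths = 0
for _ in range(n):
    severe = random.random() < 0.3                               # 30% severe strokes
    gets_treatment = random.random() < (0.8 if severe else 0.4)  # severe -> more treatment
    died = random.random() < (0.35 if severe else 0.05)          # severe -> more deaths
    if gets_treatment:
        treated += 1
        treated_deaths += died
    else:
        untreated += 1
        untreated_deaths += died

print(f"crude mortality, treated:   {treated_deaths / treated:.3f}")
print(f"crude mortality, untreated: {untreated_deaths / untreated:.3f}")
```

In real data the competing mechanisms described above can operate simultaneously, which is why the net direction of the bias may be impossible to discern without adequate measures of severity and comorbidity.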
A fourth challenge is determining the relevant time window for measuring outcomes relative to when care was provided. Outcomes measured soon after the provision of care (eg, ambulatory status at discharge) are unlikely to accurately reflect the quality or effectiveness of the care provided. Equally, outcomes measured many months after care was provided are unlikely to be a valid reflection of the care provided. The American Heart Association/American College of Cardiology task force on performance measures has recommended that stroke outcomes be measured 1 month after hospital discharge.42 Finally, although in-hospital mortality is readily measured, it is not a good reflection of the quality of care provided.43–45 In-hospital mortality varies according to length of stay, is impacted by hospital policies related to hospice use, and may be influenced by patient/family preferences. Patients with care directives that limit the aggressiveness of medical care after a disabling stroke who receive compassionate or hospice care resulting in death are treated in many mortality models as having experienced a poor or adverse outcome rather than high-quality care. Given these limitations, there is a critical need to expand stroke mortality measures to include longer time periods (ie, 30 days, 90 days, and 1 year) and to incorporate other patient-centered outcomes, such as care preferences, functional outcomes, and quality of life.
We found only 2 studies that directly addressed the relationship between public reporting of stroke quality metrics and either QI activity, quality of care, or patient-oriented stroke outcomes. Several other reports have been motivated by the concerns raised about public reporting of stroke metrics,24,28 but these did not meet our inclusion criteria for the review. Public reporting of performance data has been a primary objective of many consumer advocates, the insurance industry, and government agencies, including the Centers for Medicare and Medicaid Services.46 The central premise is that public reporting will lead to higher quality of care because providers will undertake efforts to improve care and consumers will choose plans or hospitals that provide better care.47 Solid evidence supporting this premise is still lacking.
Outside of stroke research, results of studies examining the effects of public reporting on quality or patient outcomes have been quite variable. There have been at least 2 systematic reviews. In 2000, Marshall et al15 conducted a review of the impact of public disclosure of performance data but could find only 7 United States-based peer-reviewed studies. They concluded that hospitals were most responsive to the publication of performance data, but found limited evidence that this was associated with improved health outcomes.15 In 2008, a systematic review of 11 studies also found that public reporting stimulated QI activity in hospitals but did not affect hospital selection by patients.48 The authors found mixed evidence for an association between public reporting and better patient outcomes, but commented that the included studies were mostly descriptive in nature and had limited strength of evidence.48 In more recent work, a follow-up analysis of the Hospital Compare program,46 a public reporting initiative by the Centers for Medicare and Medicaid Services, found that public reporting was associated with increased performance, especially in hospitals that were low performers at baseline.49 This improved performance was, in turn, associated with reductions in mortality, length of stay, and readmission rates for acute myocardial infarction patients.49 A recent randomized study of 86 Ontario hospitals evaluated the effect of public reporting on the quality of care provided to acute myocardial infarction and heart failure patients.50 Public report cards documenting baseline care performance were either released early for 44 randomly selected hospitals or delayed (by 18 months) for the 42 remaining hospitals. Hospitals randomized to early data release were significantly more likely to start QI initiatives compared with hospitals receiving delayed feedback (73.2% versus 46.7%, P=0.003, for acute myocardial infarction care; 61.0% versus 50.0%, P=0.04, for heart failure care); however, they did not show any significant improvements in composite process-of-care measures.
Whether mandatory public reporting of stroke quality metric data at the hospital or physician level will induce greater or more widespread quality gains than have already been achieved through the current voluntary, confidential reporting approach remains to be determined.6 It is noteworthy that, as evidenced by the widespread participation in national-level stroke QI registries,51 there is currently broad tacit support for using quality metrics for QI purposes that does not rely on public reporting.
Our report has several potential limitations. First, because we excluded research specific to the evaluation of stroke units or hospital stroke center certification, our conclusions do not generalize to these 2 areas. Second, it is likely that our search strategies missed relevant articles, and our exclusion criteria may have prevented consideration of all pertinent reports. Third, the limited number of peer-reviewed reports on public reporting of stroke metrics likely reflects publication bias, in that relevant research conducted on this topic has, for whatever reason, not reached the peer-reviewed literature. Fourth, heterogeneity of study designs precluded any formal quality scoring assessment efforts or the generation of quantitative summary estimates of effect. Finally, we note that some articles included in our review were based on the same underlying data source (ie, the Veterans Health Administration18–20 and the Danish National Indicator Project22,23). Although some of these data may represent duplicate cases, each study met our inclusion criteria because it reported on different outcome measures.
In summary, although much work has been performed to validate individual stroke quality metrics and increase data reliability, challenging methodological difficulties (inadequate risk adjustment, small sample sizes, confounding by indication, uncertainty about the appropriate outcome time window, and reliance on in-hospital mortality) stand in the way of proving a causal relationship between quality metrics and stroke outcomes. Because of the fundamental problems in measuring and interpreting outcome-based measures from observational studies, several authors have cautioned against using outcome measures to gauge quality of care, advocating instead that process measures remain the primary method of comparing quality across hospitals and providers.26,36,37,42,44 The challenges identified in this review should be addressed before quality metrics are incorporated widely into programmatic and regulatory policies, particularly those associated with provider reimbursement and institutional accreditation. There is great need for valid evidence on the relationship between stroke quality metric compliance and patient-centered outcomes, as well as on the effects of public reporting of these metrics, so that the current considerable investments undertaken to improve stroke care can be sustained.
References
1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
2. Reeves MJ, Parker C, Fonarow GC, Smith EE, Schwamm LH. Development of stroke performance measures: definitions, methods, and current measures. Stroke. 2010;41:1573–1578.
3. The Joint Commission. Primary Stroke Centers—Stroke Performance Measurement. Available at: http://www.jointcommission.org/certification/primary_stroke_centers.aspx. Accessed July 14, 2011.
4. Centers for Disease Control. The Paul Coverdell National Acute Stroke Registry. 3/2/11. Available at: http://www.cdc.gov/dhdsp/programs/stroke_registry.htm. Accessed June 12, 2011.
5. American Stroke Association. Get With The Guidelines-Stroke. Available at: http://www.heart.org/HEARTORG/HealthcareProfessional/GetWithTheGuidelinesHFStroke/GetWithTheGuidelinesStrokeHomePage/Get-With-The-Guidelines-Stroke-Home-Page_UCM_306098_SubHomePage.jsp. Accessed July 7, 2011.
6. Schwamm LH, Fonarow GC, Reeves MJ, Pan W, Frankel MR, Smith EE, et al. Get With The Guidelines–Stroke is associated with sustained improvement in care for patients hospitalized with acute stroke or transient ischemic attack. Circulation. 2009;119:107–115.
7. Reeves MJ, Grau-Sepulveda MV, Fonarow GC, Olson D, Smith EE, Schwamm LH. Are improvements in quality in the Get With The Guidelines–Stroke Program related to better care or better documentation? Circ Cardiovasc Qual Outcomes. 2011;4:503–511.
8. Steinbrook R. Public report cards–cardiac surgery and beyond. N Engl J Med. 2006;355:1847–1849.
9. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004;363:1147–1154.
10. Werner RM, Asch DA. The unintended consequences of publicly reporting quality information. JAMA. 2005;293:1239–1244.
11. Guyatt GM, Devereaux V, Schunemann PJ, Bhandari M. Patients at the center: in our practice, and in our use of language. ACP J Club. 2004;140:A11–A12.
12. Stroke Unit Trialists Collaboration. How do stroke units improve patient outcomes? A collaborative systematic review of the randomized trials. Stroke. 1997;28:2139–2144.
13. Seenan P, Long M, Langhorne P. Stroke units in their natural habitat: systematic review of observational studies. Stroke. 2007;38:1886–1892.
14. Xian Y, Holloway RG, Chan PS, Noyes K, Shah MN, Ting HH, et al. Association between stroke center hospitalization for acute ischemic stroke and mortality. JAMA. 2011;305:373–380.
15. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA. 2000;283:1866–1874.
16. Bravata DM, Wells CK, Lo AC, Nadeau SE, Melillo J, Chodkowski D, et al. Processes of care associated with acute stroke outcomes. Arch Intern Med. 2010;170:804–810.
17. Douglas VC, Tong DC, Gillum LA, Zhao S, Brass LM, Dostal J, et al. Do the Brain Attack Coalition's criteria for stroke centers improve care for ischemic stroke? Neurology. 2005;64:422–427.
18. Duncan PW, Horner RD, Reker DA, Samsa GP, Hoenig H, Hamilton B, et al. Adherence to postacute rehabilitation guidelines is associated with functional recovery in stroke. Stroke. 2002;33:167–177.
19. Hoenig H, Duncan PW, Horner RD, Reker DM, Samsa GP, Dudley TK, et al. Structure, process, and outcomes in stroke rehabilitation. Med Care. 2002;40:1036–1047.
20. Reker DM, Duncan PW, Horner RD, Hoenig H, Samsa GP, Hamilton BB, et al. Postacute stroke guideline compliance is associated with greater patient satisfaction. Arch Phys Med Rehabil. 2002;83:750–756.
21. Kahn KL, Rogers WH, Rubenstein LV, Sherwood MJ, Reinisch EJ, Keeler EB, et al. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based prospective payment system. JAMA. 1990;264:1969–1973.
22. Ingeman A, Pedersen L, Hundborg HH, Petersen P, Zielke S, Mainz J, et al. Quality of care and mortality among patients with stroke: a nationwide follow-up study. Med Care. 2008;46:63–69.
23. Ingeman A, Andersen G, Hundborg HH, Svendsen ML, Johnsen SP. Processes of care and medical complications in patients with stroke. Stroke. 2011;42:167–172.
24. Mohammed MA, Mant J, Bentham L, Raftery J. Comparing processes of stroke care in high- and low-mortality hospitals in the West Midlands, UK. Int J Qual Health Care. 2005;17:31–36.
25. Lingsma HF, Dippel DW, Hoeks SE, Steyerberg EW, Franke CL, van Oostenbrugge RJ, et al. Variation between hospitals in patient outcome after stroke is only partly explained by differences in quality of care: results from the Netherlands Stroke Survey. J Neurol Neurosurg Psychiatry. 2008;79:888–894.
26. McNaughton H, McPherson K, Taylor W, Weatherall M. Relationship between process and outcome in stroke care. Stroke. 2003;34:713–717.
27. Micieli G, Cavallini A, Quaglini S; Guideline Application for Decision Making in Ischemic Stroke (GLADIS) Study Group. Guideline compliance improves stroke outcome: a preliminary study in 4 districts in the Italian region of Lombardia. Stroke. 2002;33:1341–1347.
28. Weir N, Dennis MS; Scottish Stroke Outcomes Study Group. Towards a national system for monitoring the quality of hospital-based stroke services. Stroke. 2001;32:1415–1421.
29. Hsieh FI, Lien LM, Chen ST, Bai CH, Sun MC, Tseng HP, et al. Get With The Guidelines-Stroke performance indicators: surveillance of stroke care in the Taiwan Stroke Registry: Get With The Guidelines-Stroke in Taiwan. Circulation. 2010;122:1116–1123.
30. Kelly A, Thompson JP, Tuttle D, Benesch C, Holloway RG. Public reporting of quality data for stroke: is it measuring quality? Stroke. 2008;39:3367–3371.
31. Hollenbeak CS, Gorton CP, Tabak YP, Jones JL, Milstein A, Johannes RS. Reductions in mortality associated with intensive public reporting of hospital outcomes. Am J Med Qual. 2008;23:279–286.
32. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006;295:1912–1920.
33. Fonarow GC, Abraham WT, Albert NM, Stough WG, Gheorghiade M, Greenberg BH, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61–70.
34. Bradley EH, Herrin J, Elbel B, McNamara RL, Magid DJ, Nallamothu BK, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296:72–78.
35. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296:2694–2702.
36. Walsh K, Gompertz PH, Rudd AG. Stroke care: how do we measure quality? Postgrad Med J. 2002;78:322–326.
37. Hinchey JA, Furlan AJ, Frank JI, Kay R, Misch D, Hill C. Is in-hospital stroke mortality an accurate measure of quality of care? Neurology. 1998;50:619–625.
38. Hammermeister KE, Shroyer AL, Sethi GK, Grover FL. Why it is important to demonstrate linkages between outcomes of care and processes and structures of care. Med Care. 1995;33:OS5–OS16.
39. Krumholz HM, Brindis RG, Brush JE, Cohen DJ, Epstein AJ, Furie K, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113:456–462.
40. MacMahon S, Collins R. Reliable assessment of the effects of treatment on mortality and major morbidity, II: observational studies. Lancet. 2001;357:455–462.
41. de Koning JS, Klazinga NS, Koudstaal PJ, Prins A, Borsboom GJ, Mackenbach JP. The role of “confounding by indication” in assessing the effect of quality of care on disease outcomes in general practice: results of a case-control study. BMC Health Serv Res. 2005;5:10.
42. Measuring and improving quality of care: a report from the American Heart Association/American College of Cardiology First Scientific Forum on Assessment of Healthcare Quality in Cardiovascular Disease and Stroke. Circulation. 2000;101:1483–1493.
43. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317:1674–1680.
44. Mant J, Hicks N. Detecting differences in quality of care: the sensitivity of measures of process and outcome in treating acute myocardial infarction. Br Med J. 1995;311:793–796.
45. Pitches DW, Mohammed MA, Lilford RJ. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res. 2007;7:91.
46. U.S. Department of Health and Human Services. Hospital Compare - a Quality Tool Provided by Medicare. Available at: http://www.hospitalcompare.hhs.gov. Accessed June 8, 2011.
47. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41:I30–I38.
48. Fung CH, Lim YW, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med. 2008;148:111–123.
49. Werner RM, Bradlow ET. Public reporting on hospital process improvements is linked to better patient outcomes. Health Aff (Millwood). 2010;29:1319–1324.
50. Tu JV, Donovan LR, Lee DS, Wang JT, Austin PC, Alter DA, et al. Effectiveness of public report cards for improving the quality of cardiac care: the EFFECT study: a randomized trial. JAMA. 2009;302:2330–2337.
51. Fonarow GC, Reeves MJ, Smith EE, Saver JL, Zhao X, Olson DW, et al. Characteristics, performance measures, and in-hospital outcomes of the first one million stroke and transient ischemic attack admissions in Get With The Guidelines-Stroke. Circ Cardiovasc Qual Outcomes. 2010;3:291–302.
Copyright
© 2012 American Heart Association, Inc.
History
Received: 8 August 2011
Accepted: 15 August 2011
Published online: 6 October 2011
Published in print: January 2012
Disclosures
L.H.S. serves as chair of the American Heart Association (AHA) Get With The Guidelines (GWTG) Steering Committee, is a consultant to the Massachusetts Department of Public Health, and provided expert medical opinions in malpractice lawsuits regarding stroke treatment and prevention. G.C.F. is a member of the AHA GWTG Steering Committee, is a consultant to Novartis, and receives research support from the National Institutes of Health and the Agency of Healthcare Research and Quality. E.E.S. serves as chair of the AHA GWTG-Stroke Science Subcommittee and reports research funding from the National Institutes of Health, Canadian Institutes for Health Research, Canadian Stroke Network, and Heart and Stroke Foundation of Canada. M.J.R. serves as a member of the AHA GWTG Quality Improvement and Stroke Science Subcommittees and receives salary support from the Michigan Paul Coverdell Stroke Registry.