Patient Satisfaction and Its Relationship With Clinical Quality and Inpatient Mortality in Acute Myocardial Infarction
Abstract
Background— Hospitals use patient satisfaction surveys to assess their quality of care. A key question is whether these data provide valid information about the medically related quality of hospital care. The objective of this study was to determine whether patient satisfaction is associated with adherence to practice guidelines and outcomes for acute myocardial infarction and to identify the key drivers of patient satisfaction.
Methods and Results— We examined clinical data on 6467 patients with acute myocardial infarction treated at 25 US hospitals participating in the CRUSADE initiative from 2001 to 2006. Press Ganey patient satisfaction surveys for cardiac admissions were also available from 3562 patients treated at these same 25 centers over this period. Patient satisfaction was positively correlated with 13 of 14 acute myocardial infarction performance measures. After controlling for a hospital’s overall guideline adherence score, higher patient satisfaction scores were associated with lower risk-adjusted inpatient mortality (P=0.025). One-quartile changes in both patient satisfaction and guideline adherence scores produced similar changes in predicted survival. For example, a 1-quartile change (75th to 100th) in either the patient satisfaction score or the guideline adherence score yielded the same change in predicted survival (odds ratio, 1.24; 95% CI, 1.02 to 1.49; and odds ratio, 1.24; 95% CI, 1.08 to 1.41, respectively). Satisfaction with nursing care was the most important determinant of overall patient satisfaction (P<0.001).
Conclusions— Higher patient satisfaction is associated with improved guideline adherence and lower inpatient mortality rates, suggesting that patients are good discriminators of the type of care they receive. Thus, patients’ satisfaction with their care provides important incremental information on the quality of acute myocardial infarction care.
A large number of hospitals now routinely use patient satisfaction survey instruments and data to assess their quality of care.1–4 In addition, the Centers for Medicare and Medicaid Services (CMS) recently developed a national, standardized survey instrument and data collection methodology for measuring patients’ perceptions of their hospital experiences; this instrument is called the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.5–7 The first set of HCAHPS data were made publicly available in March 2008 to enable consumers to make comparisons of patient experiences across hospitals.
Despite the popularity of these survey instruments, important questions remain about the use of satisfaction data to assess healthcare quality. Do these data provide valid information about the medically related quality of hospital care, and if so, do these data provide independent information on the overall quality of patient care beyond that obtained from the more accepted clinical performance measures? Are hospitals that have higher levels of patient satisfaction more likely also to produce better health outcomes? Which hospital experiences best account for patients’ overall satisfaction?
This article explores the relationship between a hospital’s overall patient satisfaction score, its overall clinical quality score, and its risk-adjusted inpatient mortality rate for patients with acute myocardial infarction (AMI) using data from a clinical quality improvement initiative coupled with patient satisfaction survey data collected by an independent third party. Specifically, we examine (1) whether patient satisfaction is associated with the quality of cardiac care as measured by adherence to practice guideline recommendations, (2) whether patient satisfaction is an independent predictor of a hospital’s inpatient mortality rate for AMI, and (3) which aspects of a patient’s interactions with a hospital’s facilities and staff are the most important determinants of overall satisfaction.
WHAT IS KNOWN
The Institute of Medicine has identified patient-centered care, or care that is “respectful of and responsive to individual patient preferences, needs, and values and ensures that patient values guide all clinical decisions,” as a key quality domain.
Hospitals routinely use patient satisfaction surveys to assess the quality of care, although it remains unclear whether patient satisfaction data provide valid information about the medically related quality of hospital care.
WHAT THE STUDY ADDS
Higher patient satisfaction is associated with lower inpatient mortality rates for acute myocardial infarction, even after controlling for hospital adherence to evidence-based practice guidelines, suggesting that patients are good discriminators of the type of care they receive.
Patients seem to differentiate between the technical (eg, quality of nursing and physician care) and nontechnical (eg, room décor, quality of food) aspects of medical care.
Patients’ satisfaction with their care provides important incremental information on the quality of acute myocardial infarction care beyond clinical performance measures.
Methods
Data Sources
Quarterly clinical process-of-care and patient characteristic information was obtained from the Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes with Early Implementation of the ACC/AHA Guidelines (CRUSADE) quality improvement registry.8–12 CRUSADE centers collected and submitted clinical information regarding in-hospital care and outcomes of patients with non–ST-segment elevation acute coronary syndrome with high-risk clinical features, including positive cardiac biomarkers or ischemic ST-segment ECG changes.
Quarterly patient satisfaction data were obtained from patient surveys administered by Press Ganey Associates (South Bend, Ind). Patients eligible to receive a survey included those discharged alive from the hospital, with the exception of patients transferred to another hospital that also used Press Ganey surveys and patients who had already been surveyed within the prior 30 days. Patients were surveyed within 1 week of hospital discharge. Only surveys for patients with cardiac diagnosis-related groups (DRGs 121, 122, 124, 125, 140, and 143) were used for this study.
Study Population
Of the 568 hospitals that participated in CRUSADE between January 2001 and December 2006, we identified and contacted 110 hospitals that also collected Press Ganey survey data sometime during the same period. Forty-five of these hospitals granted permission to use their patient satisfaction data for this study. Using the hospital quarter as our unit of analysis, we first eliminated any quarterly patient satisfaction data from a given hospital for which we did not have at least 3 patient responses. Next, we matched the remaining quarterly observations across the 2 data sources and eliminated hospital quarters for which we did not have both clinical and satisfaction data. This yielded a total of 207 matched hospital quarter observations from 29 hospitals. Finally, because we wanted to control for individual hospital effects in our analysis, we eliminated 4 hospitals for which we did not have at least 2 quarters of matched CRUSADE and patient satisfaction data. These procedures reduced our relevant dataset to 203 quarterly observations at 25 hospitals.
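For illustration only, the following is a minimal sketch of this matching and filtering logic, using toy data and hypothetical column names (hospital_id, quarter, n_surveys) rather than the actual CRUSADE or Press Ganey fields:

```python
import pandas as pd

# Toy quarterly summaries; column names are hypothetical, not the actual field names.
crusade = pd.DataFrame({
    "hospital_id": [1, 1, 1, 2, 2, 3],
    "quarter": ["2004Q1", "2004Q2", "2004Q3", "2004Q1", "2004Q2", "2004Q1"],
    "adherence": [0.81, 0.78, 0.84, 0.74, 0.77, 0.86],
})
satisfaction = pd.DataFrame({
    "hospital_id": [1, 1, 1, 2, 2, 3],
    "quarter": ["2004Q1", "2004Q2", "2004Q3", "2004Q1", "2004Q2", "2004Q1"],
    "n_surveys": [12, 8, 2, 20, 15, 4],
    "overall_satisfaction": [89.0, 91.0, 87.0, 85.5, 88.1, 90.2],
})

# Drop satisfaction quarters with fewer than 3 responses.
satisfaction = satisfaction[satisfaction["n_surveys"] >= 3]

# Keep only hospital quarters present in both data sources.
matched = crusade.merge(satisfaction, on=["hospital_id", "quarter"], how="inner")

# Require at least 2 matched quarters per hospital so hospital effects are estimable.
quarters_per_hospital = matched.groupby("hospital_id")["hospital_id"].transform("size")
matched = matched[quarters_per_hospital >= 2]
```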
Data Definitions
We calculated quarterly hospital-level adherence scores from the CRUSADE database for 14 different Class I recommendations from the American College of Cardiology (ACC)/American Heart Association (AHA) evidence-based guidelines for the treatment of AMI. We calculated hospital-level adherence scores for each measure using the same scoring method used by CMS in the Hospital Quality Incentive Demonstration pay-for-performance program.13 That is, we calculated scores for AMI by summing the number of times each therapy was administered and dividing this amount by the sum of total eligible opportunities for all patients at the hospital. We then divided the 14 clinical processes into 3 categories (acute, discharge, and secondary prevention) and calculated separate composite scores for each category using the CMS scoring method. We also calculated an overall hospital-level composite using all 14 measures. Patient eligibility for relevant measures was determined according to defined ACC/AHA guideline indications and reported contraindications. Patients who died at any time during their hospital stay or who were transferred to another hospital were excluded from discharge care assessment. In-hospital mortality was defined as death from any cause during a patient’s hospital stay within the relevant quarter. Inpatient mortality was adjusted for a patient risk score calculated with a logistic model that included demographic and clinical characteristics previously identified to predict risk in a cohort of patients with acute coronary syndrome without persistent ST-segment elevation.14
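A small sketch of this opportunity-based composite scoring follows; the counts are illustrative, not actual CRUSADE data:

```python
from typing import List

def composite_adherence_score(numerators: List[int], denominators: List[int]) -> float:
    """CMS-style opportunity-based composite: total therapies administered divided
    by total eligible opportunities, summed across measures and patients."""
    return sum(numerators) / sum(denominators)

# Illustrative quarterly counts for one hospital: each entry is
# (times therapy administered, eligible opportunities) for one acute measure.
acute = [(31, 33), (29, 31), (28, 32)]
score = composite_adherence_score([n for n, _ in acute], [d for _, d in acute])
print(f"Acute composite adherence: {score:.3f}")
```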
The underlying patient satisfaction data comprised patient satisfaction scores on 9 different dimensions of the hospital experience (nurses, personal issues, admission, physicians, visitors and family, discharge, meals, room, and tests and treatments) and 1 overall patient assessment of this experience. Each of these 10 satisfaction scores was based on multiple questions for that aspect of the experience (supplemental Appendix 1). The overall patient assessment score was the average of 3 questions: “How well staff worked together to care for you”; “Likelihood of your recommending this hospital to others”; and “Overall rating of care given in a hospital.” All patient satisfaction questions were scored on a 5-point scale anchored by the words “very poor” and “very good” and then converted to a 100-point scale where zero represented “very poor” and 100 represented “very good.” Quarterly averages for each hospital were obtained by averaging each score over all surveys received in that quarter.
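A brief sketch of this rescaling and quarterly averaging, using made-up responses; the grouping of 3 questions per survey mirrors the overall assessment score described above:

```python
import statistics

def to_100_point(likert: int) -> float:
    """Map a 1-5 response ('very poor' to 'very good') onto a 0-100 scale."""
    return (likert - 1) * 25.0

# Hypothetical overall-assessment responses from one hospital quarter:
# 3 questions per survey, averaged within each survey, then across surveys.
surveys = [[5, 4, 5], [4, 4, 4], [5, 5, 4]]
per_survey = [statistics.mean(to_100_point(r) for r in q) for q in surveys]
quarterly_overall = statistics.mean(per_survey)
```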
Statistical Analysis
The hospital quarter was the unit of study for all analyses. Pairwise Pearson product-moment correlation coefficients were computed between quarterly hospital patient overall satisfaction scores and the 14 individual quarterly hospital clinical process scores and risk-adjusted inpatient mortality for AMI.
We used multivariable logistic regression to investigate whether patient overall satisfaction was associated with risk-adjusted mortality after controlling for clinical quality. In each of these analyses, the dependent variable was based on risk-adjusted inpatient survival (1−mortality) for the particular hospital quarter. Consequently, hospital quarters with more outcome opportunities were weighted more heavily. The independent variables were based on the overall patient satisfaction score and the composite guideline score for each hospital quarter. We also used weighted least squares (WLS) linear regression, in which the dependent variable was the proportion of surviving AMI patients, and obtained almost identical results. However, because the logistic regression results provide an easy way to compare the relative magnitude of improvement in survival due to changes in both patient satisfaction scores and performance scores, we report only the logistic regression findings. We also fit a mixed-effects model with a random hospital effect to account for correlation of quarterly observations within hospitals. The results of the mixed model were similar—both in direction and magnitude of effect—to the main analyses, so we report only the logistic regression results.
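A minimal sketch of this weighted modeling approach follows, using toy hospital-quarter counts and hypothetical variable names; it is not the authors’ code and assumes scores enter the model on a 0-to-1 scale:

```python
import pandas as pd
import statsmodels.api as sm

# Toy hospital-quarter data; variable names are illustrative only.
quarters = pd.DataFrame({
    "n_patients":           [30, 25, 40, 35, 28, 45],
    "n_survivors":          [29, 23, 38, 34, 26, 44],
    "composite_adherence":  [0.82, 0.74, 0.88, 0.79, 0.71, 0.90],
    "overall_satisfaction": [0.89, 0.85, 0.92, 0.88, 0.83, 0.93],
})

# Binomial logistic regression on (survivors, deaths); supplying counts weights
# each quarter by its number of outcome opportunities, as described in the text.
endog = quarters[["n_survivors"]].assign(
    deaths=quarters["n_patients"] - quarters["n_survivors"])
exog = sm.add_constant(quarters[["composite_adherence", "overall_satisfaction"]])
logit_fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(logit_fit.params)

# Weighted least squares on the survival proportion gives a comparable check.
wls_fit = sm.WLS(quarters["n_survivors"] / quarters["n_patients"], exog,
                 weights=quarters["n_patients"]).fit()
```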
Next, we conducted the Durbin-Wu-Hausman test15 to determine whether the patient overall satisfaction measure was correlated with fixed but unobserved hospital effects such as hospital size and facilities, administrative expertise, and academic affiliation. We performed this test to determine whether it was necessary to control for such fixed effects in our analysis or whether we could use the more efficient estimator obtained from an analysis excluding fixed-effects variables (ie, 25 hospital dummy variables). The Durbin-Wu-Hausman analysis was conducted by running a multivariable logistic regression with mortality as the dependent variable and the following 3 independent variables: the quarterly overall clinical composite score, the quarterly patient overall satisfaction score, and the residual errors from an analysis of quarterly patient overall satisfaction. The residuals come from an equation with overall satisfaction as the dependent variable and 25 hospital dummy variables and quarterly overall clinical performance as independent variables.
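A sketch of this two-step Durbin-Wu-Hausman procedure on toy data with hypothetical names; survival counts stand in for the mortality outcome, which only flips the sign of the coefficients:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data with a hospital identifier; names are illustrative only.
quarters = pd.DataFrame({
    "hospital_id":          [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "n_patients":           [30, 25, 40, 35, 28, 45, 33, 27, 38],
    "n_survivors":          [29, 23, 38, 34, 26, 44, 31, 26, 37],
    "composite_adherence":  [0.82, 0.74, 0.88, 0.79, 0.71, 0.90, 0.77, 0.84, 0.86],
    "overall_satisfaction": [0.89, 0.85, 0.92, 0.88, 0.83, 0.93, 0.86, 0.90, 0.91],
})

# Step 1: regress overall satisfaction on hospital dummies and clinical performance;
# the residual isolates variation unrelated to fixed hospital effects.
first_stage = smf.ols(
    "overall_satisfaction ~ C(hospital_id) + composite_adherence",
    data=quarters).fit()
quarters["satisfaction_resid"] = first_stage.resid

# Step 2: add that residual to the outcome model. A significant residual
# coefficient would signal correlation with omitted fixed hospital effects;
# the paper reports P=0.29, ie, no evidence of such bias.
endog = quarters[["n_survivors"]].assign(
    deaths=quarters["n_patients"] - quarters["n_survivors"])
exog = sm.add_constant(quarters[["composite_adherence", "overall_satisfaction",
                                 "satisfaction_resid"]])
dwh_fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(dwh_fit.pvalues["satisfaction_resid"])
```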
Next, we used a WLS model to determine the association of average answers to each of the individual survey sections (ie, nurses, physicians, meals, etc) with overall patient satisfaction. The unit of analysis was the hospital quarter, and the weights reflected the number of patient surveys in the given quarter.
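A sketch of this weighted least squares model on synthetic section scores; the data-generating coefficients below are arbitrary and serve only to make the example runnable:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
dimensions = ["nurses", "physicians", "personal_issues", "admission",
              "visitors_family", "discharge", "rooms", "tests_treatments", "meals"]

# Synthetic hospital-quarter data standing in for the Press Ganey section scores.
n_quarters = 200
scores = pd.DataFrame(rng.normal(85, 4, size=(n_quarters, len(dimensions))),
                      columns=dimensions)
scores["n_surveys"] = rng.integers(4, 50, size=n_quarters)
scores["overall_satisfaction"] = (0.4 * scores["nurses"] + 0.2 * scores["physicians"]
                                  + rng.normal(0, 2, size=n_quarters) + 34)

# Weight each hospital quarter by the number of surveys it contributed.
X = sm.add_constant(scores[dimensions])
wls_dims = sm.WLS(scores["overall_satisfaction"], X,
                  weights=scores["n_surveys"]).fit()
print(wls_dims.params.sort_values(ascending=False))
```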
Finally, we performed analyses to ascertain whether our study population was representative of the larger Press Ganey and CRUSADE populations that were excluded from the study because we could not match data across the 2 sources. We repeated the analysis of the relationship between overall satisfaction and the 9 different dimensions of patient satisfaction for the 262 hospital quarters of patient data that were excluded because we did not have equivalent hospital quarter clinical data. Additionally, we ran a logistic regression in which the dependent variable was risk-adjusted inpatient mortality and the independent variable was overall clinical performance for the excluded sample of 6082 hospital quarters from CRUSADE hospitals for which we did not have matched patient satisfaction data. We compared the coefficients from these additional models with those from our study data using the Chow F test or the Wald test, depending on whether we used WLS or logistic regression.16
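A sketch of a classic Chow F test of coefficient equality between two samples; for simplicity it uses ordinary rather than weighted least squares and synthetic data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def chow_f_test(y1, X1, y2, X2):
    """Chow test of whether the same regression coefficients fit two samples
    (here: matched study hospital quarters vs excluded hospital quarters)."""
    ssr1 = sm.OLS(y1, X1).fit().ssr
    ssr2 = sm.OLS(y2, X2).fit().ssr
    ssr_pooled = sm.OLS(np.concatenate([y1, y2]), np.vstack([X1, X2])).fit().ssr
    k = X1.shape[1]
    df2 = len(y1) + len(y2) - 2 * k
    f = ((ssr_pooled - (ssr1 + ssr2)) / k) / ((ssr1 + ssr2) / df2)
    return f, stats.f.sf(f, k, df2)

# Synthetic demonstration with one predictor plus an intercept.
rng = np.random.default_rng(1)
X1 = sm.add_constant(rng.normal(size=60)); y1 = X1 @ [1.0, 0.5] + rng.normal(size=60)
X2 = sm.add_constant(rng.normal(size=80)); y2 = X2 @ [1.0, 0.5] + rng.normal(size=80)
f_stat, p_value = chow_f_test(y1, X1, y2, X2)
```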
All analyses were performed using JMP version 7.0.2 (SAS Institute, Inc, Cary, NC). P<0.05 was considered statistically significant.
Results
The 203 hospital quarterly observations from the 25 study hospitals are based on a total of 3562 completed patient satisfaction surveys (average, 18 surveys per observation) and clinical data on 6467 patients in the CRUSADE registry (average, 32 patients per observation). Table 1 shows the diversity of our hospital sample on 4 dimensions: academic affiliation, size, geography, and structural resources. We have also included the total population of CRUSADE hospitals and CRUSADE patients for comparison; overall, our study population had similar characteristics. The median number of quarters per hospital in our final dataset was 8 (interquartile range, 2 to 20), and the median number of patients surveyed per hospital quarter was 18 (interquartile range, 4 to 51).
Table 1. Characteristics of Study Hospitals and Patients Compared With the CRUSADE Registry

| Characteristic | Study Population (n=25 Hospitals), n (%)* | CRUSADE Registry (n=568 Hospitals), n (%) |
|---|---|---|
| Academic affiliation | | |
| Teaching | 7 (28) | 144 (25) |
| Community | 18 (72) | 424 (75) |
| Size, No. of beds, median (IQR) | 372 (204–522) | 318 (210–462) |
| Region | | |
| West | 2 (8) | 79 (14) |
| Northeast | 8 (32) | 133 (23) |
| Midwest | 7 (28) | 144 (25) |
| Southeast | 8 (32) | 211 (37) |
| Cardiology resources (highest level) | | |
| No services | 0 (0) | 56 (10) |
| Diagnostic catheterization | 4 (16) | 68 (12) |
| Percutaneous coronary intervention | 2 (8) | 48 (8.5) |
| Cardiac surgery | 19 (76) | 396 (70) |
| Patient characteristics | n=6949 | n=179 073 |
| Age, mean | 67.1 | 67.3 |
| Female sex | 2696 (38.8) | 70 555 (39.3) |
| Nonwhite race | 931 (13.4) | 31 696 (17.7) |
| Diabetes mellitus | 2210 (31.8) | 59 273 (33.1) |
| Prior myocardial infarction | 1855 (26.7) | 52 468 (29.3) |
| Dyslipidemia | 3607 (51.9) | 88 820 (49.6) |
| Current or recent smoker | 2022 (29.1) | 48 887 (27.3) |
| Family history of coronary heart disease | 2759 (39.1) | 61 422 (34.3) |

IQR indicates interquartile range.
*Unless otherwise indicated.
Table 2 shows the variation of quarterly hospital-level guideline adherence scores and risk-adjusted inpatient mortality for AMI. Table 3 displays the median and interquartile quarterly hospital-level patient satisfaction scores for cardiac admissions for each of the 9 dimensions, as well as the overall satisfaction measure. As can be seen from these tables, there is substantial diversity in our sample of hospitals and scores. Moreover, there is more variation among the clinical scores than among the patient satisfaction scores.
Table 2. Quarterly Hospital-Level Guideline Adherence Scores and Risk-Adjusted Inpatient Mortality for AMI

| Measure | 25% | Median | 75% | Mean No. of Patients per Observation |
|---|---|---|---|---|
| Acute measures | | | | |
| Aspirin within 24 h | 94.0 | 97.8 | 100 | 33 |
| β-Blocker within 24 h | 84.6 | 93.8 | 98.3 | 31 |
| Heparin, any | 84.2 | 90.8 | 97.6 | 32 |
| Glycoprotein IIb/IIIa inhibitor | 41.2 | 55.9 | 69.6 | 27 |
| Cardiac catheterization within 48 h | 58.1 | 75.3 | 84.7 | 30 |
| ECG within 10 min | 26.0 | 37.5 | 50.3 | 25 |
| Acute composite | 70.4 | 75.5 | 80.9 | 177† |
| Discharge measures | | | | |
| Aspirin at discharge | 92.7 | 97.1 | 100 | 28 |
| β-Blocker at discharge | 87.8 | 95.2 | 100 | 28 |
| ACEi or ARB for LVSD | 66.7 | 80.0 | 100 | 5 |
| Clopidogrel at discharge | 60.0 | 75.9 | 89.7 | 27 |
| Lipid-lowering agent | 78.6 | 88.9 | 95.3 | 21 |
| Discharge composite | 81.2 | 87.5 | 92.0 | 109† |
| Secondary prevention measures | | | | |
| Smoking cessation | 75.0 | 93.9 | 100 | 9 |
| Dietary modification | 71.3 | 92.2 | 100 | 31 |
| Cardiac rehabilitation | 25.0 | 63.3 | 88.9 | 27 |
| Secondary prevention composite | 58.3 | 77.9 | 91.9 | 67 |
| Overall clinical composite score | 71.5 | 80.0 | 84.6 | 353† |
| Risk-adjusted inpatient mortality rate | 0 | 3.60* | 5.33 | 32 |

ACEi indicates angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; and LVSD, left ventricular systolic dysfunction.
*Weighted mean.
†Total of all patient opportunities.
Table 3. Quarterly Hospital-Level Patient Satisfaction Scores for Cardiac Admissions

| Measure | 25% | Median | 75% | Mean No. of Patient Surveys per Observation |
|---|---|---|---|---|
| Patient satisfaction measures | | | | |
| Admissions | 81.8 | 85.9 | 89.2 | 17 |
| Discharge | 80.0 | 83.3 | 86.9 | 17 |
| Meals | 75.0 | 79.2 | 83.1 | 17 |
| Nurses | 85.1 | 88.4 | 91.5 | 18 |
| Personal issues | 81.3 | 84.3 | 87.6 | 17 |
| Physicians | 83.2 | 87.0 | 90.0 | 17 |
| Rooms | 76.3 | 79.7 | 83.7 | 18 |
| Tests and treatments | 82.4 | 85.0 | 87.5 | 17 |
| Visitors and family | 82.4 | 85.8 | 89.3 | 16 |
| Overall satisfaction | 86.2 | 89.2 | 92.4 | 18 |
Table 4 reports the correlations between the quarterly hospital-level patient overall satisfaction scores for cardiac admissions and adherence to the 14 quality measures. Overall satisfaction was positively correlated with 13 of these 14 measures, although only 4 measures were significant at the P=0.05 level. However, at a more aggregate level, we found that patient satisfaction was significantly and positively correlated with the acute, discharge, and overall composite clinical measures. In addition, higher satisfaction scores were associated with lower risk-adjusted inpatient mortality rates (R=−0.216, P=0.002).
Table 4. Correlation of Quarterly Hospital-Level Overall Patient Satisfaction With Guideline Adherence and Mortality

| Variable | Correlation Coefficient With Overall Patient Satisfaction | P Value |
|---|---|---|
| Acute clinical measures | | |
| Aspirin at arrival | 0.114 | 0.106 |
| β-Blocker at arrival | 0.117 | 0.097 |
| Heparin, any | 0.086 | 0.221 |
| Glycoprotein IIb/IIIa inhibitor | 0.054 | 0.45 |
| Cardiac catheterization within 48 h | 0.183 | 0.009* |
| ECG within 10 min | 0.014 | 0.845 |
| Acute composite | 0.148 | 0.035* |
| Discharge clinical measures | | |
| Aspirin at discharge | 0.13 | 0.07 |
| β-Blocker at discharge | 0.147 | 0.04* |
| ACEi or ARB for LVSD | 0.101 | 0.176 |
| Clopidogrel at discharge | 0.161 | 0.023* |
| Lipid-lowering agent | 0.199 | 0.005* |
| Discharge composite | 0.215 | 0.002* |
| Secondary prevention clinical measures | | |
| Smoking cessation | 0.114 | 0.118 |
| Dietary modification | 0.119 | 0.091 |
| Cardiac rehabilitation | −0.003 | 0.965 |
| Secondary prevention composite | 0.080 | 0.255 |
| Overall clinical composite score | 0.163 | 0.021* |
| Risk-adjusted inpatient mortality rate | −0.216 | 0.002* |

ACEi indicates angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; and LVSD, left ventricular systolic dysfunction.
*P<0.05.
The regression associated with the Durbin-Wu-Hausman analysis was significant at the P=0.01 level. More importantly, the coefficient on the residual variable was not significant (P=0.29), indicating that the patient overall satisfaction score is not correlated with omitted fixed hospital effects and thus that our estimates are not biased by excluding fixed hospital effects from the analyses.
Table 5 presents the logistic regression estimates for both the univariable and multivariable analyses in which the dependent variable is 1−risk-adjusted mortality, that is, survival. As these results show, both the overall clinical performance score and the patient overall satisfaction score for cardiac admissions are significantly and positively associated with survival for AMI, even after controlling for the other factor (P=0.001 and P=0.025, respectively).
Table 5. Univariable and Multivariable Logistic Regression Estimates for Risk-Adjusted Inpatient Survival

| Variable | Univariable Estimate | SE | χ2 Statistic | P Value | Multivariable Estimate | SE | χ2 Statistic | P Value |
|---|---|---|---|---|---|---|---|---|
| Composite guideline adherence score | 2.37 | 0.64 | 13.77 | <0.001 | 2.09 | 0.65 | 10.38 | 0.001 |
| Overall patient satisfaction | 3.51 | 1.22 | 8.22 | 0.004 | 2.82 | 1.26 | 5.02 | 0.025 |
To better interpret the managerial significance of these results, we performed sensitivity analyses to determine the change in predicted survival associated with a 1-quartile change in the patient satisfaction score while keeping the clinical composite score fixed, and vice versa. Each 1-quartile change was made in reference to the previous quartile (ie, 0 to 25, 25 to 50, 50 to 75, and 75 to 100). One-quartile changes in patient satisfaction scores were associated with higher risk-adjusted survival over all 4 quartiles of change (odds ratios, 1.87, 1.09, 1.09, and 1.24, respectively; all P<0.05) (Figure). One-quartile changes in patient satisfaction scores produced very similar increases in predicted survival compared with 1-quartile changes in composite guideline adherence scores. For example, a 1-quartile change (75th to 100th) in either the patient satisfaction score or the guideline adherence score yielded the same change in predicted survival (odds ratio, 1.24). As might be expected, larger changes in survival were observed when moving from the lowest-scoring hospital to the 25th percentile and from the 75th percentile to the highest-scoring hospital. Also, changes in clinical performance had more impact in hospitals below the median, whereas little to no difference between the 2 scores was observed in terms of changes in survival for hospitals above the median.
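As an illustration of how such a quartile odds ratio follows from the logistic estimates in Table 5, a short calculation is sketched below. It assumes satisfaction entered the model on a 0-to-1 scale; the 75th percentile (0.924) is taken from Table 3, and the maximum of 1.0 is an assumption, not a reported value:

```python
import numpy as np

# Odds ratio for moving a predictor from one score to another is exp(beta * delta),
# with scores on the same scale used when fitting the model (assumed 0-1 here).
beta_satisfaction = 2.82      # multivariable estimate from Table 5
q75, q_max = 0.924, 1.0       # 75th percentile from Table 3; assumed maximum
odds_ratio = np.exp(beta_satisfaction * (q_max - q75))
print(round(odds_ratio, 2))   # ~1.24 under these assumptions, close to the reported value
```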
Figure. Change in predicted risk-adjusted inpatient survival associated with 1-quartile improvements in patient satisfaction scores while keeping the composite guideline adherence score fixed, and vice versa. One-quartile changes in patient satisfaction scores were associated with higher risk-adjusted inpatient survival over all 4 quartiles of improvement (odds ratios, 1.87, 1.09, 1.09, and 1.24, respectively; all P<0.05). In multivariable analysis, 1-quartile improvements in patient satisfaction scores produced very similar increases in predicted inpatient survival compared with 1-quartile improvements in composite guideline adherence scores.
Table 6 presents the WLS results in which the independent measures are the average quarterly scores from the patients’ evaluations of the 9 different dimensions of their hospital experience and the dependent variable is the quarterly patient overall satisfaction score. Significant predictors of patient satisfaction, in descending order, were nursing care, physicians, personal issues, the admission process, and visitors and family.
Table 6. Weighted Least Squares Estimates of the Association of Survey Section Scores With Overall Patient Satisfaction

| Term | Estimate | Standard Error | t Ratio | P Value |
|---|---|---|---|---|
| Nurse section | 0.393 | 0.061 | 6.49 | <0.001 |
| Physician section | 0.176 | 0.056 | 3.14 | 0.002 |
| Personal issues section | 0.202 | 0.071 | 2.85 | 0.005 |
| Admission section | 0.106 | 0.044 | 2.40 | 0.017 |
| Visitors and family section | 0.124 | 0.061 | 2.04 | 0.043 |
| Discharge section | 0.082 | 0.058 | 1.40 | 0.163 |
| Room section | −0.057 | 0.050 | −1.14 | 0.254 |
| Tests and treatments section | −0.075 | 0.077 | −0.98 | 0.329 |
| Meals section | 0.041 | 0.045 | 0.92 | 0.360 |
There was no significant difference in the coefficients relating overall satisfaction to the 9 different dimensions of patient satisfaction between our study population and the 262 hospital quarters of patient data that were excluded because we did not have equivalent hospital quarter clinical data (Chow test: F(10,443)=0.548; P=0.85). Nor was there any difference in the coefficients from the regression of mortality on hospital-level clinical performance between our study population and the excluded sample of 6082 hospital quarters from CRUSADE hospitals for which we did not have matched patient satisfaction data (Wald χ2=0.96; P=0.99). These findings suggest that our results generalize to at least the population of excluded hospital quarters.
Discussion
The Institute of Medicine has identified patient-centered care, or care that is “respectful of and responsive to individual patient preferences, needs, and values and ensures that patient values guide all clinical decisions,” as a key quality domain.17 Consistent with this notion, when we controlled for a hospital’s clinical performance, higher hospital-level patient satisfaction scores were independently associated with lower hospital inpatient mortality rates. This suggests that patients’ assessment of their care provides important and valid information to consumers and hospital managers about the overall quality of hospital care beyond clinical process measures. We believe this finding is new to the literature and has important implications not only for how to measure quality but also for how to manage it.
To our knowledge, this is the first study to evaluate the association between patient satisfaction and mortality after adjusting for clinical quality. Jha et al,18 using data from 2429 hospitals reporting CMS-obtained patient satisfaction data for the year 2007, found a strong positive correlation between patient overall satisfaction and clinical performance. Our study confirms and extends these findings, and we found that patient satisfaction was an independent predictor of risk-adjusted inpatient mortality. Jaipaul and Rosenthal19 previously reported a negative correlation between patient overall satisfaction and unadjusted mortality rates in a study of 29 hospitals in Northeast Ohio. That study, however, was limited to a cohort of hospitals in a small geographic area and did not adjust for clinical quality or patient risk factors when evaluating the relationship between patient satisfaction and outcomes.
To gain deeper insight into which experiences patients draw on when responding to the overall satisfaction questions, we examined the individual survey items. Hospitals that scored high on questions such as “skill of nurses (physician),” “how well the nurses (physician) kept you informed,” “amount of attention paid to your special or personal needs,” “how well your pain was controlled,” “the degree to which the hospital staff addressed your emotional needs,” “physician’s concern for your questions and worries,” “time physician spent with you,” and “staff efforts to include you in decisions about your treatment” also tended to score high on patient overall satisfaction. In contrast, scores on questions concerning the room (eg, “room temperature and pleasantness of room décor”), meals (eg, “quality of food, temperature of food”), tests (eg, “waiting time for tests or treatment”), and discharge (eg, “speed of discharge process”) were not associated with the patient overall satisfaction score. Moreover, patient satisfaction with nursing care was the most important determinant of patient overall satisfaction, highlighting an important area for further quality improvement efforts and underscoring the role of the entire health care team in the in-hospital treatment of patients with AMI.
We believe these results have implications for measuring and managing the quality of medical care. First, these results support the premise that patients are a credible source of valid information for assessing and managing the quality of medical care and that this information represents a different view of quality than a hospital’s adherence to clinical performance measures. Second, this source of information should be very useful in helping managers identify ways to improve the overall quality level of the hospital. Our results imply that the association of changes in patient satisfaction with mortality was almost as large as that associated with changes in process performance.
Our findings also imply that increasing patient overall satisfaction will require attention to specific aspects of the patient’s experience, because patients seem to differentiate between the technical and nontechnical aspects of medical care. Consistent with this observation, early invasive management (catheterization) was the clinical practice guideline most strongly associated with patient satisfaction and has previously been associated with a lower risk of inpatient mortality.20 Consequently, increasing the patient overall satisfaction score is less about making patients “happy” (eg, improving the food, room décor, etc) and more about improving the quality of care and the interactions between patients and staff, particularly the nurses and physicians.
Our results also highlight that the quality of care includes actions other than those measured by clinical performance measures. This is particularly true for actions associated with nurses, an area that is not well captured by current clinical performance measures.21 In this study, the largest independent predictor of patient overall satisfaction was patient satisfaction with nursing care. A growing body of evidence supports a robust relationship between the quality of nursing care and patient safety and outcomes,22,23 and continued efforts are needed to measure and improve the quality of nursing care.24 We surmise that it may be efficient to capture specific aspects of patient satisfaction with nursing care (eg, quality of discharge planning) by asking patients for feedback. A similar process could be used to assess the quality of discharge planning in an effort to reduce readmission rates and outpatient mortality.25 These applications highlight the potential value of patient satisfaction data, not only to provide consumers with more information about patient experiences, but also to help managers evaluate hospital actions aimed at improving the quality of care.
The present study has several potential limitations. First, our sample was limited to hospitals that participated in CRUSADE and collected patient satisfaction data. This sample included a diverse group of hospitals with respect to size, academic affiliation, and geography but was biased toward hospitals with full invasive and revascularization capabilities; thus, our results may not be generalizable to hospitals without revascularization capabilities. In addition, although one could argue that these hospitals, by virtue of their participation in CRUSADE, are more motivated toward quality improvement than the average hospital, we have no plausible explanation for why the interrelationship between quality, satisfaction, and outcomes would be fundamentally different in these hospitals compared with a national cohort.
Second, although our study population is smaller than in some previously published reports of patient satisfaction,18 a smaller sample should actually bias against finding a significant association between satisfaction and outcomes. Moreover, as discussed above, whenever we were able to compare our results with larger samples of Press Ganey and CRUSADE hospitals, we found a strong correspondence, and our univariable results are similar to those reported elsewhere.19 We take these findings to suggest that our sample is representative of a more general population of hospitals and that, although our sample sizes are not large, our findings are not caused by random error.
Third, our study is limited to AMI, so the results are not necessarily generalizable to other medical or surgical conditions. Fourth, there is a potential censored-sample bias because we obviously could not obtain satisfaction data from patients who died. This phenomenon, however, would actually bias against finding an association between patient satisfaction and hospital outcomes.
Finally, it is important to note that by testing for endogeneity, we were able to address the possibility that patient satisfaction scores are merely a proxy for some fixed, unobserved hospital effect, such as managerial competence or hospital facilities, that is itself driving mortality. In addition, when we performed models that included hospital structural characteristics (eg, size, academic affiliation, geography, cardiology services), we obtained nearly identical results. These results give us assurance that we are probably observing the true association between patient satisfaction and mortality rather than an association arising from other unmeasured factors.
Conclusion
Higher patient satisfaction is associated with lower inpatient mortality rates even after controlling for performance guideline adherence, suggesting that patients are good discriminators of the type of care they receive. Thus, patients’ satisfaction with their care provides important incremental information on the quality of their care and care providers.
The online-only Data Supplement is available at http://circoutcomes.ahajournals.org/cgi/content/full/CIRCOUTCOMES.109.900597/DC1.
Sources of Funding
CRUSADE is funded by the Schering-Plough Corporation. Bristol-Myers Squibb/Sanofi-Aventis Pharmaceuticals Partnership provides additional funding support. Millennium Pharmaceuticals, Inc, also funded this work. There was no direct funding for this analysis. Dr Glickman is supported by a Physician Faculty Scholar award from the Robert Wood Johnson Foundation.
Disclosures
Drs Roe, Ohman, Peterson, and Schulman have made available detailed listings of disclosure information at: http://www.dcri.duke.edu/research/coi.jsp. No other authors reported financial disclosures. All analyses were performed independently at Duke University. Press Ganey had no direct role in the data analysis or drafting of the manuscript.
References
- 1 Press I. Patient Satisfaction: Understanding and Managing the Experience of Care. 2nd ed. Ann Arbor, Mich: Health Administration Press; 2006.
- 2 Turnbull JE, Hembree WE. Consumer information, patient satisfaction surveys, and public reports. Am J Med Qual. 1996; 11: S42–S45.
- 3 Barr JK, Giannotti TE, Sofaer S, Duquette CE, Waters WJ, Petrillo MK. Using public reports of patient satisfaction for hospital quality improvement. Health Serv Res. 2006; 41: 663–682.
- 4 Barr JK, Boni CE, Kochurka KA, Nolan P, Petrillo M, Sofaer S, Waters W. Public reporting of hospital patient satisfaction: the Rhode Island experience. Health Care Financ Rev. 2002; 23: 51–70.
- 5 HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) facts. Centers for Medicare and Medicaid Services Web site. Available at: http://www.cms.hhs.gov/apps/media/press/factsheet.asp?Counter=3007&intNumPerPage=10&checkDate=&checkKey=&srchType=1&numDays=3500&srchOpt=0&srchData=&keywordType=All&chkNewsType=6&intPage=&showAll=&pYear=&year=&desc=false&cboOrder=date. Updated March 28, 2008. Accessed March 23, 2009.
- 6 Darby C, Hays RD, Kletke P. Development and evaluation of the CAHPS hospital survey. Health Serv Res. 2005; 40: 1973–1976.
- 7 Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients’ perspective: an overview of the CAHPS Hospital Survey development process. Health Serv Res. 2005; 40: 1977–1995.
- 8 Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006; 295: 1912–1920.
- 9 Glickman SW, Ou FS, DeLong ER, Roe MT, Lytle BL, Mulgund J, Rumsfeld JS, Gibler WB, Ohman EM, Schulman KA, Peterson ED. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007; 297: 2373–2380.
- 10 Staman KL, Roe MT, Fraulo ES, Lytle BL, Gibler WB, Ohman EM, Peterson ED. Quality improvement tools designed to improve adherence to ACC/AHA guidelines for the care of patients with non–ST-segment acute coronary syndromes: the CRUSADE quality improvement initiative. Crit Pathw Cardiol. 2003; 2: 34–40.
- 11 Shah BR, Glickman SW, Liang L, Gibler WB, Ohman EM, Pollack CV Jr, Roe MT, Peterson ED. The impact of for-profit hospital status on the care and outcomes of patients with non–ST-segment elevation myocardial infarction: results from the CRUSADE Initiative. J Am Coll Cardiol. 2007; 50: 1462–1468.
- 12 Hoekstra JW, Pollack CV Jr, Roe MT, Peterson ED, Brindis R, Harrington RA, Christenson RH, Smith SC, Ohman EM, Gibler WB. Improving the care of patients with non–ST-elevation acute coronary syndromes in the emergency department: the CRUSADE initiative. Acad Emerg Med. 2002; 9: 1146–1155.
- 13 Premier Inc. Centers for Medicare and Medicaid Services (CMS)/Premier Hospital Quality Improvement Demonstration (HQID) project: findings from year two. Available at: http://www.premierinc.com/quality-safety/tools-services/p4p/hqi/resources/hqi-whitepaper-year2.pdf. Accessed November 18, 2007.
- 14 Boersma E, Pieper KS, Steyerberg EW, Wilcox RG, Chang WC, Lee KL, Akkerhuis KM, Harrington RA, Deckers JW, Armstrong PW, Lincoff AM, Califf RM, Topol EJ, Simoons ML. Predictors of outcome in patients with acute coronary syndromes without persistent ST-segment elevation: results from an international trial of 9461 patients. Circulation. 2000; 101: 2557–2567.
- 15 Davidson R, MacKinnon JG. Estimation and Inference in Econometrics. New York: Oxford University Press; 1993.
- 16 Chow GC. Tests of equality between sets of coefficients in two linear regressions. Econometrica. 1960; 28: 591–605.
- 17 Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
- 18 Jha AK, Orav EJ, Zheng J, Epstein AM. Patients’ perceptions of hospital care in the United States. N Engl J Med. 2008; 359: 1921–1931.
- 19 Jaipaul CK, Rosenthal GE. Do hospitals with lower mortality have higher patient satisfaction? A regional analysis of patients with medical diagnoses. Am J Med Qual. 2003; 18: 59–65.
- 20 Bhatt DL, Roe MT, Peterson ED, Li Y, Chen AY, Harrington RA, Greenbaum AB, Berger PB, Cannon CP, Cohen DJ, Gibson CM, Saucedo JF, Kleiman NS, Hochman JS, Boden WE, Brindis RG, Peacock WF, Smith SC Jr, Pollack CV Jr, Gibler WB, Ohman EM; CRUSADE Investigators. Utilization of early invasive management strategies for high-risk patients with non–ST-segment elevation acute coronary syndromes: results from the CRUSADE Quality Improvement Initiative. JAMA. 2004; 292: 2096–2104.
- 21 Kurtzman ET, Dawson EM, Johnson JE. The current state of nursing performance measurement, public reporting, and value-based purchasing. Policy Polit Nurs Pract. 2008; 9: 181–191.
- 22 Needleman J, Kurtzman ET, Kizer KW. Performance measurement of nursing care: state of the science and the current consensus. Med Care Res Rev. 2007; 64 (2 Suppl): 10S–43S.
- 23 Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002; 346: 1715–1722.
- 24 Kurtzman ET, Corrigan JM. Measuring the contribution of nursing to quality, patient safety, and health care outcomes. Policy Polit Nurs Pract. 2007; 8: 20–36.
- 25 Ross JS, Mulvey GK, Stauffer B, Patlolla V, Bernheim SM, Keenan PS, Krumholz HM. Statistical models and patient predictors of readmission for heart failure: a systematic review. Arch Intern Med. 2008; 168: 1371–1386.