
Closing the Care Gap

A Primer on Quality Improvement for Heart Failure Clinicians
Originally published in Circulation: Heart Failure. 2017;10:e003722


    Quality improvement (QI) initiatives have become an integral part of patient-centered care. In this primer, we outline 6 steps for initiating, implementing, and monitoring improvement in heart failure care: acknowledging that improvement is needed and setting a culture for improvement; forming a QI team; understanding the local problem; generating improvement strategies that fit the local problem; monitoring, testing, and refining improvements; and analyzing the data collected and interpreting run charts. This primer provides tools and resources for clinicians who want to learn how to perform QI specifically in the field of heart failure. We illustrate the application of these steps using a hypothetical example of a congestive heart failure postdischarge clinic.

    Hypothetical Clinical Scenario

    You are in charge of the heart failure (HF) program at your institution. Your day starts with reviewing admissions from the night before. You notice that Mr X, a 70-year-old man with ischemic cardiomyopathy and a left ventricular ejection fraction of 23%, is readmitted. He has New York Heart Association class III–IV symptoms and is on guideline-directed medical therapy including bisoprolol 10 mg QD, ramipril 10 mg QD, spironolactone 25 mg QD, digoxin 0.0625 mg QD, acetylsalicylic acid 81 mg QD, and furosemide 80 mg BID. He was discharged 7 days before with an HF exacerbation precipitated by ibuprofen use for his gout symptoms. During this previous admission, his furosemide dose was increased from 60 mg QD to 80 mg BID. He is now readmitted to your service with syncope, volume depletion, and prerenal acute kidney injury. You move on to your service chief meeting and raise Mr X’s case with the hospital administration team. An internal audit of your institution shows an HF 30-day readmission rate of 36% with an average readmission time of 12 days postdischarge. Your hospital’s readmission rate is significantly higher than the national average of 22%. There has been growing interest in QI, with several initiatives underway at your institution. You have been asked to launch a QI intervention to reduce hospital readmissions for patients like Mr X. How do you proceed?

    Quality indicators in HF are essential in the era of healthcare funding reform outlined by the Centers for Medicare & Medicaid Services. Simple reporting of quality indicators is no longer sufficient; there is now an expectation for sustained improvement.1 Beyond targeting readmissions as a focus of pay-for-performance incentives, key motivators of change include providing excellent patient care in a pleasant, safe, efficient, and effective environment. This QI primer will outline 6 initial steps to improve HF care, focusing on ways to keep patients like Mr X well and out of hospital:

    1. Acknowledging that improvement is needed and setting a culture for improvement

    2. Forming a QI team

    3. Understanding the local problem

    4. Generating improvement strategies that will fit with the local problem

    5. Monitoring, testing, and refining improvements

    6. Analysis of data collected and interpretation of run charts

    This primer does not address strategies for maintaining sustainability and scaling of QI initiatives. This primer uses a hypothetical case and hypothetical data to highlight principles of QI, outlined in bold throughout the primer.

    Step 1: Acknowledging That Change Is Needed and Setting a Culture

    HF is the most common cause of hospitalization in patients >65 years of age, accounting for >6.5 million hospital days annually.2 Since 2009, HF rehospitalization has been a US national priority, with targeted QI campaigns and mandatory reporting of 30-day readmission rates.1 In the recent American Heart Association Get With The Guidelines-Heart Failure registry, all-cause readmission rates declined only slightly, from 20% in 2009 to 19% in 2012, and only a few hospitals have experienced significant success.3 Although there are known risk factors for hospitalization, there are no robust or actionable risk models that predict rehospitalization.4,5 Criticisms have been directed at the 30-day readmission metric: it does not take into account the competing risk of mortality (patients who die cannot be readmitted), and, given the natural history of HF, some patients are appropriately readmitted for evaluation for advanced HF therapies. Studies suggest that 25% to 50% of postdischarge adverse events resulting in readmissions are potentially preventable or ameliorable, emphasizing the need for a multidisciplinary approach, patient and family engagement, improved transitions of care, and attention to the postdischarge period to keep patients like Mr X well.6,7

    An organization committed to improving quality must establish a culture of improvement that filters from the top down. The clinicians invested in Mr X’s care may feel uncomfortable when the HF chief and highly regarded hospital administrators call a meeting to ask questions about Mr X’s care. As with implementing patient safety programs, the administrators and chief need to ensure a just culture in which the staff most responsible for the patient’s care can freely discuss issues and opportunities to improve.8

    Step 2: Forming a QI Team

    QI depends on a collaborative effort with the engagement of many health professionals. Frontline clinicians are frequently the ones who can implement change, and physician involvement is essential for a successful quality and safety intervention.9 Unfortunately, only a minority of physicians participate in QI initiatives; barriers include a lack of dedicated time, a lack of QI skills, and insufficient financial compensation.9,10 Suggested members of the QI team are outlined in the Table. The effectiveness of a QI team depends on opinion leaders, who are typically knowledgeable, humanistic, and excellent communicators, and on champions, who are good at addressing organizational barriers to change but do not necessarily have to be knowledgeable about the specific clinical topic or local workflow issues. Identifying an opinion leader is crucial; the study by Sharara et al11 summarizes methods for identifying opinion leaders through social networks and the acquisition of survey information.

    Table 1. Members of a Quality Improvement Initiative Team With the Following Roles

    Job Title | Role
    QI champion* | A well-respected clinician who will enable change
    Opinion leader* | An influential, well-regarded colleague who can exert influence on decision making and practice patterns
    QI advisor | Expert on QI methodology
    Day-to-day leaders | Residents, pharmacists, and nurses who provide leadership in the completion of tasks, dissemination of change plans, data collection, analysis, and implementation of change
    Patient representative | Allows for better understanding of patient experiences and improvement of communication between healthcare staff and patients

    QI indicates quality improvement.

    *For an in-depth distinction between QI champion and opinion leader, see the study by Greenhalgh et al.12

    Mr X’s case prompted the formation of a QI taskforce that included members as outlined in the Table. As specific funding was not available for this QI initiative, resources had to be repurposed and staff expressing an interest in QI were assigned to specific tasks. The gratification associated with QI is not instant, and as such, it can be difficult to have fully committed team members. Providing personal recognition to participating staff, career promotion, and allowing for skill development opportunities have facilitated staff involvement in our current QI initiatives.

    Step 3: Understanding the Problem: Identifying Root Causes

    It is difficult to fix a problem that is not well understood. Improvement efforts must be based on a deep understanding of the local system of care and its problems, and the QI team should seek many perspectives to ensure that the local system is well understood. A cause and effect diagram (also called an Ishikawa or fishbone diagram) is a useful method for summarizing and understanding the various root causes that may be contributing to the problem at hand. Although we recognize that the evidence for effectively reducing hospital readmissions is limited, having patients with HF assessed by a cardiologist within 7 days of discharge and implementing comprehensive discharge summaries are among the best practice strategies that may contribute to reducing HF readmissions.13–18

    The QI team started the brainstorming process by analyzing the care provided to Mr X. The discharging junior physician and the nurse were initially concerned about the attention being paid to Mr X’s readmission and were under the impression that they would be criticized for the care they provided. After reassurance from the HF chief that the hospital operates under a just culture, they actively participated in the root cause analysis. The review of Mr X’s care uncovered several causes: daily weights were inconsistent, and a discharge weight was not mentioned on the discharge summary. Furthermore, Mr X speaks limited English, but at no point during his hospitalization was an interpreting service used to communicate with him. There was a clear medication error in the discharge summary, with a doubling of the ramipril dose. Additionally, discharge instructions around dose adjustments and which symptoms to look out for were unclear. Finally, although he did have a follow-up appointment with his cardiologist, it was scheduled for 3 months after his initial discharge date. Using these details, the team generated a preliminary cause and effect diagram (Figure 1), which the service chief presented at rounds 2 weeks later for analysis and feedback by the rest of the HF team.

    Figure 1.

    Figure 1. Ishikawa diagram depicting causes of readmission based on Mr X’s case review. A cause and effect diagram (also called an Ishikawa or fishbone diagram) is a useful method for summarizing and understanding the various root causes that may be contributing to the problem at hand. In this diagram, root causes are categorized into process, providers, environment, patient, materials, and equipment. Red stars mark causes addressed as plan–do–study–act (PDSA) ramps throughout the text.

    Step 4: Generating and Testing Improvement Strategies That Will Fit With the Local Problem

    While conducting QI initiatives, an overarching framework that tests ideas that are expected to lead to an improvement is needed. Based on the original cause and effect diagram, change concepts addressing root causes can stimulate improvement ideas.

    In Mr X’s case, the team hypothesized that initiating a Quality Heart Failure (QHF) clinic could address some fundamental causes for readmission outlined in Figure 1. The team decided to focus on 2 overarching root causes: predischarge education and follow-up within 72 hours of discharge at the QHF clinic. One of the core elements of the QHF clinic is to see patients within 72 hours of discharge; based on Mr X’s case, rapid reassessment might allow reinforcement of the predischarge education, correction of medication discrepancies, and assessment of medication adherence. The QHF clinic delivers multidisciplinary support, including a dietitian to review salt and fluid management. Patients will also receive self-management education and will be supplied with a scale and a vitals booklet in which to record day-to-day changes.16

    It is unclear whether any of these core elements of the QHF clinic would reduce readmission rates; however, each root cause would need to be first assessed and then tested through a framework of change with a continuous assessment of outcomes such as readmission and unscheduled healthcare utilization.

    The Model for Improvement is a framework for iterative implementation of local improvements. This strategy allows changes to be tested in a low-risk environment to determine efficacy before large-scale adoption. This model begins with 3 questions:

    1. What are we trying to accomplish?

    2. How will we know that change is an improvement?

    3. What changes can we make that will result in a meaningful improvement?

    The process for generating an improvement strategy begins by defining a concise aim statement. The team might make the following statement: by August 2018, we will reduce our 30-day HF hospital readmission rate from 36% to 22%. The outcome of readmissions will be monitored for the duration of the improvement project.

    The second step is to ensure that the changes are having the desired impact. QI teams need to measure processes that show that the system has changed as intended; if teams do not see these desired process changes, they cannot reasonably expect to see any change in the outcome of readmissions. For example, if the improvement is predischarge education to improve transitions of care, then a process measure could be the proportion of patients who report no longer using a salt shaker with meals. Fidelity is the degree to which a change is implemented as intended. The QI team can measure the fidelity of the predischarge education with this process measure and might arbitrarily set a minimum acceptable fidelity of 70%. If fidelity is <70%, then the probability of an impact on readmissions is significantly attenuated, and broader dissemination is difficult to justify. Achieving high fidelity of implementation is a major challenge in QI.
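    As a concrete sketch, fidelity can be computed as a simple proportion checked against the team’s minimum acceptable threshold. The function name and the counts below are illustrative, not from the article; the 70% default mirrors the arbitrary threshold discussed above.

```python
def fidelity(adherent: int, total: int, minimum: float = 0.70):
    """Return the observed fidelity (proportion of patients in whom the
    change was implemented as intended) and whether it meets the
    minimum acceptable threshold."""
    proportion = adherent / total
    return proportion, proportion >= minimum

# Hypothetical clinic week: 15 of 20 patients report no longer adding salt.
observed, meets_target = fidelity(15, 20)  # (0.75, True)
```

    A week in which only 13 of 20 patients adhere would fall below the 70% threshold, flagging that the education process needs refinement before any effect on readmissions can reasonably be expected.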

    Any change can lead to unintended consequences. For example, the emphasis on medication compliance may lead patients to adhere tightly to their prescribed furosemide regimen even during sick days, causing overdiuresis with subsequent acute kidney injury and falls; alternatively, patients, through clinic-based education, may be erroneously inspired to increase their potassium supplementation, which could lead to hyperkalemia. Therefore, a balanced set of measures that incorporates and assesses intended and unintended consequences is important to determine whether the QI team’s changes are resulting in actual improvement.

    Plan–Do–Study–Act Cycles

    The starting point for plan–do–study–act (PDSA) cycles begins with the change idea and a series of hypotheses. These small-scale tests of change are central to iterative improvement. PDSA cycles form the foundation for implementation of a successful intervention and increase the chances of long-term success.

    The plan step includes planning the details of the change to be tested and making predictions about the outcome. Do involves executing the change and collecting appropriate data. Study compares the actual data and measurements collected with the initial predictions; this step involves appropriate data collection over time, which we discuss in a later section. Act is deciding, based on this newly acquired knowledge, whether to adopt, revise, or discard the change idea that was tested in the PDSA cycle. In improvement work, these small cycles of change can be done quickly without significant investment of time, resources, or sample sizes, and the findings of each cycle are used to inform the process until the intended goal is reached.
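    One lightweight way to keep PDSA cycles honest is to record each cycle’s prediction next to what was actually observed, so the Act decision is explicit. A minimal sketch follows; the record fields are illustrative conventions, not prescribed by the Model for Improvement.

```python
from dataclasses import dataclass

@dataclass
class PDSACycle:
    change_idea: str  # Plan: the change to be tested
    prediction: str   # Plan: the predicted result
    observed: str     # Do/Study: what the collected data actually showed
    decision: str     # Act: "adopt", "revise", or "discard"

# Hypothetical first cycle on the predischarge-education ramp.
cycle1 = PDSACycle(
    change_idea="Teach low-sodium diet before discharge",
    prediction="Patient reports no added salt at QHF clinic visit",
    observed="Patient still cooking with salt",
    decision="revise",  # adjust the teaching, then begin cycle 2 on this ramp
)
```

    Keeping the prediction written down before the Do step makes the Study comparison unambiguous and prevents post hoc rationalization of the result.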

    The QI team decided that the first ramp of PDSA cycles should focus on predischarge HF education. The team first tackled education on a low-sodium diet and chose as the process measure whether the patient reports no longer cooking with salt or consuming commercially prepared foods when seen in the QHF clinic.19 If the first patient does not report this, then the hypothesis has failed and the following actions should be taken: investigate why, adjust the teaching, and then begin a second cycle on this ramp.

    There is a misconception that a large sample size is needed in PDSA cycles; a small sample can provide meaningful and even statistically significant results.20 For exact numbers related to sample size when performing PDSA cycles, see the online calculators at VassarStats.21

    Multiple PDSA ramps can run concurrently and may represent a multifaceted approach to achieving the aim of reduced congestive heart failure readmissions (Figure 2). Each ramp should address root causes that were identified in step 3. Other examples of PDSA cycles could include whether the clinic is able to see patients within 72 hours of discharge, with a target of 70%; the proportion of patients on guideline-directed medical therapy, with a target of 80%; and the proportion of patients capable of following furosemide instructions with daily weights, with a target of 70%.

    Figure 2.

    Figure 2. Rapid cycle change plan–do–study–act (PDSA) process. The starting point for PDSA cycles is the change idea and a series of hypotheses. These small-scale, informative, and reflective tests of change contribute to the iterative improvement of an intervention. Based on the QI team’s illustrative root cause analysis, a ramp of successive PDSA cycles addresses predischarge education, including sodium restriction with avoidance of a salt shaker, diet education, logging of fluid and salt intake, and daily weights.

    The required number of PDSA cycles is not known. One recent review found that the duration of a single cycle ranged from <1 day to 16 days, and the total duration for a series of PDSA cycles ranged from 1 day to 4 years.22 The overarching goal is to ensure that each facet of the change can be successfully implemented.

    Step 5: Monitoring

    A run chart displays the frequency of a quality measure’s occurrence on the y axis against a unit of time on the x axis. Run charts are a useful method for presenting and analyzing data over time.23 Charts can be created in a simple spreadsheet with the use of add-in programs, updated weekly in real time, and used to inform the efforts of the team in optimizing PDSA cycles. This has benefits as a communication tool and keeps the team focused and on track.

    In our model, we now have a family of process measures such as percentage of patients on low-salt diet, percentage of patients on guideline-directed medical therapy, percentage of patients seen within 72 hours, percentage of patients with daily recordings of weight, and percentage of patients following furosemide education.
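    Underneath, the run-chart series is just a sequence of weekly proportions with a median center line. A minimal sketch of that bookkeeping follows; the function name and weekly counts are hypothetical, not from the article.

```python
from statistics import median

def weekly_rates(events, denominators):
    """Turn weekly counts into percentages for plotting on a run chart."""
    return [100.0 * e / d for e, d in zip(events, denominators)]

# Hypothetical counts: 30-day readmissions over total HF discharges per week.
readmitted = [9, 8, 9, 7]
discharged = [25, 24, 26, 25]

rates = weekly_rates(readmitted, discharged)  # first week is 36.0%
center_line = median(rates)  # the median line drawn on the run chart
```

    The same two quantities, the weekly percentage and the baseline median, are all that the run-chart rules in the next step operate on, whether computed in a spreadsheet or in code.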

    Step 6: Analysis of Run Charts

    A run chart can be used to monitor both process and outcome measures. The run charts for process measures should show consistent stable performance at or above the minimum acceptable fidelity (70%). If this process measure target is achieved, then the change in outcome may be because of the implemented change in process. If results are <70%, then the intervention may need to be refined or abandoned. During this monitoring phase, other barriers to implementation can be identified and overcome.

    When QI initiatives generate data that are plotted on a run chart, it is important to understand whether any changes that occur are truly a result of the intervention rather than of secular trends or random variation. Two rules that help identify a nonrandom change within a run chart are illustrated in Figure 3. A trend requires ≥5 points all going in the same direction. A shift refers to ≥6 consecutive points lying on one side of the median. Typically, a shift indicates that a change is not related to chance.
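    These two rules are simple enough to automate. The sketch below assumes the common conventions that points exactly on the median neither make nor break a shift, and that tied consecutive values neither extend nor break a trend (see Perla et al23 for the full rule set); the function names are illustrative.

```python
from statistics import median

def detect_shift(points, baseline, run_length=6):
    """Shift rule: >= run_length consecutive points on one side of the
    baseline median. Points exactly on the median are skipped."""
    run, side = 0, 0
    for p in points:
        if p == baseline:
            continue  # points on the median neither make nor break a run
        s = 1 if p > baseline else -1
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

def detect_trend(points, run_length=5):
    """Trend rule: >= run_length points all moving in the same direction.
    Equal consecutive values neither extend nor break the trend."""
    run, direction, prev = 1, 0, None
    for p in points:
        if prev is None:
            prev = p
            continue
        if p == prev:
            continue  # ties are skipped
        d = 1 if p > prev else -1
        run = run + 1 if d == direction else 2  # two points define a direction
        direction = d
        prev = p
        if run >= run_length:
            return True
    return False

# Hypothetical monthly readmission percentages after the QHF clinic opens.
monthly = [36, 35, 36, 34, 33, 31, 30, 28, 27, 26, 25, 24]
baseline = median(monthly)
shift = detect_shift(monthly, baseline)  # True
trend = detect_trend(monthly)            # True
```

    Running these checks each time the chart is updated gives the team an objective signal of special cause variation rather than relying on visual inspection alone.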

    Figure 3.

    Figure 3. A hypothetical run chart displaying 30-d readmissions after implementation of the Quality Heart Failure (QHF) clinic. This diagram is significant for special cause variation (nonrandom variation) with 1 trend and 1 shift. The x axis is the time period, whereas the y axis shows the percentage of patients with heart failure (HF) readmitted to hospital within 30 d, with a target of 22% highlighted in purple and the median highlighted in green. The quality improvement (QI) team’s first series of plan–do–study–act (PDSA) cycles was related to predischarge education as per the root cause analysis. Once this process was optimized, the initial readmission rate dropped from 36% to 34%; the next PDSA ramp addressed seeing patients in a timely manner, within 72 h, which prompted a further reduction from 34% to 28%. The pharmacist-led guideline-directed medical therapy (GDMT) medication initiative did not make a significant difference, whereas implementing cardiac rehabilitation before discharge resulted in the most significant decrease in readmission rates.

    The QI team displays their run chart, where the x axis is the time period (August 2016 to August 2018) and the y axis shows the percentage of patients with HF readmitted to hospital within 30 days, with the target of 22% highlighted in purple and the median highlighted in orange. In this illustrative model, the first series of PDSAs was related to predischarge HF education, as seen in the ramp. Once this process was optimized, the team saw the initial readmission rate drop from 36% to 34%; the next series of PDSAs addressed a 72-hour clinic reassessment, which prompted a further reduction from 34% to 28%. The team’s next PDSA series introduced pharmacist-led guideline-directed medical therapy medication assessment, which was not associated with any change in the readmission rate. Over the next few months, as the team’s nurse practitioner went on holiday, there was a significant increase in readmissions, resulting in a higher readmission rate of 30%. The team’s fourth series of PDSAs focused on referring patients to cardiac rehabilitation, which reinforced predischarge education and encouraged daily exercise.

    It is important to note that QI initiatives can be frustrating: a series of PDSAs may not lead to the target goal. Many QI initiatives do not achieve success, as the execution stage can be the most challenging; however, revisiting steps 1 through 4 and allowing for a rigorous process may result in a successful initiative.


    QI initiatives have become an integral part of patient-centered care. Mr X’s case allowed the chief of HF and the hospital administrators to recognize the need for improvement strategies and to engage frontline clinicians to stimulate change. In this primer, we have outlined a method for initiating, implementing, and monitoring improvement. We have not addressed techniques to build in sustainability, spread successful change beyond the local environment, or conduct more formal evaluative studies.


    All authors had access to the data and contributed to the preparation of this article.


    Correspondence to Heather J. Ross, MD, MHSc, UHN PMB 137, 585 University Ave, Toronto, Ontario, Canada M5G 2C4.


    • 1. Bergethon KE, Ju C, DeVore AD, Hardy NC, Fonarow GC, Yancy CW, Heidenreich PA, Bhatt DL, Peterson ED, Hernandez AF. Trends in 30-day readmission rates for patients hospitalized with heart failure: findings from the Get With The Guidelines-Heart Failure registry. Circ Heart Fail. 2016;9:e002594. doi: 10.1161/CIRCHEARTFAILURE.115.002594.
    • 2. Hunt SA, Abraham WT, Chin MH, Feldman AM, Francis GS, Ganiats TG, Jessup M, Konstam MA, Mancini DM, Michl K, Oates JA, Rahko PS, Silver MA, Stevenson LW, Yancy CW; American College of Cardiology Foundation; American Heart Association. 2009 focused update incorporated into the ACC/AHA 2005 guidelines for the diagnosis and management of heart failure in adults: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines developed in collaboration with the International Society for Heart and Lung Transplantation. J Am Coll Cardiol. 2009;53:e1–e90. doi: 10.1016/j.jacc.2008.11.013.
    • 3. Suter LG, Li SX, Grady JN, Lin Z, Wang Y, Bhat KR, Turkmani D, Spivack SB, Lindenauer PK, Merrill AR, Drye EE, Krumholz HM, Bernheim SM. National patterns of risk-standardized mortality and readmission after hospitalization for acute myocardial infarction, heart failure, and pneumonia: update on publicly reported outcomes measures based on the 2013 release. J Gen Intern Med. 2014;29:1333–1340. doi: 10.1007/s11606-014-2862-5.
    • 4. Desai AS, Stevenson LW. Rehospitalization for heart failure: predict or prevent? Circulation. 2012;126:501–506. doi: 10.1161/CIRCULATIONAHA.112.125435.
    • 5. Gheorghiade M, Vaduganathan M, Fonarow GC, Bonow RO. Rehospitalization for heart failure: problems and perspectives. J Am Coll Cardiol. 2013;61:391–403. doi: 10.1016/j.jacc.2012.09.038.
    • 6. van Walraven C, Jennings A, Forster AJ. A meta-analysis of hospital 30-day avoidable readmission rates. J Eval Clin Pract. 2012;18:1211–1218. doi: 10.1111/j.1365-2753.2011.01773.x.
    • 7. Forster AJ, Clark HD, Menard A, Dupuis N, Chernish R, Chandok N, Khan A, van Walraven C. Adverse events among medical patients after discharge from hospital. CMAJ. 2004;170:345–349.
    • 8. Etchells E, Lester R, Morgan B, Johnson B. Striking a balance: who is accountable for patient safety? Healthc Q. 2005;8 Spec No:146–150.
    • 9. Taitz JM, Lee TH, Sequist TD. A framework for engaging physicians in quality and safety. BMJ Qual Saf. 2012;21:722–728. doi: 10.1136/bmjqs-2011-000167.
    • 10. Silver SA, Harel Z, McQuillan R, Weizman AV, Thomas A, Chertow GM, Nesrallah G, Bell CM, Chan CT. How to begin a quality improvement project. Clin J Am Soc Nephrol. 2016;11:893–900. doi: 10.2215/CJN.11491015.
    • 11. Sharara H, Getoor L, Norton M. Active surveying: a probabilistic approach for identifying key opinion leaders. International Joint Conference on Artificial Intelligence; 2011. Accessed April 4, 2017.
    • 12. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629. doi: 10.1111/j.0887-378X.2004.00325.x.
    • 13. Bradley EH, Curry L, Horwitz LI, Sipsma H, Wang Y, Walsh MN, Goldmann D, White N, Piña IL, Krumholz HM. Hospital strategies associated with 30-day readmission rates for patients with heart failure. Circ Cardiovasc Qual Outcomes. 2013;6:444–450. doi: 10.1161/CIRCOUTCOMES.111.000101.
    • 14. Fonarow GC, Abraham WT, Albert NM, Stough WG, Gheorghiade M, Greenberg BH, O’Connor CM, Pieper K, Sun JL, Yancy C, Young JB; OPTIMIZE-HF Investigators and Hospitals. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61–70. doi: 10.1001/jama.297.1.61.
    • 15. Ryan J, Kang S, Dolacky S, Ingrassia J, Ganeshan R. Change in readmissions and follow-up visits as part of a heart failure readmission quality improvement initiative. Am J Med. 2013;126:989–994.e1. doi: 10.1016/j.amjmed.2013.06.027.
    • 16. Metra M, Gheorghiade M, Bonow RO, Dei Cas L. Postdischarge assessment after a heart failure hospitalization: the next step forward. Circulation. 2010;122:1782–1785. doi: 10.1161/CIRCULATIONAHA.110.982207.
    • 17. Lee DS, Stukel TA, Austin PC, Alter DA, Schull MJ, You JJ, Chong A, Henry D, Tu JV. Improved outcomes with early collaborative care of ambulatory heart failure patients discharged from the emergency department. Circulation. 2010;122:1806–1814. doi: 10.1161/CIRCULATIONAHA.110.940262.
    • 18. Salim Al-Damluji M, Dzara K, Hodshon B, Punnanithinont N, Krumholz HM, Chaudhry SI, Horwitz LI. Association of discharge summary quality with readmission risk for patients hospitalized with heart failure exacerbation. Circ Cardiovasc Qual Outcomes. 2015;8:109–111.
    • 19. Arcand J, Ivanov J, Sasson A, Floras V, Al-Hesayen A, Azevedo ER, Mak S, Allard JP, Newton GE. A high-sodium diet is associated with acute decompensated heart failure in ambulatory heart failure patients: a prospective follow-up study. Am J Clin Nutr. 2011;93:332–337. doi: 10.3945/ajcn.110.000174.
    • 20. Etchells E, Ho M, Shojania KG. Value of small sample sizes in rapid-cycle quality improvement projects. BMJ Qual Saf. 2016;25:202–206. doi: 10.1136/bmjqs-2015-005094.
    • 21. VassarStats. Accessed April 4, 2017.
    • 22. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2014;23:290–298. doi: 10.1136/bmjqs-2013-001862.
    • 23. Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf. 2011;20:46–51. doi: 10.1136/bmjqs.2009.037895.