ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group
In 2008, the National Heart, Lung, and Blood Institute convened an Implementation Science Work Group to assess evidence-based strategies for effectively implementing clinical practice guidelines. This was part of a larger effort to update existing clinical practice guidelines on cholesterol, blood pressure, and overweight/obesity.
The objective of this report was to review evidence from the published implementation science literature and to identify effective or promising strategies to enhance the adoption and implementation of clinical practice guidelines.
This systematic review was conducted on 4 critical questions, each focusing on the adoption and effectiveness of 4 intervention strategies: (1) reminders, (2) educational outreach visits, (3) audit and feedback, and (4) provider incentives. A scoping review of the Rx for Change database of systematic reviews was used to identify promising guideline implementation interventions aimed at providers. Inclusion and exclusion criteria were developed a priori for each question, and the published literature was initially searched up to 2012, and then updated with a supplemental search to 2015. Two independent reviewers screened the returned citations to identify relevant reviews and rated the quality of each included review.
Audit and feedback and educational outreach visits were generally effective in improving both process of care (15 of 21 reviews and 12 of 13 reviews, respectively) and clinical outcomes (7 of 12 reviews and 3 of 5 reviews, respectively). Provider incentives showed mixed effectiveness for improving both process of care (3 of 4 reviews) and clinical outcomes (3 reviews equally distributed between generally effective, mixed, and generally ineffective). Reminders showed mixed effectiveness for improving process of care outcomes (27 reviews with 11 mixed and 3 generally ineffective results) and were generally ineffective for clinical outcomes (18 reviews with 6 mixed and 9 generally ineffective results). Educational outreach visits (2 of 2 reviews), reminders (3 of 4 reviews), and provider incentives (1 of 1 review) were generally effective for cost reduction. Educational outreach visits (1 of 1 review) and provider incentives (1 of 1 review) were also generally effective for cost-effectiveness outcomes. Barriers to clinician adoption of or adherence to guidelines included time constraints (8 reviews/overviews); limited staffing resources (2 overviews); timing (5 reviews/overviews); clinician skepticism (5 reviews/overviews); clinician knowledge of guidelines (4 reviews/overviews); and older clinician age (1 overview). Facilitating factors included guideline characteristics, such as format, resources, and end-user involvement (6 reviews/overviews); involving stakeholders (5 reviews/overviews); leadership support (5 reviews/overviews); scope of implementation (5 reviews/overviews); organizational culture, such as multidisciplinary teams and low baseline adherence (9 reviews/overviews); and electronic guidelines systems (3 reviews).
The strategies of audit and feedback and educational outreach visits were generally effective in improving both process of care and clinical outcomes. Reminders and provider incentives showed mixed effectiveness, or were generally ineffective. No general conclusion could be reached about cost effectiveness, because of limitations in the evidence. Important gaps exist in the evidence on effectiveness of implementation interventions, especially regarding clinical outcomes, cost effectiveness and contextual issues affecting successful implementation.
Table of Contents
1. Introduction
2. Methods
3. Results
4. Discussion
5. Perspectives
Figures and Tables
Appendix 1. Author Relationships With Industry and Other Entities
Appendix 2. Reviewer Relationships With Industry and Other Entities
The National Heart, Lung, and Blood Institute (NHLBI) began to sponsor development of clinical practice guidelines (CPGs) in the 1970s to promote application of research findings for prevention, detection, and treatment of cardiovascular, lung, and blood diseases. In 2008, the NHLBI established expert panels to update the guidelines for high blood cholesterol, high blood pressure, and overweight/obesity using rigorous, systematic evidence reviews. Concurrently, 3 crosscutting work groups were formed to address risk assessment, lifestyle, and implementation. In 2013, the NHLBI initiated collaboration with the American College of Cardiology (ACC) and American Heart Association (AHA) to work with other organizations to complete, publish, and widely disseminate these guidelines. Beginning in 2014, the ACC/AHA Task Force on Clinical Practice Guidelines began updating these guidelines with collaborating organizations as an ongoing process to incorporate emerging evidence.
The uneven implementation of evidence-based CPGs is widely recognized as a continuing challenge to improving public health.1,2 Consistent with the new collaborative partnership model for developing guidelines based on NHLBI-sponsored systematic evidence reviews,3 the Implementation Science Work Group (ISWG) systematically reviewed the evidence from translation research to identify strategies shown to be effective or promising for improving the delivery of evidence-based care. The ISWG focused on healthcare delivery at both clinician and systems levels, while considering various intervention approaches, settings, contexts, and barriers commonly seen in healthcare systems. Although patient adherence to guideline recommendations is essential to achieve meaningful clinical outcomes, in this report, the NHLBI focused on the critical first steps of provider adoption and adherence. The NHLBI commissioned this report to advance the field of implementation science and inform the knowledge translation process.
The ISWG developed a conceptual framework—based on the Multilevel Approaches Toward Community Health (MATCH) model4—to define 4 levels where guideline implementation strategies can be initiated: the policy level, clinical institution level, provider level, and patient level. This conceptual framework is illustrated in Figure 1. Superimposed onto the strategies derived from the MATCH model is the current taxonomy of interventions aimed at achieving practice change used by the Cochrane Effective Practice and Organisation of Care (EPOC) Group5: Professional Interventions, Financial Interventions, Organizational Interventions (with subcategories for Provider-oriented, Patient-oriented, and Structural interventions), and Regulatory Interventions. In Figure 1, this taxonomy is denoted in parentheses next to extant elements of the model.
The ISWG used the existing Rx for Change database of systematic reviews on healthcare intervention strategies, compiled by the Canadian Agency for Drugs and Technologies in Health,6 for its initial scoping review to identify promising guideline implementation interventions aimed at providers. The results clearly identified 3 provider-directed intervention strategies with some evidence of effectiveness: academic detailing, audit and feedback, and provider reminders. A fourth intervention strategy, provider incentives, was also selected because of evidence of effectiveness in Europe and its increasing use in US healthcare systems. Evaluation was limited to these 4 interventions because, beyond the intervention strategies themselves, the ISWG was keenly interested in cost effectiveness, effect on clinical outcomes, and contextual issues affecting the success of the interventions. Additionally, given the practical considerations (eg, cost, time, training) associated with implementation interventions, the 4 strategies also likely vary in the resources and infrastructure required to make them both viable and successful in applied settings. Such considerations are likely to matter to stakeholders who support widespread adoption of the guidelines. The 4 strategies were mapped to their EPOC equivalents as defined in Table 1. Hereafter, the EPOC terminology is used.
| Intervention | EPOC Equivalent | EPOC Definition |
| --- | --- | --- |
| Provider Reminders | Reminders | Patient- or encounter-specific information, provided verbally, on paper, or on a computer screen, which is designed or intended to prompt a health professional to recall information (usually encountered through their general education, in the medical records, or through interactions with peers) and so remind them to perform or avoid some action to aid individual patient care. Computer-aided decision support and drugs dosage are included. |
| Academic Detailing | Educational Outreach Visits | Use of a trained person who met with providers in their practice settings to give information with the intent of changing the provider’s practice. The information given may have included feedback on the performance of the provider(s). |
| Audit and Feedback | Audit and Feedback | Any summary of clinical performance of health care over a specified period of time. The summary may also have included recommendations for clinical action. The information may have been obtained from medical records, computerized databases, or observations from patients. |
| Pay for Performance | Provider Incentives | Provider received direct or indirect financial reward or benefit for doing a specific action. (Provider here means an individual; this is distinct from the EPOC term “institution incentives,” defined as: institution or group of providers received direct or indirect financial rewards or benefits for doing a specific action.) |
As shown in Figure 1, beyond the clinical institution, the clinician, and the patient, policy-level factors and the social, cultural, and physical environment influence guideline implementation. Three of the interventions that are the focus of this report are strategies classified by EPOC as Professional Interventions (ie, educational outreach visits, audit and feedback, and reminders), all falling into the “clinical institution” box of the MATCH model. The fourth intervention—provider incentives—represents an EPOC Financial Intervention, but it too falls into the clinical institution box of the MATCH model. Thus, the scope of this report is limited to a subset of interventions intended to affect providers through the clinical institution. Most of the evidence assessed the impact of the interventions on “Clinician Intermediate Outcomes” (Figure 1), although several reviews reported “Patient Intermediate Outcomes” (particularly patient risk factors) and some “Patient Hard Outcomes” (eg, mortality).
In early 2012, with the adult cardiovascular disease (CVD) risk reduction guidelines in the final stages and over budget, the NHLBI decided to use systematic reviews (SRs) instead of primary studies for the implementation science systematic report. Accordingly, the NHLBI contracted with the American Institutes for Research to conduct the initial SR, which included 48 reviews published through 2012. A supplemental search to 2015 identified 7 additional reviews. This evaluation of SRs and overviews of synthesized evidence incorporated information that focused primarily on 3 distinct outcome categories: process-of-care; clinical effectiveness; and cost effectiveness. Although less-frequently reported, patient satisfaction and clinician satisfaction also were explored. The report focused on the 4 intervention strategies selected by the ISWG.
See the Online Data Supplement for additional details on the process and methods.
2.1. Critical Questions
Directed by the NHLBI, and with support from the SR contractor, the ISWG constructed critical questions (CQs) most relevant to identifying effective strategies to improve the delivery of evidence-based care. The 4 critical questions were:
CQ1. Does the evidence support the effectiveness of the selected intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives) in particular practice settings or for specific categories of health professionals?
CQ2. What are the cost considerations of implementing the selected intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives)?
CQ3. What are the contextual barriers—financial, organizational, and regulatory—that hinder or limit clinician adherence to and the adoption of CPGs, as encouraged by the selected intervention strategies?
CQ4. What policy or regulatory, organizational, and financial characteristics or factors influence the success of the selected clinical-institution level intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives) in achieving the implementation of guidelines and affecting professional practice behaviors?
2.2. Inclusion and Exclusion Criteria
Inclusion and exclusion criteria were developed (a priori) for each CQ. Reviews were excluded if they did not focus on CPGs or on the implementation of a clinical practice that directly affected patient care. Reviews were also excluded if they did not include interventions aimed at clinicians or focused on the implementation of administrative practices.
For CQs 1 and 2, the ISWG selected SRs that focused on the implementation of CPGs or a clinical practice directly affecting patient care and aimed at clinicians. For CQs 3 and 4, we selected both SRs and overviews of SRs that focused on contextual issues affecting guideline implementation.
The ISWG included any health condition or disease, setting, outcome, or population. Studies could include process-of-care (eg, medication ordering, lab ordering), clinical effectiveness (eg, blood pressure reduction), or other types of outcomes (eg, cost and utilization and clinician satisfaction). Studies that focused solely on interventions targeting patients, such as those examining patient education or patient reminders, were excluded.
The search was limited to English-language resources.
2.3. The Process
The ISWG maintained a separation between the collection and compilation of the evidence and the final conclusions. The NHLBI contractor conducted the initial systematic search of the published literature up to 2012 from relevant bibliographic databases (ie, the Cochrane Library, PubMed, and other National Library of Medicine sources, such as the Health Services and Technology Assessment Texts and research summaries, reviews, and reports from the Agency for Healthcare Research and Quality evidence-based practice centers) for each critical question. Two independent reviewers (G.C., J.S.) screened the returned citations to identify relevant SRs and overviews, and rigorous validation procedures were applied to ensure that the selected articles met the preestablished inclusion and exclusion criteria. Pairs of independent raters (G.C., J.S., J.J.V., and S.H.) determined the quality of each included SR using the Assessment of Multiple Systematic Reviews (AMSTAR) tool.7 With oversight from a paired senior researcher (G.C. or J.J.V.), 2 research analysts abstracted relevant information from the included SRs. A second senior researcher (J.S.) examined 20% of the abstractions to ensure consistency and quality. A senior researcher (G.C.) constructed summary evidence tables with review by a principal researcher (S.H.) for quality control. The tables display the evidence in a manageable format to answer specific parts of each CQ. The contractor also prepared a draft analytic report.
The supplemental search (2012–2015), study selection, and study quality rating were conducted by an independent contractor procured by the ACC and the AHA. The lead NHLBI staff (G.C.B.) extracted relevant information from the included SRs and constructed summary evidence tables.
Using the draft report and summary evidence tables, the ISWG reviewed the consistency of the findings with the strength of the evidence and finalized the report.
2.4. Data Analysis
For CQs 1 and 2, the ISWG used an approach that determined the effectiveness of interventions in each SR based on a count of studies with positive outcomes, regardless of statistical significance.8 The following 3 categories were used to characterize the effectiveness of the interventions on each outcome in each review:
Generally effective: More than two thirds of the reviewed studies had positive intervention effects.
Mixed effectiveness: One third to two thirds of the reviewed studies showed positive intervention effects.
Generally ineffective: Less than one third of the reviewed studies showed positive intervention effects.
The assessment of overall effectiveness was derived from the preponderance of effectiveness estimates in the individual reviews. Statistical significance of the effect is not implied in this categorization. This classification scheme is used to provide a sense of the proportion of studies showing a positive effect.
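As an illustrative sketch only (the report itself contains no code), the thirds rule above maps directly to a small classifier; the function name and error handling are assumptions, not part of the published methods:

```python
def classify_effectiveness(positive_studies: int, total_studies: int) -> str:
    """Classify an intervention's effect on one outcome in one review,
    per the ISWG thirds rule (statistical significance not implied)."""
    if total_studies <= 0:
        raise ValueError("a review must contain at least one study")
    proportion = positive_studies / total_studies
    if proportion > 2 / 3:          # more than two thirds positive
        return "Generally effective"
    if proportion >= 1 / 3:         # one third to two thirds positive
        return "Mixed effectiveness"
    return "Generally ineffective"  # less than one third positive
```

Under this rule, a hypothetical review in which 15 of 21 included studies showed positive intervention effects would be classified as generally effective.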
For CQs 3 and 4, conclusions are drawn from the contractor’s qualitative coding of included reviews during article abstraction for a variety of categories of contextual factors identified a priori. Themes were identified and summarized in post hoc analyses to develop general observations about the contextual factors that might support or hinder the implementation of guidelines.
Two independent reviewers screened 826 articles; 55 were selected and abstracted for this report: 39 SRs and 16 overviews of SRs. The SRs were rated using the 11-point AMSTAR tool: 23 received a score of ≥8 and were considered good quality, and 16 received a score of 4 to 7 and were considered fair quality. Seven other SRs were rated poor, with scores ≤3, and were excluded from answering the critical questions. Figure 2 illustrates the selection process.
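The AMSTAR cutoffs just described amount to a simple threshold rule. A minimal sketch, assuming only the cutoffs stated in the text (the function itself is hypothetical):

```python
def amstar_quality(score: int) -> str:
    """Map an 11-point AMSTAR score to the quality rating used in this report."""
    if not 0 <= score <= 11:
        raise ValueError("AMSTAR scores range from 0 to 11")
    if score >= 8:
        return "good"   # used to answer the critical questions
    if score >= 4:
        return "fair"   # also used
    return "poor"       # excluded from the analysis
```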
3.1. Critical Question 1
Does the evidence support the effectiveness of the selected intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives) in particular practice settings or for specific categories of health professionals?
SRs rated “good” and “fair” were used to answer CQ 1. Table 2 shows the classification of the overall effectiveness of each intervention for process-of-care outcomes and clinical effectiveness outcomes across the full set of included reviews. Table 3 provides expanded detail, summarizing the available information by study quality on the effectiveness of each intervention for both outcome types.
| Intervention | Process-of-Care Outcomes | Clinical Effectiveness Outcomes |
| --- | --- | --- |
| Educational outreach visits | Generally effective | Generally effective |
| Audit and feedback | Generally effective | Generally effective |
| Reminders | Mixed effectiveness | Generally ineffective |
| Provider incentives | Mixed effectiveness | Mixed effectiveness |
| Intervention | Process of Care: Generally Effective | Process of Care: Mixed Effectiveness | Process of Care: Generally Ineffective | Clinical Effectiveness: Generally Effective | Clinical Effectiveness: Mixed Effectiveness | Clinical Effectiveness: Generally Ineffective |
| --- | --- | --- | --- | --- | --- | --- |
| Educational Outreach Visits | Good9,21,28,29; Fair10,12,23,25,30–33 | Fair11 | N/A | Good9; Fair12,25,33 | Fair10 | Good28 |
| Audit and Feedback | Good9,29,34–36; Fair12,23–26,30–33,37 | Good28,38; Fair10,11,13,39 | N/A | Good9,28,40; Fair23,25,30,37 | Good36; Fair13,26,33 | Good41 |
| Reminders | Good20,29,36,42; Fair10,17,24–26,30,32,33,37 | Good9,15,18,22,35,38,43–45; Fair13,14 | Good16,28; Fair11 | Good36; Fair25,30 | Good9,20,45; Fair13,26,37 | Good16,18,22,43,44; Fair14,17,24,33 |
In summary, educational outreach visits showed general effectiveness in 12 of 13 SRs for process-of-care outcomes, particularly in prescribing behaviors. Five SRs reported clinical effectiveness outcomes for educational outreach visits. Three of 5 SRs and 14 of the 19 included studies showed clinical effectiveness. A good-quality SR on hypertension9 found that educational outreach visits improved both process of care and clinical outcomes (reductions in median systolic and diastolic blood pressure). When only the included CVD risk reduction studies were considered, 1 fair-quality SR10 showed general effectiveness, and 1 fair-quality SR11 showed mixed effectiveness for process of care outcomes.
Audit and feedback interventions were considered in 23 SRs (9 good quality) and showed general effectiveness for both process-of-care outcomes, particularly in clinician adherence to guidelines, and for clinical outcomes. Audit and feedback showed improved process of care and clinical outcomes for the management of hypertension.9 Four fair-quality SRs also included some studies on CVD risk reduction and 3 of these reviews10–12 showed general effectiveness for process of care. Conversely, the fourth fair-quality SR13 showed general ineffectiveness in improving CVD process of care outcomes.
Reminders were considered in 27 SRs—15 were good quality. These SRs showed mixed effectiveness for process-of-care outcomes overall but general effectiveness for prescribing behaviors. However, reminders were generally ineffective for clinical outcomes. The results were similar when only the CVD risk reduction studies were considered in 8 SRs.9,11,13–18 However, reminders were generally effective in improving clinical outcomes for hypertension.9
Provider incentive interventions were included in 5 good-quality SRs and showed mixed effectiveness for both process-of-care and clinical outcomes—most of the positive outcomes were related to diabetes mellitus and asthma. When CVD risk reduction studies were analyzed separately, 1 good-quality SR19 found general effectiveness for both process of care and clinical outcomes. However, provider incentives were generally ineffective for improving clinical outcomes for hypertension in another good-quality SR.9
3.2. Critical Question 2
What are the cost considerations of implementing the selected intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives)?
SRs rated “good” and “fair” were also used to answer CQ 2. Cost considerations refer to cost reduction and cost-effectiveness outcomes based on utilization measures resulting from implementing the selected intervention strategies. The studies in the SRs differ in the way they examined cost. Some calculated the amount saved per physician, cost per prescription, prescribing costs, per-patient cost avoidance, patient out-of-pocket costs, and hospitals’ return on investment. The SRs also differed in the utilization measures they examined. Some measured length of stay, the use of preventive services, or visits to health professionals. Most of the cost-effectiveness assessments consisted of >1 intervention versus a nonintervention control, or they compared interventions. In combination, all these factors made it difficult to reach conclusions about the cost effectiveness of different interventions.
Five good-quality SRs18–22 and 3 fair-quality SRs23–25 provided information about intervention costs or cost reductions. Four good-quality SRs19–22 included studies that reported cost-effectiveness outcomes but none conducted a cost-effectiveness study as a main component of the review (often because of a lack of data).
Educational outreach visits were generally effective in reducing costs in 2 reviews21,25 and showed cost effectiveness in 1 good-quality review.21 Two fair-quality reviews23,25 reported cost-reduction findings (length of stay and lab costs) for audit and feedback interventions and the results showed mixed effectiveness. Reminders were generally effective in reducing cost in 3 reviews18,24,25 and showed mixed effectiveness in another.20 Reminders were also cost effective in 1 review22 and the results showed mixed effectiveness in another.20 Although based only on 1 good-quality review,19 provider incentive interventions reduced costs and were cost effective.
3.3. Critical Question 3
What are the contextual barriers—financial, organizational, and regulatory—that hinder or limit clinician adherence to and the adoption of CPGs, as encouraged by the selected intervention strategies?
Table 4 summarizes several barriers that were reported to influence clinician adoption or adherence to CPGs.
| Barrier Category | Specific Barriers |
| --- | --- |
| Clinician Knowledge, Attitudes, and Beliefs | Skepticism: concern about the evidence base of guidelines, lack of universal acceptance of recommendations, implied rationing of services, fear of litigation24,47,49–51 |
| | Lack of knowledge of guidelines24,32,49,50 |
| | Age: older or more experienced clinicians less inclined to use guidelines48 |
| Workflow and Timing | Timing: interventions are less effective the further they are from the point of decision making42,52–55 |
3.4. Critical Question 4
What policy or regulatory, organizational, and financial characteristics or factors influence the success of the selected clinical-institution level intervention strategies (ie, educational outreach visits, reminders, audit and feedback, and provider incentives) in achieving the implementation of guidelines and affecting professional practice behaviors?
Table 5 presents the evidence for several factors that appear to facilitate the success of the intervention strategies. Three reviews21,24,26 assessed the effect of various interventions alone compared with combinations of interventions. These reviews concluded that multifaceted interventions are more likely to be effective than single interventions in influencing process of care outcomes.
| Facilitating Factor | Details |
| --- | --- |
| Characteristics of Guidelines | Short and simple format47 |
| | Provide patient pamphlets47 |
| | Easy to understand and use48 |
| | Minimal resources needed to implement48 |
| | Involving end users in guideline development, implementation, and testing15,48,50,52 |
| | Use of computerized guidelines in practice settings15,16 |
| Involving Stakeholders | Involvement in planning, developing, or leading interventions designed to influence practice patterns and clinical outcomes19,30,34,40,56 |
| Leadership | Leader’s social influence is recognized30 |
| | Local management support and enthusiasm24,51 |
| | Adequate time to promote new practice24 |
| Scope of Implementation | Provider incentives: implemented more broadly in the United Kingdom, with more consistent results than in the United States19 |
| | Multifaceted interventions are more likely to be effective than single interventions24,26,53,54 |
| Organizational Culture | Multidisciplinary teams, coordination of care, pace of change, a blame-free culture, and a history of quality improvement9,19,28,38 |
| Workflow and Timing | Electronic guidelines systems: integration with computers used in practice16,17 |
| | Automatic reminders, so clinicians are not required to seek information42 |
This summary of SRs and overviews of reviews found general effectiveness for 2 of the 4 selected implementation interventions (educational outreach visits and audit and feedback) in improving process of care and clinical outcomes. Regarding intervention characteristics, multifaceted interventions appeared to be more effective. However, the paucity of controlled head-to-head comparisons and limitations in the evidence allowed only an estimate of general effectiveness, without the ability to determine whether the overall effects of the interventions were statistically significant or, more importantly, clinically meaningful.
No conclusions can be drawn regarding the effectiveness of the intervention strategies in improving process of care and clinical outcomes related to the treatment of CVD risk factors, because most reviews did not focus on or include studies of these conditions. However, 1 good-quality review focused on hypertension, and 4 fair-quality reviews included some studies on hypertension and dyslipidemia. The results from these few reviews suggest that implementation interventions are potentially as effective in CVD risk reduction as in other areas.
No general conclusion could be reached about the cost of implementing the selected intervention strategies. Although good-quality reviews generally reported cost-savings associated with an intervention, many of the interventions were multifaceted in nature; thus, the total cost associated with any component of an intervention was difficult to discern. Furthermore, cost effectiveness was not explicitly evaluated.
4.1. Common Themes in the Evidence and Practice Implications
The evidence generally showed greater increase in CPG adherence in practices with low-baseline adherence. Given the success of multifaceted interventions, and the beneficial impact of stakeholder involvement in developing the intervention and a priori assessment of local needs, implementation efforts should emphasize the need for implementers to understand their current practices and how their organizations’ practices may vary from forthcoming CPG recommendations. A self-assessment toolkit could be an important aid to practices when determining which of several implementation strategies might best suit their particular needs, context, and goals.
4.2. Report Limitations
First, data used in this report were not retrieved from the primary studies, limiting information on the details of the interventions and results to that reported by the review authors. Second, this report used a qualitative synthesis of the evidence, which does not allow an assessment of the size of any expected benefit from adopting an implementation strategy; the report also relied heavily on the judgments of the SR authors and on the quality of the reviews. Third, the analysis in this report is limited to 4 interventions aimed at providers and did not explore systems-level implementation; other interventions might have shown effectiveness had they been included. Fourth, the implementation of the 4 intervention strategies varied within reviews: some reviews assessed single interventions, whereas others assessed multifaceted interventions. Fifth, many evaluations did not report sufficient contextual information (eg, patient demographics, comorbid conditions, insurance coverage) to assess its potential influence on implementation efforts. Another major concern is that only a small number of the included studies provided information about clinical effectiveness and cost outcomes, and only a few provided comparisons of cost effectiveness.
Finally, in reviews of SRs, there is always the risk that an included study appears in multiple reviews, and this overlap presents the potential for double counting the results of individual studies. The ISWG addressed this risk in answering CQ 1 (process and clinical outcomes) and CQ 2 (cost) primarily by using only SRs in which the included studies were clearly referenced and could be checked across reviews, and by excluding SRs that were updated by more recent reviews. For reviews with overlapping studies, the ISWG first considered whether counting or not counting the overlap would change the assessment of effectiveness of the interventions in this report. If it would not, the study was counted in both reviews. If counting the overlap would change the effectiveness, the ISWG considered the quality of the reviews: for example, if a study appeared in a good-quality review and a fair-quality review, it was counted in the good-quality review only. If the overlapping reviews were of equal quality, the study was counted in the most recent review. In SRs that updated one component (eg, interventions aimed at people with diabetes mellitus) of an earlier SR, the studies from the latest review were counted, along with the studies from the older SR minus the updated component. The overlap was substantial for CQ 3 (barriers) and CQ 4 (facilitators), where SRs were combined with overviews of SRs; however, this overlap was inconsequential because the findings for CQs 3 and 4 were not based on study counts.
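The overlap-handling rules above amount to a decision procedure. The sketch below is illustrative only: the ISWG applied these judgments manually, and the function, its signature, and the good-over-fair ranking are inferred from the text rather than stated as an algorithm:

```python
def count_study_in(overlap_changes_result: bool,
                   quality_a: str, quality_b: str,
                   year_a: int, year_b: int) -> str:
    """Decide where to count a study that appears in two reviews, A and B.

    Returns "both", "A", or "B". quality_* is "good" or "fair"; the
    higher-quality review wins, then the more recent one.
    """
    if not overlap_changes_result:
        return "both"  # double counting would not change the assessment
    rank = {"good": 2, "fair": 1}
    if rank[quality_a] != rank[quality_b]:
        return "A" if rank[quality_a] > rank[quality_b] else "B"
    return "A" if year_a >= year_b else "B"  # equal quality: most recent wins
```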
4.3. Research Gaps
Future research in CPG implementation interventions should address important design limitations in current studies and key gaps in the evidence base (Table 6). An important design limitation is the lack of explicit declaration or standardized terminology for the implementer and target of the interventions. Evidence is sorely needed on more tangible outcomes, such as clinical outcomes and cost effectiveness, in addition to intermediate or process outcomes. Simply demonstrating an effective implementation in one setting is not a guarantee that the same results will be found in other settings. Thus, additional SRs and empirical research are needed to better understand the effectiveness of implementation strategies with differing characteristics, in a variety of settings, with different types of clinicians, and targeting specific types of diseases or conditions—especially the control of CVD risk factors. Although multifaceted interventions rather than single interventions appear to be effective strategies for increasing CPG implementation, identifying the combinations of strategies that are most effective and in which contexts is important.
Table 6. Research Gaps

|Suggested Actions|Research Needs|
|---|---|
|Address Study Design Issues|Clear descriptions of study methods and the interventions; explicit implementer and target of intervention; standardized measures of outcomes and descriptions of practice settings|
|Conduct New Research to Test the Effectiveness of Interventions|Effect on clinical outcomes, rather than intermediate outcomes; effect of multicomponent interventions, including specific combinations of interventions; effect of policy-level interventions (eg, publicly reported quality metrics); effect of interventions targeting a variety of settings (including baseline workflows) and types of diseases and conditions|
|Focus Evaluations on Contextual Factors|Organizational and practice context; involvement of stakeholders and leadership; integration with workflow|
|Leverage EHR Data and Tools|Mine data for observational studies; platform for pragmatic prospective studies; access longer-term data than RCTs; aggregate data and/or interventions by key factors (eg, healthcare delivery system)|
|Conduct Qualitative and Observational Research|Effectiveness in diverse populations; drivers of success in real-world implementations; contextual issues not amenable to RCTs|
Innovative research methods and study designs are needed to leverage electronic health records (EHRs), which could bolster implementation science in many ways. Specifically, electronic clinical data may improve the ability to target patients (eg, by diagnosis) for appropriate CPGs. Clinic and health system EHRs may be able to efficiently provide feedback on progress in achieving relevant CPG measures (eg, biomarkers) for an entire clinic or healthcare system, not only at the patient or clinician level. For implementation research, EHRs may streamline the planning and conduct of implementation trials (eg, by more accurately determining event rates and identifying eligible patients). EHRs might also be able to follow patient health outcomes on a long-term basis, beyond the typical length of clinical trials. The evolution of EHRs will likely include the development and embedding of risk models capable of targeting people with specific risk profiles. Networks of EHRs, such as those in the PCORI (Patient-Centered Outcomes Research Institute) Clinical Data Research Networks, could provide remarkable opportunities to study implementation strategies or even to exploit the natural variation in strategies across centers. With many large and diverse patient populations now receiving care that is documented in EHRs, large population-based studies are becoming increasingly practical. Such pragmatic studies have the advantage of including the general population of patients rather than the carefully selected participants of a randomized controlled trial. Although these studies may not achieve the precision of measurement common in rigorous trials, their benefit lies in the assessment of important clinical outcomes for entire populations of patients.
Finally, the good-quality reviews in this report are largely based on evidence from randomized controlled trials, whose settings are quite different from the contexts in which real-world implementation and behavior change occur. An observational, more qualitative approach may be needed to better understand how the preceding contextual issues and other drivers affect the success of an implementation intervention. One example of such a qualitative approach is the Dartmouth Institute for Health Policy and Clinical Practice benchmarking study of how “best-in-class” health systems use clinical decision support.27 Such an outcomes-oriented approach would allow better evaluation of provider incentives, audit and feedback, educational outreach visits, reminders, and other interventions chosen to advance the implementation of CPGs.
In summary, there is some evidence that guideline implementation interventions are effective for both process of care and clinical outcomes. Limited evidence suggests that implementation interventions are generally effective at reducing costs, and even more limited evidence suggests that they are cost-effective. Qualitative analysis points to recurring themes among the barriers to and facilitators of success. Given the mixed results seen in many implementation studies, additional research focused on intervention effectiveness is needed, with special emphasis on improving methods and study designs, increasing the use of pragmatic trials, and determining how to enhance the utility of electronic clinical data. More studies are also needed on clinical outcomes, cost-effectiveness, and the influence of contextual factors on the effectiveness of interventions. Studies conducted in real-world healthcare delivery systems, along with qualitative research, may help address some of these important gaps in the current evidence.
5.1. Translation Outlook 1
Audit and feedback and educational outreach visits were generally effective for improving both process of care and clinical outcomes, whereas provider incentives showed mixed effectiveness. Reminders showed mixed effectiveness for process of care and were generally ineffective for improving clinical outcomes.
5.2. Translation Outlook 2
Multifaceted interventions were more effective than a single intervention strategy.
5.3. Translation Outlook 3
Additional research is needed on intervention effectiveness, with special emphasis on improving methods and study designs, increasing the use of pragmatic trials, leveraging electronic clinical data, and evaluating the cost-effectiveness of interventions.
ACC/AHA Task Force Members
Jonathan L. Halperin, MD, FACC, FAHA, Chair; Glenn N. Levine, MD, FACC, FAHA, Chair-Elect; Sana M. Al-Khatib, MD, MHS, FACC, FAHA; Kim K. Birtcher, PharmD, MS, AACC; Biykem Bozkurt, MD, PhD, FACC, FAHA; Ralph G. Brindis, MD, MPH, MACC; Joaquin E. Cigarroa, MD, FACC; Lesley H. Curtis, PhD, FAHA; Lee A. Fleisher, MD, FACC, FAHA; Federico Gentile, MD, FACC; Samuel Gidding, MD, FAHA; Mark A. Hlatky, MD, FACC; John Ikonomidis, MD, PhD, FAHA; José Joglar, MD, FACC, FAHA; Susan J. Pressler, PhD, RN, FAHA; Duminda N. Wijeysundera, MD, PhD
Presidents and Staff
American College of Cardiology
Richard A. Chazal, MD, FACC, President
Shalom Jacobovitz, Chief Executive Officer
William J. Oetgen, MD, MBA, FACC, Executive Vice President, Science, Education, Quality, and Publications
Amelia Scholtz, PhD, Publications Manager, Science, Education, Quality, and Publications
American College of Cardiology/American Heart Association
Lisa Bradfield, CAE, Director, Guideline Methodology and Policy
Abdul R. Abdullah, MD, Associate Science and Medicine Advisor
Allison Rabinowitz, MPH, Project Manager, Science and Clinical Policy
American Heart Association
Steven R. Houser, PhD, FAHA, President
Nancy Brown, Chief Executive Officer
Rose Marie Robertson, MD, FAHA, Chief Science and Medicine Officer
Gayle R. Whitman, PhD, RN, FAHA, FAAN, Senior Vice President, Office of Science Operations
Comilla Sasson, MD, PhD, FACEP, Vice President, Science and Medicine
Jody Hundley, Production Manager, Scientific Publications, Office of Science Operations
NHLBI Staff: Janet M. de Jesus, MS, RD; Kathryn McMurry, MS; Susan T. Shero, MS, RN.
Former NHLBI Staff: Denise Simons-Morton, MD, PhD; Mary Hand, MSPH, RN.
This project was funded in whole or in part with federal funds from the National Heart, Lung, and Blood Institute, National Institutes of Health, US Department of Health and Human Services, under GSA contract No. GS-10F-0112J, Order No. HHSN2268201100098U.
References

1. National Heart, Lung, and Blood Institute. NHLBI Cardiovascular Disease Thought Leaders Meeting Report: Research Translation, Dissemination, and Application - Moving Toward a New Vision and Strategic Framework. 2005. Available at: http://www.nhlbi.nih.gov/health-pro/resources/heart/cardiovascular-disease-thought-leaders-meeting-report. Accessed January 1, 2012.
2. Cabana MD, Rand CS, Powe NR, et al. Why don’t physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65.
3. Gibbons GH, Shurin SB, Mensah GA, et al. Refocusing the agenda on cardiovascular guidelines: an announcement from the National Heart, Lung, and Blood Institute. Circulation. 2013;128:1713–15.
4. Simons-Morton DG, Simons-Morton BG, Parcel GS, et al. Influencing personal and environmental conditions for community health: a multilevel intervention model. Fam Community Health. 1988;11:25–35.
5. Effective Practice and Organisation of Care (EPOC). EPOC Taxonomy. Available at: https://epoc.cochrane.org/epoc-taxonomy. Accessed March 23, 2012.
6. Canadian Agency for Drugs and Technologies in Health. Rx for Change Database. Available at: http://www.cadth.ca/en/resources/rx-for-change. Accessed December 1, 2012.
7. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
8. Cheung A, Weir M, Mayhew A, et al. Overview of systematic reviews of the effectiveness of reminders in improving healthcare professional behavior. Syst Rev. 2012;1:36.
9. Walsh J, McDonald KM, Shojania KG, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 3: Hypertension Care). Technical Reviews, No. 9.3. Rockville, MD: Agency for Healthcare Research and Quality, US Department of Health and Human Services; January 2005. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43920/. Accessed December 1, 2012.
10. Lu CY, Ross-Degnan D, Soumerai SB, et al. Interventions designed to improve the quality and efficiency of medication use in managed care: a critical review of the literature - 2001–2007. BMC Health Serv Res. 2008;8:75.
11. Ostini R, Hegney D, Jackson C, et al. Systematic review of interventions to improve prescribing. Ann Pharmacother. 2009;43:502–13.
12. Pearson SA, Ross-Degnan D, Payson A, et al. Changing medication use in managed care: a critical review of the available evidence. Am J Manag Care. 2003;9:715–31.
13. Weingarten SR, Henning JM, Badamgarav E, et al. Interventions used in disease management programmes for patients with chronic illness - which ones work? Meta-analysis of published reports. BMJ. 2002;325:925.
14. Bryan C, Boren SA. The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care. 2008;16:79–91.
15. Damiani G, Pinnarelli L, Colosimo SC, et al. The effectiveness of computerized clinical guidelines in the process of care: a systematic review. BMC Health Serv Res. 2010;10:2.
16. Heselmans A, Van de Velde S, Donceel P, et al. Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci. 2009;4:82.
17. Robertson J, Walkom E, Pearson SA, et al. The impact of pharmacy computerised clinical decision support on prescribing, clinical and patient outcomes: a systematic review of the literature. Int J Pharm Pract. 2010;18:69–87.
18. Roshanov PS, Misra S, Gerstein HC, et al. Computerized clinical decision support systems for chronic disease management: a decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:92.
19. Van Herck P, De Smedt D, Annemans L, et al. Systematic review: effects, design choices, and context of pay-for-performance in health care. BMC Health Serv Res. 2010;10:247.
20. Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157:29–43.
21. O’Brien MA, Rogers S, Jamtvedt G, et al. Educational outreach visits: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2007;(4):CD000409.
22. Sahota N, Lloyd R, Ramakrishna A, et al. Computerized clinical decision support systems for acute care management: a decision-maker-researcher partnership systematic review of effects on process of care and patient outcomes. Implement Sci. 2011;6:91.
23. Siddiqi K, Newell J, Robinson M. Getting evidence into practice: what works in developing countries? Int J Qual Health Care. 2005;17:447–54.
24. Tooher R, Middleton P, Pham C, et al. A systematic review of strategies to improve prophylaxis for venous thromboembolism in hospitals. Ann Surg. 2005;241:397–415.
25. Cortoos PJ, Simoens S, Peetermans W, et al. Implementing a hospital guideline on pneumonia: a semi-quantitative review. Int J Qual Health Care. 2007;19:358–67.
26. Mauger B, Marbella A, Pines E, et al. Implementing quality improvement strategies to reduce healthcare-associated infections: a systematic review. Am J Infect Control. 2014;42:S274–83.
27. Caruso D, Kerrigan CL, Mastanduno MP, et al. Improving Value-Based Care and Outcomes of Clinical Populations in an Electronic Health Record System Environment: A Technical Report. The Dartmouth Institute for Health Policy and Clinical Practice; 2011.
28. Bravata DM, Sundaram V, Lewis R, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 5: Asthma Care). Technical Reviews, No. 9.5. Rockville, MD: Agency for Healthcare Research and Quality, US Department of Health and Human Services; 2007. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43968/. Accessed September 18, 2016.
29. Grimshaw JM, Thomas RE, MacLennan G, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. Health Technol Assess. 2004;8:iii–iv, 1–72.
30. Chaillet N, Dube E, Dugas M, et al. Evidence-based strategies for implementing guidelines in obstetrics: a systematic review. Obstet Gynecol. 2006;108:1234–45.
31. Lineker SC, Husted JA. Educational interventions for implementation of arthritis clinical practice guidelines in primary care: effects on health professional behavior. J Rheumatol. 2010;37:1562–9.
32. Menon A, Korner-Bitensky N, Kastner M, et al. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–32.
33. Gagliardi AR, Alhabib S. Trends in guideline implementation: a scoping systematic review. Implement Sci. 2015;10:54.
34. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
35. Mansell G, Shapley M, Jordan JL, et al. Interventions to reduce primary care delay in cancer referral: a systematic review. Br J Gen Pract. 2011;61:e821–35.
36. Okelo SO, Butz AM, Sharma R, et al. Interventions to Modify Health Care Provider Adherence to Asthma Guidelines. Comparative Effectiveness Reviews, No. 95. Rockville, MD: Agency for Healthcare Research and Quality, US Department of Health and Human Services; 2013. Available at: http://www.ncbi.nlm.nih.gov/books/NBK144097/. Accessed September 16, 2016.
37. Mahan CE, Spyropoulos AC. Venous thromboembolism prevention: a systematic review of methods to improve prophylaxis and decrease events in the hospitalized patient. Hosp Pract. 2010;38:97–108.
38. Stone EG, Morton SC, Hulscher ME, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136:641–51.
39. Collinsworth AW, Priest EL, Campbell CR, et al. A review of multifaceted care approaches for the prevention and mitigation of delirium in intensive care units. J Intens Care Med. 2016;31:127–41.
40. Chaillet N, Dumont A. Evidence-based strategies for reducing cesarean section rates: a meta-analysis. Birth. 2007;34:53–64.
41. Khunpradit S, Tavender E, Lumbiganon P, et al. Non-clinical interventions for reducing unnecessary caesarean section. Cochrane Database Syst Rev. 2011:CD005528.
42. Kawamoto K, Houlihan CA, Balas EA, et al. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765.
43. Nieuwlaat R, Connolly SJ, Mackay JA, et al. Computerized clinical decision support systems for therapeutic drug monitoring and dosing: a decision-maker-researcher partnership systematic review. Implement Sci. 2011;6:90.
44. Tan K, Dear PR, Newell SJ. Clinical decision support systems for neonatal care. Cochrane Database Syst Rev. 2005;2:CD004211.
45. Jeffery R, Iserman E, Haynes RB; CDSS Systematic Review Team. Can computerized clinical decision support systems improve diabetes management? A systematic review and meta-analysis. Diabet Med. 2013;30:739–45.
46. Scott A, Sivey P, Ait Ouakrim D, et al. The effect of financial incentives on the quality of health care provided by primary care physicians. Cochrane Database Syst Rev. 2011;9:CD008451.
47. Carlsen B, Glenton C, Pope C. Thou shalt versus thou shalt not: a meta-synthesis of GPs’ attitudes to clinical practice guidelines. Br J Gen Pract. 2007;57:971–8.
48. Francke AL, Smit MC, de Veer AJ, et al. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak. 2008;8:38.
49. Schunemann HJ, Cook D, Grimshaw J, et al. Antithrombotic and thrombolytic therapy: from evidence to application: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest. 2004;126:688S–96S.
50. Simpson SH, Marrie TJ, Majumdar SR. Do guidelines guide pneumonia practice? A systematic review of interventions and barriers to best practice in the management of community-acquired pneumonia. Respir Care Clin N Am. 2005;11:1–13.
51. Salam RA, Lassi ZS, Das JK, et al. Evidence from district level inputs to improve quality of care for maternal and newborn health: interventions and findings. Reprod Health. 2014;11(Suppl 2):S3.
52. Dulko D. Audit and feedback as a clinical practice guideline implementation strategy: a model for acute care nurse practitioners. Worldviews Evid Based Nurs. 2007;4(Suppl 2):200–9.
53. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(Suppl 2):II2–45.
54. Grindrod KA, Patel P, Martin JE. What interventions should pharmacists employ to impact health practitioners’ prescribing practices? Ann Pharmacother. 2006;40:1546–57.
55. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2006;2:CD000259.
56. Brouwers MC, Garcia K, Makarski J, et al. The landscape of knowledge translation interventions in cancer control: what do we know and where to next? A review of systematic reviews. Implement Sci. 2011;6:130.
Appendix 1. Author Relationships with Industry and Other Entities (Comprehensive)—ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group (2012–2016)
|Committee Member|Employment|Consultant|Speakers Bureau|Ownership/Partnership/Principal|Personal Research|Institutional, Organizational, or Other Financial Benefit|Expert Witness|
|---|---|---|---|---|---|---|---|
|Wiley V. Chan, Co-Chair|Kaiser Permanente Northwest—Director of Guidelines and Evidence-Based Medicine (through 2014 and as a consultant currently)|None|None|None|• PCORI|• Community Clinics Health Network|None|
|Thomas A. Pearson, Co-Chair|University of Rochester Medical Center—Executive Vice President for Research and Education|None|None|None|• NCATS • NHLBI • PCORI|None|None|
|Glen C. Bennett|NHLBI—Program Coordinator|None|None|None|None|None|None|
|Graciela Castillo|American Institutes for Research—Senior Researcher|None|None|None|None|None|None|
|William C. Cushman|The University of Tennessee Health Science Center—Professor, Preventive Medicine|None|None|None|None|None|None|
|Thomas A. Gaziano|Harvard Medical School—Assistant Professor; Brigham and Women’s Hospital—Associate Physician in Cardiovascular Medicine|None|None|None|• NHLBI • NIA|None|None|
|Paul N. Gorman|Oregon Health and Science University—Associate Professor|None|None|None|• AHRQ • National Science Foundation|None|None|
|Joel Handler|Southern California Permanente Medical Group—Staff Physician|None|None|None|None|None|None|
|Susan K.R. Heil|American Institutes for Research—Health and Social Development|None|None|None|None|None|None|
|Julie C. Jacobson Vann|The University of North Carolina at Chapel Hill—Associate Professor|None|None|None|None|None|None|
|Harlan M. Krumholz|Yale-New Haven Hospital—Director|• United Healthcare|None|None|• Medtronic • Johnson & Johnson|None|None|
|Robert F. Kushner|Northwestern University—Professor in Medicine|• Novo Nordisk • Weight Watchers • Zafgen • Retrofit • Takeda|None|None|• Aspire Bariatrics|None|None|
|Thomas D. MacKenzie|Denver Health Foundation—Chief Quality Officer|None|None|None|• NCATS • NHLBI • PCORI|None|None|
|Ralph L. Sacco|University of Miami Health System—Chairman and Professor, Department of Neurology|• Boehringer Ingelheim|None|None|• NINDS • AHA • Duke Clinical Research Institute (DSMB) • UCSF (DSMB)|None|None|
|Sidney C. Smith, Jr.|University of North Carolina at Chapel Hill—Professor of Medicine|None|None|None|None|None|None|
|Jennifer Stephens|Johns Hopkins University—Graduate Student|None|None|None|None|None|None|
|Victor J. Stevens|Kaiser Permanente Center for Health Research—Research Scientist|None|None|None|• NIH|None|None|
|Barbara L. Wells|National Institutes of Health—Health Scientist Administrator|None|None|None|None|None|None|
Appendix 2. Reviewer Relationships with Industry and Other Entities (Comprehensive)—ACC/AHA Special Report: Clinical Practice Guideline Implementation Strategies: A Summary of Systematic Reviews by the NHLBI Implementation Science Work Group (March 2016)
|Reviewer||Representation||Employment||Consultant||Speakers Bureau||Ownership/Partnership/Principal||Personal Research||Institutional, Organizational, or Other Financial Benefit||Expert Witness|
|Ralph G. Brindis||Official Reviewer—ACC/AHA Task Force on Clinical Practice Guidelines||University of California—Clinical Professor of Medicine||• ACC*• FDA CV Device Panel†||None||None||• State of California OSHPD†||None||None|
|David C. Goff, Jr.||Official Reviewer—AHA||Wake Forest University Public Health Sciences Internal Medicine—Professor||None||None||None||• NIH*||• AHA*• Colorado School of Public Health†||None|
|Edward P. Havranek||Official Reviewer—AHA||Denver Health Medical Center—Cardiologist||• AHA*• NHLBI*• PCORI*||None||None||None||None||None|
|Robert C. Hendel||Official Reviewer—ACC Board of Trustees||University of Miami Cardiac Imaging and Outpatient Services—Director||• Adenosine Therapeutics• Astellas Pharma*||None||None||None||None||None|
|Srinivas Murali||Official Reviewer—ACC Board of Governors||Allegheny Health Network—Director, Cardiovascular Institute||• Actelion• Bayer||• Actelion||None||• Cardiokinetics• CVRx• Gilead• Actelion• Lung, LLC• Bayer||None||None|
|John A. Spertus||Official Reviewer—AHA||Saint Luke’s Mid America Heart Institute—Director, Health Outcomes Research; University of Missouri-Kansas City—Professor||• Amgen• Bayer• Copyright for SAQ, KCCQ, and PAQ• Janssen Pharmaceuticals• Novartis*• Regeneron†• United Healthcare (Scientific Advisory Board)||None||• Health Outcomes Sciences*||• ACC*• AHA†• Lilly*• Gilead Sciences*||None||None|