
Are Meta-Analyses a Form of Medical Fake News?

Thoughts About How They Should Contribute to Medical Science and Practice
  • Milton Packer, MD
  • Baylor Heart and Vascular Institute, Baylor University Medical Center, Dallas, TX.
Originally published in Circulation. 2017;136:2097–2099

How many dreadful manuscripts describing the results of a meta-analysis are submitted to and rejected from journals each year? We cannot know, but many published meta-analyses do not use appropriate methods or contribute meaningfully to medical thought or patient care. Some journals avoid all meta-analyses, whereas others pride themselves on publishing only the best; still others are delighted to have anything to print in an era where the number of opportunities to publish greatly exceeds the number of valid observations.

Many have critically examined the methodology of meta-analyses, and others have set standards for their execution. Despite such guidance, meta-analyses continue to proliferate, but we should ask: do they really contribute? Esteemed organizations regard the conclusions of a well-executed meta-analysis as a higher level of evidence than those of a single well-done clinical trial. This commentary explains why that cannot possibly be true.

A Meta-Analysis Is Only an Imperfect Observational Study

Many physicians believe (incorrectly) that there is something magical about a meta-analysis. A meta-analysis is an observational study, but the author does no original work. Someone simply notices that several articles have data that pertain to a common topic and that they might show similar patterns. How can the patterns be described? In the past, the favored approach was to depict these in a narrative, but this task required insight into the details of each trial and a willingness to ask whether differences in design or execution might have contributed to differences in a study’s findings. The current approach to meta-analysis requires no such intellectual effort; little knowledge is needed about any trial, except that it possesses certain minimum features. Advocates of meta-analyses claim that they select trials for inclusion or exclusion based solely on their methodological qualities without awareness of their results, but it is difficult to understand how that could happen. Can the author of a meta-analysis claim to have read only the methods section of an article, but ignored the title, abstract, results, and discussion?

In reality, a meta-analysis is a mathematical method for combining data, in which each trial is weighted by the quantity, but not the quality, of its observations. A trial that recorded many events, but was done imperfectly and was plagued by missing or confusing data, is given more weight than a small trial that was done impeccably but recorded few events. The methodology of meta-analysis increases the precision (but not the truthfulness) of any estimate. Yet, to gain this additional measure of confidence, we must combine trials that used different designs, tested different doses for different durations, and made observations with different degrees of care. It is akin to thinking that one can evaluate a baseball team’s seasonal performance by summing the differences in score at the end of the first inning of a few games, without paying attention to which players took the field or which team was the opponent. Add the uncertainty that some games were played under special rules and that the scores of others are ignored entirely. To make matters worse, games with high-scoring first innings would be given more weight than games in which neither team scored a run. Try using this approach to predict how a specific team will fare at the end of the season. Why would one engage in such an exercise? A full analysis of the team’s performance over an entire year is far more enlightening than a look at a few innings. That is why a robust finding in a large-scale, definitively designed trial is far more reliable than the inferences of most meta-analyses.1 Such a conclusion should not surprise us. Because of their observational nature, meta-analyses are hypothesis-generating. We do not intend for their findings to establish anything; instead, we expect them to be confirmed or refuted in a subsequent definitive trial.
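To make the weighting point concrete, the sketch below (in Python, with invented counts that are not drawn from any real trial) applies conventional fixed-effect, inverse-variance pooling to two hypothetical studies. Each trial’s weight reflects only the amount of statistical information it contributes, so the large, imperfect trial dominates the pooled estimate no matter how carefully the small trial was conducted.

```python
# A minimal sketch of fixed-effect, inverse-variance pooling (invented counts,
# not data from any real trial). Each trial's weight reflects only the amount
# of statistical information it contributes -- never how well it was conducted.
import math

# (events_treated, n_treated, events_control, n_control) -- hypothetical numbers
trials = {
    "large, imperfect trial": (180, 2000, 220, 2000),
    "small, impeccable trial": (8, 150, 15, 150),
}

weights, weighted_effects = {}, {}
for name, (et, nt, ec, nc) in trials.items():
    log_rr = math.log((et / nt) / (ec / nc))      # log risk ratio
    var = 1 / et - 1 / nt + 1 / ec - 1 / nc       # approximate variance of log RR
    weights[name] = 1 / var                       # more events -> more weight
    weighted_effects[name] = weights[name] * log_rr

pooled_log_rr = sum(weighted_effects.values()) / sum(weights.values())
for name in trials:
    share = 100 * weights[name] / sum(weights.values())
    print(f"{name}: {share:.0f}% of the pooled weight")
print(f"pooled risk ratio: {math.exp(pooled_log_rr):.2f}")
```

With these invented numbers, the large trial carries roughly 95% of the pooled weight; nothing in the arithmetic knows, or can know, which trial was done well.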

What Types of Meta-Analyses Should Particularly Alarm Us?

Regardless of their strengths or weaknesses, many meta-analyses have features that should alarm us, even if the work comports with conventional standards of design and execution. Here, I highlight a few particularly troublesome examples among the many that plague the meta-analysis of randomized controlled trials; the additional sources of bias that are inherent in meta-analyses of observational studies are beyond the scope of this article.

Conclusions of Meta-Analyses Should Not Rely on Small Numbers of Events

Many physicians believe that a meta-analysis is an excellent tool when the number of events in individual trials is small. Although this premise has some validity, the total number of events in the pooled analysis still matters. If each of 10 trials collected <5 events, the resulting meta-analysis would be based on <50 events. Such an estimate would be more precise than that provided by any of the individual reports, but it would still rest on too few events to provide a replicable truth. For a meta-analysis to yield a stable and reliable estimate, the total number of events should exceed 200 to 300.2,3 How many meta-analyses are based on that much information?
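As a rough, back-of-envelope illustration (my own arithmetic, not figures taken from the cited studies): if events are split roughly evenly between arms and the denominators are large, the variance of a pooled log odds ratio is approximately 4 divided by the total number of events, so the confidence interval narrows only with the square root of the event count.

```python
# Back-of-envelope sketch: approximate 95% CI around a pooled odds ratio of 1.0,
# assuming events split evenly between arms and large denominators, so that
# var(log OR) ~ 4 / total_events. Purely illustrative.
import math

for total_events in (50, 300):
    se = math.sqrt(4 / total_events)
    lo, hi = math.exp(-1.96 * se), math.exp(1.96 * se)
    print(f"{total_events:>3} total events: 95% CI roughly {lo:.2f} to {hi:.2f}")

# ~50 events:  roughly 0.57 to 1.74 -- compatible with substantial benefit or harm
# ~300 events: roughly 0.80 to 1.25 -- still wide, but far more informative
```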

Be Wary of Meta-Analyses That Rely on Indirect Comparisons

If no evidence is available from head-to-head trials comparing 2 interventions, but trials comparing each of the interventions with the same comparator have been conducted, one is tempted to use indirect comparisons to estimate the results of the nonexistent head-to-head trial. Sadly, this approach is based on assumptions that are rarely fulfilled. A meta-analysis that incorrectly concluded that naproxen was safer than other nonsteroidal anti-inflammatory drugs relied on indirect comparisons that assumed that selective cyclo-oxygenase inhibitors were identically toxic.4 A subsequent large definitive trial failed to confirm any safety advantage for naproxen.5 We should be wary of meta-analyses that predict the likely results of comparative experiments that have never been performed.
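For readers unfamiliar with how such estimates are built, below is a minimal sketch of a Bucher-style adjusted indirect comparison, with invented effect sizes. The arithmetic is simple; the danger lies in the assumption it silently encodes, namely that the trials of each drug against the shared comparator are similar enough to be interchangeable, and the resulting estimate is also less precise than either direct comparison.

```python
# Minimal sketch of a Bucher-style adjusted indirect comparison (invented numbers).
# Drugs A and B have each been compared with the same comparator C, but never with
# each other; the indirect A-vs-B estimate assumes the two sets of trials are
# exchangeable -- the assumption that is rarely fulfilled in practice.
import math

log_or_ac, se_ac = math.log(0.85), 0.10   # hypothetical: A vs C
log_or_bc, se_bc = math.log(0.95), 0.12   # hypothetical: B vs C

log_or_ab = log_or_ac - log_or_bc             # subtract the log effects
se_ab = math.sqrt(se_ac**2 + se_bc**2)        # variances add: precision is lost

lo = math.exp(log_or_ab - 1.96 * se_ab)
hi = math.exp(log_or_ab + 1.96 * se_ab)
print(f"indirect OR for A vs B: {math.exp(log_or_ab):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```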

Meta-Analyses Should Not Tell Us What We Already Know or Obscure What We Should Remember

Meta-analyses should provide insights that are superior to those provided by a narrative summary of the data. If large-scale trials of 3 different β-blockers (bisoprolol, carvedilol, and metoprolol) for heart failure each reported a nearly identical 35% reduction in all-cause mortality, little purpose would be served by performing a meta-analysis of the 3 trials. Not only would such a meta-analysis add no new information, but its results would also not necessarily apply to other β-blockers (eg, bucindolol and nebivolol). Many meta-analyses only confirm existing knowledge, and they may conceal meaningful differences that are best understood descriptively rather than mathematically.

What Should We Expect From a Meta-Analysis?

Meta-analyses can be useful if they provide novel findings that reflect the design of their component trials and are based on a meaningful amount of evidence. Under these circumstances, a meta-analysis can yield an answer whose reliability approximates or exceeds that of a single definitive trial. Unfortunately, the vast majority of meta-analyses do not approach such a standard. Many exist only because they easily create a publication record for their authors. We may wish to believe that mathematical modeling of patterns yields more insight than a narrative summary. Yet, our view of the advantages of a meta-analysis must ultimately rest on the answer to a simple question: if no meta-analyses had been performed over the past 40 years, would our knowledge of cardiovascular medicine be any different from what it is today?

Footnotes

The opinions expressed in this article are not necessarily those of the editors or of the American Heart Association.

Circulation is available at http://circ.ahajournals.org.

Correspondence to: Milton Packer, MD, Baylor Heart and Vascular Institute, Baylor University Medical Center, 621 N Hall St, Dallas, TX 75226.

References

  • 1. LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med. 1997;337:536–542. doi: 10.1056/NEJM199708213370806.
  • 2. Pereira TV, Ioannidis JP. Statistically significant meta-analyses of clinical trials have modest credibility and inflated effects. J Clin Epidemiol. 2011;64:1060–1069. doi: 10.1016/j.jclinepi.2010.12.012.
  • 3. Flather MD, Farkouh ME, Pogue JM, Yusuf S. Strengths and limitations of meta-analysis: larger studies may be more reliable. Control Clin Trials. 1997;18:568–579; discussion 661–666.
  • 4. Coxib and traditional NSAID Trialists’ (CNT) Collaboration, Bhala N, Emberson J, Merhi A, Abramson S, Arber N, Baron JA, Bombardier C, Cannon C, Farkouh ME, FitzGerald GA, Goss P, Halls H, Hawk E, Hawkey C, Hennekens C, Hochberg M, Holland LE, Kearney PM, Laine L, Lanas A, Lance P, Laupacis A, Oates J, Patrono C, Schnitzer TJ, Solomon S, Tugwell P, Wilson K, Wittes J, Baigent C. Vascular and upper gastrointestinal effects of non-steroidal anti-inflammatory drugs: meta-analyses of individual participant data from randomised trials. Lancet. 2013;382:769–779. doi: 10.1016/S0140-6736(13)60900-9.
  • 5. Nissen SE, Yeomans ND, Solomon DH, Lüscher TF, Libby P, Husni ME, Graham DY, Borer JS, Wisniewski LM, Wolski KE, Wang Q, Menon V, Ruschitzka F, Gaffney M, Beckerman B, Berger MF, Bao W, Lincoff AM; PRECISION Trial Investigators. Cardiovascular safety of celecoxib, naproxen, or ibuprofen for arthritis. N Engl J Med. 2016;375:2519–2529. doi: 10.1056/NEJMoa1611593.