Research Article
Originally Published 9 January 2014

Percentile Ranking and Citation Impact of a Large Cohort of National Heart, Lung, and Blood Institute–Funded Cardiovascular R01 Grants

Abstract

Rationale:

Funding decisions for cardiovascular R01 grant applications at the National Heart, Lung, and Blood Institute (NHLBI) largely hinge on percentile rankings. It is not known whether this approach enables the highest impact science.

Objective:

Our aim was to conduct an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of funded NHLBI cardiovascular R01 grants.

Methods and Results:

We identified 1492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008 and followed their progress for linked publications and citations to those publications. Our coprimary end points were citations received per million dollars of funding, citations obtained within 2 years of publication, and 2-year citations for each grant’s maximally cited paper. In 7654 grant-years of funding that generated $3004 million of total National Institutes of Health awards, the portfolio yielded 16 793 publications that appeared between 2001 and 2012 (median per grant, 8; 25th and 75th percentiles, 4 and 14; range, 0–123), which received 2 224 255 citations (median per grant, 1048; 25th and 75th percentiles, 492 and 1932; range, 0–16 295). We found no association between percentile rankings and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus nonhuman focus, and institutional funding. An exploratory machine learning analysis suggested that grants with the best percentile rankings did yield more maximally cited papers.

Conclusions:

In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics.
The National Heart, Lung, and Blood Institute (NHLBI) looks to peer review to guide its funding decisions for investigator-initiated R01 grants, which make up the largest single component of its extramural portfolio.1 For the most part, successful applications are those that fall below a cut-off percentile ranking of peer review priority scores; this cut-off, or payline, is determined by budgetary considerations. Despite this longstanding tradition, many question the ability of peer review, as currently practiced, to identify the research proposals most likely to have high impact on scientific thought, clinical practice, or public policy.2–5 Although previous reports have questioned the internal consistency and validity of peer review,5 there are few data on the association, if any, between peer review assessments and postaward scientific achievement.4 We, therefore, conducted an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of NHLBI-funded cardiovascular R01 grants.

Methods

Study Sample

We considered all de novo investigator-initiated R01 grants that met the following inclusion criteria: (1) award on or after January 1, 2001, and before September 1, 2008; (2) duration of funding of ≥2 years; (3) assignment to a cardiovascular unit within NHLBI; and (4) receipt of a percentile ranking based on a priority score given by a National Institutes of Health (NIH) peer review study section.

Data Collection

We obtained grant-specific award and funding data from an internal NHLBI Tracking and Budget System, which includes information on investigator status (early stage or established), grantee institution, identity of peer review study section, percentile ranking, project start and end dates, involvement of human subjects, and total funding (including direct and indirect costs).

Outcomes

We used the NIH’s electronic scientific portfolio assistant (eSPA) to generate lists of grant-associated publications, along with publication-specific data on publication type (research or other), total citations, and citations received within 2 years of publication. Our common censor date was September 23, 2012. Our coprimary outcome measures were the number of total citations received per million dollars of NIH funding, the number of citations received within 2 years of publication, and the number of 2-year citations received by each grant’s most highly cited paper (ie, the 1 paper per grant that received the greatest number of 2-year citations). We also calculated each grant’s h-index (and h-index for 2-year citations), where a grant is given an index of h if it includes h papers that have each been cited at least h times and none of the grant’s other papers has received more than h citations.6 The eSPA system maps publications to specific grants with the Scientific Publication Information Retrieval and Evaluation System (http://era.nih.gov/nih_and_grantor_agencies/other/spires.cfm) and draws citation data from ISI Web of Science.
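The grant-level h-index defined here can be computed in a few lines. This is an illustrative sketch, not the authors’ code:

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A grant whose papers were cited [25, 8, 5, 4, 3, 1] times has h-index 4:
# 4 papers have at least 4 citations each, but only 3 papers have at least 5.
```

Applying the same function to 2-year citation counts yields the 2-year h-index described above.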
Because many publications were supported by >1 grant, we adjusted publication and citation counts by dividing by the number of grants acknowledged. Thus, if a paper acknowledged 3 grants and garnered 30 citations, each grant would be credited with one third of a publication and with 10 citations. We also performed a supplementary analysis restricted to papers that acknowledged only 1 grant.
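The fractional-credit rule can be expressed directly. The data structure and grant names below are hypothetical, for illustration only:

```python
from collections import defaultdict

def allocate_credit(papers):
    """Divide each paper's publication and citation credit evenly
    across the grants it acknowledges."""
    pubs = defaultdict(float)
    cites = defaultdict(float)
    for paper in papers:
        grants = paper["grants"]
        share = 1.0 / len(grants)
        for g in grants:
            pubs[g] += share
            cites[g] += paper["citations"] * share
    return dict(pubs), dict(cites)

# Worked example from the text: one paper acknowledging 3 grants, 30 citations.
pubs, cites = allocate_credit([{"grants": ["G1", "G2", "G3"], "citations": 30}])
# Each grant is credited with 1/3 of a publication and 10.0 citations.
```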
For descriptive purposes only, we obtained aggregate data on journals in which publications appeared and on their medical subject heading (MeSH) terms using PubMed PubReminer (http://hgserver2.amc.nl/cgi-bin/miner/miner2.cgi).

Statistical Analyses

For descriptive purposes, we present baseline characteristics of grants as numbers and percentages for categorical variables and as medians with 25th and 75th percentiles for continuous variables, stratified by 3 percentile ranking categories: ≤10.0%, 10.1% to 20.0%, and >20.0%. We also described unadjusted citation statistics with a Pareto plot, which shows the sum of total citations received within citation deciles; a classic Pareto plot demonstrates, for example, that 20% of inputs (eg, employees) generate 80% of outputs (eg, productivity). To describe the association of bibliometric outcomes with percentile rankings, we computed and plotted nonparametric locally weighted scatterplot smoothing estimates. We performed multivariable linear regression analyses to account for associations with study type (human subjects or not), grant duration, new investigator status, calendar year of first award, average number of grants acknowledged per paper, average number of authors per paper, average annual funding (in million dollars per year), and total institutional funding within the portfolio of all grants included in the study sample. Because both publications and citations per million dollars allocated have right-skewed distributions, we applied natural logarithmic transformations, ln(Publications/$Million + 1) and ln(Citations/$Million + 1), and performed goodness-of-fit tests for the linear models using nonparametric analysis of deviance F-tests.7
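The Pareto summary behind Figure 1 (the paper used the R qcc package) amounts to sorting grants by productivity, cutting them into deciles, and computing each decile’s share of all citations; the regression outcome transformation is also shown. This is an illustrative re-implementation on synthetic data, not the study data set:

```python
import numpy as np

def decile_shares(values):
    """Sort values from largest to smallest, split into 10 equal-sized
    groups, and return each decile's share of the grand total."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    total = v.sum()
    return [chunk.sum() / total for chunk in np.array_split(v, 10)]

# Synthetic right-skewed citation data (illustrative only; not the study data).
rng = np.random.default_rng(0)
citations_per_million = rng.lognormal(mean=6.0, sigma=1.0, size=1492)

shares = decile_shares(citations_per_million)
top_40_share = sum(shares[:4])  # share generated by the 40% most productive grants

# The regressions modeled ln(Citations/$Million + 1) to address the right skew.
log_outcome = np.log(citations_per_million + 1.0)
```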
To further evaluate the independent association of percentile rankings with bibliometric outcomes, we constructed Breiman random forests, machine learning–based models that allow robust assessment of complex associations. As described previously, we assessed relative variable importance with a measure that reflects the gain in discrimination from adding a variable, as well as with average minimal depth (where 1 is best and higher values suggest lesser importance).8 Statistical analyses were conducted with SAS 9.2 (SAS Institute Inc), Spotfire S+, and R. We used the qcc package to present the distribution of citations graphically and the randomForestSRC package to construct random forests. We will make copies of the analysis data sets available to interested investigators on request.
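The paper’s random forests were built with randomForestSRC in R and ranked variables by minimal depth. The sketch below illustrates the same idea (ranking covariates by predictive importance for a citation outcome) using scikit-learn’s permutation importance on synthetic data; the covariates and the outcome relationship are stand-ins, not the study data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 500

# Hypothetical stand-ins for three of the covariates in Table 1.
percentile_rank = rng.uniform(0, 42, n)
grants_per_paper = rng.uniform(1, 5, n)
authors_per_paper = rng.uniform(3, 9, n)
X = np.column_stack([percentile_rank, grants_per_paper, authors_per_paper])

# Synthetic outcome driven by grants acknowledged per paper,
# mimicking the dominant-predictor pattern reported in Figure 4A.
y = 2.0 * grants_per_paper + rng.normal(0.0, 1.0, n)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)

# Rank covariates from strongest to weakest predictor.
names = ["percentile rank", "grants/paper", "authors/paper"]
order = np.argsort(result.importances_mean)[::-1]
strongest = names[order[0]]
```

Permutation importance differs from minimal depth, but both answer the same question: how much does each covariate contribute to the forest’s predictions?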

Results

Baseline Characteristics

There were 1492 cardiovascular R01 grants that met our inclusion criteria. Table 1 summarizes the baseline characteristics of these grants stratified by the percentile ranking categories of ≤10.0%, 10.1% to 20.0%, and >20.0%. Grants with lower percentile rankings had higher funding levels and longer durations.
Table 1. Characteristics and Bibliometric Outcomes of the 1492 Cardiovascular R01 Grants That Met the Inclusion Criteria
Percentile Ranking | <10.0% | 10.0%–19.9% | 20.0%–41.8% | P Value
Number of grants | 487 | 574 | 431 |
Percentile | 3.2/5.7/7.7 | 12.3/14.5/17.1 | 22.0/24.7/28.0 | <0.001
New investigator | 29% (139) | 27% (159) | 34% (144) | 0.34
Human studies | 34% (168) | 31% (180) | 37% (160) | 0.16
Costs, $mn | 1.5/1.8/2.8 | 1.3/1.6/2.5 | 1.2/1.5/2.6 | <0.001
Duration, y | 4.0/5.0/5.0 | 4.0/4.0/5.0 | 3.9/4.0/5.0 | <0.001
Annual costs, $mn/y | 0.32/0.37/0.44 | 0.31/0.36/0.43 | 0.29/0.35/0.41 | <0.001
Institutional funding in portfolio, $mn | 15/30/45 | 13/29/44 | 12/27/41 | 0.052
Bibliometric measures
 Number of publications | 5680 | 6134 | 4979 |
 Average authors per paper | 4.7/5.8/7.2 | 4.7/5.6/7.0 | 4.5/5.6/7.0 | 0.27
 Average grants acknowledged per paper | 1.9/2.5/3.4 | 1.8/2.4/3.3 | 1.8/2.4/3.1 | 0.064
Bibliometric outcomes for each grant | | | | Adjusted P Value*
 Number of publications | 4.0/8.0/14.5 | 4.0/8.0/14.0 | 4.0/8.5/14.8 | 0.84
 Number of research publications | 4.0/7.0/12.0 | 3.5/6.8/12.0 | 3.5/7.0/12.0 | 0.87
 Citations to all papers | 484/1059/1874 | 443/958/1829 | 552/1182/2130 | 0.61
 Citations per million dollars spent | 231/537/1003 | 252/573/981 | 289/736/1301 | 0.87
 Citations <2 y of publication | 16/41/98 | 13/35/77 | 15/38/91 | 0.22
 Citations <2 y to paper with most 2-y citations | 7/14/25 | 6/12/21 | 6/13/20 | 0.13
 h-index | 3/6/11 | 3/6/11 | 3/7/12 | 0.88
 h-index (based on citations <2 y of publication) | 2/3/6 | 2/3/5 | 2/3/6 | 0.48
Values shown are percentage (number) or 25th/50th/75th quantiles as appropriate.
*Adjusted P values are based on linear additive models that account for average annual funding (except in the model for citations per million dollars spent), project duration, calendar year of initial award, inclusion of human subjects, new investigator status of the principal investigator, total within-portfolio institutional funding, average number of authors per paper, and average number of grants acknowledged per paper.

Academic Productivity

In 7654 grant-years of funding that generated $3004 million of total NIH awards, the portfolio of 1492 grants yielded 16 793 publications (median per grant, 8; 25th and 75th percentiles, 4 and 14; range, 0–123), which received in total 2 224 255 citations (median per grant, 1048; 25th and 75th percentiles, 492 and 1932; range, 0–16 295) and 109 305 citations within 2 years of publication (median per grant, 38; 25th and 75th percentiles, 15 and 87; range, 0–1302). The median grant h-index was 6 (25th and 75th percentiles, 3 and 11; range, 0–72), whereas the median h-index for 2-year citations was 3 (25th and 75th percentiles, 2 and 5; range, 0–22).
Table 2 presents the most common journals in which publications appeared and the publications’ most common MeSH terms. The 5 most popular journals were American Journal of Physiology (Heart and Circulatory Physiology), Journal of Biological Chemistry, Circulation, Circulation Research, and Hypertension. The 10 most common MeSH terms were animals, humans, male, female, mice, rats, middle aged, cells (cultured), signal transduction, and myocardium.
Table 2. Most Commonly Used Journals (Along With Other Journals of Interest) in Which Papers Acknowledging the 1492 Investigator-Initiated R01 Grants Appeared, and Most Common Medical Subject Heading (MeSH) Terms for Those Papers
Most commonly used journals (number): Am J Physiol Heart Circ Physiol (955), J Biol Chem (649), Circulation (611), Circ Res (508), Hypertension (303), J Mol Cell Cardiol (295), Proc Natl Acad Sci U S A (252), Arterioscler Thromb Vasc Biol (235), Am J Physiol Regul Integr Comp Physiol (192), Cardiovasc Res (181)
Other journals of interest (number): JAMA (50), Nature (21), N Engl J Med (21), Cell (21), Lancet (19), Science (17)
Most common MeSH terms (number): Animals (9637), Humans (8095), Male (5454), Female (4007), Mice (3558), Rats (2854), Middle Aged (1940), Cells, Cultured (1805), Signal Transduction (1553), Myocardium (1458), Adult (1429), Aged (1359), Myocytes, Cardiac (1158), Blood Pressure (1129), Mice, Knockout (1121), Disease Models, Animal (1121), Time Factors (1117), Mice, Inbred C57BL (1104)
Data are presented for descriptive purposes only.
The median number of publications per million dollars allocated was 4.6 (25th and 75th percentiles, 2.4 and 7.9; range, 0–55). The median number of citations per million dollars allocated was 600 (25th and 75th percentiles, 259 and 1072; range, 0–7269). The number of citations received per million dollars allocated followed an attenuated Pareto distribution; as shown in Figure 1A, the 40% most productive grants generated 76% of productivity, whereas the 40% least productive grants generated only 5%. Similarly, the number of citations received within 2 years of publication followed a Pareto distribution; as shown in Figure 1B, the 40% most productive grants generated 83% of productivity, whereas the 40% least productive grants generated only 3%.
Figure 1. Pareto plots of grant citation metrics for 1492 grants that met the inclusion criteria. The x axis divides the number of grants into deciles according to the number of citations per million dollars spent (A) and number of citations received within 2 years of publication (B). Red bars show the citation metrics within each decile of grants, whereas the line graph shows cumulative values going from the best- to the worst-producing deciles of grants. The top 3 deciles (ie, the 30% most productive grants) generated 75% of the citations received within 2 years of publication.

Publications and Citations According to Percentile Ranking

There were no associations between percentile rankings and any of the publication and citation metrics we considered (Table 1, bottom). Figure 2A presents grant-specific data on citations per million dollars allocated according to percentile ranking and grant type (human versus nonhuman); Figure 2B to 2D shows the corresponding data for 2-year citations, 2-year citations for maximally cited papers, and grant h-index. Figure 3 shows the data for the 6 study sections that reviewed the largest number of funded grants; again, even within each study section, there was no association between percentile rankings and citations received per million dollars allocated.
Figure 2. Bibliometric outcomes according to percentile ranking for 1492 R01 grants that met the inclusion criteria. All plots show values stratified by human or nonhuman study focus; curves were generated by locally weighted scatterplot smoothing fitting. A, Data for citations received per million dollars allocated. B, Data for 2-year citations. C, Data for 2-year citations for each grant’s most highly cited paper (ie, for each grant, we identified which paper generated the most 2-year citations and plotted the number of citations generated by that paper according to percentile ranking). D, Data for each grant’s h-index.
Figure 3. Citations per million dollars allocated according to percentile rankings stratified by the 6 study sections that reviewed the highest number of funded grants. Curves were generated by locally weighted scatterplot smoothing fitting.
In a machine learning Breiman random forest model, which accounted for the same covariates listed in Table 1, the strongest predictor of citations per million dollars allocated was average number of grants acknowledged per paper, whereas percentile ranking was a much weaker predictor (Figure 4A). There was no clear monotonic association between percentile rankings and citations per million dollars allocated (Figure 4B). The association between average number of grants acknowledged per paper and citations per million dollars allocated followed an inverse-V shape, with a peak at ≈3 to 4 grants (Figure 4C). There was a weak association between percentile rankings and 2-year citations for any given grant’s most highly cited paper, with 2-year citation rates highest for grants with a percentile ranking <10 (Figure 4D).
Figure 4. Random forest findings. A, Relative importance of candidate variables for prediction of citations per million dollars allocated. The y axis value corresponds to the change in model discrimination by the addition of that variable. Hence, mean number of grants acknowledged per paper is the strongest predictor, whereas percentile ranking is much weaker. B, Association between citations per million dollars (after logarithmic transformation) and percentile rankings after accounting for all other variables in the x axis of A. C, Corresponding values of citations per million dollars and mean number of grants acknowledged per paper. D, A different model showing the association between 2-year citations for each grant’s most highly cited paper and percentile rankings after accounting for all other variables in the x axis of A plus average annual funding. In this model, the mean number of grants acknowledged per paper was again the strongest predictor, whereas percentile ranking was much weaker.

Papers That Acknowledged Only 1 Grant

In a supplementary analysis, we identified 927 R01 grants that produced ≥1 paper that acknowledged only 1 grant. The 4122 single-grant papers (representing 25% of all papers) received 548 024 citations, of which 21 615 occurred within 2 years of publication. In unadjusted analyses, there was no association between percentile rankings and total number of citations received (Figure 5A) or citations received within 2 years (Figure 5B). After accounting for covariates, there was no association between percentile rankings and total citations (adjusted P=0.40) or citations received within 2 years (adjusted P=0.59).
Figure 5. Bibliometric outcomes according to percentile rankings for 927 R01 grants that generated ≥1 paper that acknowledged only 1 grant. A, Total citations. B, Two-year citations.

Discussion

We analyzed the bibliometric outcomes of 1492 cardiovascular R01 grants that received initial funding between 2001 and 2008 according to peer review percentile rankings. We found no clear association between percentile rankings and outcomes; as percentile ranking decreased (meaning a better score), we did not observe a corresponding monotonic increase in publications produced or citations received per million dollars spent. The absence of association persisted even after accounting for select confounders, for human versus nonhuman research focus, and for review by specific high-volume study sections.
In an exploratory machine learning analysis, we found an intriguing, though admittedly weak, pattern whereby grants scoring in the 10th percentile or better generated more 2-year citations for maximally cited papers. This pattern suggests that peer review may identify grants that generate home runs, that is, individual papers with unusually high impact. Given that scientific discovery is inherently heavy-tailed,9 our observation is worth exploring in other grant cohorts. It may be particularly relevant now that the funding climate is less generous than it once was.
Our findings are consistent with previous impressions that peer review assessments of grant applications are relatively crude predictors of subsequent productivity.2,4,5,10,11 Critics argue that the current approach to selecting proposals for funding has no evidence base.10,11 Some empirical work suggests that selection mechanisms that focus on researcher track records, instead of peer review assessments of project proposals, may better predict subsequent high-impact publications and willingness to consider innovative ideas.12
If percentile ranking does not predict scientific outcome, there is a rationale for considering other approaches to evaluating proposals and choosing which ones to fund. Kaplan2 suggested that the study section committee structure inherently discriminates against innovative projects and identified alternative peer review methods, ranging from appointing prescient individuals to using highly inclusive Web-based crowdsourcing. Ioannidis10 identified 6 possible alternative options for choosing projects to fund: egalitarian (fund all but at low amounts), aleatoric (fund at random, an approach being used by some sponsors), career assessment, automated impact indices, scientific citizenship, and projects with broad goals. He and others acknowledge that it is not known which approach (if any) is best and that funding agencies, including NIH, should therefore consider conducting randomized trials.13 The National Cancer Institute recently modified its approach to funding decisions, restricting automatic funding to grants with percentile rankings of ≤7 while staff scientists undertake initial review to decide which additional grants to fund. Our exploratory machine learning analyses (Figure 4D) might offer support for such a strategy: automatically fund applications with top-notch percentile rankings while taking a more deliberative approach for all others.
There are limitations to our analyses. There is no clear gold standard for measuring research success or impact. We focused on the number of citations, and specifically citations according to funding, as our primary end point, an end point consonant with those used by others interested in measuring scientific productivity.14 Citations represent a measure of interest on the part of the scientific community; recent work is focusing on newer Web-based, arguably more sensitive, measures of impact. Work on rare diseases may attract interest from a smaller community, yet still be of substantial scientific value. We deliberately chose not to analyze outcomes according to journal impact factor, because this practice has recently been singled out for intense criticism.15 Nature, a journal with one of the highest impact factors in biomedical science, has argued against the use of impact factor for describing the impact of individual papers, noting that only a small proportion of its papers yields the vast majority of its citations.16 Recently, the Editor-in-Chief of Science, another journal with a high impact factor, critiqued this “misuse” of impact factors, which were “never intended to be used to evaluate individual scientists.”17 Our analyses focused only on funded R01 cardiovascular grant applications but still covered a diversity of projects, scientists, and scientific institutions. There were many confounders we did not consider, such as detailed preapplication metrics of principal investigators.
Finally, we did not compare the productivity of scientists who successfully secured funding with that of scientists who did not; such an analysis was beyond the scope of our study. One report from the National Bureau of Economic Research suggested a moderate impact of R01 funding on scientific productivity.18 Nonetheless, because we were only able to analyze outcomes for grants that scored within a relatively narrow and positively received range, our findings are consistent with an argument that NIH funding levels are inadequate to support all potentially fundable high-quality work.
Despite these limitations, we noted a striking lack of association between percentile rankings and bibliometric outcomes in a large cohort of cardiovascular R01 grants. Our findings offer justification for further research and consideration into innovative approaches for evaluating research proposals and for selecting projects for funding.

Acknowledgments

We are grateful to Dr Frank Evans for his invaluable assistance in assembling the analysis data set. We also wish to thank the anonymous peer reviewers for their constructive comments and suggestions, and in particular for their queries on how to handle papers that acknowledge support from multiple grants and on the possibility of calculating grant-specific h-indices.

Footnote

Nonstandard Abbreviations and Acronyms

eSPA
electronic scientific portfolio assistant
MeSH
medical subject heading
NHLBI
National Heart, Lung, and Blood Institute

References

1. Galis ZS, Hoots WK, Kiley JP, Lauer MS. On the value of portfolio diversity in heart, lung, and blood research. Circ Res. 2012;111:833–836.
2. Kaplan D. Social choice at NIH: the principle of complementarity. FASEB J. 2011;25:3763–3764.
3. Mayo NE, Brophy J, Goldberg MS, Klein MB, Miller S, Platt RW, Ritchie J. Peering at peer review revealed high degree of chance associated with funding of grant applications. J Clin Epidemiol. 2006;59:842–848.
4. Demicheli V, Di Pietrantonj C. Peer review for improving the quality of grant applications. Cochrane Database Syst Rev. 2007:MR000003.
5. Graves N, Barnett AG, Clarke P. Funding grant proposals for scientific research: retrospective analysis of scores by members of grant review panel. BMJ. 2011;343:d4797.
6. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102:16569–16572.
7. Hastie T, Tibshirani R. Generalized Additive Models. London: Chapman and Hall; 1990.
8. Ishwaran H, Kogalur UB, Gorodeski EZ, Minn AJ, Lauer MS. High-dimensional variable selection for survival data. J Am Stat Assoc. 2010;105:205–217.
9. Press WH. Presidential address. What’s so special about science (and how much should we spend on it?). Science. 2013;342:817–822.
10. Ioannidis JP. More time for research: fund people not projects. Nature. 2011;477:529–531.
11. Langer JS. Enabling scientific innovation. Science. 2012;338:171.
12. Azoulay P, Graff Zivin JS, Manso G. Incentives and creativity: evidence from the academic life sciences. RAND J Econ. 2011;42:527–554.
13. Dizikes P. How to encourage big ideas. MIT News. Available at: http://web.mit.edu/newsoffice/2009/creative-research-1209.html. Accessed September 1, 2013.
14. Fortin JM, Currie DJ. Big science vs. little science: how scientific impact scales with funding. PLoS One. 2013;8:e65263.
15. Balaban RS. Evaluation of scientific productivity and excellence in the NHLBI Division of Intramural Research. J Gen Physiol. 2013;142:177–178.
16. Not-so-deep impact. Nature. 2005;435:1003–1004.
17. Alberts B. Impact factor distortions. Science. 2013;340:787.
18. Jacob BA, Lefgren L. The impact of research grant funding on scientific productivity. J Public Econ. 2011;95:1168–1177.


Information & Authors


Published In


Circulation Research
Pages: 600 - 606
PubMed: 24406983

History

Received: 19 September 2013
Revision received: 3 January 2014
Accepted: 8 January 2014
Published online: 9 January 2014
Published in print: 14 February 2014


Keywords

  1. bibliometrics
  2. National Heart, Lung, and Blood Institute (U.S.)


Authors

Affiliations

Narasimhan Danthi,* Colin O. Wu,* Peibei Shi, Michael Lauer
From the Advanced Technologies and Surgery Branch (N.D.), the Office of Biostatistics Research (C.O.W., P.S.), and the Office of the Director (M.L.), Division of Cardiovascular Sciences of the National Heart, Lung, and Blood Institute (NHLBI), Bethesda, MD.

Notes

The views expressed in this article are those of the authors and do not necessarily represent those of the NHLBI, the National Institutes of Health, or the US Department of Health and Human Services.
*These authors contributed equally to this article.
Correspondence to Michael Lauer, MD, Division of Cardiovascular Sciences of the National Heart, Lung, and Blood Institute, 6701 Rockledge Dr, Room 8128, Bethesda, MD 20824-0105. E-mail [email protected]

Disclosures

None.

Sources of Funding

All authors were full-time employees of the National Heart, Lung, and Blood Institute at the time they worked on this project.


Circulation Research, Vol. 114, No. 4