Expectations that publicly funded health research should be productive, in terms of both research publication outputs and contributions to better health outcomes, are becoming increasingly explicit.1,2 This has directed attention to methods for tracking research outputs, where scholarly publication metrics — impact factors and citations — are currently the dominant indices.2,3 Publication of research is expected to disseminate new knowledge and facilitate “real-world” policy and practice impacts.
While the recent emphasis on research productivity spans all types of research,1,4-6 intervention research is particularly relevant, as its findings are likely to be more directly applicable to health policy and practice.7-9 Intervention studies tend to be less prevalent in peer-reviewed journals than descriptive and epidemiological studies, and this has been partly attributed to the practical and scientific challenges of conducting intervention research.9-11
Few studies have empirically investigated the implementation and outputs of health intervention research. As part of a project on the impact of a sample of intervention research funded by the National Health and Medical Research Council (NHMRC), we examine here the research outputs of these grants. Specifically, we report:
- the descriptive profile of NHMRC-funded intervention research in terms of topics, settings, funding terms, and stages of development of the interventions; and
- whether and how statistically significant intervention effects on primary outcome variables influenced research productivity.
Methods
Data were collected between 23 July 2012 and 10 December 2013 on studies funded by the NHMRC between 1 January 2003 and 31 December 2007. Studies were eligible if they fitted our definition of health intervention research, which was: “any form of trial or evaluation of a service, program or strategy aimed at disease, injury or mental illness prevention, health promotion or psychological intervention, conducted with general or special populations, or in clinical or institutional settings”. Clinical trials of potentially prescribable drugs, vaccines and diagnostic tests were excluded.
Eligibility was assessed by two coders who reviewed titles, application abstracts, end-of-project reports to the NHMRC and publications arising from the grant. The 5-year sampling period was selected to allow enough time for completion of the research and publication of the findings, balanced against the need to limit recall bias about studies completed long ago.
Descriptive profile
Basic information on sample grants was collated, including the duration of funding and the topic of the intervention. The studies were classified according to “stage of intervention development”, based on definitions from a previously published guide, distinguishing interventions tested under controlled conditions (efficacy), those carried out in real-life conditions (effectiveness), replication or adaptation studies in different settings, and dissemination studies.12 Additional information was gathered from online surveys of chief investigators. A full description of the data collection process and response rates is provided elsewhere.13
Bibliometric analysis
To collect consistent information, we reviewed all publications submitted by chief investigators, and conducted literature searches (in Web of Science and Google Scholar databases) for the years following the commencement of each completed grant. Publications were reviewed to check if they were related to the grant in question. Key search terms included chief investigators’ names, grant numbers, project titles, intervention descriptions and relevant health issues. In the case of grants for which publications of study results could not be found, we attempted to contact chief investigators, including previous non-responders.
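To make this matching step concrete, the sketch below shows one simplified way a screen of candidate publications could be implemented. It is illustrative only: the data structures, field names and matching rule are hypothetical, and do not describe the manual review process actually used in this study.

```python
# Illustrative sketch only: simplified flagging of candidate publications
# for manual review against a funded grant. All fields and the matching
# rule are hypothetical, not the study's actual procedure.
from dataclasses import dataclass

@dataclass
class Grant:
    grant_id: str
    chief_investigators: list[str]  # surnames of chief investigators
    start_year: int                 # year the grant commenced
    keywords: list[str]             # intervention and health-issue terms

@dataclass
class Publication:
    title: str
    authors: list[str]
    year: int
    funding_text: str               # acknowledgements/funding statement

def possibly_related(pub: Publication, grant: Grant) -> bool:
    """Flag a publication as a candidate output of a grant."""
    if pub.year < grant.start_year:            # appeared before the grant began
        return False
    if grant.grant_id in pub.funding_text:     # explicit grant number match
        return True
    shares_ci = any(ci in pub.authors for ci in grant.chief_investigators)
    shares_topic = any(k.lower() in pub.title.lower() for k in grant.keywords)
    return shares_ci and shares_topic          # require investigator + topic overlap
```

Any publication flagged in this way would still require human review, as in the process described above, before being attributed to a grant.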
Assessing published results of intervention research
Two assessors reviewed publications that reported results of interventions to identify whether there were any statistically significant changes to the primary outcomes proposed in the research application summary. Where there was any uncertainty, decisions about what constituted primary outcomes were checked by other authors in a panel process. We classified interventions as: (i) those that showed statistically significant effects on primary outcomes; (ii) those with “mixed” results (eg, significant changes for some but not all primary outcomes, or an emphasis on unintended or secondary outcomes); and (iii) those that found no statistically significant effects.
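As a rough illustration of this three-way coding rule only, the following sketch assumes a hypothetical input of one significance flag per primary outcome; in the study itself, the classification was made by human assessors with panel adjudication, not by code.

```python
# Minimal sketch of the three-way results classification described above.
# The input format (one boolean per primary outcome) is hypothetical.
def classify_intervention(primary_significant: list[bool],
                          secondary_emphasised: bool = False) -> str:
    """Return the results category for one completed study."""
    if primary_significant and all(primary_significant) and not secondary_emphasised:
        return "statistically significant effects"       # category (i)
    if any(primary_significant) or secondary_emphasised:
        return "mixed results"                           # category (ii)
    return "no statistically significant effects"        # category (iii)
```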
Ethics approval
This project had approval from the University of Sydney Human Research Ethics Committee (15003). All chief investigators were assured that projects would not be identified because of anticipated sensitivities about publication output, ineffective interventions or lack of real-world impact.
Results
Completion
Sixty-six (80%) of the 83 intervention studies we identified were completed at the time of data collection, and 13 were ongoing. The status of four was unknown, with no responses from chief investigators. Of the 13 that were ongoing, reasons stated for incompleteness included problems recruiting study participants, being part of larger international trials or being longitudinal studies with longer follow-up. The proportion that were incomplete or ongoing was highest for the eight studies that commenced in 2007, the most recently sampled year, and included three grants scheduled for completion in 2011 or 2012.
Description of funded intervention research projects
The mean duration of funding of the 66 completed projects was 3 years (range, 2–5 years). Interventions included treatment and management (30 studies), screening and early intervention (12 studies), and primary prevention (24 studies), implemented in clinical or community settings, with many dealing with aspects of chronic disease. Topics reflected a variety of health disciplines, including medicine, psychiatry, psychology, dietetics, dentistry, physiotherapy and nursing. In terms of stage of intervention development, most focused on intervention efficacy (28 studies) or effectiveness (27 studies); 10 were replications or adaptations of an intervention in a new setting or population group; and one tested dissemination of the intervention.
Intervention effects
We could not locate published results on primary outcomes for 12 of the completed studies. There were equal numbers of studies that produced statistically significant effects (including “mixed” results; 27 studies) and those that did not show significant effects (27 studies). An example of mixed results was a school intervention that prevented (or delayed) age-related increases in students’ alcohol consumption, but did not reduce the prevalence of students’ depressive symptoms, which had been nominated as the primary objective.
Publication outputs
Publications related to each completed grant were categorised according to whether they reported on intervention effects or on “other” descriptive topics, such as measurement, intervention feasibility, epidemiological questions, or commentaries. The mean number of published articles per grant was 3.3 (range, 0–13), with 2.0 reporting results. Many investigators reported that their publication process was ongoing; eight had not yet published any articles, and 12 had not published articles on intervention effects. Among grants with published results, those with and without significant intervention effects had similar numbers of “other” publications (mean, 1.3 per grant), although the latter had smaller numbers of publications reporting intervention results and of total publications (Box 1).
Discussion
Our study describes the publication outputs for intervention studies funded by the NHMRC from 2003 to 2007, inclusive, and provides a benchmark to inform expectations about the publication yield of such research. We found that publications covered many aspects of intervention development8,12,14 and were not restricted to intervention effects, although studies reporting no statistically significant intervention effects produced slightly fewer results-based publications.15
While the number of publications is not an indicator of relevance to health policy,6 publication volume remains a basic metric of academic productivity.2,16 Analysis of Australian health promotion intervention research has previously identified between one and seven publications per study,17 while another Australian study of primary care research reported a mean of 2.3 publications per grant (range, 0–7 publications).18 However, the contexts and funding sources for these two studies and our study vary, and there is no endorsed benchmark for assessing numbers of publications across different areas of research.
In relation to our estimates, we acknowledge that later assessment may be required to capture complete publication outputs, and that the effect of non-responders on our estimates (whether it would bias them downward or upward) is unknown. Further, the output estimates in our study cannot be extrapolated to non-intervention research.
Our findings on the stage of intervention development are consistent with those of other reviews of intervention research.9,10 As research type is not routinely documented by the NHMRC, the proportion of available funding that is invested in intervention research is currently unknown. However, our methods indicated that intervention research accounted for a small proportion of NHMRC grants in this period, although the interventions studied related to national health priorities and major causes of mortality and morbidity across Australia.
While intervention research typically tests effectiveness, the statistical significance of study results is not an indicator of study value.2 Some studies reporting non-significant results generated findings with important implications for policy and practice — for example, that an intervention should be discontinued or modified. It was beyond the scope of our study to critically appraise the methods of each funded study, and thus assess whether studies had sufficient statistical power to detect the changes they hypothesised.
While it is often claimed that researchers are discouraged by the difficulty of publishing statistically non-significant findings, we found no evidence for this. However, the length of time to intervention study completion and the relatively small number of intervention study publications may constitute disincentives for researchers to embark on these kinds of studies, particularly as there are no established methods to demonstrate other forms of impact, such as measures of policy change and influence on practice. Reviews of research funding have called for an increase in intervention research and for strategies to help remove the barriers faced by intervention researchers (such as dedicated funding for intervention research, longer funding periods, support for pilot studies and separate review panels).19 Meanwhile, policy agencies have suggested similar remedies to redress their concerns about a lack of definitive evidence on effective interventions in many areas.20
This is the first independent study to document the publication outputs of a set of intervention studies funded through a major national funding body. Tracking research publication outputs is important as a mechanism to ensure accountability in expenditure of public funds and, potentially, as a basis for quality improvement of research funding systems. Ongoing investigations of this kind are needed to provide information on whether current research investment patterns match the need for evidence about health care interventions.
Box 1. Peer-reviewed publications by category of intervention results

| Category | Number of studies | Total published articles | Mean published articles per grant | Articles reporting results | Mean articles reporting results per grant | Other articles |
|---|---|---|---|---|---|---|
| Statistically significant intervention effects | 19 | 76 | 4.0 | 52 | 2.7 | 24 |
| Mixed results | 8 | 32 | 4.0 | 24 | 3.0 | 8 |
| No statistically significant intervention effects | 27 | 90 | 3.3 | 54 | 2.0 | 36 |
| No published intervention effects | 12 | 22 | 1.8 | 0 | 0 | 22 |
| Total | 66 | 220 | 3.3 | 130 | 2.0 | 90 |
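As a quick cross-check of the means reported in Box 1 and in the text, the per-grant figures can be recomputed from the table's totals; the short script below is illustrative arithmetic only.

```python
# Recompute the overall per-grant means from the totals in Box 1.
rows = {
    "significant":     {"studies": 19, "articles": 76, "results": 52},
    "mixed":           {"studies": 8,  "articles": 32, "results": 24},
    "non_significant": {"studies": 27, "articles": 90, "results": 54},
    "no_published":    {"studies": 12, "articles": 22, "results": 0},
}
studies = sum(r["studies"] for r in rows.values())    # 66 completed grants
articles = sum(r["articles"] for r in rows.values())  # 220 published articles
results = sum(r["results"] for r in rows.values())    # 130 articles reporting results

print(round(articles / studies, 1))  # 3.3 published articles per grant
print(round(results / studies, 1))   # 2.0 results articles per grant
```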
Received 3 November 2014, accepted 22 April 2015
- Lesley A King1
- Robyn S Newson1
- Gillian E Cohen1
- Jacqueline Schroeder1
- Selina Redman3
- Lucie Rychetnik4
- Andrew J Milat5
- Adrian Bauman1
- Simon Chapman1
- 1 University of Sydney, Sydney, NSW
- 2 Domestic Violence NSW Service Management, Sydney, NSW
- 3 The Sax Institute, Sydney, NSW
- 4 University of Notre Dame Australia (Sydney), Sydney, NSW
- 5 New South Wales Ministry of Health, Sydney, NSW
No relevant disclosures.
- 1. Chalmers I, Bracken MB, Djulbegovic B, et al. How to increase value and reduce waste when research priorities are set. Lancet 2014; 383: 156-165.
- 2. Quantifying the social impact of research and medical journals [editorial]. Lancet 2014; 384: 557.
- 3. Wells R, Whitworth JA. Assessing outcomes of health and medical research: do we measure what counts or count what we can measure? Aust New Zealand Health Policy 2007; 4: 14.
- 4. Wooding S, Hanney SR, Pollitt A, et al. Understanding factors associated with the translation of cardiovascular research: a multinational case study approach. Implement Sci 2014; 9: 47.
- 5. Donovan C, Butler L, Butt AJ, et al. Evaluation of the impact of National Breast Cancer Foundation-funded research. Med J Aust 2014; 200: 214-218. <MJA full text>
- 6. Ioannidis JP, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct and analysis. Lancet 2014; 383: 166-175.
- 7. Grimshaw JM, Eccles MP, Lavis JN, et al. Knowledge translation of research findings. Implement Sci 2012; 7: 50.
- 8. Hawe P, Potvin L. What is population health intervention research? Can J Public Health 2009; 100 (Suppl 1): I8-I14.
- 9. Milat AJ, Bauman AE, Redman S, Curac N. Public health research outputs from efficacy to dissemination: a bibliometric analysis. BMC Public Health 2011; 11: 934.
- 10. Sanson-Fisher RW, Campbell EM, Htun AT, et al. We are what we do: research outputs of public health. Am J Prev Med 2008; 35: 380-385.
- 11. Reynolds J, DiLiberto D, Mangham-Jefferies L, et al. The practice of ‘doing’ evaluation: lessons learned from nine complex intervention trials in action. Implement Sci 2014; 9: 75.
- 12. Bauman A, Nutbeam D. Evaluation in a nutshell: a practical guide to the evaluation of health promotion programs. 2nd ed. Sydney: McGraw-Hill, 2013.
- 13. Cohen G, Schroeder J, Newson R, et al. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst 2015; 13: 3.
- 14. Hawe P, Di Ruggiero E, Cohen E. Frequently asked questions about population health intervention research. Can J Public Health 2012; 103: e468-e471.
- 15. Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010; 14: iii, ix-xi, 1-193.
- 16. Schapper CC, Dwyer T, Tregear GW, et al. Research performance evaluation: the experience of an independent medical research institute. Aust Health Rev 2012; 36: 218-223.
- 17. Milat AJ, Laws R, King L, et al. Policy and practice impacts of applied research: a case study analysis of the New South Wales Health Promotion Demonstration Research Grants Scheme 2000-2006. Health Res Policy Syst 2013; 11: 5.
- 18. Reed RL, Kalucy EC, Jackson-Bowers E, McIntyre E. What research impacts do Australian primary health care researchers expect and achieve? Health Res Policy Syst 2011; 9: 40.
- 19. Nutbeam D. Report of the Review of Public Health Research Funding in Australia. Canberra: National Health and Medical Research Council, 2008.
- 20. Haynes AS, Gillespie JA, Derrick GE, et al. Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q 2011; 89: 564-598.
Abstract
Objective: To describe the research publication outputs from intervention research funded by Australia’s National Health and Medical Research Council (NHMRC).
Design and setting: Analysis of descriptive data and data on publication outputs collected between 23 July 2012 and 10 December 2013 relating to health intervention research project grants funded between 1 January 2003 and 31 December 2007.
Main outcome measures: Stages of development of intervention studies (efficacy, effectiveness, replication, adaptation or dissemination of intervention); types of interventions studied; publication output per NHMRC grant; and whether interventions produced statistically significant changes in primary outcome variables.
Results: Most of the identified studies tested intervention efficacy or effectiveness in clinical or community settings, with few testing the later stages of intervention development, such as replication, adaptation or dissemination. Studies focused largely on chronic disease treatment and management, and encompassed various medical and allied health disciplines. Equal numbers of studies had interventions that produced statistically significant results on primary outcomes (27) and interventions that did not (27). The mean number of total published articles per grant was 3.3, with 2.0 articles per grant focusing on results, and the remainder covering descriptive, exploratory or methodological aspects of intervention research.
Conclusions: Our study provides a benchmark for the publication outputs of NHMRC-funded health intervention research in Australia. Research productivity is particularly important for intervention research, where findings are likely to have more immediate and direct applicability to health policy and practice. Tracking research outputs in this way provides information on whether current research investment patterns match the need for evidence about health care interventions.