Research evaluation is a hot topic across the world as many economies seek to implement research performance exercises to meet the growing demands for accountability and to drive funding allocation.1 Universities are the custodians of significant amounts of publicly invested research funding and, as such, it is incumbent upon the sector to maximise the value of that investment over the longer term. Australian universities, operating in an environment of impoverished infrastructure and a declining proportion of government support despite a sizeable and growing federal budget surplus, are fully aware that further taxpayer funding will demand greater accountability. With the release of the Expert Advisory Group’s (EAG) refined Research Quality Framework (RQF) measurement schema (Box 1),2 it is timely to review the outcomes of similar assessment exercises internationally before any final decision is taken.
The crucial question is whether the RQF is the most appropriate and cost-effective mechanism to achieve this accountability. If we cannot at this late stage say with any confidence whether the benefit of introducing an RQF will outweigh the costs, should we proceed at all? The recent change in Federal Minister for Education, Science and Training provides an opportunity for the incoming minister, the Hon Julie Bishop, to consider the option of a modification of existing assessment criteria, which would achieve more at considerably less cost and with much less disturbance to the sector.
Australia’s RQF is based largely on the Research Assessment Exercise in the United Kingdom (UK-RAE) (as the appointment of the UK’s Professor Sir Gareth Roberts as chair of the Expert Advisory Group might suggest), with the addition of a measurement of research impact. The New Zealand Performance-Based Research Fund (NZ-PBRF), while itself modelled to a large extent on the UK-RAE, is also instructive in that its unit of assessment is the individual rather than the team.
The UK-RAE, introduced in 1986 (with substantial changes made in 1992), has concentrated research funding into fewer institutions and the strongest departments. The longer-term effects of this increasingly selective and concentrated funding are yet to be fully appreciated, even though a polarisation within the sector, a disinclination to take risks, and adverse effects on clinical research have already been noted.3,4
While the RAE has been credited with increasing the global impact of UK research, including its share of the 1% most cited research papers,5 it has also attracted sharp and fervent criticism.3,6 Some have suggested that the perceived improvement in research performance (55% of the UK’s academic departments deemed to be of “international quality” in 2001, up from 31% in 1996) was less a true RAE outcome than an artefact of successful academic “games playing”.6,7 Furthermore, with 55% of researchers placed within the top two grades, the scale now appears to lack discriminatory power.5 Exacerbating the problem was the UK Government’s subsequent failure to fully fund the improved ratings achieved in the 2001 assessment cycle.
The RAE is expensive (in 1996, it was estimated to cost between £27 and £37 million). It has also been claimed to have undermined university autonomy, forced researchers to realign their research pursuits within RAE “friendly” research domains, downgraded teaching and undervalued clinical academic medicine.8,9 In a survey conducted in 2005 by the British Medical Association, 40% of clinical academic and research staff regarded the RAE as having had a negative impact on their career,10 and data produced by the Council of Heads of Medical Schools show a significant decline in clinical academic staffing levels between 2000 and 2004, with the biggest slump reported among clinical lecturers.11
It is widely believed that the RAE has compromised clinical academic medicine through a failure to satisfactorily acknowledge the commitment and contribution of clinical academics, not only to research but also to teaching and clinical practice.9,10,12 Certain disciplines, for example craft specialties such as obstetrics and gynaecology and surgery, have suffered disproportionately.11 By its very nature, clinical research is disadvantaged by the RAE’s focus on short-term research outputs and over-emphasis on publications in high impact-factor journals. In addition, there is concern about a possible emergence of non-research medical schools as a result of the concentration of limited resources.
Little wonder the UK laments the widely accepted decline of its clinical research,11,13 when its own funding mechanism forces universities to ditch clinical academics in favour of more “productive” non-clinical scientists. Research-led teaching has been widely credited with improving the quality of both education and service in the health sector. This is particularly true in the medical arena, where the concept of a university hospital with university clinical departments, clinical schools and affiliated medical research institutes is seen as so important.
Following the Roberts review,14 with its recommended departure from a “one size fits all” assessment approach, the controversial seven-point scale employed in previous assessments has been jettisoned in favour of a research activity quality profile. Each unit of assessment will be given a research profile documenting the proportion of its work that falls into five categories, from unclassified to four-star quality (Box 2). However, disappointment and incredulity have been expressed by the research community about the reluctance of the funding councils to disclose how RAE outcomes will be used to calculate funding and what proportion of funding will be assigned to each star grade.15
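By way of illustration (the figures here are hypothetical, not drawn from any actual submission), a department might receive a profile of 15% four-star, 25% three-star, 40% two-star, 15% one-star and 5% unclassified research activity, with its funding then determined by whatever weights the funding councils ultimately attach to each grade.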
In the lead-up to the 2008 exercise, there are reports of universities conducting practice runs, shedding less research-active staff, replacing teaching staff with “star” researchers in the pursuit of research ratings, and announcing the rating their academics are expected to gain,4,16 with little evidence that such exercises actually raise the nation’s research output. The medical school at Imperial College, London, has been reported as threatening “disciplinary procedures” for academics who do not secure at least £75 000 in external research revenue yearly,17 despite there being no immediate link between fundraising and the quality of research produced. No mention is made of the need to encourage long-term research themes, for which results may take decades, or the need to balance research with teaching and professional leadership.
Much discussion is now centred on the future of the UK-RAE. In the budget delivered on 22 March 2006, the UK Chancellor declared that the 2008 RAE would indeed be the last and that a working group had been established to develop alternative metrics-based formulae to simplify the distribution of research funding.18 Metric measures, in which funding is related to the impact of publications and to research grant and contract income, will be tested in conjunction with the 2008 RAE.4,18 This should be of particular interest to Australia as we contemplate discarding our metrics-based system in favour of an unreconstructed UK-RAE.
The NZ-PBRF, introduced in 2003, is based on a combination of peer review and performance indicators. The research assessment consists of three components: the quality of academics’ research outputs, research degree completions and external research income, weighted at 60%, 25% and 15%, respectively, as sketched in the formula below.
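Expressed as a simple weighted sum (an illustrative sketch only: the symbols Q, D and E are our shorthand rather than official PBRF notation, and we assume the three components have first been normalised to a common scale), an overall score S would take the form

S = 0.60 Q + 0.25 D + 0.15 E

where Q is the peer-assessed quality of research outputs, D the measure of research degree completions and E the measure of external research income.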
The publication of the 2003 results prompted several concerns regarding the appropriateness and efficacy of the funding model.19,20 Echoing the issues generated by the UK-RAE, the New Zealand scheme has been charged with devaluing teaching, downgrading academic autonomy, disadvantaging applied research and deterring collaboration.20,21 Additional concerns have been raised in relation to the real cost–benefit ratio of participation in the NZ-PBRF exercise, with reports that many universities have spent more on participating in the exercise than they will gain in funding increases.22 The most trenchant criticism, however, has been reserved for the scoring system, which placed most early career researchers in the lowest category (in essence, “research inactive”), and for the use of individual academics as the unit of assessment. Following the review by the sector reference group, provisions for new and emerging researchers are to be implemented, and the controversial unit of assessment, believed to have disadvantaged certain groups and negatively affected staff, will be reviewed after the partial round assessment scheduled for this year.23
Submissions received in response to the Expert Advisory Group’s “Preferred Model” paper for the RQF highlighted myriad issues requiring clarification. Not least among these was the unanticipated announcement in the Minister’s foreword that the outcomes of the RQF may be used to determine the distribution of research funding through the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC). The worrying implication is diminished independence of the research councils, and the possibility that the RQF may render some research groups ineligible for NHMRC and ARC funding. Some commentators advocate the converse, with ARC and NHMRC success informing the RQF.24 Further fears of political intervention have been fuelled by the previous Minister for Education, Science and Training’s veto of several ARC research grants, and by the more recent allegations of federal government censorship of CSIRO (Commonwealth Scientific and Industrial Research Organisation) scientists.
The Vice-Chancellors of the Group of Eight universities maintain that there already exist appropriate processes for assessing the quality of national research outcomes, and that, at a small cost, the present research assessment and funding mechanisms could be modified to produce more comprehensive comparative data for the university sector.25 The allocation of funding from the National Competitive Grants Scheme is based on an extensive and rigorous peer review system which supports the highest quality research projects. As such, this income represents an existing and accepted measure of research quality. In the UK, the House of Commons Science and Technology Committee found that external grant income closely matched RAE results in the top 20–30 institutions.15 It is believed that an RQF would produce analogous results, duplicating competitive peer-reviewed processes.25 Given this likelihood, why introduce an enormously expensive experiment in its stead?
Throughout the university sector, there is a general feeling of concern regarding the funding implications of the RQF, and acceptance by the sector will depend on additional funding to meet and exceed the costs of participation in the exercise. It has been suggested that the cost of implementing the RQF could be in the region of $50 million per cycle.25 As yet there is little indication that implementing an RQF would be accompanied by a sufficient increase in funding to make this worthwhile.
If existing data can be used to obtain detailed quantitative measures that closely match the proposed system, there is little to support introducing a potentially burdensome and expensive assessment process, especially in the absence of any pilot data suggesting advantages to a new system. A criticism of the present funding formulae is that they pay little attention to publication quality and impact compared with quantity. This can be readily and cheaply adjusted. A stronger emphasis on publication in prestigious “big name” journals might gather some support, even though, as we all know, significant breakthroughs and important research messages of high societal impact often appear first in lesser-known and specialist publications, with their long-term value recognised only in retrospect. We should also ask whether our two new Nobel laureates, Barry Marshall and Robin Warren, awarded the Nobel Prize in Physiology or Medicine for their discovery of the bacterium Helicobacter pylori and its role in gastric disorders, would have been supported or closed down by our proposed assessment system. The incoming minister might be well advised to look at what is happening in the UK now, as opposed to when it commenced its experiment with RAEs.
Box 1 Research Quality Framework (RQF) — summary of methodology (modified from RQF final advice)2

Role of institutions:
(i) To nominate research groupings for assessment.
(ii) To provide researchers’ evidence portfolios, comprising context statements, four “best” research outputs per researcher, impact statements and a full list of research outputs for each research grouping over the assessment period.

Assessment panels:
(i) Approximately 12 panels mapped to Australian Bureau of Statistics research fields, courses and disciplines classification codes.
(ii) Each panel comprising 12–15 academic and expert reviewers to assess research groupings and provide ratings.

Rating scales:
(i) Research quality measured on a five-point scale.
(ii) Research impact measured on a three-point scale.

Funding implications:
(i) RQF ratings should be used to distribute 100% of the Institutional Grants Scheme (IGS), at least 50% of the Research Training Scheme (RTS) and 100% of additional funding.
(ii) The Expert Advisory Group believes that it may not be possible to achieve the full impact of the RQF without the distribution of additional funds.
(iii) Institutions will retain discretion to internally allocate RQF-driven research block funding.
(iv) Funding will take into account the size of the institution.
Box 2 UK Research Assessment Exercise (RAE) 2008 — summary of methodology

Role of institutions:
(i) To nominate research units of assessment (UoAs).
(ii) To make submissions, each including researchers’ evidence portfolios (comprising up to four research outputs undertaken during a 6-year assessment period), individual staff details, data on research student numbers and studentships, research income, and research environment and esteem.

Assessment panels:
(i) 15 main expert panels, 67 disciplinary subpanels.
(ii) Each panel comprising 12–15 academic and expert reviewers to assess submissions and provide ratings.

Assessment:
Research quality profile based on:
(i) Research outputs.
(ii) Research environment and esteem indicators.
(iii) Publication of a “quality profile” showing the number of staff submitted for assessment and the proportion of research activity that falls into five categories, from unclassified to four-star quality (according to degree of excellence).
Received 30 January 2006, accepted 29 March 2006
- Louise G Shewan1
- Andrew J S Coats2
- Faculty of Medicine, University of Sydney, Sydney, NSW.
Competing interests: None identified.
- 1. The objective evaluation of research isn’t working as it should. Nature 2006; 440: 1-2.
- 2. Australian Government Department of Education, Science and Training. Endorsed by the Expert Advisory Group for the RQF. Research quality framework: assessing the quality and impact of research in Australia — final advice on the preferred RQF model. 2005. Available at: http://www.dest.gov.au/NR/rdonlyres/1A7E21B1-9C74-4AD8-9C8A-FFED7688A32B/9798/Final_Advice_Paper.pdf (accessed Mar 2006).
- 3. House of Commons Science and Technology Committee — second report. 2002. Available at: http://www.publications.parliament.uk/pa/cm200102/cmselect/cmsctech/507/50702.htm (accessed Mar 2006).
- 4. MacLeod D. The hit parade. The Guardian 2005; 14 Jun: 18. Available at: http://education.guardian.co.uk/egweekly/story/0,,1505392,00.html (accessed Mar 2006).
- 5. Adams J. Research assessment in the UK. Science 2002; 296: 805.
- 6. Bassnett S. Fruitless exercise. The Guardian (Education Weekly) 2002; 15 Jan: 13. Available at: http://education.guardian.co.uk/egweekly/story/0,,632550,00.html (accessed Mar 2006).
- 7. Stewart I. Reassessing research assessment in the UK [letter]. Science 2002; 296: 1802-1803; author reply, 1802-1803.
- 8. Tapper T, Salter B. The politics of governance in higher education: the case of the research assessment exercises. (OxCHEPS Occasional Paper No. 6.) 2002. Available at: http://oxcheps.new.ox.ac.uk/MainSite%20pages/Resources/OxCHEPS_OP6%20doc.pdf (accessed Mar 2006).
- 9. Banatvala J, Bell P, Symonds M. The Research Assessment Exercise is bad for UK medicine. Lancet 2005; 365: 458-460.
- 10. British Medical Association. Research assessment exercise 2008. A survey of clinical academic and research staff. Dec 2005. Available at: http://www.bma.org.uk/ap.nsf/AttachmentsByTitle/PDFasessexercise2008/$FILE/rae.pdf (accessed Mar 2006).
- 11. Clinical academic staffing levels in UK medical and dental schools: data update 2004. A survey by the Council of Heads of Medical Schools and Council of Heads and Deans of Dental Schools. Jun 2005. Available at: http://www.chms.ac.uk/CHMS&CHDDS%20Survey%20of%20Clinical%20Academic%20Numbers%20June%202005.pdf (accessed Mar 2006).
- 12. Symonds EM, Bell P, Banatvala J. Five futures for academic medicine: future of academic medicine looks bleak. BMJ 2005; 331: 694.
- 13. The Academy of Medical Sciences. Clinical academic medicine in jeopardy: recommendations for change. Jun 2002. Available at: http://www.acmedsci.ac.uk/images/publication/pclinaca.pdf (accessed Mar 2006).
- 14. Roberts G. Review of research assessment. Report by Sir Gareth Roberts to the UK funding bodies. 2003. Available at: http://www.ra-review.ac.uk/reports/roberts.asp (accessed Mar 2006).
- 15. UK House of Commons Science and Technology Committee. Science and technology — eleventh report. Session 2003-04. 15 Sep 2004. Available at: http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/586/58602.htm (see sections 4 and 5) (accessed Mar 2006).
- 16. Shepherd J. UCL memo ‘expects’ 3*s. The Times Higher Education Supplement (digital edition) 2005; 9 Dec: 2. Available at: http://www.thes.co.uk/search/story.aspx?story_id=2026628 (accessed Mar 2006).
- 17. Fazackerley A. Staff at risk in RAE run-up. The Times Higher Education Supplement (digital edition) 2004; 20 May. Available at: http://www.thes.co.uk/search/story.aspx?story_id=2013121 (accessed Mar 2006).
- 18. Ford L. Group to develop new research funding model. The Guardian (digital edition) 2006; 23 Mar. Available at: http://education.guardian.co.uk/higher/news/story/0,,1738114,00.html (accessed Mar 2006).
- 19. May R. Transcript of plenary sessions, PBRF Forum. Royal Society of New Zealand. Wellington, 21 May 2004. Available at: http://www.rsnz.org/advisory/social_science/media/May-PBRF_Forum_21051.doc (accessed Dec 2005).
- 20. Curtis B, Matthewman S. The managed university: the PBRF, its impacts and staff attitudes. N Z J Employment Relations 2005; 30: 1-17.
- 21. Davies E, Craig D, Robertson N. Is the Performance Based Research Fund in the public interest? N Z J Tertiary Education Policy 2005; 1: 1-5.
- 22. Association of University Staff of New Zealand. PBRF costs higher than rewards. AUS Tertiary Update. Vol 7(19), 3 Jun 2004. Available at: http://www.aus.ac.nz/publications/tertiary_update/2004/No19.htm#2 (accessed Jan 2006).
- 23. Tertiary Education Commission. Performance-Based Research Fund. 2006 Quality Evaluation. Response of the Steering Group to the Report of the Sector Reference Group. 2005. Available at: http://www.tec.govt.nz/downloads/a2z_publications/pbrf-steering-group-response.pdf (accessed Jan 2006).
- 24. Barlow S. The risk of perverse funding outcomes. The Australian 2005; 14 Sep: 31.
- 25. Group of Eight. Supplementary paper on the development of a Research Quality Framework. Developing a workable model for encouraging and rewarding quality research in Australia. May 2005. Available at: http://www.go8.edu.au/news/2005/Go8%20RQF%20supplementary%20paper%2006.05.05.pdf (accessed Mar 2006).
Abstract
As the Australian university sector awaits final decisions about the introduction and stipulations of a research quality framework (RQF), to assess the quality and impact of research, we have studied international commentary on the value of such exercises. This suggests there is little hard evidence to recommend the proposed RQF.
The UK government led the field in 1986 with its research assessment exercise (RAE), which is widely believed to have compromised clinical academic medicine by failing to satisfactorily acknowledge the contribution of clinical academics, not only to research but also to teaching and clinical practice. After the 2008 RAE, the UK government will move to a simpler, metrics-based system for assessing research quality and allocating funding.
The New Zealand Performance-Based Research Fund (PBRF), introduced in 2003, is based on a combination of peer review and performance indicators. Several concerns have been raised; among them is the real cost–benefit ratio of participation, with reports that many universities have spent more on the exercise than they will gain in funding increases. The scoring system has received the most criticism and, after the partial round assessment scheduled for this year, the controversial unit of assessment will be reviewed.
It might be more cost-effective for Australia to modify existing research assessment processes than to undertake a potentially costly and arduous exercise.