Research evaluation is a hot topic across the world as many economies seek to implement research performance exercises to meet growing demands for accountability and to drive funding allocation.1 Universities are the custodians of significant amounts of publicly invested research funding and, as such, it is incumbent upon the sector to maximise the value of that investment over the longer term. Australian universities, operating with impoverished infrastructure and a declining proportion of government support despite a sizeable and growing federal budget surplus, are fully aware that further taxpayer funding will demand greater accountability. With the release of the Expert Advisory Group's (EAG) refined Research Quality Framework (RQF) measurement schema (Box 1),2 it is timely to review the outcomes of similar assessment exercises internationally before any final decision is taken.
Australia's RQF is based largely on the Research Assessment Exercise in the United Kingdom (UK-RAE) (as the appointment of the UK's Professor Sir Gareth Roberts as chair of the Expert Advisory Group might suggest), with the addition of a measure of research impact. The New Zealand Performance-Based Research Fund (NZ-PBRF), while itself modelled to a large extent on the UK-RAE, is also instructive in that its unit of assessment is the individual rather than the team.
The UK-RAE, introduced in 1986 (with substantial changes made in 1992), has concentrated research funding in fewer institutions and the strongest departments. The longer-term effects of this increasingly selective and concentrated funding are yet to be fully appreciated, even though a polarisation within the sector, a disinclination to take risks, and adverse effects on clinical research have already been noted.3,4
While the RAE has been credited with increasing the global impact of UK research, including its share of the most highly cited 1% of research papers,5 it has also attracted sharp and fervent criticism.3,6 Some have suggested that the perceived improvement in research performance (55% of the UK's academic departments deemed to be of "international quality" in 2001, up from 31% in 1996) was not so much a true RAE outcome as an artefact of successful academic "games playing".6,7 Furthermore, with 55% of researchers placed within the top two grades, the scale now appears to lack discriminatory power.5 Exacerbating the problem was the UK Government's subsequent failure to fully fund the improved ratings achieved in the 2001 assessment cycle.
The RAE is expensive (in 1996, it was estimated to cost between £27 million and £37 million). It has also been claimed to have undermined university autonomy, forced researchers to realign their research pursuits with RAE-"friendly" research domains, downgraded teaching and undervalued clinical academic medicine.8,9 In a survey conducted in 2005 by the British Medical Association, 40% of clinical academic and research staff regarded the RAE as having had a negative impact on their careers,10 and data produced by the Council of Heads of Medical Schools show a significant decline in clinical academic staffing levels between 2000 and 2004, with the biggest slump among clinical lecturers.11
It is widely believed that the RAE has compromised clinical academic medicine through a failure to satisfactorily acknowledge the commitment and contribution of clinical academics, not only to research but also to teaching and clinical practice.9,10,12 Certain disciplines, for example craft specialties such as surgery and obstetrics and gynaecology, have suffered disproportionately.11 By its very nature, clinical research is disadvantaged by the RAE's focus on short-term research outputs and its over-emphasis on publications in high-impact-factor journals. In addition, there is concern about the possible emergence of non-research medical schools as a result of the concentration of limited resources.
Little wonder the UK laments the widely acknowledged decline of its clinical research,11,13 when its own funding mechanism forces universities to ditch clinical academics in favour of more "productive" non-clinical scientists. Research-led teaching has been widely credited with improving the quality of both education and service in the health sector. This is particularly true in the medical arena, where the concept of a university hospital, with university clinical departments, clinical schools and affiliated medical research institutes, is regarded as so important.
Following the Roberts review,14 with its recommended departure from a "one size fits all" assessment approach, the controversial seven-point scale employed in previous assessments has been jettisoned in favour of a research activity quality profile. Each unit of assessment will be given a research profile documenting the proportion of its work that falls into five categories, from unclassified to four-star quality (Box 2); for example, a department might be rated as having 20% of its activity at four-star quality, 40% at three-star, and so on, rather than receiving a single summary grade. However, disappointment and incredulity have been expressed by the research community about the reluctance of the funding councils to disclose how RAE outcomes will be used to calculate funding and what proportion of funding will be assigned to each star grade.15
In the lead-up to the 2008 exercise, there are reports of universities conducting practice runs, shedding less research-active staff, replacing teaching staff with "star" researchers in pursuit of higher research ratings, and announcing the ratings their academics are expected to gain,4,16 with little evidence that such manoeuvres actually raise the nation's research output. The medical school at Imperial College, London, has reportedly threatened "disciplinary procedures" against academics who do not secure at least £75 000 in external research revenue yearly,17 despite there being no immediate link between fundraising and the quality of research produced. No mention is made of the need to encourage long-term research themes, for which results may take decades, or of the need to balance research with teaching and professional leadership.
Much discussion is now centred on the future of the UK-RAE. In the Budget delivered on 22 March 2006, the UK Chancellor declared that the 2008 RAE would indeed be the last and that a working group had been established to develop alternative metrics-based formulae to simplify the distribution of research funding.18 Metric measures, whereby funding is related to the impact of publications and to research grant and contract income, will be tested in conjunction with the 2008 RAE.4,18 This should be of particular interest to Australia as we contemplate discarding our metrics-based system in favour of an unreconstructed UK-RAE.
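By way of illustration only (the working group had yet to specify any formula at the time of writing), a metrics-based allocation of the kind described might take a form such as

\[ F_i \propto \alpha\,\frac{P_i}{\sum_j P_j} + \beta\,\frac{G_i}{\sum_j G_j} \]

where \(P_i\) is a citation-impact measure of institution \(i\)'s publications, \(G_i\) its research grant and contract income, and \(\alpha\) and \(\beta\) policy-determined weights; all symbols here are hypothetical rather than drawn from any published scheme.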
The NZ-PBRF, introduced in 2003, is based on a combination of peer review and performance indicators. The research assessment consists of three components: the quality of academics’ research outputs, research degree completions and external research income, weighted 60/25/15, respectively.
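To illustrate how these weightings combine (a simplified sketch only, not the fund's published formula, which applies further adjustments such as subject-area cost weightings), an institution's share of the fund might be expressed as

\[ F_i \propto 0.60\,Q_i + 0.25\,C_i + 0.15\,E_i \]

where, for institution \(i\), \(Q_i\) denotes its normalised quality score from the peer-reviewed assessment of research outputs, \(C_i\) its share of research degree completions and \(E_i\) its share of external research income; the symbols and the normalisation are illustrative assumptions.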
The publication of the 2003 results prompted several concerns about the appropriateness and efficacy of the funding model.19,20 Echoing the issues generated by the UK-RAE, the New Zealand scheme has been charged with devaluing teaching, downgrading academic autonomy, disadvantaging applied research and deterring collaboration.20,21 Additional concerns have been raised about the real cost–benefit ratio of participation in the NZ-PBRF exercise, with reports that many universities spent more on participating than they stand to gain in funding increases.22 The most trenchant criticism, however, has been reserved for the scoring system, which placed most early-career researchers in the lowest category (in essence, "research inactive"), and for the use of individual academics as the unit of assessment. Following the review by the sector reference group, provisions for new and emerging researchers are to be implemented, and the controversial unit of assessment, believed to have disadvantaged certain groups and negatively affected staff, will be reviewed after the partial round assessment scheduled for this year.23
Submissions received in response to the Expert Advisory Group's "Preferred Model" paper for the RQF highlighted myriad issues requiring clarification. Not least among these was the unanticipated announcement in the Minister's foreword that the outcomes of the RQF may be used to determine the distribution of research funding through the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC). The worrying implication is diminished independence of the research councils, and the possibility that the RQF may render some research groups ineligible for, or disqualified from, access to NHMRC and ARC funding. Some commentators advocate the converse, with ARC and NHMRC success informing the RQF.24 Further fears of political intervention have been fuelled by the previous Minister for Education, Science and Training's veto of several ARC research grants, and by more recent allegations of federal government censorship of CSIRO (Commonwealth Scientific and Industrial Research Organisation) scientists.
The Vice-Chancellors of the Group of Eight universities maintain that appropriate processes for assessing the quality of national research outcomes already exist, and that, at small cost, the present research assessment and funding mechanisms could be modified to produce more comprehensive comparative data for the university sector.25 The allocation of funding from the National Competitive Grants Scheme is based on an extensive and rigorous peer review system that supports the highest quality research projects; as such, this income represents an existing and accepted measure of research quality. In the UK, the House of Commons Science and Technology Committee found that external grant income closely matched RAE results in the top 20–30 institutions.15 It is believed that an RQF would produce analogous results, duplicating existing competitive peer-review processes.25 Given this likelihood, why introduce an enormously expensive experiment in their stead?
Throughout the university sector, there is a general feeling of concern regarding the funding implications of the RQF, and acceptance by the sector will depend on additional funding to meet and exceed the costs of participation in the exercise. It has been suggested that the cost of implementing the RQF could be in the region of $50 million per cycle.25 As yet there is little indication that implementing an RQF would be accompanied by a sufficient increase in funding to make this worthwhile.
Box 1: Research Quality Framework (RQF) — summary of methodology (modified from RQF final advice)

Institutions will be required:
(i) To nominate research groupings for assessment.
(ii) To provide researchers' evidence portfolios, comprising context statements, four "best" research outputs per researcher, impact statements and a full list of research outputs for each research grouping over the assessment period.

Assessment panels:
(i) Approximately 12 panels mapped to Australian Bureau of Statistics research fields, courses and disciplines classification codes.
(ii) Each panel comprising 12–15 academic and expert reviewers to assess research groupings and provide ratings.

Rating scales:
(i) Research quality measured on a five-point scale.
(ii) Research impact measured on a three-point scale.

Funding:
(i) RQF ratings should be used to distribute 100% of the Institutional Grants Scheme (IGS), at least 50% of the Research Training Scheme (RTS) and 100% of additional funding.
(ii) The Expert Advisory Group believes that it may not be possible to achieve the full impact of the RQF without the distribution of additional funds.
(iii) Institutions will retain discretion to internally allocate RQF-driven research block funding.
(iv) Funding will take into account the size of the institution.
Box 2: UK Research Assessment Exercise (RAE) 2008 — summary of methodology

Institutions will be required:
(i) To nominate units of assessment (UoAs).
(ii) To make submissions, each including researchers' evidence portfolios (comprising up to four research outputs produced during a 6-year assessment period), individual staff details, data on research student numbers and studentships, research income, and research environment and esteem indicators.

Assessment panels:
(i) 15 main expert panels, 67 disciplinary subpanels.
(ii) Each panel comprising 12–15 academic and expert reviewers to assess submissions and provide ratings.

Ratings:
(i) Research quality profile based on research outputs and on research environment and esteem indicators.
(ii) Publication of a "quality profile" showing the number of staff submitted for assessment and the proportion of research activity that falls into five categories, from unclassified to four-star quality (according to degree of excellence).
Received 30 January 2006, accepted 29 March 2006
Abstract
As the Australian university sector awaits final decisions about the introduction and stipulations of a research quality framework (RQF), to assess the quality and impact of research, we have studied international commentary on the value of such exercises. This suggests there is little hard evidence to recommend the proposed RQF.
The UK government led the field in 1986 with its research assessment exercise (RAE), which is widely believed to have compromised clinical academic medicine by failing to satisfactorily acknowledge the contribution of clinical academics, not only to research but also to teaching and clinical practice. After the 2008 RAE, the UK government will move to a simpler, metrics-based system for assessing research quality and allocating funding.
The New Zealand Performance-Based Research Fund (PBRF), introduced in 2003, is based on a combination of peer review and performance indicators. Several concerns have been raised; among them is the real cost–benefit ratio of participation, with reports that many universities have spent more on the exercise than they will gain in funding increases. The scoring system has received the most criticism and, after the partial round assessment scheduled for this year, the controversial unit of assessment will be reviewed.
It might be more cost-effective for Australia to modify existing research assessment processes than to undertake a potentially costly and arduous exercise.