Medical technologies are believed to be a major driver of growth in health expenditure and are therefore a target of cost containment, which requires a systematic approach to evaluating and introducing new medical technologies and procedures.
Australia implemented one of the world’s first health technology assessment (HTA) programs in 1982.1-3 The government body now responsible for the management of Australia’s HTA program is the Medical Services Advisory Committee (MSAC). The MSAC comprises 22 members drawn from specialist health care, health administration, consumer representation and academic health disciplines (epidemiology, health economics, etc), who work together to fulfil the MSAC’s terms of reference:4
Advise the federal Minister for Health and Ageing on the strength of evidence pertaining to new or emerging medical technologies and procedures in relation to their safety, effectiveness (clinical), and cost-effectiveness, and under what circumstances public funding should be supported;
Advise the Minister on which new medical technologies and procedures should be funded on an interim basis, to allow further data to be assembled to determine their safety, effectiveness and cost-effectiveness;
Advise the Minister on references relating to new or existing medical technologies and procedures; and
Undertake HTA work referred by the Australian Health Ministers’ Advisory Council.
Further details including current membership and administrative arrangements are available on the MSAC website (http://www.msac.gov.au).
Although there is an expanding body of literature examining evaluation methods or the results of their application,3,5 little is known of the methods used to provide the evidence that informs policy when introducing new medical technologies to the Medicare Benefits Schedule (MBS) or of the quality of methods used to undertake HTAs for the MSAC. Here, we examine the methods used to produce the section of MSAC HTA reports that reviews effectiveness of the technology or procedure.
New medical technologies and procedures are brought to the attention of the MSAC in two ways: (i) a sponsor (eg, a manufacturer, a craft group such as a medical college, or a consumer group) can submit an application to the MSAC for consideration (an “application”); or (ii) the Department of Health and Ageing can refer a particular technology to the MSAC (a “reference”). All technologies considered by the MSAC for listing on the MBS must have prior approval for marketing by the Therapeutic Goods Administration, if such approval is required for the technology in question. Evaluation of pharmaceuticals is administered by a separate body, the Pharmaceutical Benefits Advisory Committee.
An HTA commissioned by the MSAC takes the form of a systematic review. The review itself is undertaken by an independent group contracted by the MSAC under the guidance of an advisory panel formed by the MSAC, whose membership may comprise any combination of clinical experts (nominated by relevant clinical colleges), consumers of the technology, epidemiologists and health economists. On completion of the review, the MSAC can make one of three recommendations to the Minister for Health and Ageing: to fund the technology or procedure without restriction; to provide interim funding and have further evaluation take place at some future point; or to not fund the technology or procedure. This process provides a formal means by which new and emerging technologies and procedures may gain entry to the MBS. Once a technology or procedure has been listed on the MBS, a proportion of its cost will be met by government reimbursement.3
The Australian National Health and Medical Research Council (NHMRC) has provided much of the methodology for evaluating new technologies, with formalised levels of evidence that reflect susceptibility to bias within particular study designs and by which the validity of primary studies can be assessed.6
The MSAC website was searched on 17 July 2006; all HTA application assessment reports published since 1 January 1998 and accessible in full text on the website on that date were considered for inclusion in this study.
To be eligible for inclusion in this evaluation, HTAs had to:
be accessible in full text on the MSAC website;
have a published recommendation regarding the decision to either fund or not fund the technology or procedure through the MBS;
review the effectiveness of a treatment; and
be based on an application.
HTAs were excluded if:
the full text of the HTA was not available on the MSAC website;
the HTA contained commercial-in-confidence information that prevented publication in full on the MSAC website;
the HTA did not review the effectiveness of a treatment; or
the HTA was based on a reference from the Department of Health and Ageing rather than an application.
Data were extracted from the HTA reports by two investigators using a standardised data extraction form. The extracted covariates were predominantly based on the criteria developed by Busse et al to assess the quality of HTA reports;7 however, we restricted our data extraction to the section of each HTA report dealing with effectiveness of the technology or procedure. Details of the data extracted are shown in Box 1. For most of the covariates, we simply assessed whether the variable had been reported in the HTA (yes, no or unclear). Disagreements between the reviewers were resolved by consensus, with a third reviewer available (although not required) for adjudication.
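Purely as an illustration, the double-extraction process described above can be sketched in code. The record fields and the consensus placeholder below are our own assumptions for the sketch; they are not the actual extraction form used by the investigators.

```python
from dataclasses import dataclass

# Illustrative three-state judgement used for most extracted covariates.
YES, NO, UNCLEAR = "yes", "no", "unclear"

@dataclass
class ExtractionRecord:
    """One reviewer's judgements for a single HTA report (hypothetical fields)."""
    report_id: str
    research_question_stated: str      # yes / no / unclear
    inclusion_criteria_reported: str
    search_terms_reported: str
    validity_assessment_described: str

def resolve(first: ExtractionRecord, second: ExtractionRecord) -> dict:
    """Merge two reviewers' records field by field; any disagreement is
    flagged for consensus discussion, mirroring the process in the text."""
    merged = {}
    for field in ("research_question_stated", "inclusion_criteria_reported",
                  "search_terms_reported", "validity_assessment_described"):
        a, b = getattr(first, field), getattr(second, field)
        merged[field] = a if a == b else "RESOLVE_BY_CONSENSUS"
    return merged
```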
Of the 56 available HTA reports of applications to the MSAC, 31 met the inclusion criteria. Box 2 summarises the selection process for HTAs and the reasons for exclusion. Of the 31 included HTAs, six were published in 1999, five in 2000, six in 2001, six in 2002, six in 2003 and two in 2005. Results of the data extraction are shown in Box 1. Sixteen of the 31 technologies or procedures assessed in these HTAs were recommended for funding, either on a permanent or interim basis.
Declarations of conflict of interest by any party involved in the assessment process (MSAC committee members or report authors) were not recorded in any of the HTA reports. The MSAC requires that potential conflicts of interest be declared; however, no information was presented detailing policies for dealing with any conflicts that did exist. All HTA reports published since 2000 (25/31) included specific details of the authors. All reports contained contact details for the MSAC, enabling readers to seek further information if required.
The majority of HTAs (19/31) did not formally report the research question that the assessment was attempting to answer. This did, however, seem to improve over time, with no HTAs reporting the research question in 1999, but over half (4/6) doing so by 2003. Most HTAs (27/31) did publish, as a minimum, the inclusion criteria of study types that were to be included in the report. All HTAs included some detail of search terms used to search databases for relevant literature. The number of reviewers who extracted data was not reported in 21 HTAs. In eight HTAs, it was explicitly stated that more than one reviewer was involved in extracting data and assessing the inclusion of studies, and in the remaining two reports only one reviewer extracted data. Twelve HTAs provided a diagram of the study inclusion process, showing the number of studies located and the number of studies that were subsequently included and excluded.
MEDLINE was searched for relevant literature for all published HTAs. Other common sources searched were the Cochrane Library (24/31), EMBASE (21/31), PreMEDLINE (15/31), and Current Contents (16/31). Additional databases were searched for most reports (30/31), with the number of additional databases ranging from one to nine. Other websites and specialist sources of information were searched in 23 HTAs. Common sources searched included the website databases of the International Society for Technology Assessment in Health Care and the International Network of Agencies for Health Technology Assessment. Other sources, including specialist websites, trial registries, other health technology assessment agency websites and grey literature, were searched in 13 HTAs.
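For readers wishing to experiment with reproducing a literature search of this kind, the sketch below queries PubMed/MEDLINE through NCBI’s E-utilities via Biopython. The query string and email address are invented for illustration; no search strategy from any MSAC report is reproduced here.

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder address

# Hypothetical boolean strategy combining an intervention term with
# study-design filters; not an actual MSAC search string.
query = ('("capsule endoscopy"[Title/Abstract]) AND '
         '(randomized controlled trial[Publication Type] OR '
         'systematic review[Title/Abstract])')

handle = Entrez.esearch(db="pubmed", term=query, retmax=50)
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} records found; first IDs: {result['IdList'][:5]}")
```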
In most reports, the type of evidence included in the effectiveness section was classified according to a modified version of the NHMRC designation of levels of evidence.6 Level I evidence (a systematic review) was included in eight HTAs. Level II evidence (randomised controlled trials) was included in 12 HTAs, with the number of randomised controlled trials included in each of these ranging from one to 16. Level III-1 evidence (pseudo-randomised controlled trials) was included in only three HTAs, and level III-2 evidence (comparative studies with concurrent controls and allocation not randomised, cohort studies, case–control studies, or interrupted time series with a control group) was also rare, being included in only four HTAs. Level IV evidence (case series, either post-test or pre-test/post-test) was included in half of all HTAs (16/31). Interestingly, in five HTAs the level of evidence of some included studies was not reported.
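The modified NHMRC hierarchy described above can be represented as a simple lookup, which is convenient when tabulating the designs of included studies. The level definitions below paraphrase those given in the text; the tally data are hypothetical.

```python
from collections import Counter

# Paraphrase of the modified NHMRC levels of evidence as described in the text.
NHMRC_LEVELS = {
    "I": "systematic review",
    "II": "randomised controlled trial",
    "III-1": "pseudo-randomised controlled trial",
    "III-2": "comparative study with concurrent, non-randomised controls "
             "(cohort, case-control, or controlled interrupted time series)",
    "IV": "case series (post-test or pre-test/post-test)",
    "unreported": "level of evidence not stated in the HTA",
}

# Hypothetical tally of the highest level of evidence cited in each report.
included = ["II", "IV", "IV", "I", "III-2", "unreported"]
print(Counter(included))
```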
Eighteen HTAs provided a validity assessment of included studies with explicit details of the methods used. The methods used were unclear in nine HTAs, and in the remainder of reports it was assumed no validity assessment had been undertaken. The most commonly reported method to assess the validity of included studies, used in five HTAs, was the Cochrane handbook for systematic reviews of interventions;8 other methods used in at least two reports were the Centre for Reviews and Dissemination handbook,9 the Quality of Reporting of Meta-analyses (QUOROM) checklist10 and assessment methods proposed by Schulz et al11 and Greenhalgh.12 In total, at least seven different validity appraisal methods were used in these reports. In the unclear cases it was difficult to determine whether formal validity assessments had been undertaken: although the reports contained elements of validity assessment, the exact methods used were not stated. The descriptive elements and the results of included studies were presented in both tables and text in 30 HTAs. Due to a reported lack of appropriate studies, meta-analysis was performed in only three reports.
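Because meta-analysis is mentioned only in passing above, a minimal sketch of its simplest form (fixed-effect, inverse-variance pooling) may help readers unfamiliar with the technique. This is generic textbook methodology with invented example data, not the method used in any particular MSAC report.

```python
import math

def fixed_effect_pool(estimates, standard_errors):
    """Inverse-variance fixed-effect pooling of study-level effect estimates
    (e.g. log odds ratios). Returns the pooled estimate and its standard error."""
    weights = [1.0 / se**2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Illustrative data: log odds ratios and standard errors from three hypothetical trials.
log_or = [-0.35, -0.10, -0.22]
se = [0.15, 0.20, 0.12]
est, est_se = fixed_effect_pool(log_or, se)
print(f"pooled log OR = {est:.3f} "
      f"(95% CI {est - 1.96*est_se:.3f} to {est + 1.96*est_se:.3f})")
```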
Our examination of 31 HTA reports produced for the MSAC between 1998 and 2006 found considerable variability in quality. Reports did not describe authors’ potential conflicts of interest. Most reports did not formally state the research question, and just over half provided details of any validity assessments of included studies.
Conflict of interest is of concern because of the perception that it could lead to unreasonable bias in an HTA report.13 Beyond the recording of such information by the MSAC, neither the published HTA reports nor any publicly available documentation details how conflicts of interest are handled.9,14 Other international HTA agencies have provided greater detail of how conflicts of interest are both identified and recorded.15,16
In undertaking our study we used the criteria of Busse et al, which were published in 2002.7 Most of the HTAs we evaluated were conducted before these criteria were published, and indeed there is some evidence that reporting is improving, with an increase in the number of reports including details of the methods used over the time period examined. Optimal search strategies for primary studies to be included in the effectiveness and cost-effectiveness sections of HTAs have been suggested and may prove useful in future reports as a minimum standard to which report authors could adhere.17 Problems with reporting of systematic reviews and HTAs have been noted previously. A study by Olsen et al found that 29% of new Cochrane systematic reviews published in 1998 had major problems,18 and the authors specifically highlighted three areas of concern: that the evidence did not support the conclusions; that the conduct or reporting of reviews was unsatisfactory; and that reviews had stylistic concerns. A study conducted in Canada, which evaluated the conduct of reports from four HTA agencies in that country, found similar results to our study in that almost half of all reports failed to specify the methods used.19
The MSAC has produced its own guidance for evaluators undertaking HTAs,14 and this has recently been updated to include information on undertaking reviews of diagnostic and screening technologies.20 Other organisations such as the Cochrane Collaboration and the QUOROM group have produced guidance on the undertaking and reporting of systematic reviews and meta-analyses.8,10
Given the policy implications of HTAs produced by the MSAC, it is only right that they be produced to the highest quality standards. Statistical and methodological advances continue to be made in the field of systematic reviewing, so there is an ongoing need to update the methods of conducting and reporting HTAs to ensure that decision makers have access to the most scientifically rigorous information possible. Moving towards a system with entrenched minimum reporting standards would reduce variability between reports, raise their overall quality, and make the decisions based on them more transparent.
Box 1: Data extracted from 31 health technology assessment (HTA) reports produced for the Medical Services Advisory Committee, 1999–2005,* based on criteria developed by Busse et al7
Abstract
Objective: To examine the methods used in health technology assessments (HTAs) produced for the Medical Services Advisory Committee (MSAC) reviewing the effectiveness of a technology or procedure.
Design and setting: Data were extracted from the effectiveness section of HTA application assessment reports published between 1 January 1998 and 17 July 2006 and available on the MSAC website. Only HTAs reviewing the effectiveness of treatments were examined, as the methods used to undertake such reviews are well established.
Main outcome measures: Variables reflecting methods used in the HTAs to evaluate the effectiveness of health technologies or procedures.
Results: Of 56 MSAC HTA reports available, 31 met the inclusion criteria. There was considerable variability in the quality indicators and the methods used within the HTAs. Reports did not describe participants’ potential conflicts of interest. The majority of reports (19/31) did not formally state the research question that the assessment was attempting to answer. Just over half of the reports (18/31) provided details of validity assessment of the included studies.
Conclusions: Minimum, consistent standards of methodology and reporting, based on international best-practice recommendations, are required in Australian HTAs to increase the transparency and applicability of these reports.