Public reporting of health care outcomes such as in-hospital mortality or complication rates is now well established internationally. It is less advanced in Australia, but has been advocated academically1 and as part of contemporary federal–state negotiations. The current political direction favours publication of risk-adjusted in-hospital mortality, infection and complication rates for each facility as the preferred indicators.2-4
One of the outcomes of Queensland’s Bundaberg Hospital scandal5,6 and the associated public inquiries was a revitalisation of quality management processes and a new emphasis on transparency in the health system. The Health Services Act 1991 (Qld) was amended in 2005 to require publication of an annual public hospital performance report. A shake-up in clinical governance also occurred, with the introduction of new quality management processes that included more robust and consistent reporting of clinical incidents and sentinel events7 as well as a monitoring system using statistical process control charts for 30 clinical indicators.8-10 The statistical process control approach emphasises the dynamic nature of performance against particular outcome measures and flags significant variations from the state mean. Public and private hospitals are given feedback on their performance against the indicators on a monthly basis. Depending on the extent to which a hospital’s indicators deviate from the state average, there are requirements for reporting at various levels of the bureaucratic hierarchy, using a standardised approach to reporting findings that emphasises systematic reasons for variation.5
What is critical in the new approach is not that an indicator is flagged for further investigation, but that robust investigation takes place. Investigation reports for indicators flagged at twice the state average rate (for non-mortality indicators, such as complications of care) or 75% above the state average rate (for mortality indicators) are reviewed externally to the hospital to assess the adequacy of the hospital’s internal investigation. A rating is given for the “strength” of actions and the comprehensibility of the report for public presentation.9
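To make these thresholds concrete, the sketch below applies the rules described above: an investigation report is reviewed externally when a non-mortality indicator reaches twice the state average rate, or a mortality indicator reaches 75% above it. This is a minimal illustration only; the function name, data values and structure are hypothetical and do not represent Queensland Health’s actual monitoring software.

```python
# Illustrative sketch only. The thresholds mirror those described above
# (twice the state average rate for non-mortality indicators, 75% above
# the state average rate for mortality indicators); the function name and
# example figures are hypothetical.

def requires_external_review(hospital_rate: float,
                             state_average_rate: float,
                             is_mortality_indicator: bool) -> bool:
    """Return True if a hospital's investigation report for this indicator
    would be reviewed externally under the thresholds described above."""
    multiplier = 1.75 if is_mortality_indicator else 2.0
    return hospital_rate >= multiplier * state_average_rate


# Hypothetical example: a complication rate of 6.5% against a state
# average of 3.0% exceeds twice the average, so the hospital's
# investigation report would be reviewed externally.
print(requires_external_review(0.065, 0.030, is_mortality_indicator=False))  # True
```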
This dynamic, improvement-oriented approach to quality management was first used as the basis for the mandated public reporting in 2008. Although quantitative performance data for each of the 30 indicators for each relevant hospital are published as a separate table on the Internet,11 the main printed public report (also available on the Internet)12 focuses on whether the indicator performance of any individual hospital was significantly different from the state average and, more importantly, on the actions that the identified hospital is taking in response to flagged variations from the average. Examples of published summaries of investigations are shown in the Box. A similar approach is taken with regard to reporting on clinical incidents and sentinel events.7
While controversy remains about its value and purpose,13,14 the case for public reporting has been variously argued as being:
to enhance patient choice;
to stimulate improvement in outcomes (whether through fear of market responses, fear of risk to the hospital’s reputation,15,16 or other, more personal motivations17); and/or
to provide public accountability.
Contrary to the first stated aim, consumers appear not to rely on public reports in selecting hospitals.19 But even if patients cannot or do not use public reporting, the other two goals (improvement and accountability) may still warrant its continuation. The two goals are linked, as presumably one purpose of public accountability is to stimulate remedial action where necessary. In a systematic review, Fung and colleagues showed that public reporting did indeed stimulate hospital quality improvement activity, but the impact on clinical outcomes was “mixed”.19 There is still controversy about the emphasis on outcome indicators (such as mortality rates) in public reporting, with a steady stream of research suggesting that process measures (such as prophylaxis against venous thromboembolism), which are rarely captured in routine data, are more useful in guiding quality improvement efforts.20-22
More fundamentally, public reports may also be criticised because they are based on an outmoded conception of the quality endeavour. The very terminology of “score cards” and “report cards” brings to mind the picture of an errant schoolboy standing in the corner awaiting discipline for poor performance;23 this imagery is part of the “name–shame–blame” culture that has pervaded the health sector’s approach to safety and quality in the past. A “just” culture is now seen as critical to redressing quality problems, but unfortunately this thinking appears not to have reached the advocates of public reporting, and so public reporting has not been reconceptualised. The name–shame–blame approach encourages a defensive response from hospitals, which often criticise the data or “shoot the messenger”.
Queensland Health’s approach responds to two of the aims of public reporting: providing public accountability and stimulating action. It achieves the latter not by trading on hospitals’ concern with risk to their reputation, but by requiring actual investigations and reporting their results. Although media reporting of the new approach in Queensland still focuses on naming, shaming and blaming,24-26 this is to some extent characteristic of the tabloid approach that pervades, but is not unique to, journalism in Queensland.
Health professionals are now encouraged to report incidents and near misses so that we can learn from them. This involves creating internal cultures free from inappropriate blame. But the external reporting environment for most hospitals has not kept pace with this change in internal culture. The emphasis is still generally on a cross-sectional, static approach to identifying poorly performing facilities that involves naming and shaming hospitals to stimulate action for improvement.
In contrast, in Queensland we are trying to shift the emphasis of public reporting away from simply pointing the finger at hospitals whose performance is below average and towards the actions being taken to improve that performance. The focus is thus on quality improvement rather than on public shaming.
Examples of published summaries of local investigations
Laparoscopic cholecystectomy complications of surgery: Gold Coast Hospital
A review revealed data issues caused by poor documentation in patient medical charts. The Director of Surgery is working with the Coder Educator to make sure that documentation is clear and understandable so that the information can be coded correctly.
Paediatric tonsillectomy and adenoidectomy readmission: Royal Children’s Hospital
Hospital examination has highlighted a potential management issue. The Ear, Nose and Throat Clinical Nurse Consultant (CNC) was on extended leave during the latter part of 2006, which may have impacted on the level of advice and education being provided to both parents and patients. The CNC has incorporated age-specific education into the CNC Succession Plan in order to provide appropriate descriptions and education to all of the patient population.
Paediatric tonsillectomy and adenoidectomy readmission: Mater Children’s Public Hospital
Data, patient casemix and process of care were three areas identified as contributing to this flag. As a result, Mater Children’s Public Hospital has implemented a clinical care pathway. Data reviewed were found to contain coding errors. These errors have been fixed and the data have been resubmitted.
Heart failure in-hospital mortality: Townsville Hospital
Hospital review revealed that data and casemix issues significantly contributed to this flag. This review revealed a cohort of complex high-risk patients with end-stage diseases. There were no patterns or significant omissions in care. Issues with incomplete discharge summaries and coding were discovered, and education sessions are scheduled for each specialty area within the hospital. Implementation of a mortality review process for every patient death should contribute to clinical coding accuracy in the future.
Abstract
In many settings, public reporting of health care outcomes still reflects the “name–shame–blame” culture that has permeated large areas of the health care sector for decades.
A new approach to public reporting in Queensland, based on statistical process control, emphasises the dynamic nature of performance against specified outcome measures by focusing on the actions that hospitals are taking if their indicators vary from the average.
The aim is for public reporting to contribute to, rather than detract from, the creation of an internal culture that emphasises rigorous investigation and improvement rather than merely assigning blame for problems.