Currently available data can be used to focus on clinical quality, patient-centredness and safety of care in hospitals
Australia has traditionally focused its public reporting efforts concerning hospital care on indicators of volume, costs, length of stay and efficiency at state, territory and national levels. Curiously, there is much less nationally consistent, hospital-level reporting on other dimensions of care, such as appropriateness, effectiveness, patient-centredness and safety.
There are two major reasons for measuring, monitoring and reporting on these dimensions of quality of hospital care.
First, hospital-level reporting stimulates and focuses quality improvement initiatives that support better care and better health. Quality improvement techniques such as benchmarking, Six Sigma, “lean” programs, collaboratives and process re-engineering all depend on measurement and reporting to monitor impact. There is evidence, for example, that confidential, hospital- and physician-level reporting can substantially reduce 30-day mortality after cardiac surgery.1
A review of international evidence indicates that there is “strong and consistent evidence that public reporting stimulates quality improvement in hospitals” and “the majority of studies show significant positive impact of public reporting on clinical outcomes”.2
Second, hospital-level reporting is necessary for accountability, as governments, insurers and the public reasonably expect to understand how effectively care is being delivered. Public reporting is also necessary for transparency if ready access to information is expected to influence patient choice. A recent review of international evidence indicated that public reporting “may be able to make significant and policy-important changes in consumers’ decisions in choosing hospitals in some settings”.2
This Supplement features articles that describe how currently available data can be used to focus quality improvement and to support accountability and transparency through the creation and use of timely and accurate information on clinical quality, patient-centredness and safety of care in hospitals.
The article by Sketcher-Baker and colleagues on the use of variable life-adjusted displays (VLADs) describes how inpatient data in Queensland have been used to support quality improvement, accountability and transparency.3 Data collation, calculation of VLADs, and feedback inform a clinical improvement program and support accountability while delivering transparency through public reporting of outcomes. Other important elements of this program are the commitment to ongoing review and consultation around the indicators, and the clinical governance model that underpins the VLAD review and response cycle.
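For readers unfamiliar with the technique, a VLAD plots, over a sequence of patients, the running sum of each patient’s model-predicted risk of an adverse outcome minus the observed outcome: the curve drifts upward when outcomes are better than risk-adjusted expectation and downward when they are worse. The Python sketch below illustrates only this arithmetic; the risk estimates and outcome data are invented for illustration and do not represent the Queensland implementation.

```python
# Minimal, illustrative VLAD calculation (not the Queensland Health implementation).
# Each patient contributes (predicted risk - observed outcome) to a running total;
# outcomes are coded 1 for an adverse event (e.g. death) and 0 otherwise.

def vlad(predicted_risks, outcomes):
    """Return the cumulative sum of expected-minus-observed outcomes.

    Each point is the running count of "statistical lives gained" relative to
    risk-adjusted expectation: an upward trend means fewer adverse events
    than predicted, a downward trend means more.
    """
    curve, total = [], 0.0
    for risk, outcome in zip(predicted_risks, outcomes):
        total += risk - outcome   # expected minus observed for this patient
        curve.append(total)
    return curve

# Five consecutive patients; the fourth dies despite a predicted risk of 20%.
risks = [0.05, 0.10, 0.02, 0.20, 0.08]
deaths = [0, 0, 0, 1, 0]
print(vlad(risks, deaths))  # [0.05, 0.15, 0.17, -0.63, -0.55]
```

In practice, monitoring programs pair such curves with statistical control limits, so that a sustained downward drift, rather than any single death, triggers the kind of clinical review and response cycle described in the article.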
The article by Clarke and colleagues on the AusPSI program describes how inpatient data have been used to support routine reporting to the Patient Safety Monitoring Initiative in Victoria.4 This initiative involves the use of risk-adjusted outcome measures that build on patient safety indicator work established by the Agency for Healthcare Research and Quality.5
The main questions to ask in assessing data collections that report on quality of care include:
Are we asking the right questions — will each data collection accurately describe significant variance in practice and outcomes?
Is the data collection feasible and efficient, or unrealistically burdensome?
Is there high-quality scrutiny of the data’s accuracy and reliability?
Is there high-quality interpretation and clinical review of reported compliance and variance?
Is the risk adjustment fair?
Reid and colleagues describe the Australian Cardiac Procedures Registry (ACPR).6 The ACPR collects patient, procedure and outcome data from 21 participating facilities, and generates and feeds back risk-adjusted outcome measures against local and international benchmarks.
In a previous issue of the Journal, McNeil and colleagues recommended the establishment of clinical quality registries for high-cost, high-volume interventions where there is variation in practice and where practice modification can improve outcomes.7 The successes of the National Joint Replacement Registry,8 the National Breast Cancer Audit,9 the Australian and New Zealand Intensive Care Society Centre for Outcome and Resource Evaluation patient databases,10 and the Australia and New Zealand Dialysis and Transplant Registry11 suggest that clinicians are prepared to accept the burden of submitting a succinct dataset that they themselves have developed in return for routine reports showing their own risk-adjusted performance against that of their peers.
Ben-Tovim and colleagues describe their efforts to measure standardised in-hospital death rates.12 The authors, in collaboration with the Australian Institute of Health and Welfare (AIHW), have led national analyses of hospital mortality data and refined the Canadian risk-adjustment methodology through detailed analyses of the National Hospital Morbidity Database. The calculation, monitoring and reporting of hospital-standardised mortality ratios (HSMRs) is not without controversy.13,14 The Australian Commission on Safety and Quality in Health Care, however, has recommended that hospitals routinely review HSMRs, deaths in low-mortality diagnosis-related groups and condition-specific in-hospital mortality rates to identify opportunities to improve hospital care.15
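By way of background to that controversy, an HSMR is simply observed deaths divided by the deaths expected under a risk-adjustment model, multiplied by 100, so that values above 100 indicate more deaths than the casemix predicts. The sketch below shows the arithmetic using a logistic risk model fitted to synthetic data; the covariates, data and model are assumptions for illustration, not the AIHW or Canadian methodology.

```python
# Illustrative HSMR arithmetic: HSMR = 100 * observed deaths / expected deaths.
# The covariates (age, emergency admission) and the synthetic data are
# assumptions for this sketch, not the AIHW or Canadian methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic reference population: age (years), emergency admission flag, death.
n = 50_000
X = np.column_stack([rng.integers(18, 95, n), rng.integers(0, 2, n)])
true_logit = -6.0 + 0.04 * X[:, 0] + 0.8 * X[:, 1]
died = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit the risk model on the reference population.
risk_model = LogisticRegression().fit(X, died)

def hsmr(X_hospital, died_hospital):
    """Observed deaths / model-expected deaths * 100 for one hospital."""
    expected = risk_model.predict_proba(X_hospital)[:, 1].sum()
    return 100 * died_hospital.sum() / expected

# A hospital whose admissions are drawn from the same population should
# score close to 100.
idx = rng.choice(n, 2_000, replace=False)
print(round(hsmr(X[idx], died[idx]), 1))
```

Much of the methodological debate cited above turns on what enters the risk model and on how coding practices differ between hospitals, rather than on the ratio itself.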
Kennedy and colleagues describe the importance of clinical practice guidelines, and the need to identify and respond to variations in practice, in the context of the rapid escalation of the comparative effectiveness agenda in the United States.16
Leathley and colleagues summarise the results of a forum, convened by the Clinical Excellence Commission in New South Wales, on measuring hospital performance.17 The authors outline key principles for the design of hospital performance measures, and identify measures with the highest potential.
McNeil and colleagues describe how the National Antimicrobial Utilisation Surveillance Program measures, monitors and identifies significant variance and trends in antibiotic usage.18 They demonstrate how antibiotic usage data from 28 principal referral hospitals and one private hospital have generated interventions and real change in antibiotic prescribing practice.
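Surveillance programs of this kind typically express usage as defined daily doses (DDDs) per 1,000 occupied bed-days, a density measure that allows hospitals of different sizes to be compared over time. The sketch below shows the arithmetic; the DDD value and usage figures are invented for illustration and are not NAUSP data.

```python
# Illustrative usage-density calculation: DDDs per 1,000 occupied bed-days.
# The DDD value and figures below are invented for the example, not NAUSP data
# (real DDD values come from the WHO ATC/DDD index).

def usage_density(grams_dispensed: float, ddd_grams: float,
                  occupied_bed_days: float) -> float:
    """Return antimicrobial usage as DDDs per 1,000 occupied bed-days."""
    ddds = grams_dispensed / ddd_grams
    return 1000 * ddds / occupied_bed_days

# A hospital dispensing 600 g of an antibiotic with a 2 g DDD over a month
# with 9,000 occupied bed-days:
print(round(usage_density(600, 2.0, 9000), 1))  # 33.3 DDDs per 1,000 bed-days
```

Tracking this density month by month across comparable hospitals is what makes the variance and trend analysis described in the article possible.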
At the national level, the AIHW and the Australian Government Productivity Commission take seriously their charters to “provide information on Australia’s health and welfare, through statistics and data development”19 and to “promote public understanding of matters related to industry and productivity”20 in relation to health services. On 20 April 2010, the Council of Australian Governments (with the exception of Western Australia) agreed to sign the National Health and Hospitals Network Agreement, including the establishment of a National Performance Authority, and there are plans to launch a public website with hospital-level information.21
At the state and territory level, several governments release information on the performance of public hospitals, including Victoria,22 Queensland23 and NSW.24 NSW publishes information including waiting lists for elective surgery, health care-associated infections and current safety notices, and hosts a health service website.25
The NSW Bureau of Health Information was recently established to publicly report on the performance of the state’s health system. The Bureau’s first report provided comparative, hospital-level information on patient-centred care in 38 large hospitals.26 Its Hospital Quarterly reports will provide information on inpatient services, surgical care and emergency departments every 3 months. The first issue expanded the scope of hospital-level information previously reported, adding new measures of accessibility and patient-centred care and increasing the number of hospital emergency departments covered from 40 to 66.27
If we accept the premise that timely, accurate and comparable information about the performance of hospitals is “a good thing”, the most important question remains: “What measures are meaningful and useful?”
If the purpose of reporting is better care, selection of measures should be driven by the priorities of clinicians and health care management and policy communities. If the purpose of reporting is accountability, selection of measures depends on the aim of investments. If the purpose is transparency about the performance of hospitals, a broad and balanced portfolio of measures is important.
Authors of articles in this Supplement highlight how current information systems in Australia can be used to gather meaningful, useful information for clinical, management and policy communities. It’s up to the rest of us to build on these initiatives to create and use timely, accurate and comparable information about the performance of hospitals in Australia to support better care.
- Neville Board1
- Diane E Watson2
- 1 Australian Commission on Safety and Quality in Health Care, Sydney, NSW.
- 2 Bureau of Health Information, Sydney, NSW.
- 1. Guru V, Fremes SE, Naylor CD, et al. Public versus private institutional performance reporting: what is mandatory for quality improvement? Am Heart J 2006; 152: 573-578.
- 2. Chen J. Public reporting of health system performance: review of evidence on impact on patients, providers and healthcare organisations. [An Evidence Check rapid review brokered by the Sax Institute for the Bureau of Health Information.] Sydney: The Sax Institute, 2010: 92.
- 3. Sketcher-Baker KM, Kamp MC, Connors JA, et al. Using the quality improvement cycle on clinical indicators — improve or remove? Med J Aust 2010; 193 (8 Suppl): S104-S106. <MJA full text>
- 4. Clarke ALL, Shearer W, McMillan AJ, Ireland PD. Investigating apparent variation in quality of care: the critical role of clinician engagement. Med J Aust 2010; 193 (8 Suppl): S111-S113. <MJA full text>
- 5. Agency for Healthcare Research and Quality. Patient safety indicators overview. Rockville, Md: AHRQ, 2006. http://www.qualityindicators.ahrq.gov/psi_overview.htm (accessed Sep 2010).
- 6. Reid CM, Brennan AL, Dinh DT, et al. Measuring safety and quality to improve clinical outcomes — current activities and future directions for the Australian Cardiac Procedures Registry. Med J Aust 2010; 193 (8 Suppl): S107-S110. <MJA full text>
- 7. McNeil JJ, Evans SME, Johnson NPJ, Cameron PA. Clinical-quality registries: their role in quality improvement. Med J Aust 2010; 192: 244-245. <MJA full text>
- 8. Australian Orthopaedic Association. National Joint Replacement Registry. http://www.dmac.adelaide.edu.au/aoanjrr/index.jsp (accessed Sep 2010).
- 9. Royal Australasian College of Surgeons. National Breast Cancer Audit. http://www.surgeons.org/nbca (accessed Sep 2010).
- 10. Australian and New Zealand Intensive Care Society. Centre for Outcome and Resource Evaluation (CORE). http://www.anzics.com.au/core (accessed Sep 2010).
- 11. Australia and New Zealand Dialysis and Transplant Registry. http://www.anzdata.org.au/v1/index.html (accessed Sep 2010).
- 12. Ben-Tovim DI, Pointer SC, Woodman R, et al. Routine use of administrative data for safety and quality purposes — hospital mortality. Med J Aust 2010; 193 (8 Suppl): S100-S103. <MJA full text>
- 13. Jarman B, Pieter D, van der Veen AA, et al. The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care? Qual Saf Health Care 2010; 19: 9-13.
- 14. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009; 338: b780.
- 15. Australian Commission on Safety and Quality in Health Care. National indicators of quality and safety. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/PriorityProgram-08_HospLvl-Indicators#hospital (accessed Sep 2010).
- 16. Kennedy PJ, Leathley CM, Hughes CF. Clinical practice variation. Med J Aust 2010; 193 (8 Suppl): S97-S99. <MJA full text>
- 17. Leathley CM, Gilbert R, Kennedy PJ, Hughes CF. Measuring hospital performance — 2008 forum summary. Med J Aust 2010; 193 (8 Suppl): S95-S96. <MJA full text>
- 18. McNeil V, Cruickshank M, Duguid M. Safer use of antimicrobials in hospitals: the value of antimicrobial usage data. Med J Aust 2010; 193 (8 Suppl): S114-S117. <MJA full text>
- 19. Australian Institute of Health and Welfare. About us. http://www.aihw.gov.au/aboutus (accessed Sep 2010).
- 20. Australian Government Productivity Commission. The Role of the Commission. http://www.pc.gov.au/about-us/role (accessed Sep 2010).
- 21. Drape J. Hospitals website launched but no date. Sydney Morning Herald 2010; 16 Jul. http://news.smh.com.au/breaking-news-national/hospitals-website-launched-but-no-date-20100716-10dg2.html (accessed Sep 2010).
- 22. Victorian Government Department of Health. Your hospitals — an overview of public hospital activity. http://www.health.vic.gov.au/yourhospitals/ (accessed Sep 2010).
- 23. Queensland Health. Performance reports. http://www.health.qld.gov.au/performance/default.asp (accessed Sep 2010).
- 24. NSW Health. Monthly hospital performance reports. http://www.health.nsw.gov.au/reports/reports.asp (accessed Sep 2010).
- 25. NSW Health. Search your health service. http://www.health.nsw.gov.au/hospitals/search.asp (accessed Sep 2010).
- 26. Bureau of Health Information. Insights into care: patients’ perspectives on NSW public hospitals. Sydney: BHI, 2010.
- 27. Bureau of Health Information. Hospital Quarterly: performance of NSW public hospitals April to June 2010. http://www.bhi.nsw.gov.au/publications/hospital_quarterly_report (accessed Sep 2010).