A Clinical Excellence Commission seminar explored how clinical practice variation can be monitored, and identified directions and opportunities in this field.
The Clinical Excellence Commission (CEC) is a statutory corporation, entrusted with improving patient safety and clinical quality within the New South Wales health system, in collaboration with key partners. In addition to sponsoring several clinical improvement initiatives, the CEC has a reporting, capacity-building and networking role to identify, promote and help spread best practice.
In 2008, to meet community and government requirements of transparency, openness, accountability and integrity, the CEC sought to progress the development of a small set of safety and quality measures for NSW hospitals. Related reform initiatives, including the Special Commission of Inquiry into Acute Care Services in NSW Public Hospitals1 and the outcomes of the 2008 Council of Australian Governments meeting,2 called for the development of robust, agreed, reliable, readily available and publicly reported measures relating to the quality and safety of health care.
Cognisant of successful developments overseas and nationally, and of reform initiatives within NSW, the CEC hosted a seminar in Sydney on 13 November 2008, entitled Hospital Performance and How We Can Measure and Report On It. The aim of the seminar was to explore current reporting mechanisms and to engage a range of key stakeholders in discussion regarding the development of critical measures that could best report quality and safety performance in NSW hospitals. Over 50 senior clinical and administrative stakeholders were in attendance.
The seminar focused on the experiences of health agencies in Canada and Australia, identifying initiatives for capturing and reducing variation, and highlighting challenges and opportunities related to data collection and reporting. Outcomes from the seminar included identification by the assembled group of agreed key principles, critical measures with the highest potential and the next steps to facilitate progress.
The seminar included the following presentations:
The Canadian Experience of Performance Measurement and Reporting (Professor G Ross Baker, University of Toronto) shared the experience of developing a balanced scorecard in the Canadian health system in the mid 1990s and of the growing acceptance of public reporting of key indicators.
Measures for the Australian Health System (Jenny Hargreaves, Australian Institute of Health and Welfare) outlined a set of proposed performance indicators across the health and aged care system, and the development of national indicators of safety and quality in health care, being undertaken for the Australian Commission on Safety and Quality in Health Care.
The Queensland Experience (Professor Michael Ward, Health Quality and Complaints Commission) described the origins and evolution of public hospital performance reporting via variable life-adjusted displays (VLADs).
The Overview (Professor Clifford Hughes, CEC) outlined key drivers of, and developments in, public hospital performance measurement in the NSW health system.
A plenary discussion after the presentations debated the need for improved measures of safety and quality, the requirement to report these publicly, and the potential role of NSW Health or the CEC in measuring hospital performance. The following key points were identified during the discussion.
Improvement and accountability are interlinked.
There is value in distinguishing between dimensions of care; for example, low-dimension elements, with clear interventions, outcomes and accountabilities, and high-dimension elements, which are more complex and where outcomes, processes and accountabilities are less clear.
Community expectations and patient satisfaction measures do not generally correlate with performance outcome measures.
There is a need to consider the timing relationships between performance reports, reduction in variation and improved quality of care. VLAD data provide immediate feedback, with links to process changes relatively evident, whereas mortality-related data take longer to review. Triangulation approaches are likely to assist. (A simplified illustration of how a VLAD is calculated follows this list of points.)
Linkage to population health is supported, with development of relevant measures being considered.
“Accountability” generally involves elements of blame and responsibility for improvement. A key question to address is: “Accountability to whom?”
Accreditation is a useful part of the framework, but is not an end in itself.
Mortality is a complex outcome, not simply a “bad” one, and its negative aspects need to be balanced against allowing a patient to die with dignity and respect.
Datasets will need to be refined over time, in terms of number and value, rather than aiming for a perfect set at the first attempt.
The value for hospitals and clinicians is in being able to compare performance with their peers in regard to valid measures.
Feedback loops are important for sharing and acting on data — there is a need to link in with broader quality improvement processes and other parts of the system, such as ambulance services.
Systematic, regular reporting is seen to be important for the health system (from Minister to clinicians) and consumers, to provide reassurance that the system is performing as it should, and to highlight vulnerabilities to be improved.
Quality of data is more important than quantity of data.
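As background to the point about VLAD feedback above, a VLAD is typically plotted as a running cumulative sum of expected minus observed adverse outcomes, with each patient's expected value drawn from a risk-adjustment model. The following minimal sketch illustrates that calculation; the function name, risk values and outcomes are hypothetical and are not taken from the Queensland reporting system or the seminar material.

# Minimal sketch of a variable life-adjusted display (VLAD) series.
# Assumes each patient has a model-derived predicted risk of death and an
# observed outcome; the values and names here are illustrative only.

def vlad_series(predicted_risks, observed_deaths):
    """Return the running cumulative sum of (expected - observed) deaths."""
    total = 0.0
    series = []
    for risk, died in zip(predicted_risks, observed_deaths):
        total += risk - (1.0 if died else 0.0)  # rises when outcomes beat expectation
        series.append(total)
    return series

# Hypothetical sequence of eight patients: predicted risks and observed outcomes.
risks = [0.05, 0.10, 0.02, 0.20, 0.08, 0.15, 0.03, 0.12]
deaths = [False, False, False, True, False, False, False, False]
print(vlad_series(risks, deaths))

On such a plot, an upward trend indicates fewer adverse outcomes than expected, while a sustained downward trend flags performance for review; this is why VLADs can offer more rapid feedback than periodic mortality reports.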
There was general consensus that the key principles shown in Box 1 should apply to the design of a set of hospital performance measures for safety and quality.
Seven working groups were asked to identify the indicators or measures with the highest potential, in terms of being:
relevant;
clearly defined;
measurable;
routinely reportable;
robust (high-volume, reliable, clearly defined);
evidence-based and/or representative of interventions that will most improve safety and quality;
timely;
risk-adjusted;
consumer- or patient-focused;
immune to political influence, “gaming” or manipulation, and perverse incentives; and
able to be collected with minimal cost and burden on clinicians.
The measures with the highest potential, as selected across all groups, are shown in Box 2. Participants believed that this would be an ambitious but realistic initial set of measures, to be refined or developed with experience.
Other measures which were considered to have potential, but not to meet all criteria at this time, included:
caesarean section and other women’s health intervention rates including hysterectomy and episiotomy;
stroke and heart failure best-practice care (bundle of evidence-based interventions as per acute coronary syndromes);
mental health (readmissions, number of admissions per annum, follow-up after 7 days);
hospital-acquired malnutrition;
mortality from conditions considered amenable to health care;
hospital standardised mortality rates;
staff satisfaction; and
open disclosure process.
Seminar participants endorsed the CEC proceeding in partnership with other key stakeholders in developing and implementing a key set of indicators for reporting safety and quality in NSW hospitals. As part of this process, a summary report of the seminar was distributed to participants and temporarily posted on the CEC website to communicate, lead discussion and increase buy-in. This included acknowledgement of the need to engage more broadly with significant groups within hospitals in the development, collection and reporting of relevant measures.
Shortly after the seminar, the final report of the Special Commission of Inquiry into Acute Care Services in NSW Public Hospitals was published.1 A key recommendation of the report was the identification, development and publication of patient care measurements as a comprehensive way of seeing how patients in NSW hospitals are being looked after. Key measurements identified by the report were:
access to and availability of hospital services;
clinical performance;
safety and quality of clinical care and hospital attendance and admission;
cost of clinical care;
patient experience and satisfaction;
staff experience and satisfaction; and
system impact and sustainability.
The report recommended the establishment of a Bureau of Health Information to meet the above reporting requirements. This Bureau (http://www.bhi.nsw.gov.au/) was formally established in 2009, and the CEC will work in close association with the Bureau to support and help promote the development, implementation and reporting of relevant quality and safety measures. The findings of the seminar reported here will be a key factor in this development.
Box 1: Key principles
Choose measures based on strategy and intent, not political imperative
Identify the core purpose (eg, accountability, improvement, research, consumer/patient knowledge)
Choose a limited number of measures
Have dual sets of indicators for different purposes:
high-level indicators for public reporting
more detailed outcome and process measures for quality improvement
Engage clinicians in the design and collection of indicators; this is crucial
Increase capacity of computerised systems to facilitate access to outcome measures, and to reduce the burden of data collection
Recognise that improvements in indicator data quality usually follow their reporting
Box 2: Selected measures
Hospital-acquired infections (bundle of measures including methicillin-resistant Staphylococcus aureus infection, infection with vancomycin-resistant enterococci, central line infections, surgical site infections, Clostridium difficile infection, and ventilator-associated pneumonia)
Pressure ulcers
Best-practice care for acute coronary syndromes (bundle of evidence-based interventions including provision of medications on discharge)
Unplanned return to intensive care unit
Unplanned return to operating theatre
Medication errors (with associated measures of extent of harm)
Patient falls
Management of patients with deteriorating conditions
Venous thromboembolism
30-day unplanned overnight readmission rate
1. Garling P. Final report of the Special Commission of Inquiry: acute care services in NSW public hospitals. Sydney: NSW Government, 27 Nov 2008.
2. Council of Australian Governments. National healthcare agreement. Canberra: COAG, 2008. http://www.coag.gov.au/intergov_agreements/federal_financial_relations/docs/IGA_FFR_ScheduleF_National_Healthcare_Agreement.pdf (accessed Aug 2010).