The Quality in Australian Health Care Study highlighted the extent of harm to patients in Australia’s health care system in the 1990s,1 and stimulated initiatives to improve the quality, safety and accountability of patient care. The Australian Council for Safety and Quality in Health Care (ACSQHC) was established in January 2000 as a key national body to drive health care quality and safety reform. A key priority was to increase the use of health care performance measurement to drive quality improvement.2 This priority was underpinned by evidence that performance measurement is associated with improved quality and safety outcomes.3-7
External benchmarking and public reporting of organisational and individual performance measures are limited in Australia, in part because of concerns about data validity and the adequacy of models to adjust for differences in casemix.5 Individuals and organisations need access to robust measurement tools to enable internal, local and national comparisons.8-12 To date, there has been no systematic assessment of the tools used by Australian public hospitals to measure their own performance.
In November 2004, the ACSQHC commissioned the development of an evidence-based resource, the Measurement for Improvement Toolkit,13 to help Australian health care professionals access appropriate measurement tools and processes to support their patient safety programs.
Here, we provide the results of a survey of Australian public hospitals undertaken to inform the development and subsequent implementation of the Measurement for Improvement Toolkit. The primary objective of this national survey was to identify patient safety measurement tools across the three domains of patient safety defined by the ACSQHC: organisational capacity to provide safe health care; patient safety incidents; and clinical performance. The second objective was to identify perceived barriers to the use of these tools.
In the absence of internationally accepted definitions, the project team developed project definitions of the patient safety subdomains (within the three main domains defined by the ACSQHC) by conducting a comprehensive MEDLINE and CINAHL database search of the peer-reviewed literature and a web-based search of the literature of key health care safety and quality improvement organisations. These searches, which were also designed to identify existing patient safety tools, were conducted in April 2005. Definitions were ratified for inclusion by the expert panel (Box 1).
A random sample of Australian public hospitals, stratified by state and hospital peer group (location, type and size of hospital), was obtained from the Australian Institute of Health and Welfare’s public hospital list for 2002–03.16 Because they were few in number, all Australian Capital Territory, Northern Territory and Tasmanian public hospitals in the hospital peer groups we chose were included in the sample (Box 2). To keep the sample representative while keeping the study feasible, a proportion of each stratum (hospital peer group by state) was selected. The random sample was drawn with computer-generated random numbers using Stata, version 8 (StataCorp, College Station, Tex, USA).
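To illustrate this sampling design, the minimal sketch below draws a stratified random sample in Python (the study itself used Stata). The hospital names, peer-group labels and the 50% sampling fraction are invented for the example and are not the study’s actual parameters; overall, the study invited about 22% of Australian public hospitals.

```python
import pandas as pd

# Illustrative hospital list; the study drew its sample from the AIHW
# public hospital list 2002-03. All names, peer groups and the sampling
# fraction below are assumptions made for this sketch.
hospitals = pd.DataFrame({
    "hospital": [f"Hospital {i}" for i in range(1, 11)],
    "state": ["NSW"] * 4 + ["VIC"] * 4 + ["TAS", "NT"],
    "peer_group": ["A", "A", "B", "B", "A", "A", "B", "B", "B", "B"],
})

SMALL_JURISDICTIONS = {"ACT", "NT", "TAS"}  # included in full, as in the study
SAMPLING_FRACTION = 0.5  # illustrative only

# All hospitals in the small jurisdictions are kept.
small = hospitals[hospitals["state"].isin(SMALL_JURISDICTIONS)]

# A fixed proportion of every remaining state-by-peer-group stratum
# is drawn with computer-generated random numbers.
rest = hospitals[~hospitals["state"].isin(SMALL_JURISDICTIONS)]
sampled = rest.groupby(["state", "peer_group"]).sample(
    frac=SAMPLING_FRACTION, random_state=42
)

invited = pd.concat([small, sampled]).sort_index()
print(invited)
```

Sampling a fixed fraction of each stratum, rather than a fixed fraction of the whole list, keeps the invited sample representative of the state-by-peer-group mix while allowing small jurisdictions to be enumerated in full.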
The survey period was 4 March 2005 to 19 May 2005. The survey was disseminated repeatedly (a maximum of three times) to increase the likelihood of at least a 50% response rate.17 In keeping with the requirements of the Human Research Ethics Committee (HREC), hospitals were first contacted through their chief executive officers (CEOs) or directors to request their participation, and that of quality/safety/risk management staff, directors of nursing, allied health and pharmacy, and up to three directors of medical departments. In accordance with HREC approval, there was no direct contact with hospital staff other than through the CEO or director. The project team made a maximum of three follow-up calls and sent two global emails to CEOs or their nominated staff to remind staff to complete the survey.
A total of 167 public hospitals, representing 22% of all Australian public hospitals, were invited to participate. State and territory response rates and the reported use of measurement tools are summarised in Box 3. Eighty-two invited hospitals (49%) agreed to participate, with representation from each state and territory. The anticipated response rate was 50%, which was achieved in all but two states (New South Wales [36%] and Western Australia [43%]).
Responses on the identification and use of patient safety measurement tools, satisfaction with them, and barriers to their use were received from 182 individuals at the 82 responding hospitals. The tools they identified are summarised in Box 4. Individuals from responding hospitals did not identify any measurement tools that had not already been identified by the literature search and expert panel. In all domains, there was a focus on the use of processes (eg, accreditation) rather than on tools designed to quantitatively measure responsiveness to change.
The proportions of individual respondents reporting satisfaction, ambivalence or dissatisfaction with existing measurement tools are shown in Box 5. About half the individual respondents indicated they were satisfied with the existing patient safety measurement tools. A high proportion of respondents reported being “neither satisfied nor dissatisfied”, especially with tools measuring organisational capacity and clinical performance. Where measurement tools were not in use, or where there was dissatisfaction with the tools used, the most frequently listed limitations across all three domains (Box 6) were lack of an integrated patient safety system and administrative resource constraints. Lack of well developed tools for local use was reported to be a major limitation for measuring organisational capacity and clinical performance, but was not reported as a limitation for measuring patient safety incidents.
Fifty hospitals (61%) reported measuring organisational capacity; 81 (99%) measured patient safety incidents; and 81 (99%) measured clinical performance (either organisational/departmental or individual). There was some variation between states and territories (Box 3), but it did not reach statistical significance. There were no significant associations between hospital size or peer group and measurement of the three patient safety domains, although the numbers of responses were small and the confidence intervals accordingly wide. There was no difference in reported measurement across these domains according to respondents’ positions.
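As an illustration of how such between-state variation can be assessed, the minimal sketch below applies a chi-square test of independence to invented counts. The figures and groupings are placeholders, not the survey’s data, and the study’s own analysis may have used different methods.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of hospitals reporting / not reporting measurement
# of a patient safety domain, by jurisdiction. Illustrative numbers only.
#            reported  not reported
observed = [
    [12, 8],   # NSW
    [15, 5],   # VIC
    [10, 6],   # QLD
    [7, 4],    # other jurisdictions (pooled)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

# A p value above 0.05 would be consistent with the study's finding that
# between-state variation did not reach statistical significance.
```

With strata this small, expected cell counts are low and confidence intervals wide, which is why the study cautions against over-interpreting the absence of significant differences.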
Our study found that satisfaction with patient safety measurement tools among health professionals is modest at best. Dissatisfaction may be linked to a range of reported limitations, the most prevalent of which was a lack of integrated systems within hospitals. Not perceiving the value of change is one of the most powerful barriers to implementing innovation.18 If an organisational system does not support measurement in all aspects of data management, from collection through to review, timely feedback and response, then measurement is unlikely to be perceived as worthwhile, and hence unlikely to be supported by individuals within the system. In addition, adoption of new information depends not only on awareness and perceived value, but also on the credibility of the information. The second most frequently reported limitation was lack of access to robust measurement tools, a finding supported by additional work showing that most patient safety tools have not been developed through rigorous psychometric methods, nor validated in the Australian context.13
1 Definitions of the domains of patient safety
Patient safety incident — an event or circumstance that could or did lead to unintended and/or unnecessary harm to a person and/or to a complaint, loss or damage.14 Subdomains included: incident detection; incident reporting; incident investigation and analysis; incident management; and health care worker and patient feedback and learning.
Clinical performance assessment — measurement of practice behaviour and adherence to objective and evidence-based clinical process and outcomes of care by organisations, departments or individual clinicians.15
2 Summary of the sampling process for inviting Australian public hospitals to participate in the survey
4 Measurement tools and processes identified by 182 individual respondents from the 82 participating hospitals
Organisational capacity tools and processes
Accreditation
A checklist: patient safety management systems (Australian Council for Safety and Quality in Health Care)
Board clinical governance self-evaluation (Victorian Quality Council)
Checklist for reviewing your safety and quality program against the framework elements (Victorian Quality Council)
Consumer and community participation self-assessment tool for hospitals (National Resource Centre for Consumer Participation in Health)
Develop a culture of safety (Institute for Healthcare Improvement)
Key performance indicators
Patient safety measurement tools and processes
Clinical audit (including Limited Adverse Occurrence Screening)
Clinical incident reporting systems
Complaints/patient liaison
Coroner’s reports
Failure mode and effects analysis
Morbidity and mortality meetings/death review
Risk register
Root cause analysis
Sentinel event reporting
Clinical performance of organisation/department and individual clinician tools
Accreditation
Benchmarking
Certification of staff
Clinical indicators
Formal assessment of professional competence
Peer review
Professional development programs
5 Satisfaction with patient safety measurement tools reported by 182 individual respondents from the 82 participating hospitals
- Caroline A Brand1,2
- Joanne Tropea1
- Joseph E Ibrahim3
- Shaymaa O Elkadi4
- Christopher A Bain5
- David I Ben-Tovim6
- Tracey K Bucknall7,8
- Peter B Greenberg9,10
- Allan D Spigelman11
- 1 Clinical Epidemiology and Health Service Evaluation Unit, Royal Melbourne Hospital, Melbourne, VIC.
- 2 Centre for Research Excellence in Patient Safety, Monash University, Melbourne, VIC.
- 3 Clinical and Work-Related Liaison Services, State Coroner’s Office and Victorian Institute of Forensic Medicine, Melbourne, VIC.
- 4 Caraniche Pty Ltd, Melbourne, VIC.
- 5 Western and Central Melbourne Integrated Cancer Service, Melbourne, VIC.
- 6 Flinders Medical Centre, Adelaide, SA.
- 7 Deakin University, Melbourne, VIC.
- 8 Cabrini-Deakin Centre for Nursing Research, Melbourne, VIC.
- 9 Royal Melbourne Hospital, Melbourne, VIC.
- 10 University of Melbourne, Melbourne, VIC.
- 11 St Vincent’s Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW.
This national survey was undertaken as part of a series of activities commissioned and funded by the Australian Council for Safety and Quality in Health Care towards development of a Measurement for Improvement Toolkit. Thanks to Professor Paddy Phillips for his support as Chair, Australian Council for Safety and Quality in Health Care Measurement for Improvement Group, to other members of the expert working group — Dr Shiong Tan, Mr Tony McBride, Dr Alan Wolff, Ms Cathie Steele, Ms Jane Phelan, Ms Marie Colwell — and to all those who participated in the survey.
Caroline Brand, Joanne Tropea and Shaymaa Elkadi were project team members, and all other authors were members of the expert working group commissioned to develop the Measurement for Improvement Toolkit.
- 1. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
- 2. Australian Council for Safety and Quality in Health Care. Part B. Summary of council’s work 2004–2005. Achieving safety and quality improvements in health care — sixth report to the Australian Health Ministers’ Conference, July 2005. http://www.health.gov.au/internet/safety/publishing.nsf/Content/former-pubs-archive-annrept2005 (accessed Jun 2008).
- 3. Scott IA, Denaro CP, Bennett CJ, et al. Achieving better in-hospital and after-hospital care of patients with acute cardiac disease. Med J Aust 2004; 180 (10 Suppl): S83-S88.
- 4. Scott IA, Denaro CP, Flores JL, et al. Quality of care of patients hospitalized with congestive heart failure. Intern Med J 2003; 33: 140-151.
- 5. Scott IA, Duke AB, Darwin IC, et al. Variations in indicated care of patients with acute coronary syndromes in Queensland hospitals. Med J Aust 2005; 182: 325-330.
- 6. NSW Institute for Clinical Excellence. Annual report 2003/2004. Let’s make a noticeable difference, together. http://www.cec.health.nsw.gov.au/pdf/ice_ar04.pdf (accessed Jun 2008).
- 7. NSW Institute for Clinical Excellence; Royal Australasian College of Physicians. Towards a safer culture in New South Wales. http://www.racp.edu.au/bp/new_tasc_nsw1.html (accessed Jun 2008).
- 8. Institute for Healthcare Improvement. Measures. http://www.ihi.org/IHI/Topics/Improvement/ImprovementMethods/Measures/ (accessed Jun 2008).
- 9. Barraclough B. Measuring and reporting outcomes can identify opportunities to provide better and safer care. ANZ J Surg 2004; 74: 90.
- 10. Mannion R, Davies HT. Reporting health care performance: learning from the past, prospects for the future. J Eval Clin Pract 2002; 8: 215-228.
- 11. Hibbard JH, Stockard J, Tusler M. Does publicizing hospital performance stimulate quality improvement efforts? Health Aff (Millwood) 2003; 22: 84-94.
- 12. Scobie S, Thomson R, McNeil JJ, Phillips PA. Measurement of the safety and quality of health care. Med J Aust 2006; 184 (10 Suppl): S51-S55.
- 13. Australian Council for Safety and Quality in Health Care. Measurement for Improvement Toolkit, 2005. http://www.safetyandquality.org/internet/safety/publishing.nsf/Content/CommissionPubs (accessed Jun 2008).
- 14. Australian Commission on Safety and Quality in Health Care. Former council terms and definitions for safety and quality concepts. http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/former-pubs-archive-definitions (accessed Jun 2008).
- 15. Daley J, Vogeli C, Blumenthal D, et al. Physician clinical performance assessment. The state of the art: issues, possibilities and challenges for the future. Boston, Mass: Institute for Health Policy, Massachusetts General Hospital, 2002.
- 16. Australian Institute of Health and Welfare. Australian hospital statistics 2002–03. Health services series No. 22. Appendix 4, Table A4.2. Canberra: AIHW, 2003. (AIHW Cat. No. HSE 32.) http://www.aihw.gov.au/publications/index.cfm/title/10015 (accessed Jun 2008).
- 17. Dillman DA. Mail and telephone surveys: the total design method. New York: John Wiley & Sons, 1999.
- 18. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract 1999; 5: 401-416.
Abstract
Objective: To identify patient safety measurement tools in use in Australian public hospitals and to determine barriers to their use.
Design: Structured survey, conducted between 4 March and 19 May 2005, to identify tools and to assess their current use, levels of satisfaction with them, and barriers to their use in measuring the domains and subdomains of: organisational capacity to provide safe health care; patient safety incidents; and clinical performance.
Participants and setting: Hospital executives, managers and clinicians from a nationwide random sample of Australian public hospitals stratified by state and hospital peer grouping.
Main outcome measures: Tools used by hospitals within the three domains and their subdomains; patient safety tools and processes identified by individuals at these hospitals; satisfaction with the tools; and barriers to their use.
Results: Eighty-two of 167 invited hospitals (49%) responded. The survey ascertained a comprehensive list of patient safety measurement tools in current use across all patient safety domains. Overall, there was a focus on the use of processes rather than quantitative measurement tools. Approximately half of the 182 individual respondents from participating hospitals reported satisfaction with existing tools. The main reported barriers were lack of integrated supportive systems, resource constraints and inadequate access to robust measurement tools validated in the Australian context. Measurement of organisational capacity was reported by 50 hospitals (61%), of patient safety incidents by 81 (99%), and of clinical performance by 81 (99%).
Conclusion: Australian public hospitals are measuring the safety of their health care, with some variation in measurement of patient safety domains and their subdomains. Improved access to robust tools may support future standardisation of measurement for improvement.