Abstract

Objective: To identify patient safety measurement tools in use in Australian public hospitals and to determine barriers to their use.

Design: Structured survey, conducted between 4 March and 19 May 2005, designed to identify tools, and to assess current use of, levels of satisfaction with, and barriers to use of tools for measuring the domains and subdomains of: organisational capacity to provide safe health care; patient safety incidents; and clinical performance.

Participants and setting: Hospital executives, managers and clinicians from a nationwide random sample of Australian public hospitals stratified by state and hospital peer grouping.

Main outcome measures: Tools used by hospitals within the three domains and their subdomains; patient safety tools and processes identified by individuals at these hospitals; satisfaction with the tools; and barriers to their use.

Results: Eighty-two of 167 invited hospitals (49%) responded. The survey ascertained a comprehensive list of patient safety measurement tools in current use for measuring all patient safety domains. Overall, there was a focus on use of processes rather than quantitative measurement tools. Approximately half the 182 individual respondents from participating hospitals reported satisfaction with existing tools. The main reported barriers were lack of integrated supportive systems, resource constraints and inadequate access to robust measurement tools validated in the Australian context. Measurement of organisational capacity was reported by 50 hospitals (61%), of patient safety incidents by 81 (99%), and of clinical performance by 81 (99%).

Conclusion: Australian public hospitals are measuring the safety of their health care, with some variation in measurement of patient safety domains and their subdomains. Improved access to robust tools may support future standardisation of measurement for improvement.

The Quality in Australian Health Care Study highlighted the extent of harm to patients in Australia’s health care system in the 1990s,1 and stimulated initiatives to improve the quality, safety, and accountability of patient care. The Australian Council for Safety and Quality in Health Care (ACSQHC) was established in January 2000 as a key national body to drive quality and safety health care reform. A key priority was to increase the use of health care performance measurements to drive quality improvement.2 This priority was underpinned by evidence that performance measurements are associated with improved quality and safety outcomes.3-7
External benchmarking and public reporting of organisational and individual performance measurements are limited in Australia, in part because of concerns about data validity and the adequacy of models to adjust for differences in casemix.5 Individuals and organisations need access to robust measurement tools to enable internal, local and national comparisons.8-12 To date, there has been no systematic assessment of the tools used by Australian public hospitals to measure their own performance.
In November 2004, the ACSQHC commissioned the development of an evidence-based resource, the Measurement for Improvement Toolkit,13 to help Australian health care professionals access appropriate measurement tools and processes to support their patient safety programs.
Here, we provide the results of a survey of Australian public hospitals undertaken to inform the development and subsequent implementation of the Measurement for Improvement Toolkit. The primary objective of this national survey was to identify patient safety measurement tools across the three domains of patient safety defined by the ACSQHC: organisational capacity to provide safe health care; patient safety incidents; and clinical performance. The secondary objective was to identify perceived barriers to the use of these tools.
Methods

The project was supervised by a multidisciplinary national panel with expertise in patient safety and quality, in conjunction with a technical team based within the Clinical Epidemiology and Health Service Evaluation Unit, Melbourne Health, in Victoria.
A patient safety measurement tool was defined as an instrument or device that provides instruction and support for measurement, and is used by organisations and/or individuals to maintain and improve patient safety.
In the absence of internationally accepted definitions, the project team developed working definitions of patient safety subdomains (within the three main domains defined by the ACSQHC) through a comprehensive MEDLINE and CINAHL search of the peer-reviewed literature and a web-based search of the literature of key health care safety and quality improvement organisations. These searches, which were also designed to identify existing patient safety tools, were conducted in April 2005. The definitions were ratified for inclusion by the expert panel (Box 1).
A random sample of Australian public hospitals, stratified by state and hospital peer group (location, type and size of hospital), was obtained from the Australian Institute of Health and Welfare’s public hospital list 2002–2003.16 Because they were few in number, all Australian Capital Territory, Northern Territory and Tasmanian public hospitals in the hospital peer groups we chose were included in the sample (Box 2). To maintain a representative sample while keeping the study feasible, a proportion of each stratum (hospital peer group by state) was selected, with the random sample obtained by computer-generated random number sampling in Stata, version 8 (StataCorp, College Station, Tex, USA).
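The sampling logic can be sketched in code. The authors drew the sample in Stata; the following Python sketch is a hypothetical reconstruction, and the file name, column names and the use of a single uniform sampling fraction are all assumptions (the paper states only that a proportion of each stratum was selected).

```python
import pandas as pd

# Hypothetical reconstruction of the stratified sampling step. The
# study used Stata 8; the file name, column names and the uniform
# sampling fraction below are assumptions for illustration.

hospitals = pd.read_csv("aihw_public_hospitals_2002_03.csv")

SMALL_JURISDICTIONS = {"ACT", "NT", "Tas"}  # few hospitals, so included in full
FRACTION = 0.22  # the 167 invited hospitals were 22% of all public hospitals

samples = []
for (state, peer_group), stratum in hospitals.groupby(["state", "peer_group"]):
    if state in SMALL_JURISDICTIONS:
        samples.append(stratum)  # take every hospital in the stratum
    else:
        samples.append(stratum.sample(frac=FRACTION))  # random draw within the stratum

invited = pd.concat(samples)
print(f"{len(invited)} hospitals invited")
```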
The survey period was 4 March 2005 to 19 May 2005. Repeated survey dissemination (a maximum of three times) was used to increase the likelihood of at least a 50% response rate.17 In keeping with the requirements of the Human Research Ethics Committee (HREC), hospitals were first contacted through their chief executive officers (CEOs) or directors to request their participation, and that of quality/safety/risk management staff, directors of nursing, allied health, and pharmacy, and up to three directors of medical departments. In accordance with HREC approval, there was no direct contact with hospital staff other than through the CEO or director. The project team made a maximum of three follow-up calls and sent two global emails to CEOs or their nominated staff, to remind staff to complete the survey.
There was limited ability to validate the survey responses, as participants remained anonymous. For this reason, data were triangulated to assess whether the spectrum of patient safety measurement tools we identified was comprehensive and representative across Australia. Additional data sources included the extensive literature search, input from the broad selection of national experts on the expert panel, and stakeholder workshops held in six states and attended by 101 participants, including hospital CEOs, clinicians and clinical management representatives.
Descriptive statistics were used to summarise the responses. Each hospital response was taken from the respondent with the most senior organisational position related to quality and safety; for the purposes of the survey, seniority was ranked in the following order: CEO, quality and safety manager, head of department, other clinician. Hospital responses were used to assess the overall measurement of organisational capacity, patient safety incidents and clinical performance. Differences in measurement by respondent position, hospital peer group and state were calculated. Because of the small numbers in each hospital peer group, the peer groups were collapsed into three groups based on the number of acute weighted separations: less than 5000; 5000 to 10 000; and more than 10 000.
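As an illustration of the two data-handling steps just described, a minimal sketch follows, assuming invented file and column names; it is not the study's actual code.

```python
import pandas as pd

# Hypothetical sketch of two analysis steps; the file name and all
# column/category names are assumptions, not the study's code.

responses = pd.read_csv("survey_responses.csv")

# 1. Represent each hospital by its most senior quality/safety
#    respondent, using the stated order of seniority.
SENIORITY = ["CEO", "quality and safety manager",
             "head of department", "other clinician"]
responses["rank"] = responses["position"].map(
    {position: i for i, position in enumerate(SENIORITY)})
hospital_level = (
    responses.sort_values("rank")
             .drop_duplicates(subset="hospital_id", keep="first")
)

# 2. Collapse hospital peer groups into three size bands by annual
#    acute weighted separations, as described in the paper.
bands = pd.cut(
    hospital_level["acute_weighted_separations"],
    bins=[0, 5000, 10_000, float("inf")],
    labels=["<5000", "5000-10 000", ">10 000"],
)
hospital_level = hospital_level.assign(size_band=bands)
```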
Results

A total of 167 public hospitals, representing 22% of all Australian public hospitals, were invited to participate. State and territory response rates and the reported use of measurement tools are summarised in Box 3. Eighty-two invited hospitals (49%) agreed to participate, with representation from each state and territory. The anticipated response rate was 50%, which was achieved in all but two states (New South Wales [36%] and Western Australia [43%]).
Responses on identification and use of patient safety measurement tools, satisfaction with them, and barriers to their use, were received from 182 individuals from the 82 responding hospitals. The tools they identified are summarised in Box 4. Individuals from responding hospitals did not identify any measurement tools that had not already been identified by the literature search and expert panel. In all domains, there was a focus on the use of processes (eg, accreditation) rather than use of tools designed to quantitatively measure responsiveness to change.
The proportions of individual respondents reporting satisfaction, ambivalence or dissatisfaction with existing measurement tools are shown in Box 5. About half the individual respondents indicated they were satisfied with the existing patient safety measurement tools. A high proportion of respondents reported being “neither satisfied nor dissatisfied”, especially with tools measuring organisational capacity and clinical performance. Where measurement tools were not in use, or where there was dissatisfaction with the tools used, the most frequently listed limitations across all three domains (Box 6) were lack of an integrated patient safety system and administrative resource constraints. Lack of well-developed tools for local use was reported to be a major limitation for measuring organisational capacity and clinical performance, but was not reported as a limitation for measuring patient safety incidents.
Fifty hospitals (61%) reported measuring organisational capacity; 81 (99%) measured patient safety incidents; and 81 (99%) measured clinical performance (either organisational/departmental or individual). There was some variation between states and territories (Box 3), but it did not reach statistical significance. There was no significant association between hospital size or peer group and measurement of the three patient safety domains, although numbers of responses were small and confidence intervals accordingly wide. There was no difference in reported measurement across these domains according to respondent position.
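The paper does not name the significance test used; a chi-square test of independence on a table of hospital size band against measurement status is one conventional choice, sketched below with invented counts (chosen only so that 50 of 82 hospitals measure organisational capacity, as reported).

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are the three collapsed size
# bands, columns are counts of hospitals that did / did not report
# measuring organisational capacity. These counts are invented; only
# their total (50 of 82) matches the paper.
table = [
    [14, 10],  # <5000 separations: measured / did not measure
    [17,  9],  # 5000-10 000
    [19, 13],  # >10 000
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# With only 82 hospitals split across strata, such tests are
# low-powered, consistent with the wide confidence intervals noted.
```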
Discussion

This study provides the first comprehensive overview of patient safety measurement tools currently used in Australian public hospitals. To our knowledge, no other survey of this kind has been undertaken outside Australia. The survey identified a breadth of tools in use, and provided preliminary evidence for variation in the use of tools to measure patient safety domains. It provides insight into barriers that need to be considered in planning implementation strategies for improving access to, and sustained uptake of, high-quality tools by Australian public hospitals.
Our study has ascertained that satisfaction with patient safety measurement tools among health professionals is modest at best. Dissatisfaction may be linked to a range of reported limitations, the most prevalent of which was lack of integrated systems within hospitals. Not perceiving the value of change is one of the most powerful barriers to implementing innovation.18 If an organisational system does not support measurement in all aspects of data management, from collection through to review, timely feedback and response, then measurement is unlikely to be perceived as worthwhile, and hence unlikely to be supported by individuals within the system. In addition, adoption of new information depends not only on awareness and perception of value, but also on the credibility of the information. The second most reported limitation was lack of access to robust measurement tools, a finding supported by additional work which found that most patient safety tools have not been developed through rigorous psychometric methods, and have not been validated within the Australian context.13
1 Definitions of the domains of patient safety
Organisational capacity — the capacity of the organisation to provide safe care; this includes the structures, resources and commitment to safe patient care within the organisation. Subdomains identified from the literature review included: clinical governance; leadership; safety culture; consumer and community involvement; professional competence and education; and information management capacity and processes.
Patient safety incident — an event or circumstance that could or did lead to unintended and/or unnecessary harm to a person and/or to a complaint, loss or damage.14 Subdomains included: incident detection; incident reporting; incident investigation and analysis; incident management; and health care worker and patient feedback and learning.
Clinical performance assessment — measurement of practice behaviour and adherence to objective and evidence-based clinical process and outcomes of care by organisations, departments or individual clinicians.15
2 Summary of the sampling process for inviting Australian public hospitals to participate in the survey
4 Measurement tools and processes identified by 182 individual respondents from the 82 participating hospitals
Organisational capacity tools and processes
Accreditation
A checklist: patient safety management systems (Australian Council for Safety and Quality in Health Care)
Board clinical governance self-evaluation (Victorian Quality Council)
Checklist for reviewing your safety and quality program against the framework elements (Victorian Quality Council)
Consumer and community participation self-assessment tool for hospitals (National Resource Centre for Consumer Participation in Health)
Develop a culture of safety* (Institute for Healthcare Improvement)
Key performance indicators
Patient safety measurement tools and processes
Clinical audit (including Limited Adverse Occurrence Screening)
Clinical incident reporting systems
Complaints/patient liaison
Coroner’s reports
Failure mode and effects analysis
Morbidity and mortality meetings/death review
Risk register
Root cause analysis
Sentinel event reporting
Clinical performance of organisation/department and individual clinician tools
Accreditation
Benchmarking
Certification of staff
Clinical indicators
Formal assessment of professional competence
Peer review
Professional development programs
5 Satisfaction with patient safety measurement tools reported by 182 individual respondents from the 82 participating hospitals
