In an attempt to arrive at the truth, I have applied everywhere for information but in scarcely an instance have I been able to obtain hospital records fit for any purpose of comparison. If they could be obtained they would enable us to answer many questions. They would show subscribers how their money was being spent, what amount of good was really being done with it or whether the money was not doing mischief rather than good.
Measurement is vital in all areas of clinical medicine. To fully understand any disease or therapeutic process, it is essential to describe it quantitatively and qualitatively. However, as identified by Florence Nightingale over 140 years ago, the use of rigorous measures to describe the quality, safety and effectiveness of our health care system has lagged behind the science of clinical measurement.
Indeed, it was not until the 1990s that landmark studies confronted all of us with comprehensive data describing what we already knew as clinicians — that health care sometimes does harm.1,2 To better understand the size of the problem and to detect changes, it is essential to measure the safety and quality of health care.
A variety of strategically chosen measures are needed to understand the quality and safety of health care. Quantitative measures, such as mortality, will always be integral to this measurement, but, for clinicians, semi-quantitative and qualitative assessments can highlight broad areas or issues that require scrutiny (Box 1).
Useful quantitative measures may require large databases and powerful statistical analyses, such as those developed in the United States to highlight areas of unexpectedly high mortality in cardiac surgery.3 However, measures can also be as simple as local audits of practice to determine whether benchmarks are being met.4
A variety of tools are becoming available to help health professionals approach particular issues of health care safety or quality and to choose an appropriate measurement. The Australian Council for Safety and Quality in Health Care (ACSQHC) has produced a “Measurement for Improvement Toolkit”, which is a practical, evidence-based guide for use by clinicians in both the public and private sectors in Australia4 (Box 2).
Just as in clinical medicine, any measurement of safety and quality is useful only if it actually measures what it is supposed to, and is used and interpreted correctly. The National Health Performance Committee is a committee of the Australian Health Ministers’ Advisory Council, whose role is to develop and maintain a national framework of performance measurement for the health system; to establish and maintain national performance indicators within the national performance measurement framework; to facilitate benchmarking for health system improvement; and to report on these to the annual Australian Health Ministers’ Conference. The Committee has developed criteria for selecting health performance indicators (Box 3).5 To avoid duplication of effort, indicators should use existing datasets whenever possible.
Nevertheless, it is important to remember that many measures of safety and quality of health care are relatively inexact, and so should not be interpreted as a conclusive picture of an individual’s, an agency’s, or even a system’s performance. An indicator is not an absolute measure of quality or safety, but rather acts as a screen to identify areas for further local analysis. While data can be collated, analysed and fed back centrally, it is only at a local level that the underlying reasons for a particular result (eg, rate of surgical-site infections) can be truly explained, and changes made to improve practice.6 Thus, indicators are a tool to encourage performance improvement and to identify areas worthy of further study; they are typically hypothesis-generating rather than hypothesis-proving.
In clinical terms, measurements of health care safety and quality may be useful for screening and ruling out a problem, for diagnosing a problem, and for monitoring progress.7 However, use of a screening measure to diagnose poor quality will produce “false positives”; equally, use of a highly specific diagnostic indicator to rule out problems will produce “false negatives”. Either way, the measure is not useful. False positives with screening tools such as raw mortality cause much anxiety when interpreted as indicating a health system problem, rather than a need to look more closely to determine whether a problem really exists. For these reasons, a variety of risk adjustments have been developed to make raw data meaningful in relation to the types of patients seen and to adjust for factors over which clinicians have no control, including sociodemographic and clinical characteristics (eg, age, sex, socioeconomic status, comorbidities, physiological variables, and emergency versus planned admission status). These adjustments make the raw statistic more specific and meaningful when deciding whether there is a problem. However, as risk adjustment has limitations and can adjust only for known confounders, it seems highly unlikely that it can ever fully compensate for the effects of casemix so that the remaining variation reflects quality of care alone.8,9
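As an illustration only, the sketch below shows one simple form of risk adjustment: a logistic regression model of casemix factors is used to estimate the expected number of deaths in each hospital, and observed deaths are compared with that expectation. The data, variable names and model specification are hypothetical assumptions for the purpose of the example, not a recommended method.

```python
# A minimal sketch of logistic-regression risk adjustment, using simulated,
# hypothetical patient-level data (age, emergency admission, comorbidity count).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hospital": rng.choice(["A", "B", "C", "D"], size=n),
    "age": rng.normal(65, 12, size=n),
    "emergency": rng.integers(0, 2, size=n),
    "comorbidity_count": rng.poisson(1.5, size=n),
})
# Simulate deaths that depend only on casemix, not on which hospital treated the patient.
logit = -6 + 0.05 * df["age"] + 0.8 * df["emergency"] + 0.4 * df["comorbidity_count"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit the casemix model (deliberately excluding hospital) and predict each
# patient's probability of death.
X = sm.add_constant(df[["age", "emergency", "comorbidity_count"]])
model = sm.Logit(df["died"], X).fit(disp=0)
df["expected"] = model.predict(X)

# Observed versus expected deaths per hospital: ratios near 1.0 are consistent
# with casemix alone; large departures flag areas for further local analysis.
summary = df.groupby("hospital").agg(observed=("died", "sum"),
                                     expected=("expected", "sum"))
summary["O_E_ratio"] = summary["observed"] / summary["expected"]
print(summary.round(2))
```

Even in this idealised setting, the observed-to-expected ratio is only a screen; it cannot adjust for confounders that were never measured.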
Other problems arise in understanding whether a change in an event rate represents a real improvement or deterioration, especially when the event is uncommon. Frequency charts of the raw number of events occurring over time (time series or “saw tooth” charts) are rarely helpful, because of underlying background variation and small numbers of adverse events. Statistical process control methods (eg, exponentially weighted moving averages, process control limits, and cusum analyses), developed in laboratory science for quality control, make these fluctuations more interpretable. For example, cusum analysis was used recently to better understand bed occupancy and to plan medical and surgical admissions with the aim of improving access to health care,10 while statistical process control methods have been used to identify outliers more reliably.11,12 Such methods are now routinely used for reporting safety, quality and administrative data at Flinders Medical Centre and the Repatriation General Hospital, Adelaide.
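As a simplified illustration of the cusum approach, the sketch below accumulates only the excess of each month’s observed event rate over a target rate and signals when the accumulated excess crosses a decision threshold. The monthly counts, target rate, allowance and threshold are assumed values chosen for illustration, not recommended parameters.

```python
# A minimal cusum sketch for monitoring an adverse event rate over time.
events = [4, 6, 5, 3, 7, 9, 8, 10, 6, 12, 11, 13]   # adverse events per month (hypothetical)
admissions = [400] * 12                              # admissions per month (hypothetical)
target_rate = 0.015   # rate regarded as acceptable (assumed)
allowance = 0.005     # slack (k) so that small fluctuations do not accumulate
threshold = 0.025     # decision interval (h); a signal when the cusum exceeds it

cusum = 0.0
for month, (e, n) in enumerate(zip(events, admissions), start=1):
    observed_rate = e / n
    # Accumulate only excesses beyond target + allowance; never fall below zero.
    cusum = max(0.0, cusum + (observed_rate - target_rate - allowance))
    flag = "  <-- signal: investigate locally" if cusum > threshold else ""
    print(f"month {month:2d}: rate={observed_rate:.4f}  cusum={cusum:.4f}{flag}")
```

In this toy series the cusum stays near zero while monthly rates fluctuate around the target, and climbs only when the rate drifts upwards for a sustained period, which is the behaviour that makes cusum charts easier to interpret than raw “saw tooth” plots.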
The final problem with measures of health care quality and safety is to ensure that they are timely and repeatable. All measurement and clinical practice change is ultimately individual and local. If results are to support change, they must be reported to those who use them in a way that is relevant to current practice. While system-wide measures might be ideal to ensure equity of safety and quality, and to monitor effects of broader-scale or longer-term initiatives (eg, via national agencies such as the Australian Institute of Health and Welfare), the coordination and standardisation of their collection, submission, analysis and publication makes timeliness difficult and decreases their usefulness for local clinicians. Similarly, measures are most useful when they can be repeated after practice change, to determine its effects. The development of such key performance indicators is fundamental to any clinical practice improvement or innovation. This has been highlighted by recent studies which demonstrate evidence-to-practice gaps in virtually every area of health care.13-16 By developing measures that are timely, can be replicated, and inform understanding of the quality of care, local change initiatives can lead to dramatic improvements in care.17-19
Recently, Wilson and Van Der Weyden called for better systems by which we can understand how our health system is performing.20 Ways of measuring processes, outcomes and the culture of health care are well described and freely available.4 However, the most fundamental barrier to better measurement seems to be our failure to invest in these systems as part of the health care structure, in the way we have invested in, for example, financial management systems. Gathering data on measures of safety and quality of health care systems that are structural, valid, reliable, accurate, timely, collectable, meaningful, relevant and important requires resources, which are still lacking. However, the situation is changing rapidly with the introduction of nationally agreed requirements for health care incident reporting systems, sentinel event reporting, and a variety of morbidity and procedure registries, and with development of a national minimum dataset for safety and quality through the ACSQHC and its successor, the Australian Commission on Safety and Quality in Health Care.
Similarly, professional bodies, such as the Royal Australasian College of Surgeons (RACS) Section of Breast Surgery and the Australasian Society of Cardiac and Thoracic Surgeons, have introduced systems of performance reporting, feedback and improvement for their members. Indeed, participation in the process is required for membership of the Section of Breast Surgery. The Cardiac and Thoracic Surgeons’ national reporting system is not mandatory, with eight hospitals participating in 2005, and another six to join in 2006. A major issue is the $15 000–$20 000 recurrent cost per hospital needed for data collection. Clearly, such initiatives are resource intensive and, like financial management systems, require structural investment.
The UK National Health Service (NHS) has also realised the need for a systematic approach to improving patient safety and has established the National Patient Safety Agency (NPSA) to bring together information to quantify, characterise and prioritise patient safety issues. A core function of the NPSA is the development of the National Reporting and Learning System to collect reports of patient safety incidents from all service settings across England and Wales, and to learn from these reports, including developing solutions to enhance safety.21,22
However, it is recognised that incident reporting on its own cannot reveal a complete picture of what does, or could, lead to patient harm. Incident reporting systems are not comprehensive, because of under-reporting, biases in what types of incident are reported,23 and the multiplicity of reporting systems. For example, in addition to the National Reporting and Learning System, the UK has separate reporting systems for medical device incidents, adverse drug reactions, health care-associated infections, and maternal and infant deaths. Furthermore, as serious events are rare, and information on them is distributed across the health care system, better use needs to be made of data collections already in existence, even if such collections were designed for different purposes.
Recognition of the need to access a range of data sources led the NPSA in 2004 to set up a Patient Safety Observatory in collaboration with partners from both within and outside the NHS. These include key national organisations, such as the Healthcare Commission (an independent body set up to improve health services in England), the Office for National Statistics, the Medicines and Healthcare products Regulatory Agency (which regulates medicines and medical devices in the UK), patient organisations such as Action against Medical Accidents, the NHS Litigation Authority, and medical defence organisations.24,25 The Observatory enables the NPSA to draw on a wide range of data and intelligence, including clinical negligence claims, complaints, and routine data from a range of sources about complications of clinical care. These form the basis for identifying and monitoring patient safety incident trends, highlighting areas for action, and setting priorities. Examples of NPSA Observatory activities are shown in Box 4.
This type of approach is well established in the UK for public health, with a network of regional public health observatories tasked with providing health intelligence to support the monitoring and assessment of population health.29 Setting up similar networks in Australia, with its smaller population base, would be relatively more costly, and would need to be done efficiently and, where possible, within expanded and strengthened existing organisations.
The biases within incident reporting systems also create challenges for their use to compare or evaluate safety across institutions. A hospital may have more reports simply because it has a better developed reporting culture: for example, incident reporting rates for acute trusts in England vary by a factor of seven, from 1.8 to 12.4 incidents per 100 admissions,24 but this is likely to reflect differences in completeness of reporting and artefacts of the reporting process, rather than differences in the occurrence of incidents. Using comparative data on reporting rates is thus highly problematic, and even counterproductive if external judgements about safety are crudely based on them.
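For illustration, the sketch below shows the calculation behind “incidents per 100 admissions”, using hypothetical trusts and counts chosen only to span the reported range. Even with confidence intervals attached, such rates cannot distinguish differences in reporting culture from differences in underlying harm.

```python
# Reporting rates per 100 admissions for hypothetical trusts; the names and
# counts are illustrative and chosen only to span the 1.8-12.4 range cited above.
from statsmodels.stats.proportion import proportion_confint

trusts = {
    "Trust A": (900, 50000),    # (reported incidents, admissions)
    "Trust B": (3100, 48000),
    "Trust C": (5900, 47500),
}
for name, (incidents, admissions) in trusts.items():
    rate = 100 * incidents / admissions
    lo, hi = proportion_confint(incidents, admissions, method="wilson")
    print(f"{name}: {rate:.1f} incidents per 100 admissions "
          f"(95% CI {100 * lo:.1f}-{100 * hi:.1f})")
```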
The NPSA Observatory faces other challenges to integrating data from a range of sources. Many of the data sources with potential for assessing patient safety are collected for other purposes, and there may be limitations to their use. For example, a study of the value of clinical negligence data to assess safety encountered issues of confidentiality, data quality and completeness, and the resources needed to extract relevant information.30 The NPSA is working with the relevant organisations in England and Wales to develop a more consistent approach to collecting data about clinical negligence that will support patient safety.
Clinicians in practice today are generally well educated in the basics of clinical data assessment, with many having participated in clinical research activities. Especially in teaching hospitals, there is a sophisticated appreciation of the science of clinical measurement and its strengths and weaknesses. In the past, a major barrier to quality improvement activities has been the poor quality of data presented to clinicians purporting to represent indicators of performance.
In the future, the potential to engage clinicians in quality improvement activities will require information that is respected for its accuracy, relevance and impartiality. Clinicians will need training in the use of measurement to improve health care safety and quality, just as they require training in the use of clinical diagnostic tests. Already many examples exist, ranging from local initiatives7,12,13 to national procedure registries and disease databases, which demonstrate that clinicians are interested in this issue and that they respond positively to trusted performance data that are methodologically sound, risk-adjusted and timely. Clearly, in the short term, we could all do more to understand and improve what we do with the measures, techniques and skills that are already available. However, for the longer term, investment is needed to extend the required measures and skills widely and systematically through our health care system, especially where the financial and human costs and consequences of variable performance are high. It is hoped that we will be able to redress these deficiencies in a much shorter time than has elapsed since Florence Nightingale identified them.
1 Potential measures of health care quality and safety
Quantitative measures
Sentinel events (eg, wrong-site or wrong-person surgery) must be reported to state and territory jurisdictions and are counted annually.
Adverse events or near misses (eg, medication errors) are voluntarily reported via incident notification systems, such as AIMS (Advanced Incident Management System), which are now mandated in hospitals.
Administrative datasets (eg, ICD-10-AM codes) are reported via casemix systems.
Databases and registries (eg, Australia and New Zealand Dialysis and Transplantation database for renal transplantation outcomes) are voluntary and may be local, national or international.
Key performance indicators (eg, rates of health care-acquired infection) are voluntary and usually developed locally or in association with national statistical, professional or accrediting bodies.
Medical record reviews (eg, Quality in Australian Health Care Study2) are used as snapshots for in-depth analysis of particular issues, but require trained staff and good documentation.
Semi-quantitative and qualitative assessments
Accreditation standards set by external bodies (eg, the Australian Council on Healthcare Standards) may include quantitative indicators.
Assessments of organisational capacity for clinical governance (eg, leadership, safety culture, communication and teamwork).
Credentialling and determining the scope of practice for clinicians.
Patient and staff satisfaction and complaints surveys can be local or system-wide, with formal, statistically valid population sampling.
ICD-10-AM = Australian modification of the International statistical classification of diseases and related health problems, 10th revision.
2 Measurement for Improvement Toolkit4
The “toolkit” was developed by the Australian Council for Safety and Quality in Health Care to help clinicians approach particular issues of health care safety or quality and to select appropriate measurements. It comprises three sections:
A. User’s guide
Instructions on how to use the toolkit and some case examples on how to use the different sections.
B. Background information and resources
A review of information on measurement and patient safety, as well as a reference list and guides to other resources.
C. Measurement tools and processes
An easy-to-follow guide to various tools and how to use them.
3 Criteria developed by the National Health Performance Committee (NHPC) for health performance indicators5
Generic indicators for use at any level, from program to whole-of-system, should have all or some of the following qualities. They should:
- Be worth measuring: the indicators represent an important and salient aspect of the public’s health or the performance of the health system.
- Be measurable for diverse populations: the indicators are valid and reliable for the general population and diverse populations (eg, Aboriginal and Torres Strait Islander populations, sex, rural/urban, socioeconomic level).
- Be understood by people who need to act: people who need to act on their own behalf or that of others should be able to readily comprehend the indicators and what can be done to improve health.
- Galvanise action: the indicators are of a nature that action can be taken at the national, state, local or community level by individuals, organised groups and public and private agencies.
- Be relevant to policy and practice: actions that can lead to improvement are expected and feasible; they are plausible actions that can alter the course of an indicator when widely applied.
- Reflect results of actions when measured over time: if action is taken, tangible results will be seen, indicating improvements in various aspects of the nation’s health.
- Be feasible to collect and report: the information required for the indicator can be obtained at reasonable cost in relation to its value and can be collected, analysed and reported in an appropriate time frame.
- Comply with national data definitions, such as the National health data dictionary.
Additional selection criteria specific to NHPC reporting
In addition to the above general criteria, indicators selected for NHPC reporting should also:
- Facilitate the use of data at the health-industry service-unit level for benchmarking purposes.
- Be consistent and use established and existing indicators where possible.
General approach to indicator selection or development
In selecting or developing relevant indicators of health system performance, it is important to keep in mind that indicators are just that — an indication of organisational achievement. They are not an exact measure, and individual indicators should not be taken to provide a conclusive picture of an agency’s or system’s achievements.
A suite of relevant indicators is usually required, followed by an interpretation of their results. Performance information does not exist in isolation and is not an end in itself, but rather provides a tool that allows opinions to be formed and decisions made. Some indicators should be expressed as ratios, such as output/input and outcome/output.
4 Examples of activities of the UK National Patient Safety Agency (NPSA) Observatory
A rare issue of patient safety — tracheostomy
Concern about the care of patients with tracheostomies transferred from an intensive care unit to a general ward led the Patient Safety Observatory to collate information from a range of sources:
The National Reporting and Learning System had received reports of 36 incidents involving tracheostomies, including one death, between November 2003 and March 2005.
The National Health Service (NHS) Litigation Authority indicated that there had been 45 litigation claims involving tracheostomy or tracheostomy tubes from February 1996 to April 2005, of which 13, including seven deaths, related to the management of tracheostomy tubes.
The Medicines and Healthcare products Regulatory Agency had received reports of 10 similar incidents since 1998.
Analysis of hospital episode data showed an increase in both the number of tracheostomies performed in the previous 5 years, and the proportion of patients with a tracheostomy being cared for outside surgical and anaesthetic specialties.
Information about this issue was fed back to the NHS via the NPSA’s Patient Safety Bulletin in July 2005.26
Using routine data sources — hospital data
The US Agency for Healthcare Research and Quality has developed patient safety indicators that can be derived from routinely collected hospital administrative data.27 The NPSA is working with the Healthcare Commission to adapt and validate these US patient safety indicators for use with UK Hospital Episode Statistics (HES).28
The HES are derived from routine administrative data provided by all NHS hospitals; they describe episodes of inpatient care, including patient characteristics, diagnoses, procedures, specialty and length of stay. HES records are also linked to mortality data, so that mortality within and after hospital episodes can be included in the analysis. However, HES data use a different coding scheme for diagnoses and procedures than that defined by the US Agency, and differences in clinical practice between the US and UK mean that the indicators need careful validation.
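As a schematic illustration of how such an indicator can be derived from episode-level administrative data, the sketch below flags episodes containing any of a hypothetical set of complication codes and expresses them as a rate per 100 eligible episodes. The episode records, code list and eligibility rule are placeholders, not the published AHRQ or HES specifications, which is why careful adaptation and validation are needed.

```python
# A schematic sketch of deriving an indicator from episode-level data. The
# records, the code list and the eligibility rule are illustrative placeholders,
# not the published AHRQ patient safety indicator or HES definitions.
from dataclasses import dataclass, field

@dataclass
class Episode:
    primary_diagnosis: str
    secondary_diagnoses: list[str] = field(default_factory=list)
    emergency: bool = False

# Placeholder ICD-10-style codes standing in for a complication of care.
hypothetical_complication_codes = {"T81.4", "T88.8"}

episodes = [
    Episode("K35.8", ["T81.4"], emergency=True),
    Episode("I21.9", [], emergency=True),
    Episode("M16.1", ["T88.8"]),
    Episode("M17.1", []),
]

# Denominator: a real indicator would apply detailed inclusion and exclusion
# rules here (eg, by specialty, procedure or age); this sketch keeps every episode.
eligible = episodes

# Numerator: eligible episodes with a flagged code in any secondary position.
flagged = [e for e in eligible
           if hypothetical_complication_codes & set(e.secondary_diagnoses)]

rate = 100 * len(flagged) / len(eligible)
print(f"{len(flagged)} of {len(eligible)} eligible episodes flagged "
      f"({rate:.1f} per 100 episodes)")
```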
- Sarah Scobie1
- Richard Thomson1,2
- John J McNeil3
- Paddy A Phillips4
- 1 National Patient Safety Agency, London, UK.
- 2 School of Population and Health Sciences, Newcastle-upon-Tyne Medical School, Newcastle, UK.
- 3 Department of Epidemiology and Preventive Medicine, Central and Eastern Clinical School, Monash University, Alfred Hospital, Melbourne, VIC.
- 4 Department of Medicine, Flinders University, Adelaide, SA.
Sarah Scobie is employed by the UK National Patient Safety Agency (NPSA) as Head of Observatory. Richard Thomson is on secondment to the NPSA as Director of Epidemiology and Research. The NPSA funded their attendance at patient safety meetings.
Paddy Phillips was a member of the Australian Council for Safety and Quality in Health Care (ACSQHC) and received the standard Australian Government sitting fees for attendance at meetings. ACSQHC paid for his travel and accommodation to attend several Australian conferences on safety and quality in health care, reimbursed him for some of the time involved in producing the report Charting the safety and quality of health care in Australia,7 and sponsored the production of the Measurement for Improvement Toolkit.4
- 1. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalised patients. N Engl J Med 1991; 324: 370-376.
- 2. Wilson RM, Runciman WB, Gibberd RW, et al. The Quality in Australian Health Care Study. Med J Aust 1995; 163: 458-471.
- 3. Hannan EL, Kilburn H, O’Donnell JF, et al. Adult open heart surgery in New York State: an analysis of risk factors and hospital mortality rates. JAMA 1990; 264: 2768-2774.
- 4. Brand C, Elkadi, Tropea J. Measurement for Improvement Toolkit. Canberra: Australian Council for Safety and Quality in Health Care, 2005.
- 5. National Health Performance Committee. National report on health sector performance indicators 2003. A report to the Australian Health Ministers’ Conference, November 2004. Canberra: AIHW, 2004. (AIHW Catalogue No. HWI 786.)
- 6. Lally J, Thomson RG. Is indicator use for quality improvement and performance measurement compatible? In: Davies HTO, Tavakoli M, Malek M, Neilson AR, editors. Managing quality: strategic issues in health care management. Aldershot, UK: Ashgate Publishing, 1999: 199–214.
- 7. Elzinga R, Ben-Tovim D, Phillips PA. Charting the safety and quality of health care in Australia: steps towards systematic health care safety and quality measurement and reporting in Australia. A report commissioned by the Australian Council on Safety and Quality in Health Care. Canberra: ACSQHC, 2003.
- 8. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004; 363: 1147-1154.
- 9. Iezzoni LI. The risks of risk adjustment. JAMA 1997; 278: 1600-1607.
- 10. Burns CM, Bennett CJ, Myers CT, Ward M. The use of cusum analysis in the early detection and management of hospital bed occupancy crises. Med J Aust 2005; 183: 291-294.
- 11. Aylin P, Alves B, Best N, et al. Comparison of UK paediatric cardiac surgical performance by analysis of routinely collected data 1984-96: was Bristol an outlier? Lancet 2001; 358: 181-187.
- 12. Mohammed MA, Cheng K, Rouse A, Marshall T. Bristol, Shipman and clinical governance: Shewhart’s forgotten lessons. Lancet 2001; 357: 463-467.
- 13. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med 2003; 348: 2635-2645.
- 14. Australian Council for Safety and Quality in Health Care. Charting the safety and quality of health care in Australia. Canberra: ACSQHC, 2004.
- 15. National Institute for Clinical Studies. Evidence-practice gaps report. Vol 1. Melbourne: NICS, 2003.
- 16. National Institute for Clinical Studies. Evidence-practice gaps report. Vol 2. Melbourne: NICS, 2005.
- 17. Scott IA, Darwin IC, Harvey KH, et al. Multisite, quality-improvement collaboration to optimise cardiac care in Queensland public hospitals. Med J Aust 2004; 180: 392-397.
- 18. Ferry CT, Fitzpatrick MA, Long PW, et al. Towards a safer culture: clinical pathways in acute coronary syndromes and stroke. Med J Aust 2004; 180 (10 Suppl): S92-S96.
- 19. Semmens JB, Aitken RJ, Sanfilippo FM, et al. The Western Australian Audit of Surgical Mortality: advancing surgical accountability. Med J Aust 2005; 183: 504-508.
- 20. Wilson RM, Van Der Weyden MB. The safety of Australian healthcare: 10 years after QAHCS. We need a patient safety initiative that captures the imagination of politicians, professionals and the public [editorial]. Med J Aust 2005; 182: 260-261.
- 21. UK Department of Health. An organisation with a memory. London: The Stationery Office, 2000. Available at: www.dh.gov.uk/PublicationsAndStatistics/Publications/PublicationsPolicyAndGuidance/PublicationsPAmpGBrowsableDocument/fs/en?CONTENT_ID=4098184&chk=u1I0ex (accessed Feb 2006).
- 22. UK Department of Health. Building a safer NHS for patients. London: Department of Health, 2001. Available at: www.dh.gov.uk/assetRoot/04/05/80/94/04058094.pdf (accessed Feb 2006).
- 23. O’Neil AC, Petersen LA, Cook EF, et al. Physician reporting compared with medical-record review to identify adverse medical events. Ann Intern Med 1993; 119: 370–376.
- 24. Scobie S, Thomson R. Building a memory: preventing harm, reducing risks and improving patient safety. The first report of the National Reporting and Learning System and the Patient Safety Observatory. London: National Patient Safety Agency, 2005. Available at: http://www.npsa.nhs.uk/site/media/documents/1280_PSO_Report.pdf (accessed Mar 2006).
- 25. National Patient Safety Agency. Patient Safety Observatory. Available at: http://www.saferhealthcare.org.uk/IHI/ProgrammesAndEvents/Observatory/ (accessed Mar 2006).
- 26. Russell J. Management of patients with a tracheostomy. Patient Safety Bull 2005; 1: 4. Available at: http://www.npsa.nhs.uk/site/media/documents/1257_PSO_Bulletin.pdf (accessed Feb 2006).
- 27. Agency for Healthcare Research and Quality. Patient safety indicators download. AHRQ quality indicators. Rockville, Md, USA: AHRQ, 2006. Available at: http://www.qualityindicators.ahrq.gov/psi_download.htm (accessed Feb 2006).
- 28. National Health Service Health and Social Care Information Centre. HESonline home page. Available at: http://www.hesonline.nhs.uk (accessed Feb 2006).
- 29. Association of Public Health Observatories [website]. Available at: http://www.apho.org.uk (accessed Feb 2006).
- 30. Fenn P, Gray A, Rivero-Arias O, et al. The epidemiology of error: an analysis of databases of clinical negligence litigation. Manchester, UK: University of Manchester, 2004.
Abstract
Measurement of safety and quality is fundamental to health care delivery.
A variety of measures are needed to fully understand the system; quantitative and qualitative measures are both useful in different ways.
Measures need to be valid, reliable, accurate, timely, collectable, meaningful, relevant and important to those who will use them.
Clinicians value appropriate measures and respond to them.