The high-profile “Doctor Death” Bundaberg Hospital scandal led to public inquiries and a major shake-up of the leadership of Queensland Health at ministerial and departmental levels. The public inquiries focused attention on the management culture of Queensland Health and the need for the department to improve its transparency and openness.1-3 In response, Queensland Health has transformed its clinical governance arrangements.4 This article describes one aspect of the new arrangements: the use of statistical process control charts, based on routine data, to provide a starting point for learning and subsequent action to improve the quality of care.
All Queensland hospitals (public and private) regularly provide routine data to Queensland Health. These data include information on demographic characteristics of the patients, the principal diagnosis, other conditions treated, and procedures performed. Coding standards require the coded data to be provided within 35 days from the end of the month. In consultation with clinicians, 31 clinical indicators have been selected for regular monitoring of outcomes of care using statistical process control (Box 1).
Control charts are currently provided to the 87 largest public and private hospitals in Queensland, accounting for 83% of all hospital activity.5 Public hospitals are required by administrative instruction to analyse the charts and report within the Queensland Health processes on outcomes of reviews; private hospitals are required to report to the Private Health Unit (the regulatory oversight unit within Queensland Health) on their reviews.
In Australia and elsewhere, hospital-specific comparisons based on routine data have relied on cross-sectional analysis.6,7 This involves aggregating data for all patients over a set period, say 12 months, and determining whether the number of adverse outcomes (eg, in-hospital deaths after admission for stroke) is higher than expected based on the average for all hospitals. By definition, these cross-sectional analyses can only occur at the end of some set period, and provide average results for all patients admitted to the hospital during that time. In contrast, statistical process control is a continuous approach, and displays data on outcomes of care of individual patients. The method can identify changes in outcomes relatively quickly and is more sensitive to such changes than less regular, cross-sectional approaches, which can obscure important patterns in the data.8 Statistical process control also highlights the dynamic nature of health care: that patterns of care can change over time and a negative signal at some point in the past can be rectified.
Statistical process control was developed several decades ago to improve the quality of manufactured products. Its application in health is complicated by the need to adjust for risk to ensure that hospitals or doctors who see sicker patients are not unfairly penalised. Several methods have been proposed that incorporate risk adjustment;9 the method adopted by Queensland Health follows Sherlaw-Johnson’s approach,10 and is known as variable life-adjusted display (VLAD) (Box 2). Display charts are provided to Queensland hospitals each month, with the first distribution of charts providing trend data from July 2003 to late 2006.
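In essence, a VLAD plots, patient by patient, the cumulative difference between the risk-adjusted expected outcome and the observed outcome, so the trace drifts upwards when outcomes are better than expected and downwards when they are worse. The following is a minimal sketch of that core calculation in Python, assuming predicted risks come from a pre-fitted risk-adjustment model; the function and variable names are illustrative, not Queensland Health's actual implementation.

```python
from typing import Iterable

def vlad(predicted_risks: Iterable[float], outcomes: Iterable[int]) -> list[float]:
    """Return the cumulative VLAD trace: expected minus observed outcomes.

    predicted_risks -- risk-adjusted probability of the adverse outcome
                       (eg, in-hospital death) for each patient, in order
                       of admission or procedure date.
    outcomes        -- 1 if the adverse outcome occurred, else 0.
    """
    trace, total = [], 0.0
    for p, y in zip(predicted_risks, outcomes):
        total += p - y   # a survivor adds p; a death subtracts (1 - p)
        trace.append(total)
    return trace

# Example: three low-risk survivors, then one unexpected death.
print(vlad([0.05, 0.08, 0.03, 0.06], [0, 0, 0, 1]))
# -> roughly [0.05, 0.13, 0.16, -0.78]; the trace rises with
#    better-than-expected outcomes and drops when they are worse.
```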
Thresholds are then calculated at which the chart is said to “flag”. What is critical in quality improvement approaches is not just providing data, but ensuring that aberrant patterns identified by monitoring are investigated, and that practice changes occur.9 Queensland Health has developed hierarchical flagging criteria that trigger increasingly close scrutiny, depending on the extent of variation from the state average and on whether the indicator incorporates a fatal or non-fatal outcome (Box 3).
For example, if the trend line shows that the cumulative experience of outcomes of care is more than 30% worse than the state average (for an indicator with a fatal outcome), the indicator is flagged for internal hospital review. The Queensland Health VLAD policy requires identification of “clinician leads” to facilitate clinician involvement in the review process.11 In addition to reporting through the various organisational structures of Queensland Health, public hospitals are required to report remedial action to the local consumer consultative group.
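To make the flagging arithmetic concrete, the sketch below implements a risk-adjusted CUSUM of the kind described by Steiner and colleagues,14 on which VLAD flag bounds can be based. The odds ratio of 1.3 encodes outcomes with odds 30% worse than the state average; the threshold h, and all function and variable names, are illustrative assumptions rather than Queensland Health's published parameters.

```python
import math

def ra_cusum_flags(predicted_risks, outcomes, odds_ratio=1.3, h=4.0):
    """Yield the indices of patients at which the chart flags.

    odds_ratio -- the "worse care" alternative tested against; 1.3
                  represents odds 30% worse than the state average.
    h          -- alarm threshold; a policy choice balancing true and
                  false alarms (illustrative value only).
    """
    s = 0.0
    for i, (p, y) in enumerate(zip(predicted_risks, outcomes)):
        # Outcome probability under the worse-care alternative.
        q = odds_ratio * p / (1 - p + odds_ratio * p)
        # Log-likelihood ratio contribution for this patient.
        w = math.log(q / p) if y else math.log((1 - q) / (1 - p))
        s = max(0.0, s + w)  # the statistic never drifts below zero
        if s >= h:
            yield i
            s = 0.0          # restart monitoring after a flag
```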
The statistical process control methods used in industry were developed to help identify special (also called assignable) cause variation,12 defined as variation that warrants further investigation. Standard methods of frequentist inference (P values and confidence intervals) are not suitable for identifying such variation, because the data are inspected repeatedly as each new case accrues. Instead, likelihood methods, which are not affected by this problem of multiple looks at the data,13 are used, and within this framework the characteristics of the VLAD are usually described in terms of the average run length to true or false alarm. We used standard methods based on simulations14 to identify average run lengths to true and false alarm.
The flagging criteria were set to balance the costs of investigating false alarms (where the change in outcomes is simply a statistical artefact) against the need to identify special or assignable cause variation, which might benefit from further investigation.15 As is the case in industry,16 this was a policy decision; the reasoning is similar to that used to decide on a balance between sensitivity and specificity for a screening test.17
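As a rough illustration of how such run lengths can be estimated by simulation, the sketch below repeatedly runs the CUSUM sketched earlier on simulated in-control data (outcomes drawn at the risk-adjusted rate) and averages the time to false alarm. All parameter values are assumptions for illustration only, not those used by Queensland Health.

```python
import math
import random

def simulate_arl(mean_risk=0.08, odds_ratio=1.3, h=4.0,
                 n_runs=500, max_t=100_000):
    """Monte Carlo estimate of the in-control average run length (ARL)
    to false alarm, under illustrative parameter values."""
    p = mean_risk
    q = odds_ratio * p / (1 - p + odds_ratio * p)
    w_event = math.log(q / p)               # weight when the outcome occurs
    w_none = math.log((1 - q) / (1 - p))    # weight when it does not
    lengths = []
    for _ in range(n_runs):
        s = 0.0
        for t in range(1, max_t + 1):
            s = max(0.0, s + (w_event if random.random() < p else w_none))
            if s >= h:                      # false alarm: record and stop
                lengths.append(t)
                break
        else:
            lengths.append(max_t)           # censored run; ARL understated
    return sum(lengths) / len(lengths)

# Lowering h shortens the run length to false alarm (more sensitivity,
# more false flags); raising h does the reverse -- the same trade-off
# as choosing between sensitivity and specificity for a screening test.
```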
The 31 indicators currently monitored involve 17 conditions or procedures, accounting for about 6% of total discharges from Queensland public and private hospitals. Box 4 shows the indicators, and information about the dataset and the incidence of flagging of negative outcomes.
The relationship between outcome and process measures of quality of care is ambiguous,18 so a pyramid model, which recognises multiple explanations for variation in recorded outcomes, is recommended to focus the investigation process.19 The first question is whether the data have been coded accurately. The second is whether there is casemix variation that has not been fully accounted for in the risk adjustment process (eg, Indigenous status is not incorporated in the risk adjustment model, but is often associated with worse clinical outcomes). Box 5 shows the stages in the pyramid model, and typical questions that should be asked as part of an investigation.
The flags are a way of standardising the process for deciding when the data are worth a closer look. A virtue of the VLAD approach is that it encourages visual inspection of data and, in many cases, a more detailed look at the data could be instigated without using flags; for example, if a downward slope appeared abruptly. The VLAD can take many possible forms, depending on the length and clustering of runs of good or poor performance. However, in terms of actions that should be taken, VLADs can be grouped into four basic patterns (Box 6).
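Purely as an illustration of the kind of abrupt slope a reviewer might look for, a naive automated check on a VLAD trace (as returned by the earlier vlad() sketch) could look like the following; this is not part of the Queensland Health method, which relies on the flag bounds together with visual review.

```python
# Illustrative only: flag an abrupt downward slope in a VLAD trace.
def steep_drop(trace, window=10, drop=2.0):
    """True if the trace falls by more than `drop` over any `window` cases."""
    return any(trace[i] - trace[i + window] > drop
               for i in range(len(trace) - window))
```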
Routine data are limited and cannot provide risk adjustment for the full range of factors known to affect outcomes,20 and this is recognised in the second stage of the pyramid model of investigation. That is, more detailed investigation at the local level might reveal that a run of poor outcomes at a particular hospital is due to a run of sicker patients. This should not undermine the utility of statistical process control approaches: the aim is to identify causes of variation in outcomes, be they variations in data quality, casemix, or quality of care.
As demonstrated in Box 6, statistical process control charts facilitate visual inspection of patterns of care, making it easier to identify whether there has been a change in pattern or a continuation of an underlying trend. However, control charts cannot provide definitive answers about the quality of care. They more closely resemble techniques from the branch of statistics known as exploratory data analysis, and should be used to develop theories about why variations occur and to suggest possible solutions: improving data quality, improving casemix adjustment, or implementing system changes to improve quality of care.
The current approach monitors 31 indicators and is focused on the largest public and private hospitals. Over time, it is proposed to expand the indicator set to include indicators sensitive to ward care (pressure ulcers, falls) and indicators that can be used to measure outcomes in smaller hospitals (such as the incidence of possibly preventable complications).21 This will ensure a more comprehensive monitoring of clinical outcomes across Queensland.
1 Indicators used in process control charts in Queensland Health*
Acute myocardial infarction: in-hospital mortality, readmission, long stays
Heart failure: in-hospital mortality, readmission, long stays
Stroke: in-hospital mortality
Pneumonia: in-hospital mortality
Fractured neck of femur: in-hospital mortality, complication of surgery
Laparoscopic cholecystectomy: complication of surgery
Colorectal cancer: complication of surgery
Hip replacement: complication of surgery, readmission, long stays
Knee replacement: complication of surgery, readmission, long stays
Prostatectomy: complication of surgery
Abdominal hysterectomy: complication of surgery
Vaginal hysterectomy: complication of surgery
Paediatric tonsillectomy: readmission, long stays
Selected primiparae: induction of labour
Selected primiparae: caesarean section (public hospitals)
Selected primiparae: caesarean section (private hospitals)
First births: perineal tears (3rd or 4th degree)
* Detailed definitions of the indicators are available at http://www.health.qld.gov.au/performance/docs/Tech_Sup.pdf (pages 24–44).
3 Flagging criteria and related action
Area Clinical Governance Unit or Private Health Unit should be involved in investigation
4 Incidence of adverse trend flagging in indicators used by Queensland Health by indicator and flagging criterion,* 1 July 2003 to 30 June 2006 (non-perinatal indicators), 1 January 2003 to 31 December 2005 (perinatal indicators)
5 Issues for investigation under the pyramid model
- Stephen J Duckett1,2
- Michael Coory1,2
- Kirstine Sketcher-Baker1
- 1 Reform and Development Division, Queensland Health, Brisbane, QLD.
- 2 School of Population Health, University of Queensland, Brisbane, QLD.
Competing interests: None identified.
- 1. Van Der Weyden MB. The Bundaberg Hospital scandal: the need for reform in Queensland and beyond [editorial]. Med J Aust 2005; 183: 284-285.
- 2. Birrell B, Schwartz A. The aftermath of Dr Death: has anything changed? People Place 2005; 13: 54-61.
- 3. Dunbar JA, Reddy P, Beresford B, et al. In the wake of hospital inquiries: impact on staff and safety. Med J Aust 2007; 186: 80-83.
- 4. Duckett S. A new approach to clinical governance in Queensland. Aust Health Rev 2007; 31 Suppl 1: S16-S19.
- 5. Queensland Health. An investment in health. Queensland public hospitals performance report 2005–06. Brisbane: Queensland Health, 2006. http://www.health.qld.gov.au/performance/docs/Perform_rpt_05-06.pdf (accessed Oct 2007).
- 6. Shearer A, Cronin C, Feeney D. The state of the art of online hospital public reporting: a review of forty-seven websites. Easton, Md: Delmarva Foundation, 2004. http://www.delmarvafoundation.org/newsAndPublications/reports/documents/WebSummariesFinal9.2.04.pdf (accessed Oct 2007).
- 7. Robinowitz D, Dudley R. Public reporting of provider performance: can its impact be made greater? Annu Rev Public Health 2006; 27: 517-536.
- 8. Woodall W. Controversies and contradictions in statistical process control. J Qual Technol 2000; 32: 341-350.
- 9. Grigg O, Farewell V. Use of risk-adjusted CUSUM and RSPRT charts for monitoring in medical contexts. Stat Methods Med Res 2003; 12: 147-170.
- 10. Sherlaw-Johnson C. A method for detecting runs of good and bad clinical outcomes on variable life-adjusted display (VLAD) charts. Health Care Manag Sci 2005; 8: 61-65.
- 11. Queensland Health. Clinical governance implementation standard. 4. Variable life adjusted display — dissemination and reporting. Brisbane: QH, 2007. (QHEPS Document Identifier: 32547.) http://www.health.qld.gov.au/quality/publication/32547.pdf (accessed Oct 2007).
- 12. Deming W. The new economics. Cambridge, Mass: MIT Press, 1993.
- 13. Blume JD. Likelihood methods for measuring statistical evidence. Stat Med 2002; 21: 2563-2599.
- 14. Steiner S, Cook R, Farewell V, et al. Monitoring surgical performance using risk-adjusted cumulative sum charts. Biostatistics 2000; 1: 441-452.
- 15. Lim T. Statistical process control tools for monitoring clinical performance. Int J Qual Health Care 2003; 15: 3-4.
- 16. Nelson LS. Notes on the Shewhart control chart. J Qual Technol 1999; 31: 124-126.
- 17. Altman DG, Bland JM. Diagnostic tests. 1: Sensitivity and specificity. BMJ 1994; 308: 1552.
- 18. Pitches D, Mohammed M, Lilford R. What is the empirical evidence that hospitals with higher-risk adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007; 7: 91.
- 19. Lilford R, Mohammed M, Spiegelhalter D, et al. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet 2004; 363: 1147-1154.
- 20. Scott IA, Ward M. Public reporting of hospital outcomes based on administrative data: risks and opportunities. Med J Aust 2006; 184: 571-575.
- 21. Hughes J, Averill R, Goldfield N, et al. Identifying potentially preventable complications using a present on admission indicator. Health Care Financ Rev 2006; 27: 63-82.
Abstract
Identifying and acting on variations from good practice is one of the critical tasks of clinical governance. We describe one aspect of Queensland’s post-Bundaberg clinical governance arrangements: the use of variable life-adjusted displays (VLADs) to monitor outcomes of care in the 87 largest public and private hospitals in Queensland, which together account for 83% of all hospital activity.
VLAD control charts were created for 31 clinical indicators using routinely collected data, and are disseminated monthly.
About a third of hospitals had a run of cases in the 3-year period that flagged at the 30% level (triggering local-level investigation). For three indicators, about one in five hospitals had a cumulative excess of deaths over the number statistically expected that was large enough for the hospital to be highlighted for state-wide review.
VLADs do not provide definitive answers about the quality of care. They are used to develop ideas about why variations in reported outcomes occur and suggest possible solutions, be they ways of improving data quality, improving casemix adjustment, or implementing system changes to improve quality of care.
Critical to the approach is that it does not stop at monitoring: the monitoring is tied to systems that ensure that investigation, learning and action occur as a result of a flag.