In 2009, the Australian Institute of Health and Welfare (AIHW) published a report, Measuring and reporting mortality in hospital patients.1 The report followed a request from the Australian Commission on Safety and Quality in Health Care to determine whether it was possible to produce accurate, valid indicators of inhospital mortality using currently available Australian administrative data.
It is common practice for hospital mortality to be reported as a hospital standardised mortality ratio (HSMR).2 The HSMR is an indirectly standardised mortality ratio, generated by comparing observed mortality within a hospital against the expected mortality for the same patients had they been treated in a “standard” hospital, defined as comprising all hospitals of interest in the population. In this way, the observed mortality in one institution can be compared with what might be expected from overall outcomes in the population of hospitals as a whole. The expected probability of death for a given patient is computed using the risks of death of all patients in the combined standard hospital, with adjustment for various patient-level characteristics that might influence mortality risk. The HSMR for a given hospital = (observed number of deaths/expected number of deaths) × 100. An HSMR above 100 indicates an unfavourable outcome, whereas an HSMR below 100 is favourable relative to the standard or total population.
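To make the arithmetic concrete, the following is a minimal sketch in Python of the two-step calculation: fit a risk model on all hospitals pooled (the “standard” hospital), then compare each hospital’s observed deaths with the sum of its patients’ predicted probabilities of death. The data, column names and model below are hypothetical illustrations, not those used in the AIHW report.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Entirely synthetic stand-in for a patient-level administrative extract;
# the column names (hospital, age, emergency, died) are hypothetical.
rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "hospital": rng.integers(0, 20, n),
    "age": rng.normal(65, 15, n).clip(18, 100),
    "emergency": rng.integers(0, 2, n),
})
# Simulated deaths: risk rises with age and emergency admission.
p = 1 / (1 + np.exp(-(-6 + 0.05 * df["age"] + 0.8 * df["emergency"])))
df["died"] = rng.binomial(1, p)

# Risk model fitted on ALL hospitals pooled -- the "standard" hospital.
X = df[["age", "emergency"]]
model = LogisticRegression(max_iter=1000).fit(X, df["died"])
df["p_death"] = model.predict_proba(X)[:, 1]

# Expected deaths per hospital = sum of predicted probabilities of death.
summary = df.groupby("hospital").agg(observed=("died", "sum"),
                                     expected=("p_death", "sum"))
summary["HSMR"] = 100 * summary["observed"] / summary["expected"]
print(summary.round(1))
```

The essential design choice is that expected deaths come from the pooled model, so each hospital is compared with the population of hospitals as a whole.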
Recent critiques of HSMRs have been based on the premise that HSMRs measure avoidable or preventable mortality.3,4 These critiques are ill-founded: HSMRs are a measure of mortality, and mortality alone. This was explained very clearly by Professor Sir Brian Jarman, who provided evidence to the Independent Inquiry into Care Provided by Mid-Staffordshire NHS Foundation Trust:
Within HSMR it is not possible to give an exact figure for the number of unnecessary or excess deaths but one can give a figure for the number by which the actual observed deaths exceeds the expected deaths and give 95% confidence intervals for this figure. It would be impossible to statistically calculate the precise number of deaths that were unnecessary, or to statistically pinpoint which particular incidents were avoidable. That, if it were possible, would require careful consideration of the case notes for individual mortalities themselves.5
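The confidence interval Jarman refers to can be illustrated with a short sketch. Assuming observed deaths follow a Poisson distribution, Byar’s approximation gives approximate 95% limits for the ratio of observed to expected deaths; the figures below are hypothetical.

```python
from math import sqrt

def hsmr_ci(observed: int, expected: float, z: float = 1.96):
    """Approximate 95% CI for an HSMR (x100) using Byar's approximation,
    which treats observed deaths as Poisson-distributed."""
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * sqrt(o))) ** 3 / expected
    upper = (o + 1) * (1 - 1 / (9 * (o + 1)) + z / (3 * sqrt(o + 1))) ** 3 / expected
    return 100 * lower, 100 * upper

# Hypothetical figures: 130 observed deaths where 100 were expected.
lo, hi = hsmr_ci(130, 100.0)
print(f"HSMR = 130.0, 95% CI {lo:.1f} to {hi:.1f}")
# Deaths beyond expectation: observed - expected = 30. As the quotation
# notes, this figure does not identify which deaths were avoidable.
```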
HSMRs based on information contained in routinely collected administrative datasets have been published for several countries.6-8 Such datasets provide primary and comorbid diagnoses and a variety of patient and episode characteristics, but not physiological data or clinical observations. The absence of patient-level clinical or physiological data may be seen as a weakness in HSMR presentations.9,10
Over the years, several studies have been undertaken to test the impact of such information on the discriminatory power of mortality risk adjustment models. Recently, researchers in the United Kingdom compared the discriminatory capacity of risk adjustment models derived from an administrative dataset with models based on databases compiled by professionals and held within specialist registries.11 Their results clearly demonstrated that models based on administrative data were as successful in discriminating cases as those derived from the more detailed clinical information held in specialist registries. Models derived from administrative data systems were also found to be adequately discriminatory in a study of postsurgical outcomes in the United States Department of Veterans Affairs surgical clinical improvement program.12,13
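Discriminatory capacity in these studies is conventionally summarised by the c-statistic (the area under the receiver operating characteristic curve). The sketch below, on entirely synthetic data, shows the form such a comparison takes when a model restricted to “administrative” fields is set against one that also sees additional “clinical” fields; it does not reproduce the cited studies’ models.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 15 predictors, of which we pretend the first 10 are
# administrative fields and the remaining 5 are extra clinical fields.
X, y = make_classification(n_samples=20_000, n_features=15, n_informative=8,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

admin = slice(0, 10)  # hypothetical "administrative-only" column subset
m_admin = LogisticRegression(max_iter=1000).fit(X_train[:, admin], y_train)
m_full = LogisticRegression(max_iter=1000).fit(X_train, y_train)

c_admin = roc_auc_score(y_test, m_admin.predict_proba(X_test[:, admin])[:, 1])
c_full = roc_auc_score(y_test, m_full.predict_proba(X_test)[:, 1])
print(f"c-statistic, administrative-only model:  {c_admin:.3f}")
print(f"c-statistic, with added clinical fields: {c_full:.3f}")
```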
Contemporary administrative data systems — professionally extracted and coded, with a wide variety of primary and secondary diagnoses — are acceptable resources for generating HSMRs, and there is little difference in terms of discriminatory power between risk adjustment models derived from them and models derived from clinical databases.14 This is reassuring, because the cost and complexity of extracting clinical, or even simple laboratory, information on a large scale from existing record systems are substantial. This is true even in countries such as the US, where:
Although it is not clear whether our results would have differed if we had access to detailed clinical information for better risk adjustment, this question may be moot from a practical perspective. With the exception of cardiac surgery, clinical data for determining risk-adjusted mortality rates with other procedures are currently not on the horizon.15
Although at times concerns have been expressed as to the accuracy of coding of diagnostic information within administrative datasets,16,17 the extent of such disagreements in Australia, at least, is modest. They certainly appear no greater than those found in the daily interactions between colleagues within the same team or discipline.18,19 Apart from diagnoses, data elements in administrative datasets have generally been chosen because they are robust, straightforward to collect and enumerate and, in the Australian National Hospital Morbidity Database, come with very explicit rules for their definition and tabulation.20 Coding audits constitute the test of inter-rater reliability relevant to assessing the utility of risk-adjusted measures of hospital mortality. Audits commonly lead to no more than a small proportion of cases being re-coded, implying an acceptable level of inter-rater reliability.21
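As an illustration of the inter-rater comparison that a coding audit amounts to, the sketch below computes raw agreement and Cohen’s kappa between an original coder and an independent auditor; the diagnosis codes and level of agreement are invented for the example.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical audit: principal diagnosis assigned by the original coder
# versus an independent auditor for the same ten records.
original = ["I21", "J18", "I50", "J18", "I21", "E11", "I50", "J18", "I21", "E11"]
audit    = ["I21", "J18", "I50", "J44", "I21", "E11", "I50", "J18", "I21", "E11"]

kappa = cohen_kappa_score(original, audit)
agreement = sum(a == b for a, b in zip(original, audit)) / len(original)
print(f"Raw agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```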
There are two levels of influence on HSMR outcomes: patient-level influences, present at the point of admission; and institutional and jurisdiction-wide influences. There is a concern that the random play of chance on factors at and above the patient level may be so great as to render HSMRs uninterpretable. In this regard, the AIHW report was reassuring.1 The report contained a 3-year, longitudinal study of Australian hospital outcomes, and its results confirmed the findings of a Dutch study7 that hospital HSMRs are mostly stable over time, and that the effect of random variation is modest (the main exception being small hospitals, where numbers of deaths are small).
Although concerns about reliability can now be largely laid to rest, interpretation of HSMRs remains controversial,17,19 and the report concluded that HSMR reports should be seen as a safety and quality screening tool, rather than as being diagnostic.1 In other words, they should be seen as general-purpose indicators that provide a spur to further investigation, rather than as a definitive report on existing practice in any one institution or clinical service. Further, although HSMRs do seem applicable to a wide range of hospitals, they are less relevant to small hospitals and specialised hospitals with an atypical casemix, such as specialised women’s or children’s hospitals.
The Box shows the distribution of HSMRs across Australia for patients in one peer group of hospitals, reported using a visual presentation system known as a caterpillar plot.22 Other formats, such as funnel plots, locate hospitals against predetermined statistical parameters, and allow for identification of hospitals whose results are statistical outliers in terms of higher and lower than expected mortality.1,23 There is a concern that caterpillar plots can be turned “on their side” and used to rank hospitals. No visual plot is immune from being used as a ranking resource. If hospitals are named, and their position relative to each other indicated, they can be ranked. A sustained education campaign will be necessary to minimise the inappropriate use of HSMR information, if that information is to be placed in the public domain.
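A funnel plot’s control limits can be sketched as follows, under the simple assumption that observed deaths are Poisson-distributed with mean equal to expected deaths; this illustrates the general approach rather than the exact formulation of the cited methods. The limits narrow as expected deaths increase, which is why small hospitals rarely fall outside the funnel.

```python
from math import sqrt

def funnel_limits(expected_deaths: float, z: float = 1.96):
    """Approximate funnel-plot control limits around an HSMR of 100.
    Treats observed deaths as Poisson with mean equal to expected deaths,
    so the HSMR has a standard error of roughly 100 / sqrt(expected).
    A sketch of the general approach, not the exact method of reference 23."""
    se = 100 / sqrt(expected_deaths)
    return 100 - z * se, 100 + z * se

# A small hospital must depart much further from 100 before it is flagged.
for e in (10, 50, 200, 1000):
    lo, hi = funnel_limits(e)
    print(f"expected deaths {e:4d}: 95% limits {lo:6.1f} to {hi:6.1f}")
```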
Peer-reviewed research demonstrating that the provision of HSMRs to hospitals acts as a spur to efforts to improve hospital safety and quality is limited, but it does confirm that, taken at face value, HSMR reports can provide an impetus for hospital-level efforts to improve the safety of care, and can be used to monitor the impact of efforts to reduce inhospital mortality.24,25 Experience indicates that, for HSMRs to have an effect, they must be provided regularly, and in a format that clinicians can understand and relate to. Concerns about methodology among those who feel their reputations might be prejudiced need to be worked through, and may influence the nature and extent of data provision to jurisdictions, individual institutions, and the community at large.
Important opportunities exist to further enhance the robustness of mortality measures through data linkage processes that create patient-level, rather than separation-level, analyses of hospital activity. Including deaths occurring soon after discharge from hospital in mortality calculations may resolve many of the concerns about the influence of palliative care on HSMRs, and would allow assessment of whether existing risk adjustment for the impact of inter-hospital transfers on HSMRs requires further elaboration. Also, from the beginning of the 2008–09 financial year, clinical coders have been required to indicate whether a secondary diagnosis was “present on admission” or arose after admission. Risk adjustment that includes only problems present at the point of admission promises to avoid the “moral hazard” of adjusting away the impact of secondary conditions that may themselves have resulted from suboptimal care, rather than identifying them.26,27 Further work will be required before the real impact of the “present on admission” flag can be ascertained. But overall, it appears that Australia is well placed to make good use of the information provided in its administrative datasets to generate mortality measures, and has made solid progress towards the routine production of these important measures.
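As an illustration of the “present on admission” logic described above, the sketch below restricts a hypothetical diagnosis table to conditions flagged as present on admission before they are passed to risk adjustment; the field names and codes are invented.

```python
import pandas as pd

# Hypothetical long-format diagnosis table: one row per coded diagnosis,
# with a "poa" flag ('Y' = present on admission, 'N' = arose in hospital).
diagnoses = pd.DataFrame({
    "episode_id": [1, 1, 2, 2, 3],
    "icd10_code": ["I21.0", "J18.9", "E11.9", "T81.4", "I50.0"],
    "poa":        ["Y",     "N",     "Y",     "N",     "Y"],
})

# Only conditions present on admission enter the risk adjustment model;
# complications arising in hospital (poa == 'N') are excluded, so they
# are not adjusted away when they may reflect the care itself.
risk_factors = diagnoses.loc[diagnoses["poa"] == "Y"]
print(risk_factors.groupby("episode_id")["icd10_code"].apply(list))
```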
- David I Ben-Tovim1,2
- Sophie C Pointer2
- Richard Woodman2
- Paul H Hakendorf1
- James E Harrison2
- 1 Flinders Medical Centre, Adelaide, SA.
- 2 Flinders University, Adelaide, SA.
This article is based on research undertaken in collaboration with the AIHW and funded by the Australian Commission on Safety and Quality in Health Care.
Competing interests: None identified.
- 1. Ben-Tovim D, Woodman R, Harrison J, et al. Measuring and reporting mortality in hospital patients. Canberra: Australian Institute of Health and Welfare; 2009. (AIHW Cat. No. HSE 69.)
- 2. Jarman B, Gault S, Alves B, et al. Explaining differences in English hospital death rates using routinely collected data. BMJ 1999; 318: 1515-1520.
- 3. Penfold RB, Dean S, Flemons W, et al. Do hospital standardized mortality ratios measure patient safety? HSMRs in the Winnipeg region. Healthc Pap 2008; 8: 8-24.
- 4. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010; 340: c2016.
- 5. Francis R, chair. Independent inquiry into care provided by Mid-Staffordshire NHS Foundation Trust January 2005 to March 2009. London: Stationery Office, 2010.
- 6. Jarman B, Bottle A, Aylin P, et al. Monitoring changes in hospital standardised mortality ratios. BMJ 2005; 330: 329.
- 7. Heijink R, Koolman X, Pieter D, et al. Measuring and explaining mortality in Dutch hospitals: the hospital standardised mortality rate between 2003 and 2005. BMC Health Serv Res 2008; 8: 73.
- 8. Canadian Institute for Health Information. HSMR: a new approach for measuring hospital mortality trends in Canada. Ottawa: CIHI, 2007. http://secure.cihi.ca/cihiweb/products/HSMR_hospital_mortality_trends_in_canada.pdf (accessed Feb 2010).
- 9. Hadorn D, Keeler EB, Rogers WH, et al. Assessing performance of mortality prediction models. Final report for HCFA Severity Project. Santa Monica: RAND Corporation, 1993. (Monograph/report no. MR-181-HCFA.)
- 10. Iezzoni LI. The risks of risk adjustment. JAMA 1997; 278: 1600-1607.
- 11. Aylin P, Bottle A, Majeed A. Use of administrative data or clinical databases as predictors of risk of death in hospital: comparison of models. BMJ 2007; 334: 1044-1051.
- 12. Geraci JM, Johnson ML, Gordon HS, et al. Mortality after cardiac bypass surgery: prediction from administrative versus clinical data. Med Care 2005; 43: 149-158.
- 13. Gordon HS, Johnson ML, Wray NP, et al. Mortality after noncardiac surgery: prediction from administrative versus clinical data. Med Care 2005; 43: 159-167.
- 14. Smith DW. Evaluating risk adjustment by partitioning variation in hospital mortality rates. Stat Med 1994; 13: 1001-1013.
- 15. Birkmeyer JD, Dimick JB, Staiger DO. Operative mortality and procedure volume as predictors of subsequent hospital performance. Ann Surg 2006; 243: 411-417.
- 16. Scott IA, Ward M. Public reporting of hospital outcomes based on administrative data: risks and opportunities. Med J Aust 2006; 184: 571-575.
- 17. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009; 338: b780.
- 18. Ben-Tovim DI, Woodman RJ, Hakendorf P, et al. Standardised mortality ratios. Neither constant nor a fallacy. BMJ 2009; 338: b1748.
- 19. Aylin P, Bottle A, Jarman B. Standardised mortality ratios. Monitoring mortality. BMJ 2009; 338: b1745.
- 20. Australian Institute of Health and Welfare. National hospital morbidity database. http://www.aihw.gov.au/hospitals/nhm_database.cfm (accessed Feb 2010).
- 21. Australian Institute of Health and Welfare. National Minimum Data Set for Admitted Patient Care: Compliance evaluation 2001–02 to 2003–04. Canberra: AIHW, 2007. (AIHW Cat. No. HSE 44.)
- 22. Australian Commission on Safety and Quality in Health Care. Windows into safety and quality in health care 2009. Sydney: ACSQHC, 2009.
- 23. Spiegelhalter DJ. Funnel plots for comparing institutional performance. Stat Med 2005; 24: 1185-1202.
- 24. Wright J, Dugdale B, Hammond I, et al. Learning from death: a hospital mortality reduction programme. J R Soc Med 2006; 99: 303-308.
- 25. Gilligan S, Walters M. Quality improvements in hospital flow may lead to a reduction in mortality. Clin Govern Int J 2008; 13: 26-34.
- 26. Glance LG, Osler TM, Mukamel DB, et al. Impact of the present-on-admission indicator on hospital quality measurement: experience with the Agency for Healthcare Research and Quality (AHRQ) Inpatient Quality Indicators. Med Care 2008; 46: 112-119.
- 27. Ehsani JP, Jackson T, Duckett SJ. The incidence and cost of adverse events in Victorian hospitals 2003–04. Med J Aust 2006; 184: 551-555.
Abstract
- Worldwide, current practice is to report hospital mortality using the hospital standardised mortality ratio (HSMR).
- An HSMR is generated by comparing a hospital’s observed mortality against an indirectly standardised expected mortality for the same patients. A hospital’s HSMR can be compared with the overall outcomes for all hospitals in a population, or with peer hospitals.
- HSMRs should be used as screening tools that alert institutions to the need for further investigation, rather than as definitive measures of the quality of care provided by individual hospitals.
- HSMRs are computed from existing hospital administrative data sources, which are fit for such a purpose. The addition of clinical or physiological data does not, at present, add to the discriminative power of the risk adjustment models used to adjust HSMR values for differences in hospitals’ casemixes.
- There has been concern that HSMRs may be too variable over time for individual values to be interpretable. A study of HSMR outcomes in Australian hospitals confirmed earlier reports of the stability of the measure.
- Considerable progress has been made with developing Australian HSMRs for use as routine measures to improve the safety and quality of Australian hospital care.