Patient safety advocates have recently posed the question: “Is health care getting safer?”1 They conclude it is impossible to know, but in order to make progress toward the answer, health systems need to change focus “away from unsystematic voluntary reporting towards systematic measurement”. Their prescription is “a broad but manageable spectrum of indicators that are genuinely useful to the clinical teams that monitor quality and safety day to day”, using “local data that are relevant to clinical concerns . . . how a team is doing compared with last month and last year”.1
Most current approaches to systematic measurement of patient outcomes in hospital do not satisfy these criteria, instead relying on voluntary reporting, a relatively narrow range of diagnoses, or detailed, condition-specific profiles of comorbidities for risk adjustment (Box 1). To our knowledge, there has been only one attempt to use routinely recorded data on diagnosis codes to monitor the full range of hospital-acquired illness and injury — the Utah/Missouri Patient Safety Project16,17 (Box 1). This was limited by the need for expert clinical review to distinguish hospital-acquired diagnoses from comorbidities, eliminating many conditions that could also be community acquired.
Here, we describe the development of a tool to allow routinely coded inpatient data to be used to monitor a full range of hospital-acquired diagnoses (“complications”) to support quality improvement efforts by hospital-based clinical teams. This tool — the Classification of Hospital Acquired Diagnoses (CHADx) — was developed under the sponsorship of the Australian Commission on Safety and Quality in Health Care and builds on the Utah/Missouri project. To identify hospital-acquired diagnoses, it uses a “condition onset” flag that is now common in a number of jurisdictions18 and recorded in all Australian states. We termed these diagnoses “complications” in an attempt to find neutral terminology reflecting the lack of either risk adjustment or information on causation.
The classification is designed to provide a comprehensive overview of all complications as the basis for estimating total and relative per-case expenditure by complication type.19 It is also intended to provide hospitals with a computerised tool to group the 4000+ valid diagnosis codes typically used with a hospital-acquired diagnosis-onset flag into a smaller set of clinically meaningful classes for routine monitoring of patient safety and safety improvement efforts.
The CHADx (pronounced “chaddix”) uses data coded according to the International Classification of Diseases, 10th revision, Australian modification (ICD-10-AM). As the ICD was not designed specifically to identify hospital-acquired conditions, the CHADx had to accommodate the idiosyncrasies of the source data and coding rules, while seeking to extract as much information as possible from the record abstracts.
The complications flag (C prefix20) used in Victoria was the model for the recently adopted national system of “condition onset” flagging.21 To be flagged, a diagnosis must have occasioned treatment or active investigation in hospital, or have extended the length of hospital stay. Coders review the patient’s clinical notes to establish whether each diagnosis was recorded as present on admission. If the diagnosis was not present on admission and is plausibly hospital acquired (ie, not a congenital or chronic condition), the C prefix is assigned. In the past, coding standards have not encouraged assignment of the C prefix to diagnoses in obstetric or perinatal patients because of ambiguities in the timing of onset of particular diagnoses. Thus, because of the small proportion of C prefixes in obstetric and neonatal records, we analysed all codes listed in the obstetric and neonatal chapters of ICD-10-AM that were plausibly hospital acquired.
We selected records with any hospital-acquired diagnosis code from the Victorian Admitted Episodes Data Set (VAED) for the financial year 2005–06. Of the 2 031 666 inpatient episodes, 126 940 (6.25%) included at least one hospital-acquired diagnosis, with a mean of three flagged diagnoses per record, giving a total of 386 048 flagged diagnoses. With the addition of 128 323 obstetric and neonatal diagnoses, 514 371 diagnoses were available for analysis.
We applied a recently developed computerised algorithm22 to remove codes judged ineligible for the hospital-acquired flag (ie, congenital or chronic conditions). On this basis, 14 898 diagnoses (2.9%) were removed. Consolidation of redundant codes removed a further 118 640 diagnoses, resulting in 380 833 instances for grouping into CHADx classes.
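The screening step described above can be illustrated with a short sketch. This is not the published algorithm;22 the ineligible-code set and the treatment of redundant codes as within-episode duplicates are hypothetical stand-ins used only to show the shape of the procedure.

```python
# Hypothetical sketch of the screening step: drop flagged codes judged
# ineligible as hospital-acquired (congenital or chronic conditions), then
# consolidate redundant codes within an episode. The INELIGIBLE set below
# is illustrative only, not the published eligibility algorithm.
INELIGIBLE = {"Q21.1", "E10", "I25.1"}  # example congenital/chronic codes

def screen_episode(flagged_codes):
    """Return the flagged codes kept for CHADx grouping, in order."""
    seen = set()
    kept = []
    for code in flagged_codes:
        if code in INELIGIBLE:   # not plausibly hospital acquired
            continue
        if code in seen:         # redundant repeat within the episode
            continue
        seen.add(code)
        kept.append(code)
    return kept
```

Run per episode, this yields the pool of diagnoses available for grouping into CHADx classes.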
To determine the optimal number of end classes in the CHADx, we examined how variations between hospitals in depth of coding and total number of separations per year interacted with classifications of various sizes. This suggested that for hospitals with over 6000 admissions per year, 120–130 end classes with an incidence of over 0.1% of cases would provide sufficient granularity (specificity of classes and avoidance of “catch all” classes), without creating too many “empty” classes because of infrequently occurring diagnoses. For hospitals with fewer admissions, major “roll-up” groups could be used to monitor a smaller number of broad complication types.
Australian coding standards mandate that codes be recorded in specific sequences: for example, an injury or manifestation should be coded before the cause.21 Some combinations of code types represent redundant coding, or only marginally refine the information available from a single code. We developed working principles for prioritising code selection and reducing double counting arising from these sequenced codes. T and EOC codes are given priority as the most specific codes for hospital-acquired conditions. To avoid double counting of manifestations related to the same cause, a “bracket rule” was used: any codes bracketed between a T or EOC code and a following Y (external cause) code were assigned to a postprocedural CHADx, with no further assignment to other CHADx classes. Exceptions were made for three “high saliency” infection-related complications: septicaemia, methicillin-resistant Staphylococcus aureus, and “other drug resistant” infections.
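The bracket rule lends itself to a procedural sketch. In the sketch below, all code values, class labels, and the high-saliency exception set are hypothetical examples chosen for illustration; the real rule operates on full ICD-10-AM code sequences with prefix flags.

```python
# Illustrative sketch of the "bracket rule": codes sequenced between a
# T (or EOC) code and the following Y (external cause) code are assigned
# to a single postprocedural CHADx, except high-saliency infection codes,
# which keep their own CHADx assignment. Code values are hypothetical.
HIGH_SALIENCY = {"A41.9", "B95.61", "U83.8"}  # e.g. septicaemia, MRSA

def apply_bracket_rule(codes):
    """Return (code, assignment) pairs for an ordered code sequence."""
    assignments = []
    in_bracket = False
    for code in codes:
        if code.startswith("T"):        # T code opens a bracket
            in_bracket = True
            assignments.append((code, "postprocedural"))
        elif code.startswith("Y"):      # Y external-cause code closes it
            in_bracket = False
            assignments.append((code, "external-cause"))
        elif in_bracket and code not in HIGH_SALIENCY:
            assignments.append((code, "postprocedural"))
        else:
            assignments.append((code, "own-CHADx"))
    return assignments
```

Collapsing the bracketed manifestations into one postprocedural assignment is what prevents a single adverse event from being counted several times.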
High-volume sets of codes were then grouped together into the first draft of the CHADx, using an iterative process principally involving the first two authors (T J J and J L M). Major groupings were compared with similar published grouping systems11,14,16,17,23 to ensure that salient event types were not overlooked in the grouping process. While grouping was based in part on the size of groups, single codes or low-volume code groups with high saliency for patient safety were created as their own groups. Other small-volume and less specific codes were grouped together. The logic used in the construction of the CHADx is shown in Box 2.
We used 4345 unique codes to characterise hospital-acquired conditions; in the final CHADx these were grouped into 144 detailed subclasses and 17 major “roll-up” groups. These major groups can be routinely monitored by low-volume hospitals, and the subclasses by larger hospitals; the subclasses also support economic analysis.
The 17 major groups are shown by volume of diagnoses in 2005–06 in Box 3. CHADx 12 (labour and delivery) was the largest major group by volume (75 320 diagnoses), and CHADx 1 (postprocedural complications) was the largest by number of subclasses (23). Box 4 shows the top 20 CHADx subclasses by volume of diagnoses in 2005–06.
A complete list of the CHADx major groups and subclasses, along with the code sequences assigned to each category and the number of admissions from the 2005–06 VAED in each end class, is available on the website of the Australian Commission on Safety and Quality in Health Care (http://www.safetyandquality.gov.au/internet/safety/publishing.nsf/Content/PriorityProgram-08_CostHAD).
The CHADx is intended as a tool to help hospitals monitor rates of complications and the effect of patient safety interventions. In most Australian states, hospitals submit diagnosis abstracts on a monthly basis. Monthly use of the CHADx would allow hospitals to identify any changes associated with local patient safety strategies in near “real time”. Longitudinal measurement would provide information not available from methods that rely on periodic and costly intensive investigations, such as chart review. We also foresee its potential to help include cost in prioritising patient safety programs.
Risk adjustment was deliberately not included, for two reasons:
• the classification is not primarily intended to be comparative (a hospital’s casemix constitutes the context of its patient safety efforts, regardless of its risk profile); and
• risk adjustment may cultivate therapeutic nihilism for a proportion of complications or particular patient groups (“old patients just get more complications”).
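The monthly monitoring the CHADx is intended to support amounts to tallying, per month, the proportion of episodes carrying a complication in a given group. A minimal sketch, in which the record fields and group labels are hypothetical:

```python
# Sketch of monthly CHADx monitoring: for one CHADx group, compute the
# proportion of inpatient episodes in each month that include a
# complication in that group. Episode representation is hypothetical.
from collections import defaultdict

def monthly_rates(episodes, group):
    """episodes: iterable of (month, set_of_chadx_groups) per episode.

    Returns {month: rate} for the nominated CHADx group.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for month, groups in episodes:
        totals[month] += 1
        if group in groups:
            hits[month] += 1
    return {m: hits[m] / totals[m] for m in totals}
```

Plotted over successive months, such rates give a clinical team the “how are we doing compared with last month and last year” view described in the introduction.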
The number of end categories in the CHADx is designed according to the level of specificity required. The optimal granularity of the classification (and thus usefulness of the categories) will vary, depending on both hospital size and local depth of coding: fewer classes provide more robust cell sizes for monitoring, but may group unlike complications together. Detail has been highlighted as a critical feature of patient-event classifications.24 The tiered structure of the 17 major roll-up groups and 144 detailed and comprehensive subclasses is designed to suit a range of potential uses.
Assignment of the condition-onset flag has not been audited, although the annual Victorian Department of Human Services inpatient data audit covered assignment of these flags for the first time in 2008.25 The usefulness of the source codes could be compromised if financial incentives were applied to hospitals reporting hospital-acquired illnesses and injuries in their data. For this reason, we argue against the use of the CHADx for public reporting or the application of financial incentives.
Despite increased reporting of mortality rates and other measures of quality of care, individual hospitals have had few ways of systematically investigating rates and patterns of quality problems, focusing instead on incident investigations. The CHADx is designed to provide clinicians and hospitals with a computerised tool to group hospital-acquired diagnoses into smaller sets of clinically meaningful classes for routine monitoring of patient safety. It is premised on a “just culture” approach to improving patient safety (which recognises that competent professionals make mistakes but has zero tolerance for reckless behaviour26), and on due attention to the organisational contexts in which clinicians work. It requires well documented medical charts and investment in training and supervision of coding staff, but does not rely on special-purpose collection of data. It supports local monitoring of complication rates over time, to focus efforts to improve patient outcomes by minimising the incidence of complications. These complications may not be preventable in every patient, but they are amenable to systematic efforts to reduce their rates. The CHADx can also be used as the basis for setting priorities, through supporting the estimation of relative per-case and total expenditure attributable to each CHADx class.
1 Approaches to systematic measurement of hospital outcomes
Current approaches are of four general types.
Analysis of causal factors. This approach focuses on the causes of adverse events in patient care.1,3 These systems aim to be “multiaxial”, allowing patient “safety events” to be analysed by cataloguing a range of relevant contributory factors, such as characteristics of the patient and care team, and the circumstances leading up to, or causing, any breach of patient safety. This is an important focus for workers at the “sharp end” of patient safety, but will continue to rely on voluntary reporting for the near future. Because of the historical focus on accountability of individual health care workers (largely ignoring contributory organisational factors), voluntary reporting is vulnerable to underestimation of rates of such events.4-6 These collections may be more useful to characterise events than to count them.
Case-finding for sentinel events. This type of system typically reports serious, “sentinel” or “never” events.7-9 These systems are better understood as case-finding systems that enable in-depth investigation of particular events. They also use voluntary reporting. The assumption (rarely tested) is that such relatively uncommon events function as sentinels for more systemic problems in patient care.
Performance reporting with risk adjustment. This approach uses routine hospital morbidity data and focuses on performance reporting. It places a premium on preventability and risk adjustment to avoid inappropriate blame of hospitals or providers for adverse outcomes beyond their control. The foremost example of this approach is the US Pay for Performance rules,10 where a set of specific events coded in the record abstract leads to denial of Medicare funding. Other examples are the US Agency for Healthcare Research and Quality Patient Safety Indicators,11 which build on similar earlier work;12,13 the 3M proprietary system Potentially Preventable Complications;14 and Queensland Health’s VLAD (variable life-adjusted display) indicators.15 By necessity, these systems focus on a narrower range of diagnoses or procedures and rely on detailed, condition-specific profiles of comorbidities that predict a higher rate of unfavourable patient outcomes. This approach is primarily embraced by regulatory and funding authorities seeking to reward better patient outcomes and to penalise poor performance.
Monitoring of the range of hospital-acquired diagnoses. The final approach also uses routine data but seeks to use the full range of routinely recorded diagnosis codes that characterise hospital-acquired illness and injury. We have identified only one example: the Utah/Missouri Patient Safety Project,16,17 funded by the US Agency for Healthcare Research and Quality, which developed a set of 64 classes. It used expert clinical review to try to distinguish at the code level between comorbidities (conditions present on admission) and complications (hospital-acquired diagnoses). Thus, it could not include conditions such as pneumonia or urinary tract infections, which may be either community- or hospital-acquired. Because US jurisdictions have yet to switch to the 10th revision of the International Classification of Diseases for coding, the project was developed using the previous version of the World Health Organization’s hospital mortality and morbidity coding system (9th revision, clinical modification [ICD-9-CM]).
Received 2 March 2009, accepted 24 August 2009
Abstract
Objective: To develop a tool to allow Australian hospitals to monitor the range of hospital-acquired diagnoses coded in routine data in support of quality improvement efforts.
Design and setting: Secondary analysis of abstracted inpatient records for all episodes in acute care hospitals in Victoria for the financial year 2005–06 (n = 2.032 million) to develop a classification system for hospital-acquired diagnoses; each record contains up to 40 diagnosis fields coded with the ICD-10-AM (International Classification of Diseases, 10th revision, Australian modification).
Main outcome measure: The Classification of Hospital Acquired Diagnoses (CHADx) was developed by: analysing codes with a “complications” flag to identify high-volume code groups; assessing their salience through an iterative review by health information managers, patient safety researchers and clinicians; and developing principles to reduce double counting arising from coding standards.
Results: The dataset included 126 940 inpatient episodes with any hospital-acquired diagnosis (complication rate, 6.25%). Records had a mean of three flagged diagnoses; including unflagged obstetric and neonatal codes, 514 371 diagnoses were available for analysis. Of these, 2.9% (14 898) were removed as comorbidities rather than complications, and another 118 640 were removed as redundant codes, leaving 380 833 diagnoses for grouping into CHADx classes. We used 4345 unique codes to characterise hospital-acquired conditions; in the final CHADx these were grouped into 144 detailed subclasses and 17 “roll-up” groups.
Conclusions: Monitoring quality improvement requires timely hospital-onset data, regardless of causation or “preventability” of each complication. The CHADx uses routinely abstracted hospital diagnosis and condition-onset information about in-hospital complications. Use of this classification will allow hospitals to track monthly performance for any of the CHADx indicators, or to evaluate specific quality improvement projects.