One of the key goals of health reform is to drive hospital quality improvement by creating greater transparency in health services performance.1 With the launch of the MyHospitals website in December 2010, quality indicators for individual hospitals across Australia were publicly reported for the first time. The MyHospitals website began by detailing emergency department (ED) and elective surgery waiting time performance data for public hospitals, and has since added hospital rates of staphylococcal infection and waiting times for cancer surgery. Additional quality indicators will be added in the future. Our study focused on ED waiting times, which the website presents as the proportion of patients in each of five ED triage categories whose treatment began within recommended time frames. Previously, these statistics were only available at the state or territory level, and not for all jurisdictions.
Critics of the MyHospitals website have questioned the data quality and comparability.2,3 Concern over the comparability of ED waiting time data is not a new issue. Reports by the Productivity Commission and the Australian Institute of Health and Welfare (AIHW) have identified jurisdictional-level differences in the allocation of patients across the five ED triage categories, and have suggested that these differences may influence ED waiting time performance.4,5 If, in fact, reported ED patient urgency mix is associated with waiting time performance, then comparing performance of EDs that have different patient urgency mixes may be unfair.
We aimed to investigate the relationship between ED patient urgency mix and ED waiting time performance at the hospital level, using MyHospitals data. We also aimed to assess the variation among Australian hospitals in the assignment of ED patients to triage categories, and whether differences in the proportion of patients assigned to each category are associated with ED performance. Lastly, we aimed to determine the degree to which ED performance scores change when patient urgency mix and hospital size are taken into account.
We conducted a cross-sectional study using publicly reported Australian hospital-level data from the MyHospitals website. We recorded the number of patients assigned to each ED triage category and the proportion of patients in each ED triage category whose treatment began within the recommended time frame. Five triage categories are used by the MyHospitals website: resuscitation, emergency, urgent, semi-urgent, and non-urgent. These are the same categories used by the AIHW, and are based on the National Triage Scale (NTS).6,7 The recommended time frames used by MyHospitals (emergency, ≤ 10 min; urgent, ≤ 30 min; semi-urgent, ≤ 60 min; non-urgent, ≤ 2 h) are the same as the NTS and the updated Australasian Triage Scale8 except for the resuscitation category, to which MyHospitals assigned a measurable time frame of “receipt of care within 2 minutes” rather than “immediate”.6 We obtained data for July 2009 to June 2010 from the MyHospitals website.
Our analysis included all hospitals reporting ED performance data, with the exception of 25 hospitals with too few resuscitation cases to report full statistics.
We first used Pearson correlation tests to examine the bivariate associations between allocation of patients to the five triage groups and ED performance. We excluded performance for the resuscitation category because there was very little variation (97% of EDs reported meeting the guidelines 95% or more of the time), the distribution was skewed, and it was not linearly related to the allocation of patients.
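To make these steps concrete, this sketch (and the two that follow) reconstructs the sequence of analyses as a single illustrative Stata do-file run on simulated data. The variable names and simulated values are assumptions for illustration only; this is not the MyHospitals data or the code used in the study.

```stata
* Illustrative do-file on simulated data (hypothetical variable names and values)
clear
set seed 20120708
set obs 158                               // one observation per hospital ED
gen pct_emergency = 2 + 20*runiform()     // % of patients triaged "emergency"
gen pct_nonurgent = 5 + 60*runiform()     // % of patients triaged "non-urgent"
gen seifa = 900 + 200*runiform()          // SEIFA index for the hospital's postcode
gen aria = 12*runiform()                  // ARIA remoteness score
gen peer_ab = runiform() < 0.7            // 1 = peer group A or B, 0 = smaller hospital
gen perf_emergency = 70 - 0.5*pct_emergency + 0.1*pct_nonurgent ///
    - 4*peer_ab + 0.01*seifa + rnormal(0, 8)   // % seen within 10 min (simulated)
replace perf_emergency = max(0, min(100, perf_emergency))  // keep within [0, 100]

* Bivariate Pearson correlations between triage mix and performance (cf. Box 2)
pwcorr perf_emergency pct_emergency pct_nonurgent, sig
```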
We then developed multivariate regression models that adjusted for differences in hospital characteristics. The dependent variables were ED performance (the proportion of patients whose care was initiated within the recommended time frame) for four of the five triage categories. The independent variables were the proportions of patients assessed as being in the emergency and non-urgent categories. We used two triage categories because of the high correlation among the five triage categories, which created multicollinearity problems. We chose the emergency and non-urgent categories because they showed consistently strong bivariate correlations with the dependent variables.
The control variables in the models included the socioeconomic status and accessibility category of the community surrounding the public hospital, and the size of the hospital. Socioeconomic status was measured using the Socio-economic Index for Areas (SEIFA) Index of Relative Socio-economic Advantage and Disadvantage9 based on the postcode of the hospital. Higher values indicate more advantage. The accessibility of the hospital was measured using the Accessibility/Remoteness Index of Australia (ARIA).10 Hospital peer group was used to indicate the type of hospital. We compared combined peer groups A and B (principal referral, specialist women’s and children’s, and large hospitals) with smaller hospitals, since no differences in performance were detected between peer group A and B hospitals.
Regression model assumptions were checked and were acceptable. The relationships between the independent and dependent variables were assessed for linearity by inspection of scatter plots. The equality of variance of the residuals was tested by inspection of the residuals versus predicted values, and by White’s test. Multicollinearity was assessed with variance inflation factors, and models were confirmed excluding observations identified by Cook’s distance as potentially influential.
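Continuing the illustrative do-file above, the following sketch fits one such model (emergency-category performance as the dependent variable) and runs the diagnostics described here. The Cook's distance cut-off shown is a common rule of thumb and is an assumption, as the study does not specify the threshold used.

```stata
* Multivariate model for one dependent variable, with hospital-characteristic controls
regress perf_emergency pct_emergency pct_nonurgent seifa aria peer_ab

* Diagnostics described in the Methods
estat imtest, white          // White's test for equality of residual variance
estat vif                    // variance inflation factors (multicollinearity)
predict cooksd, cooksd       // Cook's distance to flag potentially influential hospitals
list pct_emergency perf_emergency cooksd if cooksd > 4/158   // rule-of-thumb cut-off
```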
To evaluate the impact of adjusting performance scores, we computed an expected performance score for each hospital in each triage category, assuming the hospital had the median triage percentages and that the hospital was peer group A or B (we used the hospitals’ actual SEIFA and ARIA levels). Since we were comparing the actual performance (predicted performance based upon the regression equation plus the residual) with the performance we would expect under the median triage percentage in a peer group A or B hospital, we computed the expected scores using the coefficients from the models as well as the residuals. The difference between the expected and observed was therefore only due to the change in triage percentage and hospital type, and not due to the fit of the model. We then compared the absolute value of the difference between the expected and the observed performance. We used Stata, version 11 (StataCorp) to conduct all analyses, and used a significance level of P < 0.05.
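A sketch of this adjustment, continuing the do-file above: each hospital's expected score is the fitted value at the median triage percentages with the peer group A/B indicator set to 1, keeping its own SEIFA, ARIA, and residual. Variable names remain hypothetical.

```stata
* Expected performance at the median triage mix in a peer group A/B hospital,
* retaining each hospital's own SEIFA, ARIA, and residual
predict resid, residuals
summarize pct_emergency, detail
scalar med_emerg = r(p50)
summarize pct_nonurgent, detail
scalar med_nonurg = r(p50)
gen expected = _b[_cons] + _b[pct_emergency]*med_emerg          ///
    + _b[pct_nonurgent]*med_nonurg + _b[seifa]*seifa            ///
    + _b[aria]*aria + _b[peer_ab]*1 + resid
gen adjustment = abs(expected - perf_emergency)
summarize adjustment         // mean and range of the absolute adjustment
```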
Our analysis included 158 hospitals. These were all public hospitals, except for two private hospitals in Queensland that provided services to public patients. Most hospitals (113) were peer group A or B hospitals, while the remainder (45) were smaller hospitals.
On average, 0.6% of ED patients were assigned to the top urgency triage category, resuscitation (Box 1). There were only small differences between the hospitals in allocation of patients to this category (range, 0.1%–4.0%). In contrast, there was substantial variation between hospitals in allocating patients to the other four triage categories. While hospitals, on average, assessed 8% of ED patients as being in the emergency category, the proportions ranged over 22 percentage points. The ranges for the three less serious categories were more than twice as large, at 45, 50, and 62 percentage points for urgent, semi-urgent, and non-urgent, respectively.
There was also substantial variation in ED waiting time performance (Box 1). The range in performance was 59 percentage points for patients assigned to the emergency category. The range was even larger for the urgent and semi-urgent categories — 66 and 62 percentage points, respectively — and smaller for resuscitation and non-urgent, at 25 and 30 percentage points, respectively.
The correlations in Box 2 show that EDs that allocated more patients to the three most urgent triage categories had poorer overall performance. Conversely, the greater the proportion of non-urgent patients, the better the ED’s emergency, urgent, and semi-urgent performance.
Multivariate regression models that controlled for differences in hospital characteristics showed an association between higher proportions of patients assigned to the emergency category and poorer waiting time performance in the emergency, urgent, and non-urgent triage categories (Box 3). For instance, for every increase of two percentage points in the proportion of patients assigned to the emergency category, performance for the emergency triage category was about one percentage point lower. In addition, the greater the proportion of patients triaged to the non-urgent category, the better the performance for the emergency, urgent, and semi-urgent categories.
The multivariate models also indicated that hospitals located in higher socioeconomic areas had better performance for the emergency triage category than those in lower socioeconomic areas, though there was no relationship between socioeconomic status and performance for the other triage categories (Box 3). Small hospitals had better performance for the urgent, semi-urgent, and non-urgent triage categories than did peer group A and B hospitals (5.9, 9.7, and 6.1 percentage points, respectively).
We investigated the impact on performance scores of standardising triage proportions and hospital type (Box 4). Based on the regression results, if each hospital were to have the median triage percentages and were peer group A or B, performance scores would be expected to change on average by 3.7, 7.1 and 6.2 percentage points for the emergency, urgent and semi-urgent categories, respectively. While the mean adjustments were modest in size (and smaller still for the non-urgent category), the ranges were wide (as large as 31 percentage points).
The results of our study suggest that better performance by EDs in meeting waiting time criteria is related to the reported urgency mix of the EDs’ patients. The data for 158 Australian EDs in 2009–10 showed that those reporting a disproportionately large percentage of emergency patients had poorer performance than those reporting smaller proportions. We also found that EDs reporting proportionally more non-urgent patients had better performance than those reporting fewer non-urgent patients; and, related to this, smaller hospitals, which have more non-urgent patients, performed better than larger hospitals in the three less urgent triage categories. These results raise questions about the comparability of the current Australia-wide performance reporting methods.
The policy goals of publicly reporting hospital quality indicators are to provide greater public accountability, spark hospital quality improvement efforts that would lift the standards of hospitals across the country, and provide consumers with information for making informed choices about their health care.11,12 To achieve these goals, hospital performance data must be accurate and comparable.
One explanation for the patterns shown in the study is that it may be more difficult operationally to ensure that ED patients are treated within the recommended time frames when patients need treatment very quickly. Since patients allocated to higher urgency categories are more likely to be admitted to the hospital, one potential cause of lower waiting time performance among EDs treating highly urgent patients may be access block.13 Conversely, it may be easier for an ED to meet the recommended guidelines for initiating treatment when its patients need treatment to commence within a longer time frame. If true, then hospitals do not face a level playing field when being assessed on ED performance, and EDs with higher proportions of more urgent patients are disadvantaged under the current reporting system.
Another potential explanation for our findings relates to assigning patients to a less urgent triage category than is appropriate. Such “undertriaging” gives EDs a longer recommended time frame for initiating treatment, which would be likely to translate into better performance. If there are EDs that routinely allocate “true” emergency patients to the urgent category, their performance scores would probably be inflated. Our study does not provide any evidence that undertriaging is taking place in EDs. It is a possibility worthy of consideration, however, since there has been documented gaming of other hospital performance indicators in Australia, and of ED performance overseas.14,15
Performance scores adjusted for the urgency mix of patients and the size of the hospital would allow fairer comparisons, whichever of the above explanations is driving the observed relationship between urgency mix and performance. Our findings suggest that, while on average such adjustments would be modest in size, they could have a substantive impact on hospital rankings. For example, of the 10 hospitals with the highest performance scores for the emergency triage category, only six would remain in the top 10 when adjusted performance scores were used. Fewer than half (four and three, respectively) would remain in the top 10 for performance in the urgent and semi-urgent triage categories if adjusted scores were used. It is important to ensure, however, that any adjustment of scores is limited to accounting for factors that causally affect performance and are not the result of confounding variables. If, for example, EDs with more emergency patients had worse performance because they attracted managers who were less skilled in managing high-demand situations and accepted long patient waiting times as immutable, then fully adjusting performance scores would in essence excuse poorer management. More investigation of the factors that affect ED performance is required.
Future research on the relationship between patient urgency and waiting time would benefit from analysis of patient-level data, which would allow controlling for differences in patient demographics and health status. This was not possible with the hospital-level data used in this study. It is also noteworthy that the measures currently used to assess ED performance are not validated.16 In other words, it is unknown whether patient health outcomes are worse in EDs that less consistently meet the recommended time frames for initiating patients’ treatment. Developing this evidence base or identifying alternative evidence-based performance metrics is important for creating a public reporting scheme that is trusted and respected by the medical profession.
Our study also contributes to the literature on equity in health care in Australia. Overall, the results suggest that hospitals have similar ED waiting time performance in areas of high compared with low socioeconomic status, and in urban compared with rural areas. There was, however, one exception. Performance in the emergency triage category was worse in areas of lower socioeconomic status. Future research should monitor these trends and further examine whether there are differences in ED waiting times for patients of differing socioeconomic status within hospitals.
In conclusion, our study highlights the challenge of publicly reporting hospital quality data. We found that the current ED performance metrics may be biased in favour of EDs that report fewer urgent patients. Adjusting performance scores for variation in patient and hospital characteristics could ameliorate this bias. This type of adjustment is considered crucial for the viability of patient-outcome performance measures,17-19 and our findings suggest it may be important for waiting time measures too. Data audits, however, would still be necessary to ensure the comparability of data collection processes across hospitals, particularly since inconsistencies have been documented in the past.20 As more quality indicators are publicly reported in the future, it will be increasingly important to consider when and how adjustment of quality indicators is applied.
1 Descriptive statistics for patient urgency mix and ED waiting time performance among Australian hospitals in 2009–10, by triage category (n = 158)
2 Correlation* between patient urgency mix and ED performance†
3 Results of multivariate regression analyses (regression coefficients), adjusted for variation in hospital characteristics, predicting ED performance* by triage category
Received 28 September 2011, accepted 8 July 2012
- Jessica Greene1
- Jane Hall2
- 1 Planning, Public Policy and Management, University of Oregon, Eugene, Ore, USA.
- 2 Centre for Health Economics Research and Evaluation, University of Technology Sydney, Sydney, NSW.
We thank Jan Blustein and the anonymous reviewers for their helpful comments on earlier versions of the manuscript. Jessica Greene acknowledges the Centre for Health Economics Research and Evaluation at the University of Technology Sydney, where she was the 2010–2011 Australian-American Health Policy Fellow. Jessica Greene’s fellowship was supported by the Australian Department of Health and Ageing and The Commonwealth Fund.
Jane Hall is a member of the board of the NSW Bureau of Health Information (BHI). The views presented here are those of the authors and not necessarily those of the Fellowship supporters or the BHI.
- 1. Council of Australian Governments. COAG Communique Attachment A: National Health Reform. 13 February 2011. http://www.coag.gov.au/coag_meeting_outcomes/2011-02-13/docs/communique_attachmentA-heads_of_agreement-national_health_reform_signatures.pdf (accessed Jul 2012).
- 2. Drape J. Hospitals website launched but no date. Sydney Morning Herald 2010; 16 Jul. http://news.smh.com.au/breaking-news-national/hospitals-website-launched-but-no-date-20100716-10dg2.html (accessed Jul 2012).
- 3. Cresswell A. Hospitals website hits early strife. The Australian 2010; 11 Dec. http://www.theaustralian.com.au/national-affairs/hospitals-website-hits-early-strife/story-fn59niix-1225969194016 (accessed Jul 2012).
- 4. Steering Committee for the Review of Government Service Provision. Report on government services 2011. Public hospitals. Canberra: Productivity Commission, 2011: Ch. 10. http://www.pc.gov.au/__data/assets/pdf_file/0020/105329/046-chapter10.pdf (accessed Jul 2012).
- 5. Australian Institute of Health and Welfare. Australian hospital statistics 2008–09. Canberra: AIHW, 2010. (AIHW Cat. No. HSE 84; Health Services Series No. 34.) http://www.aihw.gov.au/publication-detail/?id=6442468373&tab=2 (accessed Jul 2012).
- 6. Australian Institute of Health and Welfare. Australian hospital statistics 2009–10. Canberra: AIHW, 2010. (AIHW Cat. No. HSE 107; Health Services Series No. 40.) https://www.aihw.gov.au/publication-detail/?id=10737418863 (accessed Jul 2012).
- 7. Cameron PA, Bradt DA, Ashby R. Emergency medicine in Australia. Ann Emerg Med 1996; 28: 342-346.
- 8. Australasian College for Emergency Medicine. The Australasian Triage Scale. Emerg Med (Fremantle) 2002; 14: 335-336.
- 9. Pink B. Information paper: an introduction to socio-economic indexes for areas (SEIFA), 2006. Canberra: Australian Bureau of Statistics, 2008. (ABS Cat. No. 2039.0.) http://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/2039.02006?OpenDocument (accessed Aug 2012).
- 10. Information and Research Branch, Department of Health and Aged Care. Measuring remoteness: accessibility/remoteness index of Australia (ARIA). Canberra: Commonwealth of Australia, 2001. http://www.health.gov.au/internet/main/publishing.nsf/Content/7B1A5FA525DD0D39CA25748200048131/$File/ocpanew14.pdf (accessed Aug 2012).
- 11. Kirk A. MyHospitals website set to be launched. The World Today [radio program] 2010; 16 Jul. http://www.abc.net.au/worldtoday/content/2010/s2955659.htm (accessed Jul 2012).
- 12. Gallagher MP, Krumholz HM. Public reporting of hospital outcomes: a challenging road ahead. Med J Aust 2011; 194: 658-660.
- 13. Fatovich DM, Nagree Y, Sprivulis P. Access block causes emergency department overcrowding and ambulance diversion in Perth, Western Australia. Emerg Med J 2005; 22: 351-354.
- 14. Curtis AJ, Stoelwinder JU, McNeil JJ. Management of waiting lists needs sound data. Med J Aust 2009; 191: 423-424.
- 15. British Medical Association. BMA survey of A&E waiting times. London: Health Policy and Economic Research Unit, BMA, 2005. www.collemergencymed.ac.uk/code/document.asp?ID=3156 (accessed Jul 2012).
- 16. FitzGerald G, Jelinek GA, Scott D, Gerdtz MF. Emergency department triage revisited. Emerg Med J 2010; 27: 86-92.
- 17. Mehta RH, Liang L, Karve AM, et al. Association of patient case-mix adjustment, hospital process performance rankings, and eligibility for financial incentives. JAMA 2008; 300: 1897-1903.
- 18. Marshall MN, Shekelle PG, Davies HT, Smith PC. Public reporting on quality in the United States and the United Kingdom. Health Aff (Millwood) 2003; 22: 134-148.
- 19. Mannion R, Davies HT. Reporting health care performance: learning from the past, prospects for the future. J Eval Clin Pract 2002; 8: 215-228.
- 20. Pearson DDR. Access to public hospitals: measuring performance. Melbourne: Victorian Auditor-General’s Office, 2009. http://www.audit.vic.gov.au/reports__publications/reports_by_year/2009/20090401_hospital_indicators.aspx (accessed Jul 2012).
Abstract
Objective: To examine whether the reported urgency mix of an emergency department’s (ED’s) patients is associated with its waiting time performance.
Design and setting: Cross-sectional analysis of data on patient urgency mix and hospital ED performance reported on the MyHospitals website for July 2009 – June 2010.
Main outcome measures: ED performance assessed as the proportion of patients whose care was initiated within the recommended time frame for each of four triage categories.
Results: Data for 158 hospitals showed that EDs with a higher proportion of patients assigned to the emergency category have poorer waiting time performance, after adjusting for hospital characteristics. Conversely, EDs with a higher proportion of patients assigned to the non-urgent category perform better. If performance scores were adjusted for reported patient urgency mix and hospital peer group, mean adjustments would be modest in size (3.7–7.1 percentage points, depending on the category), but for individual EDs the differences could be large (as large as 31 percentage points) and hospital waiting time performance rankings would be substantively impacted.
Conclusion: Since ED performance is related to reported patient urgency mix, adjusting for casemix in the ED may be warranted to ensure valid comparisons between hospitals. Further investigation of the validity of performance measures and appropriate adjustment for differences in hospital and patient characteristics is required if public reporting is to meet its goals.