Various governments worldwide have established websites showing how long patients wait for elective surgery at different hospitals. The general aim of these services is to help general practitioners (GPs) and patients make referral decisions, and to improve system-wide performance by removing imbalances in surgeon workloads. When launched, some websites were criticised by the medical profession as useless (because GPs already know the waiting times of local surgeons), and as potentially misleading if doctors with long waiting times are assumed to be the best.1,2 Concerns were also expressed about data quality and the accuracy with which the statistics predicted waiting times.
Although all these criticisms raise valid issues, how accurately patient waiting times can be predicted is of intrinsic importance. The level of accuracy required depends on what waiting time information GPs and patients want. It is not clear how either group interprets its information needs, but three interpretations stand out in relation to the issue of accuracy. GPs and patients can be regarded as wanting information that
1. predicts how long a patient might expect to wait for admission to a particular surgical unit;
2. identifies units at which a patient will wait different lengths of time; or
3. identifies units at which a patient will wait an acceptable time.
For interpretation 1, statistics have to meet an absolute standard of accuracy. For interpretations 2 and 3, their practical value will also depend on, respectively, the difference in waiting times between the various units, and the difference between the estimate and the threshold used to define an acceptable waiting time.
A typical user of a waiting time information service is unlikely to be aware of these issues. If a service is not explicit about how its information should be used, the figures could be used inappropriately.
We reviewed six Web-based waiting time information services to examine how they aimed to meet users' information needs, the potential accuracy of presented statistics, and how they advised users to interpret the presented statistics.
The websites we reviewed were the English and Welsh services,3,4 the British Columbia (BC, Canada) service,5 and services for New South Wales (NSW), Queensland and Western Australia (WA).6-8 These services were chosen because they were in English and provided statistics that enabled the situation at different surgical units to be compared at a surgeon or specialty level. The sites were reviewed on 22 October 2001.
Accuracy of the statistics was assessed indirectly against two potential sources of bias. The first was the type of data used. Statistics are typically derived from data on admitted patients (throughput data) or from data on patients still on the waiting list (census data). Throughput data have the advantage of capturing complete waiting times, whereas census data only measure the time waited up to the census date. Census data can also contain the records of patients who will never subsequently be admitted.9 For these and other reasons, throughput data statistics are often regarded as the more accurate measure of waiting time.10
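The censoring effect of census data can be illustrated with a short simulation. The sketch below is purely illustrative: the exponential waiting-time distribution, the 120-day mean wait and the two-year listing window are all assumed values, not figures from any of the reviewed services.

```python
import random

# A minimal sketch (all parameters hypothetical) of why census statistics
# understate waiting times: on the census date we observe only how long
# listed patients have waited so far, not how long they will finally wait.
random.seed(1)
MEAN_WAIT = 120.0    # assumed true mean waiting time, in days
CENSUS_DAY = 365.0   # census taken one year into a two-year window

patients = []
for _ in range(100_000):
    listed_on = random.uniform(0.0, 730.0)           # day added to the list
    full_wait = random.expovariate(1.0 / MEAN_WAIT)  # eventual total wait
    patients.append((listed_on, full_wait))

# Throughput view: completed waits of patients admitted by the census date.
completed = [w for t, w in patients if t + w <= CENSUS_DAY]

# Census view: elapsed waits of patients still listed on the census date,
# paired with the total wait those same patients will eventually accrue.
still_listed = [(CENSUS_DAY - t, w) for t, w in patients
                if t <= CENSUS_DAY < t + w]
elapsed = [e for e, _ in still_listed]
eventual = [w for _, w in still_listed]

print(f"throughput mean (completed waits):  {sum(completed) / len(completed):.0f} days")
print(f"census mean (time waited so far):   {sum(elapsed) / len(elapsed):.0f} days")
print(f"eventual mean of the same patients: {sum(eventual) / len(eventual):.0f} days")
```

Under these assumptions, the patients visible on the census date will eventually wait roughly twice as long as the census figure suggests, which is the sense in which census statistics only measure the time up to the census date.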
The second potential source of bias was the adopted level of data aggregation. Ideally, this should account for factors that cause differences among patient waiting times, such as the level at which a list is managed (eg, surgeon, specialty) and urgency category. High levels of aggregation can hide problems of particular units, making statistics unresponsive. There is also the risk of making an ecologically biased inference about a patient's likely waiting time.11 This may arise for specialty-level statistics if lists are managed by individual surgeons and there are significant differences between surgeons within a specialty.
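A toy example (with hypothetical surgeons and waiting times) shows how a specialty-level statistic can describe no individual surgeon's list:

```python
from statistics import median

# Hypothetical completed waits (days) for three surgeons in one specialty.
# Lists are managed per surgeon, but the statistic is published per specialty.
waits = {
    "surgeon_A": [30, 35, 40, 45, 50],
    "surgeon_B": [60, 70, 80, 90, 100],
    "surgeon_C": [200, 220, 240, 260, 280],
}

pooled = [w for ws in waits.values() for w in ws]
print(f"specialty-level median: {median(pooled)} days")  # 80 days
for name, ws in waits.items():
    print(f"{name} median: {median(ws)} days")           # 40, 80, 240 days
```

Here the published specialty median (80 days) doubles surgeon A's typical wait and is a third of surgeon C's; a patient referred to either surgeon on the strength of the specialty figure would be misled.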
Box 1 summarises, for each of the six services reviewed, statements about the aim of the service and advice on how its statistics should be interpreted. Box 2 summarises the main statistics presented by each service.
It was often unclear which interpretations of users' information needs the reviewed services aimed to meet. Four services presented point estimates (median or mean), which allow users to adopt any of the three interpretations. Overall, these services might be viewed as encouraging change only when waiting times for a preferred surgical unit were too long (interpretation 3). However, just one service explicitly stated that the statistics were intended only as a guide. This lack of direction might lead users to draw inappropriate inferences. In particular, the objectives and data presentation of the WA service might lead users to believe the statistics predict likely waiting times (interpretation 1).
The English and Queensland services included no statements about the aim of the service or whether statistics were predictive of a patient's likely wait. Both presented information as a frequency table, which essentially limits users to deriving the percentage of patients waiting longer than a certain time. Such statistics can be regarded as supporting only interpretation 3, as units with few patients waiting beyond the acceptable limit would be considered equal.
There was considerable diversity across the sites in the statistics presented. First, services varied in their use of throughput or census data. From a theoretical perspective, the use of census data statistics is a concern. However, their use may indicate that throughput data statistics are also affected by substantial bias (eg, when waiting times are long, changes in a unit's behaviour take months to show up in throughput figures).
Second, as waiting lists are often managed by individual surgeons, the aggregation of data at a specialty level by some services may be problematic, for the reasons outlined above. On the other hand, the precision of surgeon-level statistics might be poor if they were derived from few observations.12 This potential danger seems particularly pertinent for two services (NSW and WA) that aggregated data by surgeon and procedure, especially as both used a classification of over 100 types of procedure.
For throughput statistics, this problem of precision can be tackled by increasing the period over which data are aggregated, which might explain some of the differences between services. However, lengthening the period trades precision against potential bias from time-dependent behaviour, because older observations may not reflect current conditions. The NSW service seems particularly susceptible to this bias, given that its data are aggregated over 12 months.
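A simple sketch of this trade-off, again with hypothetical figures: if a unit's waiting times are falling steadily, a median computed over 12 months of throughput data lags well behind one computed over the most recent 3 months.

```python
import random
from statistics import median

# Hypothetical unit whose mean wait improves from 180 to 90 days over a year,
# with 30 admissions per month.
random.seed(2)
waits_by_month = [
    [random.gauss(180.0 - 90.0 * month / 11.0, 20.0) for _ in range(30)]
    for month in range(12)
]

last_3 = [w for month in waits_by_month[-3:] for w in month]
last_12 = [w for month in waits_by_month for w in month]
print(f"median over latest 3 months: {median(last_3):.0f} days")
print(f"median over all 12 months:   {median(last_12):.0f} days")
```

The 12-month figure is dominated by conditions from months ago, so it overstates the wait facing a newly referred patient; a 3-month window tracks current performance more closely, at the cost of fewer observations.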
The concerns regarding accuracy highlight the need for services to assist users in interpreting their statistics. All services explained waiting list terms and data items, although the range of terms included varied.
There was less help with statistical issues. No service presenting point estimates included sample size information, which would help users judge precision. In addition, no service gave any indication of what might constitute a real difference in performance between units, even though the estimates are subject to sampling error. Indeed, two services (WA and BC) reported waiting times in units of days (or equivalent), which could suggest to users that the statistics are very accurate measures of relative performance. For the statistically untrained, this false impression could be reinforced by the services' general assurances that every effort had been made to ensure the data are accurate.
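The likely scale of this sampling error can be gauged with a bootstrap sketch (the data and sample sizes below are hypothetical): with surgeon-level samples of a few dozen patients, two units drawn from an identical waiting-time distribution will routinely show medians that differ by days or weeks.

```python
import random
from statistics import median

# Two hypothetical surgeons whose waits come from the SAME distribution,
# i.e. there is no real difference in performance between them.
random.seed(3)
surgeon_a = [random.expovariate(1.0 / 90.0) for _ in range(25)]
surgeon_b = [random.expovariate(1.0 / 90.0) for _ in range(25)]
print(f"observed medians: {median(surgeon_a):.0f} vs {median(surgeon_b):.0f} days")

# Bootstrap 95% interval for surgeon A's median from n = 25 observations.
boot = sorted(median(random.choices(surgeon_a, k=len(surgeon_a)))
              for _ in range(2000))
lo, hi = boot[49], boot[1949]   # approximate 2.5th and 97.5th percentiles
print(f"95% bootstrap interval for A's median: {lo:.0f} to {hi:.0f} days")
```

The interval typically spans several weeks, so quoting medians to the nearest day, with no sample size attached, conveys spurious precision.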
The simplicity of this study means that it cannot cover other issues arising from the creation of waiting time information services.13,14 The study also has various limitations. The interpretation of information needs by each service was subjective (although we all agreed on the findings presented here), and the accuracy of the statistics was examined only against methodological criteria. In addition, we reviewed only six services. However, this reflected the small number of services in existence at the time of the review. A search of all government websites in English-speaking countries with publicly funded hospital services produced no other candidates for inclusion in the study.
Our study raises various issues about the quality of waiting time information presented on government websites. That few services gave users instructions on how best to interpret the statistics is a concern, given the various ways in which GPs and patients might use the figures; it could lead users to draw unwarranted conclusions. Statements are needed about how the information should, and should not, be used. Predictive accuracy is a key issue, but there are others. For example, we would argue that services should support only interpretation 3, because it encourages a change in referral patterns only when there is a problem, and so should not cause referral patterns to become unstable.
The other main concern is the uncertainty about the accuracy of the statistics. This does not mean services cannot be used successfully to avoid units with a backlog of elective patients. If the waiting time of many patients exceeds one year, concerns about accuracy become less critical. However, where average waiting times are less than six months, we suggest the services should be used cautiously until they provide guidance on what might constitute a real difference in performance.
Box 1: Summary of how services suggest their statistics should be interpreted

| Service | Stated aim of service | Advice on how to use information | Advice on whether statistics predict expected waiting time |
|---|---|---|---|
| England | Not stated | Not stated | Not applicable* |
| Wales | Guide for choice of surgeon at time of referral | Not stated | Times intended only as a guide |
| New South Wales | Allow users to explore options to reduce a patient's wait | Ask your doctor if you think your waiting time is too long | Not necessarily the best estimate; ask surgeon or hospital for best estimate |
| Queensland | Not stated | Patients who wish to discuss booking should contact GP | Not applicable* |
| Western Australia | Allow users to explore options to reduce a patient's wait | Contact service if you think your wait is too long | Not stated |
| British Columbia | Allow patients to explore their health care choices | Ask your doctor if surgeon suggested originally has a long list | Not stated |

* Data described using frequency tables and not as summary statistics.
Box 2: Characteristics of statistics contained within the information services

| Service | Main statistics presented (not all) | Type of data | Time interval | Type of aggregation |
|---|---|---|---|---|
| England | Number of outpatients seen in each of four waiting-time categories | Throughput | 3 months | Specialty |
| | Number of inpatients in each of five waiting-time categories | Census | On census date | Specialty, type of stay |
| Wales | Expected waiting time of outpatients | Throughput* | 3 months* | Specialty/procedure, surgeon, urgency |
| | Longest expected wait of inpatients | Census* | On census date* | Specialty/procedure, surgeon, type of stay |
| New South Wales | Median and 90th percentile waiting times of inpatients | Throughput | 12 months | Procedure, surgeon, urgency |
| Queensland | Number of inpatients in each of 4, 3 or 2 waiting-time categories (depending upon urgency) | Census | On census date | Specialty, urgency |
| Western Australia | Median waiting times of inpatients | Not stated | Not stated | Procedure, surgeon, urgency |
| British Columbia | Median waiting time of inpatients | Throughput | 3 months | Specialty/procedure, surgeon, urgency, type of stay |

* Not stated explicitly, but suggested by the text.
- David A Cromwell1
- David A Griffiths2
- Irene A Kreis3
- University of Wollongong, Wollongong, NSW.
Competing interests: None identified.
- 1. Kent H. Waiting-list web site "inaccurate" and "misleading" BC doctors complain. CMAJ 1999; 161: 181-182.
- 2. Whelan J. Surfing for quicker surgery. Sydney Morning Herald 2000; May 12: 4 (col 1).
- 3. NHS Executive. Waiting lists and waiting times data for England. <http://www.performance.doh.gov.uk/waitingtimes/>. Accessed 22 October 2001, link updated 13 September 2005.
- 4. Health of Wales Information Service. NHS Wales waiting time information service for GPs and patients. <http://www.hsw.wales.nhs.uk/ipd/homepage.htm>. Accessed 22 October 2001.
- 5. Ministry of Health Government of British Columbia. Surgical waiting list registry. <http://www.healthservices.gov.bc.ca/waitlist>. Accessed 22 October 2001.
- 6. NSW Health. NSW Health Waiting Times Information. <http://www.health.nsw.gov.au/waitingtimes>. Accessed 22 October 2001.
- 7. Queensland Health. Elective surgery waiting list report as at 1 July 2001. <http://www.health.qld.gov.au/publications/eswlr.pdf>. Accessed 22 October 2001.
- 8. Health Department of Western Australia. Welcome to Central Wait List Bureau. <http://www.health.wa.gov.au/cwlb>. Accessed 22 October 2001.
- 9. Nicholl J. Comparison of two measures of waiting times. BMJ 1989; 296: 65.
- 10. Don B, Lee A, Goldacre MJ. Waiting list statistics III. Comparison of two measures of waiting times. BMJ 1987; 295: 1247-1248.
- 11. Morgenstern H. Ecologic studies. In: Rothman KJ, Greenland S, editors. Modern epidemiology. 2nd ed. Philadelphia: Lippincott-Raven, 1998.
- 12. Altman DG. Practical statistics for medical research. London: Chapman and Hall, 1991.
- 13. Smith T. Waiting times: monitoring the total postreferral wait. BMJ 1994; 309: 593-596.
- 14. Worthington DJ. Queueing models for hospital waiting lists. J Oper Res Soc 1987; 38: 413-422.
Abstract
Objectives: To assess Web-based waiting time information services to identify how they aimed to meet the information needs of patients and general practitioners, and to evaluate how well waiting time information was presented.
Design: A cross-sectional survey of six government websites in English-speaking countries with publicly funded hospitals. Sites were evaluated on the clarity of instructions about how their information should be used, and the accuracy of the statistics they contained was assessed indirectly using methodological criteria.
Results: The services were judged to encourage GPs and patients to use the statistics to avoid surgical units with long waiting times, but overall advice was poor. Services did not state whether the statistics predicted expected waiting times, and just one stated that the statistics were only intended as a guide. Statistics were based on different types of data, and derived at different levels of aggregation, raising questions of accuracy. Most sites explained waiting list terms, but provided inadequate advice on the uncertainty associated with making statistical inferences.
Conclusions: GPs and patients should use Web-based waiting time information services cautiously because of a lack of guidance on how to appropriately interpret the presented information.