The recent pandemic (H1N1) 2009 influenza outbreak has highlighted the importance of timely surveillance data for monitoring epidemiological trends and guiding public health control measures.1 High-quality surveillance data are needed to gauge the timing and peak of the influenza season, and collecting them is an important pandemic preparedness activity.
Laboratory-confirmed influenza became notifiable in most Australian states and territories in 2001,2 and is now nationally notifiable. In theory, it should now be possible to compare influenza activity across the country. States and territories also conduct surveillance for influenza-like illness (ILI) during the influenza season using sentinel sites. Results from one such system have provided important early findings about pandemic (H1N1) 2009 influenza.1 However, the type of data and the way they are collected vary throughout the nation, resulting in a fragmented surveillance system.3 Comprehensive sentinel systems require committed general practitioners and a concerted effort to establish and maintain. Australia should aspire to a uniform, national sentinel surveillance system, although funding and long-term maintenance issues would need to be addressed. Alternative methods of capturing influenza information include the online survey “FluTracking”4 and Google’s “Flu Trends”.5 The former currently lacks national coverage, and neither system incorporates laboratory confirmation, meaning that “false alarms” caused by other respiratory viruses may occur.
According to national surveillance data on laboratory-confirmed influenza, Queensland has had the most severe influenza seasons of all Australian states and territories in recent years (Box, A).7,8 Australian data available at Google’s Flu Trends do not show corresponding differences in ILI activity among the states,5 and there is no clear reason why Queensland should consistently suffer disproportionate effects of influenza compared with other states. It is likely, therefore, that this finding reflects information bias.
Over time, all three laboratories increased influenza testing (Box, B), but with different patterns. Queensland had the highest number of tests each year, with a consistent increase over the 5 years, while Victoria and Western Australia (WA) showed slower growth but stepwise increases in 2007. A severe influenza season in 2007 saw deaths reported in healthy children across the country,9 including in Queensland10 and three early-season deaths of young children in WA.11 In that year, all three laboratories reported increased numbers of laboratory-confirmed cases of influenza (Box, A). The consistently higher and increasing test numbers in Queensland may be due to several factors, including active promotion of influenza testing among GPs by public health authorities,12 increased use of point-of-care testing, and the widespread availability of highly sensitive molecular testing, with rapid turnaround, at both public and private laboratories.
Each state’s data show concordance between the amount of testing performed and the number of positive results (Box, A and B). One method of compensating for the effect of testing behaviour is to calculate the proportion of positive results, reducing the influence of the number of tests performed on absolute counts. Regular calculation of this value shows a remarkable correlation in the timing and peak of the season between the laboratories (Box, C); this correlation is independent of variations in testing (Box, B). Source of notification (inpatient, outpatient, or sentinel surveillance) was not available for each laboratory, but where it was, removing sentinel specimens made no difference to the conclusions drawn (data not shown).
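To make the calculation concrete, the following is a minimal sketch in Python of how monthly positivity could be derived from individual test results. The record format and field names are our assumptions for illustration, not any laboratory’s actual schema.

```python
from collections import defaultdict

def monthly_positivity(records):
    """Proportion of influenza tests that are positive, per month.

    `records` is an iterable of (month, is_positive) pairs, where `month`
    is a 'YYYY-MM' string. This record format is an illustrative
    assumption, not a prescribed laboratory schema.
    """
    tests = defaultdict(int)      # total tests per month
    positives = defaultdict(int)  # positive tests per month
    for month, is_positive in records:
        tests[month] += 1
        if is_positive:
            positives[month] += 1
    return {m: positives[m] / tests[m] for m in sorted(tests)}

# Hypothetical example: 3 of 40 tests positive in May, 25 of 60 in August
records = ([("2007-05", True)] * 3 + [("2007-05", False)] * 37
           + [("2007-08", True)] * 25 + [("2007-08", False)] * 35)
print(monthly_positivity(records))
# {'2007-05': 0.075, '2007-08': 0.4166...}
```

Because the denominator is carried alongside the numerator, a month with heavy testing and a month with light testing become directly comparable.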
There were different patterns in the proportion of annual state notifications that each laboratory provided (Box, D).13 Queensland Health (QH) laboratories had the largest increase in testing, but their contribution to state notifications was low and flat over the period, consistent with substantial and increasing testing in private laboratories. The contribution of the Victorian Infectious Diseases Reference Laboratory (VIDRL) was initially high but fell quickly, then stepwise, over the 5 years, suggesting an increasing role for other public and private providers. PathWest provided about half of the WA notifications early in the period, but contributed more than half in 2007, probably because of increased testing related to high influenza activity and community concern fuelled by the childhood deaths.
We propose that negative influenza test results be made notifiable, to allow monitoring of testing behaviour and calculation of the proportion of test results that are positive. Influenza-negative notifications would be reported with the same set of basic demographic and test data (such as age, sex and type of test performed) that is notified with positive results.
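As an illustration only, such a notification record might contain fields along the following lines; the field names and types below are our assumptions, not a prescribed national format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InfluenzaTestNotification:
    """Illustrative record for notifying an influenza test result.

    The same fields would accompany both positive and negative results;
    the names here are assumptions, not a prescribed national schema.
    """
    specimen_date: date
    age_years: int
    sex: str          # e.g. "F", "M"
    test_type: str    # e.g. "PCR", "culture", "point-of-care antigen"
    result: bool      # True = influenza detected, False = not detected
    laboratory: str   # reporting laboratory identifier
```

The key design point is that a negative result carries the same structure as a positive one, so existing notification pathways need only stop filtering on the result field.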
Notification of influenza-negative test data will not cure all the ills of influenza surveillance (no surveillance mechanism is perfect), but it would provide improved, nationally consistent data and could be implemented easily and quickly. This may be of particular value given concern about an expected resurgence of pandemic influenza in 2010. Total test numbers could be captured from laboratories’ electronic databases, minimising implementation costs and providing a sustainable ongoing data source. Public health units would require only an initial outlay of time to modify the receipt and handling of notifications and the associated data analysis. The total number of tests and the proportion of positive results would both be published, providing a more robust system for comparing influenza activity across time and regions.
The proportion of positive test results from sentinel practice surveillance samples has been used for monitoring overseas, including by the United Kingdom’s Health Protection Agency14 and the European Influenza Surveillance Network.15 However, such data are available only where a sentinel surveillance system is in place. The proportion of positive test results was recently used in Victoria to describe the influenza season during the pandemic (H1N1) 2009 outbreak,1 and in the United States to examine the effectiveness of influenza vaccine in preventing death in older people, using a 10% cut-off to define the season.16 Further, a Canadian study defined periods of peak influenza activity as those months in which the percentage of positive test results exceeded 7%,6 giving a mean influenza season of 3 months. Using this measure, the duration of annual influenza seasons in Queensland, Victoria and WA ranged from 2 to 4 months (Box, C).
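A season-defining rule of this kind is simple to apply once monthly positivity is available. The sketch below uses the 7% threshold from the Canadian definition cited above (10% has also been used); the data are invented for illustration.

```python
def influenza_season(positivity_by_month, threshold=0.07):
    """Months whose proportion of positive tests exceeds the threshold.

    `positivity_by_month` maps 'YYYY-MM' strings to proportions positive,
    as produced by a calculation like the earlier sketch. The 7% default
    follows the Canadian definition; a 10% cut-off has also been used.
    """
    return sorted(m for m, p in positivity_by_month.items() if p > threshold)

# Hypothetical single-year series giving a 3-month season (June to August)
positivity_2007 = {
    "2007-04": 0.02, "2007-05": 0.05, "2007-06": 0.12,
    "2007-07": 0.28, "2007-08": 0.15, "2007-09": 0.06,
}
print(influenza_season(positivity_2007))  # ['2007-06', '2007-07', '2007-08']
```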
Calculating the proportion of test results that are positive would reduce bias caused by variation in the number of tests performed, but this value may not be completely free of bias itself. For example, in many laboratories, including our own,17,18 specimens submitted for any respiratory virus polymerase chain reaction (PCR) test are subjected to a panel of assays. During an outbreak of another respiratory virus, such as respiratory syncytial virus (RSV), influenza testing (along with testing for all viruses on the panel) may therefore increase; the inflated denominator would lower the proportion of positive influenza results even if influenza activity were unchanged. Such a reduction, caused by a testing artefact, would be misleading, but we argue that this bias would need to be persistent, strong and unrecognised to cause problems as serious as those caused by interpreting notification data that lack negative test values.
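A worked example makes the dilution effect explicit; all numbers here are invented for illustration.

```python
# Baseline month: 1,000 respiratory panels tested, 100 influenza-positive.
baseline_positivity = 100 / 1000   # 10% positive

# RSV outbreak month: panel volume doubles to 2,000, but community
# influenza activity (and hence the number of positives) is unchanged.
outbreak_positivity = 100 / 2000   # 5% positive

# The apparent halving of influenza activity is produced entirely by the
# testing artefact, not by any change in influenza itself.
print(baseline_positivity, outbreak_positivity)  # 0.1 0.05
```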
High-quality epidemiological surveillance is the cornerstone of monitoring seasonal activity of influenza and identifying new trends, including the emergence of potentially pandemic strains. It seems remarkable that, given the importance of these data, we currently have no way to account for regional or temporal variations in the number of tests performed.
Complex models have been suggested for monitoring influenza surveillance data in real time.14 There are many ways in which influenza surveillance could be improved in Australia, such as implementing a uniform, nationwide, laboratory-supported sentinel surveillance scheme based in general practices and hospitals, and reporting influenza-related mortality in key age groups. We strongly support such proposals, but implementing negative-test result reporting should not be deferred while other reforms are being considered.
Box: Influenza testing by laboratory, 2004–2008

[Figure omitted] A: monthly number of positive tests; B: monthly number of tests performed; C: monthly proportion of positive test results; D: proportion of annual state notifications provided by each laboratory.
Abstract
Laboratory-confirmed influenza is a nationally notifiable disease in Australia. According to notification data, Queensland has experienced more severe influenza seasons than other states and territories. However, raw notification counts ignore available denominator data: the number of laboratory tests performed.
We propose that negative results of laboratory tests for influenza be made notifiable, alongside laboratory-confirmed disease, and used to calculate the proportion of positive test results in real time.
Using data from the public health pathology services of three Australian states (Queensland Health laboratories, the Victorian Infectious Diseases Reference Laboratory and Western Australia’s PathWest) for 2004 to 2008, we show that incorporating influenza-negative test data into national surveillance would improve our understanding of influenza epidemiology.