
Media reporting on research presented at scientific meetings: more caution needed

Steven Woloshin and Lisa M Schwartz
Med J Aust 2006; 184 (11): 576-580. doi: 10.5694/j.1326-5377.2006.tb00384.x
Published online: 5 June 2006

Abstract

Objective: To examine media stories on research presented at scientific meetings to see if they reported basic study facts and cautions, and whether they were clear about the preliminary stage of the research.

Design and setting: Three physicians with clinical epidemiology training analysed front-page newspaper stories (n = 32), other newspaper stories (n = 142), and television/radio stories (n = 13) identified in LexisNexis and ProQuest searches for research reports from five scientific meetings in 2002–2003 (American Heart Association, 14th Annual International AIDS Conference, American Society of Clinical Oncology, Society for Neuroscience, and the Radiological Society of North America).

Main outcome measures: Media reporting of basic study facts (size, design, quantification of results); cautions about study designs with intrinsic limitations (animal/laboratory studies, studies with < 30 people, uncontrolled studies, controlled but not randomised studies) or downsides (adverse effects in intervention studies); warnings about the preliminary stage of the research presented at scientific meetings.

Results: 34% of the 187 stories did not mention study size, 18% did not mention study design (another 35% were so ambiguous that expert readers had to guess the design), and 40% did not quantify the main result. Only 6% of news stories about animal studies mentioned their limited relevance to human health; 21% of stories about small studies noted problems with the precision of the finding; 10% of stories about uncontrolled studies noted it was not possible to know if the outcome really related to the exposure; and 19% of stories about controlled but not randomised studies raised the possibility of confounding. Only 29% of the 142 news stories on intervention studies noted the possibility of any potential downside. Twelve stories mentioned a corresponding “in press” medical journal article; two of the remaining 175 noted that findings were unpublished, might not have undergone peer review, or might change.

Conclusions: News stories about scientific meeting research presentations often omit basic study facts and cautions. Consequently, the public may be misled about the validity and relevance of the science presented.

Scientific meetings are an important forum for researchers to exchange new ideas and present work in progress. Although this open exchange is a desirable and important part of the scientific process, much of the work presented is not ready for public consumption.

Nevertheless, scientific meeting research receives extensive media coverage and can influence clinical practice. This is troubling, as early results may change substantially by the time the final report is published. Furthermore, the results may never be published, as promising hypotheses fail to pan out or important methodological issues emerge. A quarter of meeting presentations garnering media attention (including on page 1 of major newspapers) are never published in the medical literature.

Although there are many anecdotal complaints about how well the media cover scientific meetings, we found no published, systematic evaluation. We analysed media coverage of scientific meetings and asked:

  • Are basic study facts reported?

  • Are cautions about inherent study weaknesses noted?

  • Are news stories clear about the preliminary stage of the research?

Methods
Sample
Sample frame

We analysed media coverage of research presentations at five scientific meetings previously identified as “high profile” (ie, likely to attract media attention) on the basis of advice from science writers and editors, and of media database searches. The five meetings were the American Heart Association, the International AIDS Conference, the American Society of Clinical Oncology, the Society for Neuroscience, and the Radiological Society of North America.

Search strategy

We identified media coverage of these five meetings in 2002–2003 (there was no International AIDS Conference in 2003, so we included the 2002 meeting; all other meetings were held in 2003) by searching two media databases — LexisNexis (“guided news search” option of English-language news reports in major international media outlets) and ProQuest (for the Wall Street Journal) — for stories appearing within 2 months of each meeting. We conducted full-text searches with appropriate wildcards for combinations of phrases: (name of meeting) w/10 (scientific session OR conference OR meeting). (The “w/10” parameter identifies news stories in which “scientific session”, “conference” or “meeting” appears within 10 words before or after the name of each meeting.)
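To make the proximity logic concrete, the sketch below approximates a “w/10” search in Python over a story’s text. It is a simplified illustration of the strategy described above, not the actual database query: the function and its name are ours, the example uses single-word terms for brevity, and LexisNexis applies its own tokenisation and wildcard handling.

import re

def within_n_words(text, meeting, terms, n=10):
    # Approximate a LexisNexis-style "w/10" search: does any term occur
    # within n words before or after the meeting name?
    tokens = re.findall(r"\w+", text.lower())
    name = meeting.lower().split()
    starts = [i for i in range(len(tokens) - len(name) + 1)
              if tokens[i:i + len(name)] == name]
    for start in starts:
        end = start + len(name) - 1
        window = tokens[max(0, start - n):start] + tokens[end + 1:end + 1 + n]
        if {t.lower() for t in terms} & set(window):
            return True
    return False

story = "Researchers told the annual meeting of the American Heart Association that ..."
print(within_n_words(story, "American Heart Association",
                     ["conference", "meeting"]))  # True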

We identified 210 potentially eligible newspaper stories and 20 nationally syndicated television or radio transcripts from the United States or Canada that reported on research presented at these scientific meetings (ie, not general reports about the meetings or policy statements).

Inclusion and exclusion criteria

We included all news stories reporting on a single research presentation and stories reporting on more than one presentation if at least one of the presentations related to the story’s headline. We did this to avoid excluding in-depth stories focused on a single presentation that just happened to mention another presentation, and to avoid including stories that provided only one or two sentences about multiple presentations (eg, “What’s happening at this year’s RSNA”, listing a number of presentations). If the headline related to more than one research presentation, we only coded the first presentation mentioned.

After reviewing all potentially eligible stories, we coded the 174 newspaper stories and 13 television/radio transcripts meeting our criteria. To fully report what the public is exposed to, we did not exclude wire reports (which are edited and given headlines at the newspaper’s discretion).

Outcome measures

We used an explicit coding scheme (available at: http://www.vaoutcomes.org/research_tools.php) to analyse each news story. The one-page coding scheme was organised into the following three domains.

Basic study facts: Did the news story report study size, identify study subjects (eg, cells, animals, live humans) and study design, and quantify main results (and if so, were any absolute risks reported)? Most coding choices were framed as yes/no questions, although in some cases coders were asked if the information was stated explicitly or if they had to guess.

Cautions: Were relevant cautions provided about study designs with intrinsic limitations (ie, animal/lab studies, small [human] studies, uncontrolled studies, and controlled but not randomised studies), and were possible downsides noted for intervention studies?

Coders first indicated whether news stories noted (explicitly or by implication) any cautions about study design (eg, that the study was limited because of its small size) or about the interpretation of study results (eg, the study was preliminary and no one should change behaviour based on the findings). We further coded all caution statements to see if they addressed the key limitations intrinsic to particular study designs. Box 1 defines the study design-specific cautions we looked for (eg, did news stories on animal studies highlight that the results might not apply to humans). We applied these definitions liberally to give credit for any effort to provide the caution, even if the relevant statement was subtle. For example, a news story about an observational study was given credit for raising the possibility of confounding because it included the statement “this link is not proven by this study”.
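For concreteness, the coding items can be pictured as a simple structure like the Python sketch below. The item names and groupings are ours, condensed from the description above; they are illustrative, not the authors’ actual one-page form.

CODING_SCHEME = {
    "basic_facts": [
        "reports_study_size",         # yes/no
        "identifies_study_subjects",  # cells, animals, live humans
        "states_study_design",        # explicit / coder had to guess / absent
        "quantifies_main_result",     # and, if so, whether absolute risks given
    ],
    "cautions": [
        "animal_results_may_not_apply_to_humans",
        "small_study_imprecision_noted",
        "lack_of_control_group_noted",
        "possible_confounding_noted",
        "intervention_downsides_noted",
    ],
    "preliminary_stage": [
        "notes_unpublished",
        "notes_no_peer_review",
        "notes_results_may_change",
    ],
}

def blank_code_sheet():
    # One blank yes/no sheet per news story, filled in by a human coder
    return {domain: {item: None for item in items}
            for domain, items in CODING_SCHEME.items()}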

Box 1

Cautions that should be highlighted

Preliminary stage of research: Were there warnings about the preliminary stage of the research? By preliminary, we meant research presentations not associated with an “in press” or published journal article at the time of the meeting. Specifically, we looked for statements that the preliminary research was unpublished, had not undergone peer review, might change as the study matured, or might not represent the final study results.

Coding reliability and analysis

Two physicians with clinical epidemiology training independently coded all news stories (they were blinded to the name of the newspaper, TV or radio source and the journalist). One coder was a study author (L M S). The other coder was a clinician hired for this purpose who was not involved in any other aspect of the study. We assessed the inter-rater reliability of the coding scheme using the κ statistic. κ values ranged from 0.74 to 1.0, with a mean of 0.88 (“almost perfect” agreement). The other study author (S W) served as the tie-breaker: he independently coded all items on which the coders disagreed, and his codes were used as the final codes.
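As an illustration of the agreement statistic, Cohen’s κ compares the observed agreement between two coders with the agreement expected by chance. The sketch below computes it by hand on invented yes/no codes for 10 stories; the data are hypothetical, and the resulting value merely happens to match the lower bound of the range reported above.

from collections import Counter

def cohen_kappa(coder_a, coder_b):
    # kappa = (p_observed - p_expected) / (1 - p_expected)
    n = len(coder_a)
    p_obs = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical yes/no codes from two raters on the same 10 stories
a = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "y"]
b = ["y", "y", "n", "y", "y", "y", "y", "n", "y", "y"]
print(round(cohen_kappa(a, b), 2))  # 0.74 on these invented data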

We used the χ2 test to compare differences in proportions. All analyses were done using Stata version 9.0 (StataCorp, College Station, Tex, USA) with α set at 0.05.
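For example, a χ2 comparison of two proportions like those in the analyses above can be reproduced as follows (using Python/scipy rather than Stata; the counts are invented for illustration).

from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: stories noting a caution vs not, in two story groups
table = [[30, 112],
         [10, 35]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # significant if p < 0.05 (alpha = 0.05)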

Results
Description of meeting presentations garnering media attention

Eighteen per cent of the newspaper stories appeared on page 1; 17% appeared in the top five circulation US newspapers (Box 2). The median length of newspaper stories was 473 words (interquartile range, 255–631 words).

Box 2

Media coverage of five major scientific meetings (number of articles)

Box 3 summarises the kinds of meeting presentations receiving media coverage. Most (89%) were identifiable as studies of live humans (9% were animal or laboratory studies; in 2% of the reports, study subjects could not be determined). Study size varied from just one subject to 75 000 subjects (median, 502); 17% of studies were small (< 30 subjects). Twenty per cent of meeting presentations were identified as being drug or industry funded. Among the 166 news stories about human studies, 39% covered observational studies (18% uncontrolled; 21% controlled). Only 3% described the most definitive kind of study: large (n ≥ 1000), human, randomised trials. About two-thirds of the 153 stories on human studies (other than surveys or unknown study designs) reported an intermediate outcome measure (eg, tumour size) rather than a patient outcome (eg, death).

Box 3

Description of the meeting presentations garnering media coverage

Basic study facts

The quality of media reporting on scientific meeting presentations is summarised in Box 4. Basic study facts were often missing. About a third of reports failed to mention study size, and 53% did not mention study design or were so ambiguous that expert readers could not determine the design with any certainty.

Box 4

Quality of media reporting of basic study facts

Forty per cent of stories did not quantify the main result. Twenty-one per cent quantified the main result, but used only relative change statistics without a base rate — a format known to exaggerate the perceived magnitude of findings.
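A worked example (with invented numbers) shows how the relative-only format exaggerates:

control_rate = 0.02   # 2 in 100 control subjects have the outcome
treated_rate = 0.01   # 1 in 100 treated subjects have the outcome

relative_reduction = (control_rate - treated_rate) / control_rate   # 0.50
absolute_reduction = control_rate - treated_rate                    # 0.01

print(f"Relative: risk cut by {relative_reduction:.0%}")    # "risk cut by 50%"
print(f"Absolute: {absolute_reduction:.0%} fewer events")   # "1% fewer events"
# "Halves the risk" sounds dramatic; with the base rate, it is 1 fewer event per 100.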

Cautions

Important cautions about study designs with intrinsic limitations were rarely noted. Only 6% (1/17) of news stories about animal studies included a statement that the results might not apply to human health. Only 21% (5/24) of news stories about studies involving fewer than 30 people alerted readers to the imprecision of small studies. Ten per cent (3/31) of news stories about uncontrolled studies noted (or implied) that without a control group it is not possible to know if the outcome really relates to the exposure, and 19% (7/36) of stories about controlled but not randomised studies raised the possibility of confounding. Cautions about possible downsides of interventions were also missing: 142 news stories covered intervention studies, but only 29% noted any possible downsides (eg, side effects or other harms) or stated that there were none.

Box 5 lists cautions reported about study designs with intrinsic limitations. Although any attempt at caution is worthwhile, some are more helpful than others. Vague cautions (eg, “a larger, follow-up study is planned” about an uncontrolled study of 11 patients) may not raise sufficient concern about whether the findings are really true (or worth acting on). More specific and explicit cautions, like those reported in the news story “Cholesterol drugs cut cancer risk”, which raises the possibility of confounding and the need for a definitive study before acting on the results, are likely to be more helpful to readers.

Box 5

Selected cautions in news reports about study designs with important limitations

Preliminary stage of research

The preliminary stage of the research presented was rarely noted: 12 stories mentioned a corresponding “in press” medical journal article, but only two of the remaining 175 noted that the findings were unpublished, might not have undergone peer review, or might change as the study matured. The two statements about the preliminary stage of the research were “full results have been submitted for peer review to a scientific journal” and “[the drug company] will analyze the results in 3 years to see if the vaccine actually prevents HIV infection”. (Of note, when that analysis was later done, it turned out that the vaccine did not prevent HIV infection.)

Discussion

Work presented at scientific meetings is generally not ready for public consumption: results change, fatal problems emerge, and hypotheses fail to pan out. Nonetheless, the presentations are often big news. The five meetings we analysed received extensive coverage in the highest profile media outlets in the US. Unfortunately, the news stories often failed to report basic study facts and important cautions needed to avoid misleading the public about the meaning, validity and importance of the science highlighted.

Our study has limitations. First, we examined only five meetings. It is possible that the coverage of other scientific meetings might have been better; we think this unlikely, because these are extremely prominent meetings and the coverage appeared in well known media outlets. Second, as with any content analysis, some subjectivity is inherent in coding. We tried to minimise subjectivity by creating and pretesting an explicit coding scheme, using two independent coders (one blinded to our hypotheses) and reporting only elements with very high inter-rater reliability (ie, κ > 0.7). Third, we did not evaluate accuracy: our goal was to see whether key elements were reported, not whether the journalists reported correct facts.

It is not hard to understand why research presented at scientific meetings garners extensive media attention. Researchers benefit from the attention because it is a mark of academic success, their academic affiliates benefit because good publicity attracts patients and donors, and research funders (public and private) benefit when they can show a good return on their investments. The meeting organisers also benefit; extensive media coverage attracts more advertisers and higher profile scientists for the following year, guaranteeing more dramatic reports and, ultimately, more press. The importance of publicity is reflected in the fact that meeting organisers often pay more attention to courting the media (ie, issuing press releases, holding news briefings, and organising investigator interviews) than to vetting the science itself. Most importantly, the public has a strong appetite for medical news — particularly about new, “breakthrough” treatments and technologies. Scientific meetings provide the media with an easy source of such stories.

Unfortunately, the public does not always benefit from preliminary findings. When they turn out not to be true, patients can be hurt by exposure to ineffective or harmful treatments or by forgoing good alternatives. Consequently, it is important for the public to understand the inherent limitations of preliminary work.

The most direct way to improve the media coverage of scientific meetings would be to have less of it. This will not happen, of course, as too many interests are served by turning preliminary reports into health news. The next best thing would be to improve the way in which these stories are reported. Here, good reporting means providing basic study facts, highlighting cautions about study designs with intrinsic limitations, and being clear about the preliminary stage of the work under discussion.

Ideally, this effort would begin with improving the media’s sources. Press releases issued by meeting organisers, granting agencies and academic institutions should routinely include balanced data presentations (we favour tables with the absolute risks of outcomes for each study group) and study cautions. When interviewed, researchers should clearly and repeatedly note the preliminary nature of their work, the need to interpret results with caution and the importance of waiting for their work to undergo scientific peer review.

Although it is encouraging that more than half of the news stories reported basic study facts, there is much room for improvement. These facts are readily available, and journalists and their editors can ensure the facts are routinely reported. This problem is not specific to the coverage of scientific meeting presentations. For example, studies of the media coverage of medications in the US and Canada have found that 40% and 80%, respectively, of news stories failed to quantify benefit.

Highlighting cautions is more challenging, because this entails an appreciation of the inherent limitations of different study designs. In the news stories we analysed, cautions were routinely absent. When present, the cautions may have been too vague to be useful. We hope that Box 1 will help reporters (and those writing press releases for researchers, journals and meetings) focus on these issues, and that they will routinely use (or adapt) the language provided in the table to make these cautions explicit. Standardised language to describe common issues should help make the journalists’ job a little easier, and, more importantly, would help educate readers about these fundamental issues.

Of course, highlighting cautions is far less interesting than focusing on the novelty, human interest or possible implications of the research. Readers might even skip these stories altogether. But ignoring a preliminary report about a weak study is preferable to being misled.
