Understanding how to interpret diagnostic test accuracy studies is a key skill that health practitioners need to develop in order to undertake evidence‐based practice.1 In this article we guide the reader through how to interpret a diagnostic test accuracy study, including the potential for bias. In subsequent articles we will discuss how diagnostic tests may be applied in clinical practice and consider key concepts in population screening and overdiagnosis.
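The core quantities such a study reports can be sketched numerically. The following is a minimal illustration (not drawn from the article itself) of computing sensitivity, specificity and likelihood ratios from a hypothetical 2×2 table of index test results against a reference standard; all counts are invented for illustration.

```python
# Invented 2x2 table: index test result vs reference standard
tp, fp = 90, 30    # test positive: with disease / without disease
fn, tn = 10, 170   # test negative: with disease / without disease

sensitivity = tp / (tp + fn)               # proportion of diseased who test positive
specificity = tn / (tn + fp)               # proportion of non-diseased who test negative
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

print(f"Sensitivity: {sensitivity:.2f}")   # 0.90
print(f"Specificity: {specificity:.2f}")   # 0.85
print(f"LR+: {lr_pos:.1f}")                # 6.0
print(f"LR-: {lr_neg:.2f}")                # 0.12
```

A likelihood ratio above 1 shifts the post-test probability of disease upwards; how far depends on the pre-test probability, a point taken up when applying tests in clinical practice.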
- 1. Albarqouni L, Hoffmann T, Straus S, et al. Core competencies in evidence‐based practice for health professionals: consensus statement based on a systematic review and Delphi survey. JAMA Netw Open 2018; 1: e180281.
- 2. Irwig L, Tosteson ANA, Gatsonis C, et al. Guidelines for meta‐analyses evaluating diagnostic tests. Ann Intern Med 1994; 120: 667–676.
- 3. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. BMJ 2015; 351: h5527.
- 4. Mallouhi A, Felber S, Chemelli A, et al. Detection and characterization of intracranial aneurysms with MR angiography: comparison of volume‐rendering and maximum‐intensity‐projection algorithms. AJR Am J Roentgenol 2003; 180: 55–64.
- 5. Bossuyt PM, Irwig L, Craig J, Glasziou P. Comparative accuracy: assessing new tests against existing diagnostic pathways. BMJ 2006; 332: 1089–1092.
- 6. Miller TD, White PM, Davenport RJ, et al. Screening patients with a family history of subarachnoid haemorrhage for intracranial aneurysms: screening uptake, patient characteristics and outcome. J Neurol Neurosurg Psychiatry 2012; 83: 86.
- 7. Leeflang MM, Bossuyt PM, Irwig L. Diagnostic test accuracy may vary with prevalence: implications for evidence‐based diagnosis. J Clin Epidemiol 2009; 62: 5–12.
- 8. Choi BC. Slopes of a receiver operating characteristic curve and likelihood ratios for a diagnostic test. Am J Epidemiol 1998; 148: 1127–1132.
- 9. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982; 143: 29–36.
- 10. Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y. Chapter 10: Analysing and presenting results. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Handbook for systematic reviews of diagnostic test accuracy, version 1.0. The Cochrane Collaboration, 2010. https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/Chapter%2010%20-%20Version%201.0.pdf (viewed Nov 2019).
- 11. Bossuyt P, Davenport C, Deeks J, et al. Chapter 11: Interpreting results and drawing conclusions. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Handbook for systematic reviews of diagnostic test accuracy, version 0.9. The Cochrane Collaboration, 2013. https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/DTA%20Handbook%20Chapter%2011%20201312.pdf (viewed Nov 2019).
- 12. Straus SE, Glasziou P, Richardson WS, Haynes RB. Evidence‐based medicine: how to practice and teach EBM. Amsterdam: Elsevier, 2019.
- 13. Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM. Systematic reviews of diagnostic test accuracy. Ann Intern Med 2008; 149: 889–897.
- 14. Whiting PF, Rutjes AW, Westwood ME, Mallett S. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol 2013; 66: 1093–1104.
- 15. Walter SD, Macaskill P, Lord SJ, Irwig L. Effect of dependent errors in the assessment of diagnostic or screening test accuracy when the reference standard is imperfect. Stat Med 2012; 31: 1129–1138.
Series editors
Katy Bell receives funding from the National Health and Medical Research Council of Australia via a Centre of Research Excellence grant (1104136) and a project grant (1163054).
No relevant disclosures.