MJA

Testing medical school selection tests

Chris McManus and David Powis
Med J Aust 2007; 186 (3). doi: 10.5694/j.1326-5377.2007.tb00832.x
Published online: 5 February 2007

Why is so little known about what works in selecting medical students?

In the past decade, some 15 000 students entered Australian medical schools, and in the United Kingdom, four times that number were admitted. Such large numbers should imply that much is known about what to select on, how to select and whom to select. The sad reality is that surprisingly little is known. Instead, strongly held opinions are rife, inertia predominates, and change occurs more because of necessity, external pressure, political force or mere whim than because of coherent evidence-based policy or theorising. Selection sometimes seems designed more to ensure the correct number of entrants on day one than to identify those best suited to the course and the profession.

As if to illustrate the problem, the University of Adelaide recently reduced its emphasis on selection interviews, the University of Sydney extended its use of interviews, the University of Queensland may be ending interviews, and a meta-analysis in Medical Teacher suggested that selection interviews have only “modest” predictive validity and “little” or “limited” practical value.1 However, interviews differ in many ways, and although the meta-analysis found no moderating effect of factors such as interview method, structure, training, or scoring, some forms of interview may still be valid, as has been found outside of medicine,2 particularly for situational interviewing.3
