Over the past two decades there has been a significant change in the way Australian medical schools select their students. Where once a school leaver’s matriculation score was the predominant criterion,1 there is now a range of selection procedures for entry into school-leaver, graduate-entry and mixed-entry medical school programs. The change in selection procedures has in part been driven by a desire to assess suitability more broadly than academic performance alone, and by the need for medical schools to be socially accountable and to reduce discrimination in selection procedures.2
Medical schools generally use a combination of academic and non-academic measures to assess suitability. Academic measures consist of prior academic performance and written problem-solving tests, while non-academic measures include interviews and written work to assess applicants’ values and personal characteristics. The wide range of tools currently in use in Australia, and their benefits and limitations, are summarised in Box 1.
There is no debate that high-level academic ability is necessary to complete a medical course. The Australian Medical Education Study7 found prior academic performance (ie, matriculation scores and grade point average [GPA]) to be the main selection tool for admission to Australian medical schools. There is strong evidence that the best predictor of academic performance, both during the course and after graduation as a doctor, is prior academic performance.4,8,9
However, using academic performance as the sole selection criterion creates a bias against applicants of equal suitability from low socioeconomic backgrounds. Research has shown that this bias has its roots in secondary schooling. Disadvantaged students lack access to the courses, role models and support required to achieve their full academic potential and to develop the expectation of participating in higher education.10 In addition, academic performance does not measure an applicant’s values and personal characteristics.
To ensure that they admit a broad spectrum of students, including those from all socioeconomic backgrounds, medical schools have lowered cut-off scores for matriculation and GPA, and used other methods such as aptitude tests to rank students.3 In Australia, a consortium of medical schools, in collaboration with the Australian Council for Educational Research (ACER), purchased a selection instrument developed at the University of Newcastle (New South Wales) and produced the Undergraduate Medicine and Health Sciences Admission Test (UMAT) for entry into undergraduate medical programs (umat.acer.edu.au).
For graduate-entry schools, ACER developed the Graduate Australian Medical Schools Admissions Test (GAMSAT) (gamsat.acer.edu.au), which was modelled on the Medical College Admission Test (MCAT) (www.aamc.org/students/applying/mcat/) used in North America. Most, but not all, medical schools use either the UMAT or the GAMSAT. Interestingly, neither of these tests is fully computerised, unlike the United Kingdom Clinical Aptitude Test (UKCAT) (www.ukcat.ac.uk) developed by a consortium of British medical schools in conjunction with Pearson Educational; this test can be taken in any motor vehicle licence-testing centre in the UK.
Most Australian medical schools use interviews to assess non-academic skills and attributes. The traditional interview consists of a panel of two or three people questioning and discussing an applicant for 30–60 minutes. There is wide variation in the structuring of panel interviews, from minimally to highly structured, and they may be subject to bias,11 both towards applicants with characteristics similar to those of the interviewers and against those with different characteristics. There are also difficulties in standardising interviews so that all applicants have a similar experience.
To solve the problems associated with panel interviews, McMaster University (Canada) developed the Multiple Mini-Interview (MMI), derived from the Objective Structured Clinical Examination.4 In the MMI, applicants are interviewed across a number of stations for 5–10 minutes per station, usually with one interviewer per station. Each station is structured around a different theme and may include a group activity such as solving a problem as a group. The scoring system varies between schools, with some measuring each of their criteria at every station, while others measure only a selection at each station.12 There is also variation in whether the rating scales are anchored to descriptors or not.
Internationally, there are a number of other selection methods in use. Most notably, medical schools in the UK make extensive use of portfolios and referee reports, although these methods have been subject to criticism for the potential for impression management, and their limited ability to predict future performance.6,9,13 In addition, students from low socioeconomic backgrounds often lack the life experiences required to produce high-quality portfolios.
The World Health Organization regards medical schools as having an obligation to direct their education towards the health needs of the populations they serve.2 To meet these social commitments, universities have adjusted their selection processes to increase applications from candidates who could serve special-needs communities. A number of Australian medical schools have inbuilt systems to encourage the entry of students from areas of workforce need, most notably from rural regions, based on the premise that they are more likely to return to these areas to practise.14 In addition, federal government policy has promoted both entry of rural and regional students to medical training and, through quotas and scholarships, the requirement to work in areas of need (Medical Rural Bonded Scholarship Scheme <www.health.gov.au/mrbscholarships> and Bonded Medical Places Scheme <www.health.gov.au/bmpscheme>).
The way in which the final ranking of applicants is developed can provide a clear message to stakeholders of the relative importance of academic ability, aptitude, professional attitudes and social accountability. The approach to ranking varies significantly between medical schools, with policy being driven by a complex interplay of local, regional and national priorities.15 Some medical schools use a proportion of all scores in developing a final ranking, while others use certain measures as cut-off scores only and rely more on interviews to rank students.
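The contrast between ranking on a weighted composite of all scores and using some measures only as hurdles can be sketched as follows. This is an illustrative sketch only: the applicants, percentiles, weights and cut-offs are invented for the example and do not reflect any school's actual selection policy.

```python
# Two common ways of turning selection scores into a final ranking,
# using hypothetical applicants and hypothetical weights/cut-offs.

applicants = [
    # (name, GPA percentile, aptitude-test percentile, interview percentile)
    ("A", 99, 70, 60),
    ("B", 85, 90, 95),
    ("C", 80, 95, 85),
    ("D", 95, 60, 40),
]

# Approach 1: a weighted composite of all measures (illustrative weights).
WEIGHTS = (0.4, 0.3, 0.3)

def composite(app):
    _, gpa, aptitude, interview = app
    return WEIGHTS[0] * gpa + WEIGHTS[1] * aptitude + WEIGHTS[2] * interview

by_composite = sorted(applicants, key=composite, reverse=True)

# Approach 2: academic measures act as hurdles (cut-off scores) only;
# eligible applicants are then ranked by interview score alone.
GPA_CUTOFF, APTITUDE_CUTOFF = 80, 65
eligible = [a for a in applicants
            if a[1] >= GPA_CUTOFF and a[2] >= APTITUDE_CUTOFF]
by_interview = sorted(eligible, key=lambda a: a[3], reverse=True)

print([a[0] for a in by_composite])   # composite ranking
print([a[0] for a in by_interview])   # hurdle-then-interview ranking
```

Even on these toy numbers, the two approaches place the academically strongest applicant (A) quite differently, which is the sense in which the choice of ranking method signals the relative importance a school attaches to each criterion.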
There is substantial international interest in demonstrating the robustness and equity of medical school selection processes. Recent publications have reported on the utility, reliability and validity of the various test formats, particularly those for assessing non-academic criteria.12,16-19 Most of the research on selection comes from North America, with studies exploring the predictive validity of the MCAT, the GPA and, to a lesser extent, other aspects of the selection process. There have been fewer studies elsewhere, but increasingly researchers in Australia, the UK and the Netherlands are publishing in this area.
Research in student selection is replete with methodological difficulties and these are outlined in Box 2. The major issue is that there are few opportunities for truly randomised trials, as only successful applicants are available to be studied. As a result, research predominantly consists of correlation studies, examining how well selection scores correlate with desired outcomes in the medical course. Correlation studies are not able to infer causation, but are powerful if the correlation represents a significant proportion of common variance. Because selection processes choose only the high-scoring applicants, there is significant restriction of range, leading to weak correlations between student selection scores and in-program assessment. This range restriction can be adjusted for by using well established statistical techniques,20 but can never be completely overcome.
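The restriction-of-range effect, and the kind of statistical adjustment referred to above, can be illustrated with a small simulation. This is a sketch on synthetic data, using the widely used Thorndike Case II correction; the numbers are illustrative and not drawn from any actual selection cohort.

```python
import math
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def sd(xs):
    """Sample standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))

def correct_for_range_restriction(r_restricted, sd_full, sd_restricted):
    """Thorndike Case II correction: estimate the correlation in the full
    applicant pool from the correlation observed among selected students."""
    k = sd_full / sd_restricted
    return (r_restricted * k) / math.sqrt(1 + r_restricted ** 2 * (k ** 2 - 1))

random.seed(1)
# Simulate an applicant pool: selection score plus a noisy course outcome,
# constructed so the true pool-wide correlation is about 0.6.
applicants = []
for _ in range(5000):
    score = random.gauss(0, 1)
    outcome = 0.6 * score + 0.8 * random.gauss(0, 1)
    applicants.append((score, outcome))

# Admit only the top-scoring ~20% of applicants (restriction of range).
cutoff = sorted(s for s, _ in applicants)[int(0.8 * len(applicants))]
admitted = [(s, o) for s, o in applicants if s >= cutoff]

r_full = pearson_r([s for s, _ in applicants], [o for _, o in applicants])
r_admitted = pearson_r([s for s, _ in admitted], [o for _, o in admitted])
r_corrected = correct_for_range_restriction(
    r_admitted,
    sd([s for s, _ in applicants]),
    sd([s for s, _ in admitted]),
)
print(f"full pool r = {r_full:.2f}, admitted-only r = {r_admitted:.2f}, "
      f"corrected r = {r_corrected:.2f}")
```

The correlation observed among admitted students alone is markedly weaker than in the full pool, and the correction recovers an estimate close to the pool-wide value; as the text notes, however, such corrections rely on assumptions and can never fully substitute for unrestricted data.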
With relatively limited sample sizes and significant variation in selection processes across universities and over time, large multicentre studies of selection processes have, so far, not been possible in Australia. However, there is some interest in using the large databases derived from UMAT and GAMSAT scores, and integrating these with data available from the Medical Student Outcome Database and Longitudinal Tracking Project (www.medicaldeans.org.au/projects-activities/msod).
Published research that takes methodological factors into consideration supports the use of selection processes that include non-academic criteria.9,21 For example, both national and international research shows that MMIs are reliable,4,19 and there is early evidence of their ability to predict future performance.4 A recent Australian study showed that providing skills-based training to interviewers significantly decreased variance in scoring of MMIs.22 Research evidence also shows that the correlation between scores in measures of non-academic performance and clinical performance increases during a medical course.23
One of the limitations of selection research has been restricting investigation to individual tools while neglecting the overall process of selection. It has at least been shown that a structured selection process provides better results than random selection by lottery, a method that has been used in the Netherlands.24 However, there has been no research into the optimal mix of instruments and how scores are best combined. It is unlikely there will be a single answer, as the context in which selection occurs significantly influences the type(s) of students selected for a given program.
Selection into medicine is a high-stakes activity and individual applicants are highly motivated to maximise their chances of selection. The possibility of candidates receiving coaching is an issue for all selection committees, although evidence about its effectiveness is mixed; notably, one study found no impact of coaching on interview performance in the Australian context.25
“Faking good”, that is, candidates giving false answers to present themselves in a favourable light, is a particular problem in interviews and also for personality tests.26 Previous studies have shown that this can affect hiring decisions in business,27 but its impact on selection into medicine has not yet been investigated. It is unlikely that applicants with personality disorders, who will probably demonstrate unprofessional behaviour during the medical course or as practising clinicians, will be detected by existing selection processes. As major psychopathological conditions are best diagnosed by a psychiatric interview, it is not feasible, practically or financially, to use this method in selection processes; rather, this is best dealt with by sustained observation of students across the course. As a consequence, medical schools are developing rigorous systems for remediating or excluding students whose behaviour is persistently unprofessional.28
Recent research has focused on broadening the repertoire of tools and processes used for selection. Personality testing, for example, is being explored as a tool for medical school selection, based on the assumption that it is easier and more reliable to use a written psychometric test than conduct an interview.29 The critical issue for clarification is the relationship between an applicant’s personality traits and their subsequent performance, as there is a requirement for a variety of personality types to meet the demands of the different careers within the profession. So far, little correlation has been found between interview scores and the “Big Five” personality traits — openness, conscientiousness, agreeableness, extroversion and neuroticism.30 Specific instruments, such as the Personal Qualities Assessment (www.pqa.net.au),5 have been trialled and will be used in selection for entry into an Australian medical school this year.
The concept of using centralised assessment centres for selection has been developed by organisational psychologists and personnel experts for a range of managerial and non-managerial occupations.31 An assessment centre allows an applicant to participate in various simulated activities and be observed by several trained assessors. The advantages are enhanced standardisation and greater efficiency for applicants. Selection centres have been extensively used for recruitment for postgraduate positions in the UK,32 and more recently in Australia,33 but not as yet for Australian medical school selection.
Box 2: Problems with correlation studies of selection criteria
Academic versus non-academic outcome measures
- Non-academic selection criteria correlate poorly with academic outcomes16
- Non-academic outcome measures lack precision and vary across time19
- Correlation measures need a full range of scores for both the predictor and the outcome20
- Selection processes choose only the best applicants, thus restricting the range of scores and weakening correlations of student selection scores with in-program assessment20
- Various methods of correcting for range restriction are available20
Provenance: Commissioned; externally peer reviewed.
- Ian G Wilson1
- Christopher Roberts2
- Eleanor M Flynn3
- Barbara Griffin4
- 1 Medical Education Unit, University of Western Sydney, Sydney, NSW.
- 2 Northern Clinical School, University of Sydney, Sydney, NSW.
- 3 Medical Education Unit, University of Melbourne, Melbourne, VIC.
- 4 Department of Psychology, Macquarie University, Sydney, NSW.
Series Guest Editor: Jennifer J Conn, MB BS, FRACP, MClinEd
No relevant disclosures.
- 1. Marley J, Carman I. Selecting medical students: a case report of the need for change. Med Educ 1999; 33: 455-459.
- 2. Boelen C. Adapting health care institutions and medical schools to societies’ needs. Acad Med 1999; 74 (8 Suppl): S11-S20.
- 3. Turnbull D, Buckley P, Robinson JS, et al. Increasing the evidence base for selection for undergraduate medicine: four case studies investigating process and interim outcomes. Med Educ 2003; 37: 1115-1120.
- 4. Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med 2004; 79 (10 Suppl): S40-S42.
- 5. Lumsden M, Bore M, Jack R, Powis D. Assessment of personal qualities in relation to admission to medical school. Med Educ 2005; 39: 240-242.
- 6. Stewart W. Attack of the clones: plagiarism by university applicants soars. Times Higher Education 2011; 18 Feb. http://www.timeshighereducation.co.uk/story.asp?storycode=415233 (accessed Feb 2012).
- 7. Australian Medical Education Study. What makes for success in medical education? Synthesis report. Canberra: Department of Education, Employment and Workplace Relations, 2008. http://www.deewr.gov.au/HigherEducation/Publications/HEReports/Documents/SynthesisReport.pdf (accessed Jan 2012).
- 8. McManus C, Powis D. Testing medical student selection tests [editorial]. Med J Aust 2007; 186: 118-119. <MJA full text>
- 9. Quinlivan JA, Lam LT, Wan S, Petersen RW. Selecting medical students for academic and attitudinal outcomes in a Catholic medical school. Med J Aust 2010; 193: 347-350. <MJA full text>
- 10. Chowdry H, Crawford C, Dearden L, et al. Widening participation in higher education: analysis using linked administrative data. Bonn: IZA, 2010 (IZA Discussion Paper No. 4991). http://ftp.iza.org/dp4991.pdf (accessed Jan 2012).
- 11. Kreiter CD, Yin P, Solow C, Brennan RL. Investigating the reliability of the medical school admissions interview. Adv Health Sci Educ Theory Pract 2004; 9: 147-159.
- 12. Dodson M, Crotty B, Prideaux D, et al. The multiple mini-interview: how long is enough? Med Educ 2009; 43: 168-174.
- 13. Barr DA. Science as superstition: selecting medical students. Lancet 2010; 376: 678-679.
- 14. Dunbabin JS, Levitt L. Rural origin and rural medical exposure: their impact on the rural and remote medical workforce. Rural Remote Health [internet] 2003; 3: 212. Epub 2003 Jun 25.
- 15. Roberts C, Prideaux D. Selection for medical schools: re-imaging as an international discourse [commentary]. Med Educ 2010; 44: 1054-1056.
- 16. Groves MA, Gordon J, Ryan G. Entry tests for graduate medical programs: is it time to re-think? Med J Aust 2007; 186: 120-123. <MJA full text>
- 17. Lemay J, Lockyer J, Collin V, Brownwell A. Assessment of non-cognitive traits through admissions multiple mini-interview. Med Educ 2007; 41: 573-579.
- 18. Roberts C, Walton M, Rothnie I, et al. Factors affecting the utility of the multiple mini-interview in selecting candidates for graduate-entry medical school. Med Educ 2008; 42: 396-404.
- 19. Wilkinson D, Zhang J, Byrne GJ, et al. Medical school selection criteria and the prediction of academic performance: evidence leading to change in policy and practice at the University of Queensland. Med J Aust 2008; 188: 349-354. <MJA full text>
- 20. Schmidt F, Oh I, Le H. Increasing the accuracy of corrections for range restriction: implications for selection procedure validities and other research results. Pers Psychol 2006; 59: 281-305.
- 21. Lievens F, Coetsier P. Situational tests in student selection: an examination of predictive validity, adverse impact, and construct validity. Int J Select Assess 2002; 10: 245-257.
- 22. Griffin BN, Wilson IG. Interviewer bias in medical student selection. Med J Aust 2010; 193: 343-346. <MJA full text>
- 23. Lievens F, Ones DS, Dilchert S. Personality scale validities increase throughout medical school. J Appl Psychol 2009; 94: 1514-1535.
- 24. Urlings-Strop L, Stijnen T, Themmen A, Splinter T. Selection of medical students: a controlled experiment. Med Educ 2009; 43: 175-183.
- 25. Griffin B, Harding DW, Wilson IG, Yeomans ND. Does practice make perfect? The effect of coaching and retesting on selection tests used for admission to an Australian medical school. Med J Aust 2008; 189: 270-273. <MJA full text>
- 26. Griffin B, Hesketh B, Grayson D. Applicants faking good: evidence of item bias in the NEO PI-R. Pers Individual Differences 2004; 36: 1545-1558.
- 27. Christiansen N, Goffin R, Johnston N, Rothstein M. Correcting for faking: effects on criterion-related validity and individual hiring decisions. Pers Psychol 1994; 47: 847-860.
- 28. Parker MH, Turner J, McGurgan P, et al. The difficult problem: assessing medical students’ professional attitudes and behaviour. Med J Aust 2010; 193: 662-664. <MJA full text>
- 29. Bore M, Munro D, Powis D. A comprehensive model for the selection of medical students. Med Teach 2009; 31: 1066-1072.
- 30. Kulasegaram K, Reiter H, Wiesner W, et al. Non-association between Neo-5 personality tests and multiple mini-interview. Adv Health Sci Educ Theory Pract 2010; 15: 415-423.
- 31. Lievens F, Thornton G. Assessment centers: recent developments in practice and research. In: Evers A, Anderson N, Smit-Voskuijl O, editors. The Blackwell handbook of personnel selection. Oxford: Blackwell, 2005: 243-264.
- 32. Patterson F, Ferguson E, Norfolk T, Lane P. A new selection system to recruit general practice registrars: preliminary findings from a validation study. BMJ 2005; 330: 711-714.
- 33. Roberts C, Togno JM. Selection into specialist training programs: an approach from general practice. Med J Aust 2011; 194: 93-95.
Abstract
- Selection processes for medical schools need to be unbiased, valid and psychometrically reliable, as well as evidence-based and transparent to all stakeholders.
- A range of academic and non-academic criteria are used for selection, including matriculation scores, aptitude tests and interviews.
- Research into selection is fraught with methodological difficulties; however, it shows positive benefits for structured selection processes.
- Pretest coaching and “faking good” are potential limitations of current selection procedures.
- Developments in medical school selection include the use of personality tests, centralised selection centres and programs to increase participation by socially disadvantaged students.