In Australia, most medical schools use a combination of prior academic performance (prior degree grade point average [GPA] for graduate-entry programs), performance on a specific admissions test (GAMSAT [Graduate Australian Medical School Admissions Test]) and an interview or other psychometric technique.1 There is little consistency between schools in the combination of, or the weight given to, each component in the decision-making process.1 Similar variation in practice has been reported from the United Kingdom.2 A systematic review3 indicated that prior academic performance is the best predictor of subsequent academic performance and that interviews add little to the selection process. There is only very limited published literature on the value of GAMSAT.
The MB BS program at the School of Medicine, University of Queensland, is Australia’s largest, admitting 375 students in 2007. It is a 4-year graduate-entry program, with an initial entry hurdle of a GPA of 5 or more in any prior degree (a Masters or PhD degree is deemed to meet this requirement). Final ranking and the offer of a place are based on a combination of GAMSAT and interview scores (Box 1).
GAMSAT is a written examination developed by the Australian Council for Educational Research in conjunction with the consortium of graduate-entry medical schools,1 and was designed to assess the capacity of students to undertake high-level intellectual studies in a demanding course. It evaluates the nature and extent of abilities and skills gained through prior experience and learning, including the mastery and use of concepts in basic science as well as more general skills in problem solving, critical thinking and writing.1 GAMSAT Section 1 focuses on reasoning in humanities and social sciences, Section 2 on written communication, and Section 3 on reasoning in biological and physical sciences.
The three student cohorts had similar characteristics: they were balanced by sex, mostly aged over 25 years, and mostly Australian-born (Box 2). Most (64.2%) had a biological science background, and 17.3% had a previous degree higher than Bachelor level. Most students had completed their previous study at the University of Queensland. Box 3 shows the values for all study variables; the number of subjects in Year 4 differs from that in Year 1 because of variation in the number of students admitted to the program each year.
In the multivariate model (Box 4), the three selection criteria (GPA, GAMSAT and interview score) combined explained 21.9% of the variation in student performance across all 4 years for the three cohorts combined, falling from 28.2% in Year 1 to 17.7% in Year 4. Explained variation was highest for the written examination in Year 1 (30.5%) and lowest for the clinical examination in Year 4 (10.9%). It fell from Year 1 to Year 4 for the overall, written examination and clinical examination scores, but not for the ethics examination, for which it increased slightly (Box 4).
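To make the analysis concrete, the following is a minimal sketch, using simulated data, of how a multivariate linear model yields the proportion of variation in examination score explained by the three selection criteria together (the R² statistic) and standardised β values of the kind reported in Box 4. It is illustrative only, not the authors' code; the variable names, simulated effect sizes and Python libraries (numpy, statsmodels) are assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 706  # total cohort size reported in the study

def z(x):
    """Standardise a variable so regression coefficients are beta values."""
    return (x - x.mean()) / x.std()

# Hypothetical selection criteria (simulated; not real student data)
gpa = rng.normal(size=n)
gamsat = rng.normal(size=n)
interview = 0.3 * gpa + rng.normal(size=n)  # criteria may be correlated

# Hypothetical examination score, loosely driven by GPA as in the paper
score = 0.5 * gpa + 0.1 * interview + rng.normal(size=n)

# Regress the standardised score on the standardised criteria plus intercept
X = sm.add_constant(np.column_stack([z(gpa), z(gamsat), z(interview)]))
fit = sm.OLS(z(score), X).fit()

# R^2 is the proportion of variation explained by the three criteria
# combined (the paper reports 21.9% overall); the slope coefficients
# are the standardised beta values
print(f"R^2 = {fit.rsquared:.3f}")
print("beta values (GPA, GAMSAT, interview):", fit.params[1:].round(2))
```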
GPA was consistently, significantly (and independently) associated with performance in each cohort (data not shown), each examination, and each examination component (Box 4). β values fell from Year 1 to Year 4 for each examination, and were consistently higher for the total and written examinations than for the clinical and, in turn, the ethics examinations.
Consistent with these results, the correlation coefficients for GPA with each examination and each component were significant. Further, the partial correlation coefficients changed little and remained significant (Box 5).
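As an illustration of the adjustment involved, here is a minimal sketch of a partial Spearman correlation: the association between one selection criterion and an examination score after the other criteria are partialled out. The helper function, variable names and simulated data are assumptions for illustration, not the authors' code (a packaged alternative is pingouin.partial_corr).

```python
import numpy as np
from scipy import stats

def partial_spearman(x, y, covariates):
    """Spearman correlation of x and y with covariates partialled out:
    rank-transform everything, regress the ranks of x and y on the
    covariate ranks, then correlate the residuals."""
    rx = stats.rankdata(x)
    ry = stats.rankdata(y)
    Z = np.column_stack([stats.rankdata(c) for c in covariates])
    Z = np.column_stack([np.ones(len(x)), Z])  # add intercept column
    res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
    res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
    return stats.pearsonr(res_x, res_y)

rng = np.random.default_rng(1)
n = 706
gpa = rng.normal(size=n)
interview = 0.3 * gpa + rng.normal(size=n)   # criteria may be correlated
score = 0.5 * gpa + 0.1 * interview + rng.normal(size=n)

# Association of GPA with score, adjusted for interview performance
r, p = partial_spearman(gpa, score, [interview])
print(f"partial Spearman r = {r:.2f}, P = {p:.3f}")
```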
β values for interview scores were consistently lower than those for GPA (except for the Year 4 clinical examination) (Box 4), indicating that GPA is relatively more important in explaining variation in academic performance. For overall examination score, the β value for interview score was about a third of that for GPA (both were significant).
Interestingly, the β values for interview score increased substantially from Year 1 to Year 4 for each examination, indicating that interview performance better predicts academic performance towards the end of the medical program. Consistent findings for unadjusted and adjusted correlation coefficients are shown in Box 5.
Adjusted correlation coefficients for GAMSAT total score (Box 5) were consistently close to or lower than the values for interview score, and only reached significance for the Year 1 total and written examination. This is different from the unadjusted coefficients, which, while still modest in absolute values, did reach significance in several cases.
Similarly, β values for GAMSAT total score were close to zero, except for the Year 4 clinical examination (Box 4). β values for GAMSAT section scores were also mostly close to zero or of small absolute value, reaching significance only for the Year 4 ethics examination (Section 1) and the Year 4 clinical examination (Section 2; a negative association).
Box 6 shows the associations between each selection criterion and the overall total academic performance.
Our findings confirm, and importantly extend, the existing literature on factors associated with academic performance in medical school, and hence on factors that may have value in selecting students.3 Our results show that the selection criteria we (and many other schools) use explain about 20% (range, about 10%–30%) of the variation in student academic performance (as measured by examination), depending on the year within the program and the examination component.
The largest and most recent systematic review on this topic concluded that prior academic performance accounted for 23% of the variance in undergraduate medical performance,3 a figure consistent with our findings. It is therefore important to stress that most of the variation in academic performance is not explained by selection criteria, and is presumably a consequence of both intrinsic personal factors and the effect of the teaching itself.
The systematic review3 also concluded that further studies on the value of the interview are needed, and indicated that, in the studies reviewed, interviews seemingly added little or nothing to the selection process. At best, they were associated with only weak to modest independent prediction (0.11–0.14) of performance. In our study, the interview was correlated with overall total examination performance and performance in each Year 4 component (Box 6), but only at modest levels. The high levels of statistical significance (low P values) reflect the large dataset we studied, and it is important to focus on the absolute value of the adjusted correlation coefficient when interpreting our findings. For the interview, these ranged from 0.05 to 0.22, and were consistently substantially lower than the adjusted coefficients for GPA, except for Year 4 clinical and ethics examination performance, in which they were similar.
Although GAMSAT is widely used in Australia, and now by some schools in the UK and Ireland, there is only limited published literature on its value in predicting medical school performance. A PubMed search identified only two studies: one, now 10 years old, outlining a rationale for GAMSAT, and a second exploring its association with clinical reasoning skills in a small sample of students. Many more articles validate the North American equivalent, the Medical College Admission Test.4 Our data, the first published on the validity of GAMSAT in an entire student cohort, indicate that GAMSAT predicts academic performance poorly: all of the adjusted correlation coefficients for GAMSAT total score (Box 6) are close to zero, suggesting that GAMSAT may have only limited value in predicting academic performance.
An exploratory meta-analysis showed that the predictive power of interviews for academic success was only 0.06, and for clinical success (after graduation) was 0.17, indicating a modest effect.5 Part of the reason for this may be that interviews are inherently unreliable. The authors of a literature review and empirical study called into doubt the fairness of interviews as a highly influential component of an admissions process.6 We acknowledge that the interview process we used may, in and of itself, have influenced the results of our analysis. For example, the training and standardisation that we sought may have limited the ability to discriminate between candidates. However, the data in Box 3 indicate that a wide range of scores was attained in our interviews, and our results are consistent with those previously reported.3
Another important limitation is that our analysis only included students with a relatively high GAMSAT score (mean, 66.2) and so our findings do not test the whole range of GAMSAT scores. Nevertheless, this is a “real world” use of GAMSAT and our findings should be interpreted in that light. A strength of our study is that we examined a range of selection criteria in association with each other, not in isolation (which is a problem in many previously published studies examining individual components of the selection process3).
One stated aim of a selection process is to take account of non-academic, non-cognitive factors. It is important to acknowledge that academic ability and other key (non-cognitive) attributes are not necessarily inversely correlated,7 or mutually exclusive. Indeed, there is evidence that the two are positively correlated.8 Selecting wholly or predominantly on academic performance may therefore also lead to the admission of students with attractive non-cognitive attributes.
We acknowledge that other approaches to selection exist or are being developed, such as the Personal Qualities Assessment9 and the Multiple Mini-Interview.10 These methods may have value, but need to be formally assessed in longitudinal studies. We also acknowledge that selection criteria may influence the learning behaviour of potential applicants (such as studying particular material in preparation for GAMSAT, which may then influence future performance at medical school); this may be a helpful or an unhelpful influence. Further, the “threat” of an interview may dissuade some potential applicants (such as those with inherently poor communication skills) from applying to medical school at all.
- David Wilkinson
- Jianzhen Zhang
- Gerard J Byrne
- Haida Luke
- Ieva Z Ozolins
- Malcolm H Parker
- Raymond F Peterson

School of Medicine, University of Queensland, Brisbane, QLD.
Competing interests: None identified.
- 1. Australian Council for Educational Research. GAMSAT Graduate Australian Medical School Admissions Test information booklet 2008. Melbourne: ACER, 2007. http://www.gamsat.acer.edu.au/images/infobook/GAMSAT_InfoBook.pdf (accessed Jan 2008).
- 2. Parry J, Mathers J, Stevens A, et al. Admissions processes for five year medical courses at English schools: review. BMJ 2006; 332: 1005-1009.
- 3. Ferguson E, James D, Madeley L. Factors associated with success in medical school: systematic review of the literature. BMJ 2002; 324: 952-957.
- 4. Julian ER. Validity of the Medical College Admission Test for predicting medical school performance. Acad Med 2005; 80: 910-917.
- 5. Goho J, Blackman A. The effectiveness of academic admission interviews: an exploratory meta-analysis. Med Teach 2006; 28: 335-340.
- 6. Kreiter CD, Yin P, Solow C, Brennan RL. Investigating the reliability of the medical school admissions interview. Adv Health Sci Educ Theory Pract 2004; 9: 147-159.
- 7. Norman G. The morality of medical school admissions. Adv Health Sci Educ Theory Pract 2004; 9: 79-82.
- 8. Eva KW, Reiter HI. Where judgement fails: pitfalls in the selection process for medical personnel. Adv Health Sci Educ Theory Pract 2004; 9: 161-174.
- 9. Powis DA. Selecting medical students. Med Educ 2003; 37: 1064-1065.
- 10. Harris S, Owen C. Discerning quality: using the multiple mini-interview in student selection for the Australian National University Medical School. Med Educ 2007; 41: 234-241.
Abstract
Objective: To assess how well prior academic performance, admission tests, and interviews predict academic performance in a graduate medical school.
Design, setting and participants: Analysis of academic performance of 706 students in three consecutive cohorts of the 4-year graduate-entry medical program at the University of Queensland.
Main outcome measures: Proportion of variation in academic performance during the medical program explained by selection criteria, and correlations between selection criteria and performance. Selection criteria were grade point average (GPA), GAMSAT (Graduate Australian Medical School Admissions Test) score, and interview score. Academic performance was defined as the overall total score in all examinations combined, in first- and fourth-year examinations, and in individual written, ethics and clinical components.
Results: Selection criteria explained 21.9% of variation in overall total score, falling from 28.2% in Year 1 to 17.7% in Year 4. This was highest for the written examination in Year 1 (30.5%) and lowest for the clinical examination in Year 4 (10.9%). GPA was most strongly correlated with academic performance (eg, for overall score, partial Spearman’s correlation coefficient [pSCC], 0.47; P < 0.001), followed by interviews (pSCC, 0.12; P = 0.004) and GAMSAT (pSCC, 0.07; P = 0.08). The association between GPA and performance waned from Year 1 to Year 4, while the association between interview score and performance increased from Year 1 to Year 4.
Conclusion: The school’s selection criteria only modestly predict academic performance. GPA is most strongly associated with performance, followed by interview score and GAMSAT score. The school has changed its selection process as a result.