How can we predict how medical students will do on future high-stakes exams?
It is widely assumed by admissions counselors and by the general public that performance on prior college coursework and standardized exams, both of which weigh heavily in medical school admissions, strongly predicts how medical students will perform on their high-stakes qualifying exams. This assumption has recently been challenged in a series of manuscripts we published using fairly large data sets from the Joan C. Edwards School of Medicine. Using a widely used prediction tool, stepwise multivariate linear regression, we found that data from previous exams, such as those taken in undergraduate settings and the Medical College Admission Test (MCAT), were very poor at predicting future performance on the national standardized exams required of all medical students wishing to practice medicine in the US.
In fact, when we examined a dozen or so predictors (values that may or may not predict future performance), we found that performance on internal medical school exams fared much better than exams taken prior to medical school at predicting outcomes on these required licensing exams. Although there is a series of these exams, we focused on the two that most medical students are most concerned with during their four years of medical school: Step 1 and Step 2 Clinical Knowledge (CK).
Although this may not be surprising to many people who teach medical students, the closer an assessment is to the Step exam in question, the more powerful it generally is in predicting the outcome of that standardized exam. In other words, exams taken in the second year were better at predicting Step 1 than exams taken in the first year (note: Step 1 is typically taken at the end of the second year in our medical program). Furthermore, performance on Step 1 was quite a strong predictor of performance on Step 2 CK (an exam usually taken in the fourth year of our program). This is most likely because exams taken early in a student's training are more basic and less applied than exams taken later, while the Step exams are generally more applied and more integrated than many of the exams medical students encounter in their coursework.
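To make the method concrete, the kind of stepwise selection described above can be sketched in a few lines of Python. Everything here is illustrative: the predictor names, the simulated scores, and the selection threshold are hypothetical stand-ins, not the study's actual variables, data, or results. The sketch uses greedy forward selection, adding at each step the predictor that most improves the model's R².

```python
import numpy as np

# Hypothetical predictors (illustrative names and simulated data only;
# not the study's actual variables or findings).
rng = np.random.default_rng(0)
n = 200
predictors = {
    "ugpa": rng.normal(3.5, 0.3, n),       # undergraduate GPA
    "mcat": rng.normal(508, 6, n),         # MCAT total score
    "year1_avg": rng.normal(80, 8, n),     # year-1 internal exam average
    "year2_avg": rng.normal(80, 8, n),     # year-2 internal exam average
}
# Simulated Step 1 score, driven mostly by the year-2 internal average
# (mirroring the "closer exams predict better" pattern in the text).
step1 = (150 + 0.9 * predictors["year2_avg"]
         + 0.3 * predictors["year1_avg"] + rng.normal(0, 5, n))

def r_squared(columns, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(predictors, y, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping when no candidate adds at least min_gain."""
    selected, best = [], 0.0
    while True:
        gains = {
            name: r_squared([predictors[s] for s in selected] + [x], y) - best
            for name, x in predictors.items() if name not in selected
        }
        if not gains or max(gains.values()) < min_gain:
            break
        winner = max(gains, key=gains.get)
        selected.append(winner)
        best += gains[winner]
    return selected, best

order, fit = forward_stepwise(predictors, step1)
print("selected (in order):", order, " model R^2:", round(fit, 2))
```

With the simulated data above, the year-2 average enters the model first and the pre-matriculation variables contribute little, which is the shape of the finding the passage describes; real admissions and exam data would of course behave differently in detail.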
Finally, the authors embarked on this type of analysis not just out of pure interest, but because they wanted to see how they might help students, especially those who struggle early in their coursework. In fact, identifying and remediating struggling students before it is too late is an important and difficult task for medical school administrators. Considerable resources are often devoted to a few struggling students, and assisting those students much earlier in their lifecycle has many advantages. We now use data from our prediction models directly in educational dashboards shared with the dean of students, the dean of medical education, and other senior administrators. The data are also available to students who are interested in seeing their estimated future performance. Although this is still a new process for our program, it is starting to pay dividends and has enabled us to be much more proactive than we were in the past. It is our hope that other medical schools consider performing these types of analyses with their own internal data.
Joan C. Edwards School of Medicine, Marshall University
Associate Dean for Medical Education, USA
The future is in the numbers: the power of predictive analysis in the biomedical educational environment.
Med Educ Online. 2016 Jul 1