Abstract
Objective: To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations.

Design: A cross-sectional survey using a questionnaire method.

Setting: University College London Medical School.

Participants: 524 medical doctors working in a range of clinical specialties, from foundation year two to consultant level.

Main outcome measures: Estimated and actual total scores on a knowledge test and an Objective Structured Clinical Examination (OSCE).

Results: The pattern of results for OSCE performance differed from that for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers: those who did particularly well significantly underestimated their knowledge test performance (t(196)=−7.70, p<0.01) and those who did less well significantly overestimated it (t(172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification.

Conclusions: Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature; this was not the case for the OSCE. The implications of the results for the revalidation process are discussed.
Original language | English |
---|---|
Journal | BMJ Open |
Volume | 4 |
Issue number | 2 |
DOIs | |
Publication status | Published - Feb 2014 |