University of Hertfordshire

Standard

How Well do Doctors Think They Perform on the General Medical Council’s Tests of Competence Pilot Examinations? A Cross-sectional Study. / Mehdizadeh, Leila; Sturrock, Alison; Myers, Gill; Khatib, Yasmin; Dacre, Jane.

In: BMJ Open, Vol. 4, No. 2, 02.2014.

Research output: Contribution to journal › Article › peer-review


Author

Mehdizadeh, Leila ; Sturrock, Alison ; Myers, Gill ; Khatib, Yasmin ; Dacre, Jane. / How Well do Doctors Think They Perform on the General Medical Council’s Tests of Competence Pilot Examinations? A Cross-sectional Study. In: BMJ Open. 2014 ; Vol. 4, No. 2.

Bibtex

@article{8b0a23e0ba0b4145af152d59b3d66e55,
title = "How Well do Doctors Think They Perform on the General Medical Council{\textquoteright}s Tests of Competence Pilot Examinations? A Cross-sectional Study",
abstract = "Objective To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. Design A cross-sectional survey design using a questionnaire method. Setting University College London Medical School. Participants 524 medical doctors working in a range of clinical specialties between foundation year two and consultant level. Main outcome measures Estimated and actual total scores on a knowledge test and Observed Structured Clinical Examination (OSCE). Results The pattern of results for OSCE performance differed from the results for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers. Those who did particularly well significantly underestimated their knowledge test performance (t(196)=−7.70, p<0.01) and those who did less well significantly overestimated (t(172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification. Conclusions Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed.",
author = "Leila Mehdizadeh and Alison Sturrock and Gill Myers and Yasmin Khatib and Jane Dacre",
note = "This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/",
year = "2014",
month = feb,
doi = "10.1136/bmjopen-2013-004131",
language = "English",
volume = "4",
journal = "BMJ Open",
issn = "2044-6055",
publisher = "BMJ Publishing Group",
number = "2",
}

RIS

TY - JOUR

T1 - How Well do Doctors Think They Perform on the General Medical Council’s Tests of Competence Pilot Examinations? A Cross-sectional Study

AU - Mehdizadeh, Leila

AU - Sturrock, Alison

AU - Myers, Gill

AU - Khatib, Yasmin

AU - Dacre, Jane

N1 - This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/

PY - 2014/2

Y1 - 2014/2

N2 - Objective To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. Design A cross-sectional survey design using a questionnaire method. Setting University College London Medical School. Participants 524 medical doctors working in a range of clinical specialties between foundation year two and consultant level. Main outcome measures Estimated and actual total scores on a knowledge test and Observed Structured Clinical Examination (OSCE). Results The pattern of results for OSCE performance differed from the results for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers. Those who did particularly well significantly underestimated their knowledge test performance (t(196)=−7.70, p<0.01) and those who did less well significantly overestimated (t(172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification. Conclusions Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed.

AB - Objective To investigate how accurately doctors estimated their performance on the General Medical Council's Tests of Competence pilot examinations. Design A cross-sectional survey design using a questionnaire method. Setting University College London Medical School. Participants 524 medical doctors working in a range of clinical specialties between foundation year two and consultant level. Main outcome measures Estimated and actual total scores on a knowledge test and Observed Structured Clinical Examination (OSCE). Results The pattern of results for OSCE performance differed from the results for knowledge test performance. The majority of doctors significantly underestimated their OSCE performance, whereas estimated knowledge test performance differed between high and low performers. Those who did particularly well significantly underestimated their knowledge test performance (t(196)=−7.70, p<0.01) and those who did less well significantly overestimated (t(172)=6.09, p<0.01). There were also significant differences between estimated and/or actual performance by gender, ethnicity and region of Primary Medical Qualification. Conclusions Doctors were more accurate in predicting their knowledge test performance than their OSCE performance. The association between estimated and actual knowledge test performance supports the established differences between high and low performers described in the behavioural sciences literature. This was not the case for the OSCE. The implications of the results for the revalidation process are discussed.

U2 - 10.1136/bmjopen-2013-004131

DO - 10.1136/bmjopen-2013-004131

M3 - Article

VL - 4

JO - BMJ Open

JF - BMJ Open

SN - 2044-6055

IS - 2

ER -