
Assessing Surgical Skills Among Urology Resident Applicants: Can Crowd-Sourcing Identify the Next Generation of Surgeons?

Abstract: MP51-10
Sources of Funding: none

Introduction

Surgical skills are key determinants of patient outcomes, and as such there is increasing interest in skills assessment. However, using expert surgeon reviewers is both costly and time-consuming. Recently, crowdsourcing has been shown to provide an accurate assessment of surgical skills. We hypothesized that an assessment of surgical skills by experts and the “crowd” might be helpful in the selection of medical student applicants to our urology residency program.

Methods

After obtaining UC Irvine Institutional Review Board approval, our 2015 residency applicants performed four tasks: open square knot tying, laparoscopic peg transfer, robotic suturing, and skill task 8 on the LAP Mentor™ (Simbionix Ltd., Lod, Israel). All interviewees were informed about the nature of the study and provided consent two weeks prior to the interview date. Faculty experts and crowd workers (Crowd-Sourced Assessment of Technical Skills [C-SATS], Seattle, WA) assessed recorded performances using the validated Objective Structured Assessment of Technical Skills (OSATS), Global Evaluative Assessment of Robotic Skills (GEARS), and Global Operative Assessment of Laparoscopic Skills (GOALS) assessment tools.
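The abstract does not describe how individual crowd ratings were combined into a score for each applicant; as a rough illustration only, the Python sketch below (with hypothetical applicant IDs and domain scores) shows one simple way a per-applicant crowd score could be formed by summing an instrument's domain scores for each rating and averaging across raters.

# Illustrative sketch only, not the authors' or C-SATS's pipeline: each rating
# is reduced to the sum of its instrument's domain scores (e.g., the six GEARS
# domains), and ratings are averaged per applicant and task.
from collections import defaultdict
from statistics import mean

# Hypothetical data shape: (applicant_id, task, per-domain scores from one rater)
ratings = [
    ("A01", "robotic_suturing", [4, 3, 5, 4, 4, 3]),
    ("A01", "robotic_suturing", [3, 3, 4, 4, 3, 3]),
    ("A02", "robotic_suturing", [5, 4, 5, 5, 4, 4]),
]

totals = defaultdict(list)
for applicant, task, domains in ratings:
    totals[(applicant, task)].append(sum(domains))  # instrument total for one rating

crowd_score = {key: mean(vals) for key, vals in totals.items()}
print(crowd_score)  # e.g., {('A01', 'robotic_suturing'): 21.5, ('A02', 'robotic_suturing'): 27}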

Results

A total of 25 resident interviewees completed the study tasks. In all, 3,938 crowd assessments and 150 expert assessments were obtained for the four tasks, requiring 3.5 hours and 22 days to gather, respectively. Inter-rater agreement between expert and crowd assessment scores for open knot tying, laparoscopic peg transfer, and robotic suturing was 0.62, 0.92, and 0.86, respectively. Agreement between applicant rank on LAP Mentor skill task 8 and crowd assessment rank was poor, at only 0.32. The crowd match rank based solely on skills performance did not compare well with the final faculty match rank list (0.46); however, none of the bottom five crowd-rated applicants appeared among the top five expert-rated applicants, and none of the top five crowd-rated applicants appeared among the bottom five expert-rated applicants. The crowd and experts agreed on three of the five lowest-ranked applicants.
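The abstract does not state which agreement statistic produced these values; as one hedged example, a Spearman rank correlation between per-applicant expert and crowd scores for a single task could be computed as in the Python sketch below, using entirely made-up scores.

# Illustration only: the agreement statistic used in the study is not specified
# here. Spearman rank correlation is one common way to compare how expert and
# crowd ratings order the same applicants; the scores below are hypothetical.
from scipy.stats import spearmanr

expert_scores = [22.0, 18.5, 25.0, 20.0, 16.5]  # mean expert score per applicant
crowd_scores = [21.5, 20.0, 24.0, 19.0, 17.0]   # mean crowd score per applicant

rho, p_value = spearmanr(expert_scores, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")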

Conclusions

Crowd-sourced assessment of resident applicant surgical skills has good inter-rater agreement with expert physician raters but not with a computer-based objective motion-metrics software assessment. The crowd was able to identify poor performers nearly as well as the experts.


Authors
Zhamshid Okhunov
Simone L. Vernez
Victor Huynh
Kathryn Osann
Jaime Landman
Ralph V. Clayman