
Validation of the Digital Rectal Exam Clinical Tool (DiRECT)

Abstract: PD46-10
Sources of Funding: University of Virginia Academy of Distinguished Educators

Introduction

There is increasing emphasis on the measurement of competency in medical education and in maintenance of certification. The digital rectal exam (DRE) is an essential component of the physical examination, but medical students graduate with minimal experience due to the intimate nature of the exam and the difficulty of articulating the skill. We previously used a modified Delphi method with 10 experts to create a novel, validated assessment instrument for measuring medical student DRE proficiency, termed the Digital Rectal Exam Clinical Tool (DiRECT). We sought to demonstrate construct validity of the DiRECT in medical students and residents at different training levels.

Methods

The DiRECT instrument was developed using a modified Delphi method with 5 radiation oncologists and 5 urologists. The consensus panel identified 5 pertinent domains and determined levels of distinction for each. To validate the instrument, patients gave consent for paired digital rectal exams. The attending and trainee (medical student or resident) independently completed the DiRECT, and the trainee's responses were referenced against the attending's. The DiRECT was scored using a partial credit model with weights assigned by the study team. Training years were assigned to all participants, beginning at 1 for third-year medical students. The relationship between DiRECT score and training years was analyzed with linear regression.
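The regression step described above can be sketched in a few lines of standard-library Python. The helper function and all numeric values here are illustrative assumptions, not the study's actual scores or analysis code:

```python
# Minimal ordinary-least-squares sketch of the score-vs-training-years
# analysis. All data values below are hypothetical, not study data.

def linreg(x, y):
    """Simple linear regression: returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)  # proportion of variance explained
    return slope, intercept, r2

# Hypothetical trainees: training year (1 = third-year medical student)
# paired with a DiRECT partial-credit percent score.
years = [1, 1, 2, 3, 4, 5, 6]
scores = [58, 63, 65, 70, 72, 78, 84]
slope, intercept, r2 = linreg(years, scores)
print(f"slope = {slope:.2f} points/year, r^2 = {r2:.3f}")
```

A positive slope with a nontrivial r² would correspond to the pattern reported in the Results: scores rising with training level.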

Results

The DiRECT was completed 34 times by medical students and 15 times by urology resident physicians (PGY2-6). For each exam, one of five attending urologists completed the corresponding DRE as the scoring reference. Each trainee's result, expressed as a percent score under our partial credit model, is shown in Figure 1. The relationship between training years and partial credit score was statistically significant (p = 0.0141), with a medium-to-large effect size (r² = 12.4%). As level of training increased, scores more closely approximated those of the attending physician. When adjusting for attending physician, training level approached but did not reach statistical significance (p = 0.087).

Conclusions

We previously showed the ability of the DiRECT to reflect the nuances of complex versus benign exams in second-year medical students. These additional data suggest that the instrument validly differentiates between trainees of differing experience levels.


Authors
Matthew Clements
Karen Schmidt
Tracey Krupski