Objective
Often the assessment of mastoidectomy performance requires time‐consuming manual rating. Virtual reality (VR) simulators offer potentially useful automated assessment and feedback but should be supported by validity evidence. We aimed to investigate simulator metrics for automated assessment based on the expert performance approach, comparison with an established assessment tool, and the consequences of standard setting.
Methods
Eleven experienced otosurgeons and 37 otorhinolaryngology residents each performed three mastoidectomies in the Visible Ear Simulator. Nine residents contributed additional data on repeated practice in the simulator. One hundred and twenty‐nine different performance metrics were collected by the simulator, and final‐product files were saved. Two blinded raters scored these final products using a modified Welling Scale.
Results
Seventeen metrics could discriminate between residents' and experienced surgeons' performances. These metrics mainly expressed various aspects of efficiency: experts demonstrated more goal‐directed behavior and less hesitancy, used less time, and selected large and sharp burrs more often. The combined metrics‐based score (MBS) demonstrated significant discriminative ability between experienced surgeons and residents, with a mean difference of 16.4% (95% confidence interval, 12.6–20.2; P < .001). A pass/fail score of 83.6% was established. The MBS correlated poorly with the final‐product score but excellently with the final‐product score per time.
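The abstract does not state how the 83.6% pass/fail score was set. As a minimal, hypothetical sketch only, the snippet below shows one common standard-setting approach for two contrasting groups (experts vs. residents): fitting a normal distribution to each group's MBS values and taking the score where the two densities intersect as the cutoff. The scores used here are illustrative, not study data.

```python
import numpy as np

# Illustrative MBS values (percent); NOT data from the study.
expert_scores = np.array([88.0, 91.5, 84.0, 86.5, 90.0, 87.5, 89.0, 85.5])
resident_scores = np.array([70.0, 74.5, 68.0, 77.0, 72.5, 69.5, 75.0, 71.0])

def contrasting_groups_cutoff(pass_group, fail_group):
    """Return the score where the two groups' fitted normal densities cross."""
    m1, s1 = pass_group.mean(), pass_group.std(ddof=1)
    m2, s2 = fail_group.mean(), fail_group.std(ddof=1)
    # Setting N(x; m1, s1) = N(x; m2, s2) and taking logs gives a quadratic in x.
    a = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    b = m1 / s1**2 - m2 / s2**2
    c = m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2) - np.log(s1 / s2)
    roots = np.roots([a, b, c])
    # Keep the intersection that lies between the two group means.
    lo, hi = sorted((m1, m2))
    between = [r.real for r in roots if lo <= r.real <= hi]
    return between[0] if between else roots[0].real

cutoff = contrasting_groups_cutoff(expert_scores, resident_scores)
print(f"Pass/fail cutoff: {cutoff:.1f}%")
```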
Conclusion
The MBS mainly reflected efficiency components of the mastoidectomy procedure, and although it could have some use in self‐directed training, it does not measure or encourage safe routines. Supplemental approaches and feedback are therefore required in VR simulation training of mastoidectomy.
Level of Evidence
2b. Laryngoscope, 2019
http://bit.ly/2M0efT0