A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools.
J Minim Invasive Gynecol. 2017 Nov-Dec;24(7):1184-1189.

Abstract

STUDY OBJECTIVE

To determine whether robotic virtual reality simulator performance assessments differ from those of validated human reviewers. Current surgical education relies heavily on simulation. Several assessment tools are available to the trainee, including the robotic simulators' own assessment metrics and the Global Evaluative Assessment of Robotic Skills (GEARS) metrics, both of which have been independently validated. GEARS is a rating scale through which human evaluators score trainees' performances on 6 domains: depth perception, bimanual dexterity, efficiency, force sensitivity, autonomy, and robotic control. Each domain is scored on a 5-point Likert scale with anchors. We used 2 common robotic simulators, the dV-Trainer (dVT; Mimic Technologies Inc., Seattle, WA) and the da Vinci Skills Simulator (dVSS; Intuitive Surgical, Sunnyvale, CA), to compare the performance metrics of robotic surgical simulators with GEARS ratings for a basic robotic task on each simulator.

DESIGN

A prospective single-blinded randomized study.

SETTING

A surgical education and training center.

PARTICIPANTS

Surgeons and surgeons in training.

INTERVENTIONS

Demographic information was collected including sex, age, level of training, specialty, and previous surgical and simulator experience. Subjects performed 2 trials of ring and rail 1 (RR1) on each of the 2 simulators (dVSS and dVT) after undergoing randomization and warm-up exercises. The second RR1 trial simulator performance was recorded, and the deidentified videos were sent to human reviewers using GEARS. Eight different simulator assessment metrics were identified and paired with a similar performance metric in the GEARS tool. The GEARS evaluation scores and simulator assessment scores were paired, and a Spearman rho was calculated for their level of correlation.

MEASUREMENTS AND MAIN RESULTS

Seventy-four subjects were enrolled in this randomized study; 9 were excluded for missing or incomplete data. There was a strong correlation between the GEARS score and the simulator metric score for time to complete versus efficiency, time to complete versus total score, economy of motion versus depth perception, and overall score versus total score, with rho coefficients greater than or equal to 0.70; these were significant (p < .0001). Those with weak correlation (rho ≥ 0.30) were bimanual dexterity versus economy of motion, efficiency versus master workspace range, bimanual dexterity versus master workspace range, and robotic control versus instrument collisions.
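The pairing-and-correlation step described above can be sketched as follows. This is an illustrative example with hypothetical scores (not the study's data): Spearman's rho is the Pearson correlation computed on the ranks of the paired values, which is appropriate here because GEARS domains are ordinal Likert scores.

```python
# Sketch of the study's correlation step: Spearman's rho between a
# simulator metric and its paired GEARS domain score, computed as the
# Pearson correlation of the ranks (average ranks for ties).

def ranks(values):
    """Return 1-based average ranks, handling ties (e.g. repeated Likert scores)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired scores for 6 subjects: slower completion times
# should attract lower GEARS efficiency ratings (negative correlation).
time_to_complete = [95, 120, 150, 80, 110, 200]  # simulator metric, seconds
gears_efficiency = [5, 4, 2, 5, 4, 1]            # GEARS Likert score, 1-5

rho = spearman_rho(time_to_complete, gears_efficiency)
```

In practice a library routine such as `scipy.stats.spearmanr` would also report the p-value used for the significance thresholds above; the hand-rolled version is shown only to make the rank-based computation explicit.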

CONCLUSION

On basic VR tasks, several simulator metrics are well matched with GEARS scores assigned by human reviewers, but others are not. Identifying these matches/mismatches can improve the training and assessment process when using robotic surgical simulators.

Affiliations

Department of Obstetrics and Gynecology, Columbia University Medical Center, New York, New York. Electronic address: akhdubin@gmail.com.
Florida Hospital Nicholson Center, Celebration, Florida.
Florida Hospital Nicholson Center, Celebration, Florida.
Florida Hospital Nicholson Center, Celebration, Florida.
Department of Obstetrics and Gynecology, Columbia University Medical Center, New York, New York.

Pub Type(s)

Comparative Study
Journal Article
Randomized Controlled Trial

Language

eng

PubMed ID

28757439

Citation

Dubin, Ariel K., et al. "A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools." Journal of Minimally Invasive Gynecology, vol. 24, no. 7, 2017, pp. 1184-1189.
Dubin AK, Smith R, Julian D, et al. A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools. J Minim Invasive Gynecol. 2017;24(7):1184-1189.
Dubin, A. K., Smith, R., Julian, D., Tanaka, A., & Mattingly, P. (2017). A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools. Journal of Minimally Invasive Gynecology, 24(7), 1184-1189. https://doi.org/10.1016/j.jmig.2017.07.019
Dubin AK, et al. A Comparison of Robotic Simulation Performance on Basic Virtual Reality Skills: Simulator Subjective Versus Objective Assessment Tools. J Minim Invasive Gynecol. 2017 Nov-Dec;24(7):1184-1189. PubMed PMID: 28757439.
Keywords

Minimally invasive surgery; Performance assessment; Robotic surgery; Surgical education; Surgical simulation; Virtual reality robotic simulator