Abstract
Purpose
Most evaluations of surgical workflow or surgeon skill rely on simple, descriptive statistics (e.g., completion time) computed over whole procedures, thereby de-emphasizing critical steps and potentially obscuring important inefficiencies or skill deficiencies. In this work, we examine off-line temporal clustering methods that segment training procedures into clinically relevant surgical tasks or steps during robot-assisted surgery.
Methods
Features derived from the isogony principle are computed from dry-lab laparoscopic data collected during three common training exercises and used to train four standard machine learning algorithms. These models are used to predict the binary or ternary skill level of a surgeon. K-fold and leave-one-user-out cross-validation are used to assess the accuracy of the resulting models.
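The abstract does not state the exact feature definitions or learning algorithms, so the following is only a minimal sketch of the pipeline it describes. It assumes the isogony-based feature is the exponent of a log-log fit of instrument-tip speed against path curvature (the two-thirds power law predicts a value near -1/3), uses a support vector machine as a representative classifier, and relies on scikit-learn's cross-validation utilities; the data arrays are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score

def isogony_exponent(x, y, dt):
    """Hypothetical scalar feature: the exponent of a log-log fit of
    instrument-tip speed against path curvature. The isogony (two-thirds
    power) law predicts a value near -1/3 for natural movements."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    speed = np.hypot(dx, dy)
    curvature = np.abs(dx * ddy - dy * ddx) / np.maximum(speed ** 3, 1e-12)
    mask = (speed > 1e-6) & (curvature > 1e-6)
    slope, _ = np.polyfit(np.log(curvature[mask]), np.log(speed[mask]), 1)
    return slope

# Placeholder data: one feature vector per trial, a binary skill label per
# trial, and a surgeon ID per trial (the real dataset is not reproduced here).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))          # e.g., isogony exponent plus other scalar features
y = np.repeat([0, 1], 30)             # 0 = novice, 1 = expert
groups = np.repeat(np.arange(12), 5)  # 12 surgeons, 5 trials each

clf = SVC(kernel="rbf", C=1.0)
kfold = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5)).mean()
louo = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
print(f"k-fold accuracy: {kfold:.2f}, leave-one-user-out accuracy: {louo:.2f}")
```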
Results
It is shown that the proposed scalar features can be used to train 2-class and 3-class classification models that map to Fundamentals of Laparoscopic Surgery (FLS) skill levels with median cross-validation accuracies of 85% and 63%, respectively, for the targeted dataset. It is also shown that the 2-class models can discern class at 90% of the best-case mean accuracy using only the first 8 s of data from the start of the task.
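As an illustration of the early-prediction result, the sketch below (not the authors' code) evaluates how accuracy changes when the classifier sees only the first few seconds of each trial; the `extract_features` function, the sampling rate, and the trial arrays are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def truncated_accuracy(trials, labels, window_s, fs, extract_features):
    """Score an SVM on features computed from only the first `window_s`
    seconds of each trial, using leave-one-trial-out cross-validation."""
    n = int(window_s * fs)
    X = np.array([extract_features(trial[:n]) for trial in trials])
    return cross_val_score(SVC(), X, np.asarray(labels), cv=LeaveOneOut()).mean()

# Hypothetical usage: sweep window lengths and report when accuracy reaches
# 90% of the full-trial (best-case) accuracy.
# full_acc = truncated_accuracy(trials, labels, window_s=60, fs=50, extract_features=feat)
# for w in range(2, 20, 2):
#     acc = truncated_accuracy(trials, labels, window_s=w, fs=50, extract_features=feat)
#     print(w, acc, acc >= 0.9 * full_acc)
```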
Conclusion
Novice and expert skill levels of unobserved trials can be discerned using a support vector machine trained with parameters based on the isogony principle. On average, this classification comes within 90% of the accuracy obtained from observing the full trial within 10 s of task initiation.
http://ift.tt/2qsAUOK