Following the shift from time-based medical education to a competency-based approach, a computer-assisted training platform would help relieve some of the new time burden placed on physicians. A vital component of these platforms is the computation of competency metrics based on surgical tool motion. Recognizing the class and motion of surgical tools is one step in the development of a training platform, and tool recognition can be achieved through object detection. While previous literature has reported on tool recognition in minimally invasive surgeries, open surgeries have not received the same attention. Open Inguinal Hernia Repair (OIHR), a common surgery that general surgery residents must learn, is an example of such surgeries. We present an object-detection method to recognize surgical tools in simulated OIHR. Images were extracted from six video recordings of OIHR performed on phantoms, and tools were labelled with bounding boxes. A YOLOv3 object-detection model was trained to recognize the tools used in OIHR. The per-class Average Precision scores and the mean Average Precision (mAP) were reported to benchmark the model's performance. The mAP across tool classes was 0.61, with individual Average Precision scores reaching up to 0.98. Tools with poor visibility or similar shapes, such as the forceps or scissors, achieved lower precision scores. With an object-detection network that can identify tools, research can be done on tissue-tool interactions to achieve workflow recognition, which would allow a training platform to detect the tasks performed in hernia repair surgeries.
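A minimal sketch of how per-class Average Precision and mAP can be computed from predicted and ground-truth bounding boxes is shown below. The function names, the 0.5 IoU threshold, and the all-point interpolation are illustrative assumptions, not the evaluation code used in the study.

    # Sketch (not the authors' code): per-class Average Precision and mAP
    # from predicted and ground-truth (x1, y1, x2, y2) bounding boxes.
    import numpy as np

    def iou(box_a, box_b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def average_precision(detections, ground_truth, iou_thresh=0.5):
        """detections: list of (image_id, confidence, box) for one class;
        ground_truth: dict mapping image_id to a list of boxes of that class."""
        n_gt = sum(len(boxes) for boxes in ground_truth.values())
        detections = sorted(detections, key=lambda d: d[1], reverse=True)
        matched = {img: np.zeros(len(boxes), bool) for img, boxes in ground_truth.items()}
        tp = np.zeros(len(detections))
        for i, (img, _, box) in enumerate(detections):
            gt_boxes = ground_truth.get(img, [])
            ious = [iou(box, g) for g in gt_boxes]
            best = int(np.argmax(ious)) if ious else -1
            if best >= 0 and ious[best] >= iou_thresh and not matched[img][best]:
                tp[i] = 1
                matched[img][best] = True
        tp_cum = np.cumsum(tp)
        recall = tp_cum / max(n_gt, 1)
        precision = tp_cum / (np.arange(len(detections)) + 1)
        # Make precision monotonically decreasing, then integrate over recall.
        return float(np.trapz(np.maximum.accumulate(precision[::-1])[::-1], recall))

    # mAP is then the mean of the per-class AP scores, e.g.:
    # mAP = np.mean([average_precision(dets[c], gts[c]) for c in tool_classes])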
As medical education adopts a competency-based training approach, assessment of skills and timely provision of formative feedback are required. Providing such assessment and feedback places a substantial time burden on surgeons. To reduce this burden, we look to develop a computer-assisted training platform to provide both instruction and feedback to residents learning open Inguinal Hernia Repairs (IHR). To provide feedback on residents' technical skills, we must first find a method of workflow recognition for the IHR. We thus aim to recognize and distinguish between the workflow steps of an open IHR based on the presence and frequencies of the different tool-tissue interactions occurring during each step. Based on ground-truth tissue segmentations and tool bounding boxes, we identify the visible tissues within each bounding box, giving an estimate of which tissues a tool is interacting with. The presence and frequencies of these interactions during each step are compared to determine whether this information can be used to distinguish between steps. Based on the ground-truth tool-tissue interactions, the presence and frequencies of interactions during each step of the IHR show clear, distinguishable patterns. In conclusion, the distinct differences in the presence and frequencies of tool-tissue interactions between steps make this a viable method of step recognition for an open IHR performed on a phantom.
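A minimal sketch of estimating tool-tissue interactions from a labelled tissue map and a tool bounding box follows. The label values, tissue names, and the minimum-fraction threshold are assumptions for illustration, not the study's exact procedure.

    # Sketch (illustrative, not the authors' code): which tissues fall inside a
    # tool's bounding box in a per-pixel tissue label map.
    import numpy as np
    from collections import Counter

    TISSUE_NAMES = {1: "skin", 2: "subcutaneous tissue", 3: "hernia sac"}  # example labels

    def tissues_in_box(segmentation, box, min_fraction=0.01):
        """segmentation: HxW array of integer tissue labels (0 = background);
        box: (x1, y1, x2, y2) tool bounding box in pixel coordinates."""
        x1, y1, x2, y2 = box
        roi = segmentation[y1:y2, x1:x2]
        counts = Counter(roi.ravel().tolist())
        counts.pop(0, None)  # ignore background pixels
        total = roi.size
        # Keep tissues occupying at least min_fraction of the box area.
        return {TISSUE_NAMES.get(label, str(label)): n / total
                for label, n in counts.items() if n / total >= min_fraction}

    # Accumulating these per-frame interaction sets over a workflow step yields
    # the presence and frequency counts that are compared between steps.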
PURPOSE: As medical education adopts a competency-based training method, experts are spending substantial amounts of time instructing and assessing trainees' competence. In this study, we look to develop a computer-assisted training platform that can provide instruction and assessment of open inguinal hernia repairs without needing an expert observer. We recognize workflow tasks based on tool-tissue interactions, which means we first need a method of identifying tissues. This study aims to train a neural network to identify tissues in a low-cost phantom as we work towards identifying the tool-tissue interactions needed for task recognition. METHODS: Eight simulated tissues were segmented throughout five videos of experienced surgeons performing open inguinal hernia repairs on phantoms. A U-Net was trained using leave-one-user-out cross-validation. The average F-score, false positive rate, and false negative rate were calculated for each tissue to evaluate the U-Net's performance. RESULTS: Higher F-scores and lower false negative and false positive rates were recorded for the skin, hernia sac, spermatic cord, and nerves, while slightly lower metrics were recorded for the subcutaneous tissue, Scarpa's fascia, external oblique aponeurosis, and superficial epigastric vessels. CONCLUSION: The U-Net performed better at recognizing tissues that were relatively larger and more prevalent, while struggling to recognize smaller tissues that were only briefly visible. Since workflow recognition does not require perfect segmentation, we believe our U-Net is sufficient for recognizing the tissues of an inguinal hernia repair phantom. Future studies will explore combining our segmentation U-Net with tool detection as we work towards workflow recognition.
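A minimal sketch of the per-tissue evaluation metrics follows, assuming standard pixel-wise definitions of F-score, false positive rate, and false negative rate; it is not the authors' evaluation code.

    # Sketch (assumed metric definitions): per-tissue F-score, false positive
    # rate and false negative rate from predicted and ground-truth label maps.
    import numpy as np

    def per_tissue_metrics(pred, truth, label):
        """pred, truth: HxW integer label maps; label: the tissue class to evaluate."""
        p = (pred == label)
        t = (truth == label)
        tp = np.logical_and(p, t).sum()
        fp = np.logical_and(p, ~t).sum()
        fn = np.logical_and(~p, t).sum()
        tn = np.logical_and(~p, ~t).sum()
        f_score = 2 * tp / max(2 * tp + fp + fn, 1)
        fpr = fp / max(fp + tn, 1)
        fnr = fn / max(fn + tp, 1)
        return f_score, fpr, fnr

    # In a leave-one-user-out scheme, the U-Net would be retrained once per
    # surgeon, evaluated on that surgeon's frames, and the metrics averaged
    # per tissue across folds.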
Purpose: Surgical training could be improved by automatic detection of workflow steps and similar applications of image processing. A platform to collect and organize tracking and video data would enable rapid development of image-processing solutions for surgical training. The purpose of this research is to demonstrate 3D Slicer and the PLUS Toolkit as a platform for automatic labelled data collection and model deployment. Methods: We use PLUS and 3D Slicer to collect a labelled dataset of tools interacting with tissues in simulated hernia repair, comprising optical tracking data and video data from a camera. To demonstrate the platform, we train a neural network on this data to identify tissues automatically, while the tracking data identifies which tool is in use. The solution is deployed with a custom Slicer module. Results: This platform allowed the collection of 128,548 labelled frames, with 98.5% correctly labelled. A CNN was trained on this data and applied to new data with an accuracy of 98%. With minimal code, this model was deployed in 3D Slicer on real-time data at 30 fps. Conclusion: We found 3D Slicer and the PLUS Toolkit to be a viable platform for collecting labelled training data and deploying a solution that combines automatic video processing and optical tool tracking. We designed an accurate proof-of-concept system to identify tissue-tool interactions with a trained CNN and optical tracking.
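A minimal sketch of how such a deployment could look inside 3D Slicer's Python environment is shown below. The node names, model file, and use of Keras are assumptions for illustration, not the authors' custom module; preprocessing and resizing of frames are omitted for brevity.

    # Sketch (assumed node names and model): observe a live PLUS video node in
    # 3D Slicer, classify each new frame with a trained CNN, and read the
    # tracked tool transform alongside it.
    import numpy as np
    import slicer, vtk
    from tensorflow.keras.models import load_model

    model = load_model("tissue_classifier.h5")            # hypothetical trained CNN
    video_node = slicer.util.getNode("Image_Reference")   # assumed PLUS video node name
    tool_node = slicer.util.getNode("ToolToReference")    # assumed tool transform node name

    def on_new_frame(caller, event):
        frame = slicer.util.arrayFromVolume(video_node)   # current frame as a numpy array
        prediction = model.predict(frame[0][np.newaxis, ...])  # classify the visible tissue
        matrix = vtk.vtkMatrix4x4()
        tool_node.GetMatrixTransformToParent(matrix)      # tool pose from optical tracking
        print(np.argmax(prediction),
              matrix.GetElement(0, 3), matrix.GetElement(1, 3), matrix.GetElement(2, 3))

    observer_tag = video_node.AddObserver(vtk.vtkCommand.ModifiedEvent, on_new_frame)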
PURPOSE: Gaining proficiency in technical skills involving specific hand motions is important across all disciplines of medicine and particularly relevant in learning surgical skills such as knot tying. We propose a new form of self-directed learning in which a pair of holographic hands is projected in front of the trainee using the Microsoft HoloLens and guides them through learning various basic hand motions relevant to surgery and medicine. This study examines the feasibility and effectiveness of using holographic hands as a skills training modality for learning hand motions compared with the traditional methods of apprenticeship and video-based learning. METHODS: Nine participants were recruited, and each learned six different hand motions from three different modalities (video, apprenticeship, HoloLens). Successful completion results and feedback on effectiveness were obtained through a questionnaire. RESULTS: Participants showed a considerable preference for learning from the HoloLens and apprenticeship and had a higher success rate in learning hand motions compared with video-based learning. Furthermore, learning with holographic hands was comparable to apprenticeship in terms of both effectiveness and success rate. However, more participants still selected apprenticeship as their preferred learning method compared to the HoloLens. CONCLUSION: This initial pilot study shows promising results for using holographic hands as a new and effective form of self-directed apprenticeship learning that can be applied to learning a wide variety of skills requiring hand motions in medicine. Work continues toward implementing this technology in knot tying and suture tutoring modules in our undergraduate medical curriculum.