Training non-surgical experts to annotate open-source surgical videos for machine learning
Abstract
Video annotation for use in machine learning is an area of medicine that has seen increased demand for research in recent years. The limiting factor for the use of video annotation in surgery is the scale and efficiency at which videos can be labelled. A particular challenge in surgical contexts is the prevailing notion that only surgical experts can provide accurate video annotations. To challenge this notion, we designed a survey to test non-surgical experts' ability to accurately annotate open-source surgical videos. The survey was published on the crowdsourcing platform Amazon Mechanical Turk (mTurk). A learning module was created to provide the relevant, concise information needed to annotate the surgical video accurately and complete the survey. The module gives instructions on differentiating between the three surgical activities of focus: cutting, suturing, and tying. The survey includes free-response and multiple-choice questions that test the accuracy of respondents' video annotations. Analysis of the results from 50 participants showed high rates of inaccurate annotation for all three surgical activities, indicating that more data from larger-scale studies must be acquired, stronger data validation systems must be implemented, and the instructions in the survey and learning module must be revised. The data gave no clear indication that cutting, suturing, or tying could be accurately identified, but further investigation would be prudent.