Jared Todd Sokol
Clinical Scholar, Ophthalmology
Fellow in Ophthalmology
Academic Appointments
-
Clinical Scholar, Ophthalmology
All Publications
-
Comparison of Ergonomics in Vitreoretinal Surgery With Heads-up Visualization Versus the Standard Operating Microscope as Measured by a Wearable Device
Ophthalmic Surgery, Lasers & Imaging Retina
2024: 1-8
Abstract
Three-dimensional heads-up display (HUD) systems have emerged as an alternative to the standard operating microscope (SOM) in the operating room. The goal of this study was to quantitatively measure vitreoretinal surgeon posture across visualization methods. Ergonomic data were collected from 64 cases at two tertiary eye care centers. Surgeons wore an Upright Go 2™ posture training device while operating with either the NGENUITY 3D heads-up display visualization system or the SOM. The total percentage of time with upright posture as primary surgeon was significantly higher in surgeries performed using the HUD (median 100%, interquartile range [IQR] 85.1% to 100.0%) than in surgeries performed using the SOM (median 60.0%, IQR 1.8% to 98.8%) (P = 0.001, Wilcoxon rank-sum test). Percent time with upright posture was significantly higher with the HUD for two of the three surgeons when assessed independently across systems, and the results remained significant after accounting for length of surgery (P < 0.001, multiple linear regression). Ergonomic positioning was improved for surgeons operating with the HUD. Given the high prevalence of back and neck pain among vitreoretinal surgeons, increased use of HUD systems may limit musculoskeletal pain and long-term disability from poor ergonomics. [Ophthalmic Surg Lasers Imaging Retina 2024;55:XX-XX.]
View details for DOI 10.3928/23258160-20240508-02
View details for Web of Science ID 001322763600001
View details for PubMedID 39037360
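The primary comparison reported in this abstract (percent time with upright posture, HUD versus SOM, tested with a Wilcoxon rank-sum test) can be sketched in a few lines of Python. The case-level percentages below are invented placeholders, not the study's measurements.

```python
# Minimal sketch of the abstract's primary endpoint: a Wilcoxon
# rank-sum test comparing percent time with upright posture between
# HUD and SOM cases. All values are hypothetical placeholders.
from scipy.stats import ranksums

hud_pct_upright = [100.0, 92.3, 85.1, 100.0, 97.5, 88.9]  # hypothetical HUD cases
som_pct_upright = [60.0, 1.8, 98.8, 45.2, 72.6, 12.4]     # hypothetical SOM cases

stat, p = ranksums(hud_pct_upright, som_pct_upright)
print(f"rank-sum statistic = {stat:.3f}, P = {p:.4f}")
```

The abstract's secondary analysis, a multiple linear regression adjusting for surgery length, could be run the same way (e.g., with statsmodels OLS); only the rank-sum test is sketched here because it is the paper's primary comparison.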
-
EYE-Llama, an in-domain large language model for ophthalmology.
bioRxiv: the preprint server for biology
2024
Abstract
Background: Training large language models (LLMs) with in-domain data can significantly enhance their performance, leading to more accurate and reliable question-answering (QA) systems essential for supporting clinical decision-making and educating patients.
Methods: This study introduces LLMs trained on well-curated, in-domain ophthalmic datasets, along with a substantial open-source ophthalmic language dataset for model training. Our LLMs (EYE-Llama) were first pre-trained on an ophthalmology-specific corpus including paper abstracts, textbooks, EyeWiki, and Wikipedia articles, and subsequently fine-tuned on a diverse range of QA datasets. The models at each stage were compared to baseline Llama 2, ChatDoctor, and ChatGPT (GPT-3.5) on four distinct test sets, evaluated quantitatively (accuracy, F1 score, and BERTScore) and qualitatively by two ophthalmologists.
Results: On the American Academy of Ophthalmology (AAO) test set, with BERTScore F1 as the metric, our models surpassed both Llama 2 and ChatDoctor and performed on par with ChatGPT, a model with 175 billion parameters (EYE-Llama: 0.57, Llama 2: 0.56, ChatDoctor: 0.56, ChatGPT: 0.57). On the MedMCQA test set, the fine-tuned models achieved higher accuracy than the Llama 2 and ChatDoctor models (EYE-Llama: 0.39, Llama 2: 0.33, ChatDoctor: 0.29), although ChatGPT outperformed EYE-Llama with an accuracy of 0.55. On the PubMedQA test set, the fine-tuned model improved in accuracy over the Llama 2, ChatGPT, and ChatDoctor models (EYE-Llama: 0.96, Llama 2: 0.90, ChatGPT: 0.93, ChatDoctor: 0.92).
Conclusion: The study shows that pre-training and fine-tuning LLMs like EYE-Llama enhances their performance in specific medical domains. Our EYE-Llama models surpass baseline Llama 2 in all evaluations, highlighting the effectiveness of specialized LLMs in medical QA systems. (Funded by NEI R15EY035804 (MNA) and UNC Charlotte Faculty Research Grant (MNA).)
View details for DOI 10.1101/2024.04.26.591355
View details for PubMedID 38746183
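For readers curious how the quantitative metrics named in this abstract are typically computed, here is a minimal Python sketch of exact-match accuracy for multiple-choice QA and BERTScore for free-text answers, using the open-source bert-score package. The predicted choices, answer key, and QA strings are invented placeholders, not the paper's evaluation data.

```python
# Minimal sketch of the two quantitative evaluations the abstract
# describes: exact-match accuracy on multiple-choice QA and BERTScore
# on free-text answers. All inputs are hypothetical placeholders.
from bert_score import score  # pip install bert-score

# Multiple-choice evaluation (MedMCQA-style): exact-match accuracy.
predicted = ["B", "A", "D", "C"]   # hypothetical model choices
gold      = ["B", "C", "D", "C"]   # hypothetical answer key
accuracy = sum(p == g for p, g in zip(predicted, gold)) / len(gold)
print(f"accuracy = {accuracy:.2f}")

# Free-text evaluation (AAO-style QA): BERTScore between candidate
# answers and reference answers; score() returns P/R/F1 tensors.
candidates = ["Open-angle glaucoma is often treated first with a topical prostaglandin."]
references = ["First-line therapy for open-angle glaucoma is a topical prostaglandin analog."]
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1 = {F1.mean().item():.3f}")
```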