Trishia El Chemaly
Ph.D. Student in Bioengineering, admitted Autumn 2019
Master's Student in Bioengineering, admitted Summer 2024
All Publications
-
From microscope to head-mounted display: integrating hand tracking into microsurgical augmented reality.
International journal of computer assisted radiology and surgery
2024
Abstract
PURPOSE: The operating microscope plays a central role in middle and inner ear procedures, which involve working within tightly confined spaces under limited exposure. Augmented reality (AR) may improve surgical guidance by combining preoperative computed tomography (CT) imaging, which provides precise anatomical information, with the intraoperative microscope video feed. With current technology, the operator must manually interact with the AR interface at a computer, which disrupts the surgical flow and is suboptimal for maintaining the sterility of the operating environment. The purpose of this study was to implement and evaluate free-hand interaction concepts leveraging hand tracking and gesture recognition in an attempt to reduce this disruption and improve human-computer interaction. METHODS: An electromagnetically tracked surgical microscope was calibrated using a custom 3D printed calibration board, allowing the microscope feed to be augmented with segmented, preoperative CT-derived virtual models. Ultraleap's Leap Motion Controller 2 was coupled to the microscope to implement hand-tracking capabilities. End-user feedback was gathered from a surgeon during development. Finally, users were asked to complete tasks that involved interacting with the virtual models, aligning them to physical targets, and adjusting the AR visualization. RESULTS: Following these observations and user feedback, we upgraded the functionality of the hand interaction system. Users preferred the new interaction concepts, which minimally disrupted the surgical workflow and provided more intuitive interaction with the virtual content. CONCLUSION: We integrated hand interaction concepts typically used with head-mounted displays (HMDs) into a surgical stereo microscope system intended for AR in otologic microsurgery. The concepts presented in this study demonstrated a more favorable approach to human-computer interaction in a surgical context and hold potential for more efficient execution of surgical tasks under microscopic AR guidance.
View details for DOI 10.1007/s11548-024-03224-w
View details for PubMedID 39162975
View details for PubMedCentralID 4710572
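Code sketch (not from the paper): the abstract above describes free-hand interaction driven by hand tracking and gesture recognition. The snippet below is a minimal illustration of how a pinch gesture, detected from tracked fingertip positions such as those a Leap Motion Controller 2 could supply, might be mapped to dragging a virtual CT-derived model; the threshold, helper names, and coordinates are hypothetical.

```python
# Illustrative sketch only; fingertip coordinates (in mm) are assumed to come from a
# hand tracker such as the Leap Motion Controller 2. Threshold and helpers are hypothetical.
import numpy as np

PINCH_THRESHOLD_MM = 25.0  # assumed thumb-index distance below which a "grab" is registered

def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """True when the thumb and index fingertips are close enough to count as a pinch."""
    return float(np.linalg.norm(thumb_tip - index_tip)) < PINCH_THRESHOLD_MM

def drag_model(model_pos: np.ndarray, pinch_prev: np.ndarray, pinch_curr: np.ndarray) -> np.ndarray:
    """Translate a virtual model by the frame-to-frame motion of the pinch midpoint."""
    return model_pos + (pinch_curr - pinch_prev)

# One frame update: grab the model with a pinch and drag it 2 mm along x.
thumb, index = np.array([10.0, 5.0, 200.0]), np.array([18.0, 9.0, 205.0])
if is_pinching(thumb, index):
    midpoint = (thumb + index) / 2
    new_pos = drag_model(np.zeros(3), midpoint, midpoint + np.array([2.0, 0.0, 0.0]))
```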
-
Leveraging the Apple Ecosystem: Easy Viewing and Sharing of Three-dimensional Perforator Visualizations via iPad/iPhone-based Augmented Reality.
Plastic and reconstructive surgery. Global open
2024; 12 (7): e5940
Abstract
We introduce a novel technique using augmented reality (AR) on smartphones and tablets, making it possible for surgeons to review perforator anatomy in three dimensions on the go. Autologous breast reconstruction with abdominal flaps remains challenging due to the highly variable anatomy of the deep inferior epigastric artery. Computed tomography angiography has mitigated some but not all challenges. Previously, volume rendering and different headsets were used to enable better three-dimensional (3D) review for surgeons. However, surgeons have been dependent on others to provide 3D imaging data. Leveraging the ubiquity of Apple devices, our approach permits surgeons to review 3D models of deep inferior epigastric artery anatomy segmented from abdominal computed tomography angiography directly on their iPhone/iPad. Segmentation can be performed in common radiology software. The models are converted to the universal scene description zipped format, which allows immediate use on Apple devices without third-party software. They can be easily shared using secure, Health Insurance Portability and Accountability Act-compliant sharing services already provided by most hospitals. Surgeons can simply open the file on their mobile device to explore the images in 3D using "object mode" natively without additional applications or can switch to AR mode to pin the model in their real-world surroundings for intuitive exploration. We believe patient-specific 3D anatomy models are a powerful tool for intuitive understanding and communication of complex perforator anatomy and would be a valuable addition in routine clinical practice and education. Using this one-click solution on existing devices that is simple to implement, we hope to streamline the adoption of AR models by plastic surgeons.
View details for DOI 10.1097/GOX.0000000000005940
View details for PubMedID 38957720
View details for PubMedCentralID PMC11216661
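Code sketch (not the authors' pipeline): one way to produce the Universal Scene Description asset described above using the open-source usd-core Python package; the mesh data, prim path, and file name are placeholders, and packaging the result as .usdz (e.g., with Pixar's usdzip tool or Apple's Reality Converter) is a separate step.

```python
# Minimal sketch: write a segmented surface mesh to a USD stage that can later be
# packaged as .usdz for viewing on iPhone/iPad. Requires `pip install usd-core`.
from pxr import Usd, UsdGeom

def export_mesh_to_usd(points, face_vertex_counts, face_vertex_indices, path="perforators.usda"):
    """Write a triangle mesh (e.g., a DIEA perforator segmentation surface) as a USD file."""
    stage = Usd.Stage.CreateNew(path)
    mesh = UsdGeom.Mesh.Define(stage, "/PerforatorModel")
    mesh.CreatePointsAttr(points)                          # (x, y, z) vertices
    mesh.CreateFaceVertexCountsAttr(face_vertex_counts)    # 3 vertices per triangle face
    mesh.CreateFaceVertexIndicesAttr(face_vertex_indices)  # vertex indices per face
    stage.GetRootLayer().Save()

# Example: a single triangle standing in for an exported segmentation surface.
export_mesh_to_usd([(0, 0, 0), (10, 0, 0), (0, 10, 0)], [3], [0, 1, 2])
```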
-
The Reconstructive Metaverse - Collaboration in Real-Time Shared Mixed Reality Environments for Microsurgical Reconstruction.
Surgical innovation
2024: 15533506241262946
Abstract
Plastic surgeons routinely use 3D-models in their clinical practice, from 3D-photography and surface imaging to 3D-segmentations from radiological scans. However, these models continue to be viewed on flattened 2D screens that do not enable an intuitive understanding of 3D-relationships and cause challenges regarding collaboration with colleagues. The Metaverse has been proposed as a new age of applications building on modern Mixed Reality headset technology that allows remote collaboration on virtual 3D-models in a shared physical-virtual space in real-time. We demonstrate the first use of the Metaverse in the context of reconstructive surgery, focusing on preoperative planning discussions and trainee education. Using a HoloLens headset with the Microsoft Mesh application, we performed planning sessions for 4 DIEP-flaps in our reconstructive metaverse on virtual patient-models segmented from routine CT angiography. In these sessions, surgeons discuss perforator anatomy and perforator selection strategies whilst comprehensively assessing the respective models. We demonstrate the workflow for a one-on-one interaction between an attending surgeon and a trainee in a video featuring both viewpoints as seen through the headset. We believe the Metaverse will provide novel opportunities to use the 3D-models that are already created in everyday plastic surgery practice in a more collaborative, immersive, accessible, and educational manner.
View details for DOI 10.1177/15533506241262946
View details for PubMedID 38905568
-
Deep Learning Method for Rapid Simultaneous Multistructure Temporal Bone Segmentation.
Otolaryngology-head and neck surgery: official journal of the American Academy of Otolaryngology-Head and Neck Surgery
2024
Abstract
OBJECTIVE: To develop and validate a deep learning algorithm for the automated segmentation of key temporal bone structures from clinical computed tomography (CT) data sets. STUDY DESIGN: Cross-sectional study. SETTING: A total of 325 CT scans from a clinical database. METHODS: A state-of-the-art deep learning (DL) algorithm (SwinUNETR) was used to train a prediction model for rapid segmentation of 9 key temporal bone structures in a data set of 325 clinical CTs. The data set was manually annotated by a specialist to serve as the ground truth and was randomly split into training (n = 260) and testing (n = 65) sets. The model's performance was objectively assessed on the held-out test set using metrics including the Dice coefficient, balanced accuracy, Hausdorff distance, and processing time. RESULTS: The model achieved an average Dice coefficient of 0.87 across all structures, an average balanced accuracy of 0.94, an average Hausdorff distance of 0.79 mm, and an average processing time of 9.1 seconds per CT. CONCLUSION: The present DL model for the automated, simultaneous segmentation of multiple temporal bone structures from CT achieved high accuracy by commonly employed objective measures. The results demonstrate the method's potential to improve preoperative evaluation and intraoperative guidance in otologic surgery.
View details for DOI 10.1002/ohn.764
View details for PubMedID 38769857
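Code sketch (illustrative, not the study's training code): a minimal SwinUNETR setup with MONAI for multi-structure CT segmentation; the patch size, channel count (9 structures plus background), feature size, and loss function are assumptions.

```python
# Assumes MONAI and PyTorch are installed; all values are placeholders, not the paper's configuration.
import torch
from monai.networks.nets import SwinUNETR
from monai.losses import DiceCELoss

# 9 temporal bone structures + background = 10 output channels (assumption).
# Note: `img_size` is accepted by current MONAI releases but deprecated in newer ones.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=10, feature_size=48)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)

ct_patch = torch.rand(1, 1, 96, 96, 96)            # one intensity-normalized CT patch
labels = torch.randint(0, 10, (1, 1, 96, 96, 96))  # voxel-wise ground-truth labels
logits = model(ct_patch)                           # shape: (1, 10, 96, 96, 96)
loss = loss_fn(logits, labels)
loss.backward()
```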
-
Increasing DIEA Perforator Detail in 3D Photorealistic Volume Rendering Visualizations with Skin-masking and Cinematic Anatomy.
Plastic and reconstructive surgery
2024
Abstract
Preoperative CT angiography (CTA) is increasingly performed prior to perforator flap-based reconstruction. However, radiological 2D thin-slices do not allow for intuitive interpretation and translation to intraoperative findings. 3D volume rendering has been used to alleviate the need for mental 2D-to-3D abstraction. Even though volume rendering allows for a much easier understanding of anatomy, it currently has limited utility as the skin obstructs the view of critical structures. Using free, open-source software, we introduce a new skin-masking technique that allows surgeons to easily create a segmentation mask of the skin that can later be used to toggle the skin on and off. Additionally, the mask can be used in other rendering applications. We use Cinematic Anatomy for photorealistic volume rendering and interactive exploration of the CTA with and without skin. We present results from using this technique to investigate perforator anatomy in deep inferior epigastric perforator flaps and demonstrate that the skin-masking workflow is performed in less than 5 minutes. In Cinematic Anatomy, the view onto the abdominal wall and especially onto perforators becomes significantly sharper and more detailed when no longer obstructed by the skin. We perform a virtual, partial muscle dissection to show the intramuscular and submuscular course of the perforators. The skin-masking workflow allows surgeons to improve arterial and perforator detail in volume renderings easily and quickly by removing skin and could alternatively also be performed solely using open-source and free software. The workflow can be easily expanded to other perforator flaps without the need for modification.
View details for DOI 10.1097/PRS.0000000000011359
View details for PubMedID 38351515
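Code sketch (an assumption-laden illustration, not the published workflow): one way to derive a toggleable skin mask from abdominal CTA with the open-source SimpleITK package; the Hounsfield threshold and the 5 mm shell thickness are placeholders.

```python
# Build a "skin shell" mask by thresholding the body and peeling off its outer layer.
# Requires `pip install SimpleITK`; threshold and shell thickness are assumptions.
import SimpleITK as sitk

def make_skin_mask(ct_path: str, out_path: str, shell_mm: float = 5.0) -> None:
    image = sitk.ReadImage(ct_path)
    # Everything denser than air (> -300 HU) is treated as body.
    body = sitk.BinaryThreshold(image, lowerThreshold=-300, upperThreshold=3000,
                                insideValue=1, outsideValue=0)
    body = sitk.Cast(body, sitk.sitkUInt8)
    # Erode by roughly shell_mm to remove the outer surface layer.
    radius_vox = [max(1, int(round(shell_mm / s))) for s in image.GetSpacing()]
    core = sitk.BinaryErode(body, radius_vox)
    skin_shell = sitk.Subtract(body, core)   # outer shell = body minus eroded core
    sitk.WriteImage(skin_shell, out_path)

# make_skin_mask("abdomen_cta.nii.gz", "skin_mask.nii.gz")
```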
-
Interactive Shape Sonification for Tumor Localization in Breast Cancer Surgery
Association for Computing Machinery. 2024
View details for DOI 10.1145/3613904.3642257
View details for Web of Science ID 001255317906042
-
Automated Radiomic Analysis of Vestibular Schwannomas and Inner Ears Using Contrast-Enhanced T1-Weighted and T2-Weighted Magnetic Resonance Imaging Sequences and Artificial Intelligence.
Otology & neurotology: official publication of the American Otological Society, American Neurotology Society, and European Academy of Otology and Neurotology
2023
Abstract
OBJECTIVE: To objectively evaluate vestibular schwannomas (VSs) and their spatial relationships with the ipsilateral inner ear (IE) on magnetic resonance imaging (MRI) using deep learning. STUDY DESIGN: Cross-sectional study. PATIENTS: A total of 490 adults with VS, high-resolution MRI scans, and no previous neurotologic surgery. METHODS: MRI studies of VS patients were split into training (390 patients) and test (100 patients) sets. A three-dimensional convolutional neural network model was trained to segment VS and IE structures using contrast-enhanced T1-weighted and T2-weighted sequences, respectively. Manual segmentations served as ground truth. Model performance was evaluated on the test set and on an external set of 100 VS patients from a public data set (Vestibular-Schwannoma-SEG). MAIN OUTCOME MEASURES: Dice score, relative volume error, average symmetric surface distance, 95th-percentile Hausdorff distance, and centroid locations. RESULTS: Dice scores for VS and IE volume segmentations were 0.91 and 0.90, respectively. On the public data set, the model segmented VS tumors with a Dice score of 0.89 ± 0.06 (mean ± standard deviation), relative volume error of 9.8 ± 9.6%, average symmetric surface distance of 0.31 ± 0.22 mm, and 95th-percentile Hausdorff distance of 1.26 ± 0.76 mm. Predicted VS segmentations overlapped with ground truth segmentations in all test subjects. Mean errors of predicted VS volume, VS centroid location, and IE centroid location were 0.05 cm³, 0.52 mm, and 0.85 mm, respectively. CONCLUSIONS: A deep learning system can segment VS and IE structures in high-resolution MRI with excellent accuracy. This technology offers promise to improve the clinical workflow for assessing VS radiomics and to enhance the management of VS patients.
View details for DOI 10.1097/MAO.0000000000003959
View details for PubMedID 37464458
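Code sketch (illustrative only): computing two of the reported metrics, Dice and 95th-percentile Hausdorff distance, with MONAI on toy one-hot volumes; shapes and values are placeholders, not the study's data.

```python
# Requires MONAI and PyTorch; inputs are synthetic one-hot volumes for illustration.
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric

dice = DiceMetric(include_background=False)
hd95 = HausdorffDistanceMetric(include_background=False, percentile=95)

# One-hot tensors of shape (batch, channels, D, H, W): channel 0 background, channel 1 tumor.
pred = torch.zeros(1, 2, 64, 64, 64)
pred[:, 1, 20:40, 20:40, 20:40] = 1
pred[:, 0] = 1 - pred[:, 1]
gt = torch.zeros(1, 2, 64, 64, 64)
gt[:, 1, 22:42, 20:40, 20:40] = 1
gt[:, 0] = 1 - gt[:, 1]

print("Dice:", dice(pred, gt).item())            # overlap of predicted vs. ground-truth tumor
print("HD95 (voxels):", hd95(pred, gt).item())   # 95th-percentile surface distance
```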
-
Stereoscopic calibration for augmented reality visualization in microscopic surgery.
International journal of computer assisted radiology and surgery
2023
Abstract
PURPOSE: Middle and inner ear procedures target hearing loss, infections, and tumors of the temporal bone and lateral skull base. Despite advances in surgical technique, these procedures remain challenging due to limited haptic and visual feedback. Augmented reality (AR) may improve operative safety by overlaying 3D visualizations of anatomical structures from preoperative computed tomography (CT) scans onto the real intraoperative microscope video feed. The purpose of this work was to develop a real-time CT-augmented stereo microscope system using camera calibration and electromagnetic (EM) tracking. METHODS: A 3D printed, electromagnetically tracked calibration board was used to compute the intrinsic and extrinsic parameters of the surgical stereo microscope. These parameters establish a transformation between the EM tracker coordinate system and the stereo microscope image space such that any tracked 3D point can be projected onto the left and right images of the microscope video stream. This allowed the microscope feed of a 3D printed temporal bone to be augmented with its corresponding CT-derived virtual model. The calibration board was also used to evaluate the accuracy of the calibration. RESULTS: We evaluated the accuracy of the system by calculating the registration error (RE) in 2D and 3D in a microsurgical laboratory setting. Our calibration workflow achieved an RE of 0.11 ± 0.06 mm in 2D and 0.98 ± 0.13 mm in 3D. In addition, we overlaid a 3D CT model on the microscope feed of a 3D resin-printed model of a segmented temporal bone; the system exhibited low latency and good registration accuracy. CONCLUSION: We present the calibration of an electromagnetically tracked surgical stereo microscope for augmented reality visualization. The calibration method achieved accuracy within a range suitable for otologic procedures, and the AR overlay enhances visualization of the surgical field while allowing depth perception.
View details for DOI 10.1007/s11548-023-02980-5
View details for PubMedID 37450175
View details for PubMedCentralID 4634572
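Code sketch (not the authors' calibration code): once calibration yields the camera intrinsics and the EM-tracker-to-camera extrinsics, an EM-tracked 3D point can be projected onto a microscope image, for example with OpenCV; all numeric values below are placeholders. Repeating the projection with the right camera's calibration gives the stereo overlay.

```python
# Project an EM-tracked 3D point onto one microscope image using assumed calibration values.
import numpy as np
import cv2

K = np.array([[3000.0, 0.0, 960.0],   # intrinsics: focal lengths and principal point (pixels)
              [0.0, 3000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # lens distortion coefficients (assumed negligible)
rvec = np.zeros(3)                     # rotation: EM-tracker frame -> camera frame (Rodrigues)
tvec = np.array([0.0, 0.0, 250.0])     # translation in mm (placeholder working distance)

point_em = np.array([[10.0, -5.0, 30.0]])   # a tracked point on the temporal bone, in mm

image_points, _ = cv2.projectPoints(point_em, rvec, tvec, K, dist)
u, v = image_points.ravel()
print(f"Overlay pixel in the left microscope image: ({u:.1f}, {v:.1f})")
```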
-
The user experience design of a novel microscope within SurgiSim, a virtual reality surgical simulator.
International journal of computer assisted radiology and surgery
2022
Abstract
PURPOSE: Virtual reality (VR) simulation has the potential to advance surgical education, procedural planning, and intraoperative guidance. "SurgiSim" is a VR platform developed for the rehearsal of complex procedures using patient-specific anatomy, high-fidelity stereoscopic graphics, and haptic feedback. SurgiSim is the first VR simulator to include a virtual operating room microscope. We describe the process of designing and refining the VR microscope user experience (UX) and user interaction (UI) to optimize surgical rehearsal and education. METHODS: Human-centered VR design principles were applied in the design of the SurgiSim microscope to optimize the user's sense of presence. Throughout the UX's development, the team of developers met regularly with surgeons to gather end-user feedback. Supplemental testing was performed on four participants. RESULTS: Through observation and participant feedback, we made iterative design upgrades to the SurgiSim platform. We identified the following key characteristics of the VR microscope UI: overall appearance, hand controller interface, and microscope movement. CONCLUSION: Our design process identified challenges arising from the disparity between VR and physical environments that pertain to microscope education and deployment. These roadblocks were addressed using creative solutions. Future studies will investigate the efficacy of VR surgical microscope training on real-world microscope skills as assessed by validated performance metrics.
View details for DOI 10.1007/s11548-022-02727-8
View details for PubMedID 35933491