
Alaa Eldin Abdelaal
Postdoctoral Scholar, Mechanical Engineering
Professional Education
- PhD, University of British Columbia, Electrical and Computer Engineering (2022)
- MSc, Simon Fraser University, Computing Science (2017)
- BSc, Mansoura University, Computer and Control Systems Engineering (2012)
All Publications
- Parallelism in Autonomous Robotic Surgery
IEEE ROBOTICS AND AUTOMATION LETTERS
2021; 6 (2): 1824-1831
DOI: 10.1109/LRA.2021.3060402
Web of Science ID: 000629028400017
- Robotics In Vivo: A Perspective on Human-Robot Interaction in Surgical Robotics
ANNUAL REVIEW OF CONTROL, ROBOTICS, AND AUTONOMOUS SYSTEMS
2020; 3: 221-242
DOI: 10.1146/annurev-control-091219-013437
Web of Science ID: 000534341200009
- A "pickup" stereoscopic camera with visual-motor aligned control for the da Vinci surgical system: a preliminary study
SPRINGER HEIDELBERG. 2019: 1197-1206
Abstract
The current state-of-the-art surgical robotic systems use only a single endoscope to view the surgical field. Research has been conducted to introduce additional cameras to the surgical system, giving rise to new camera angles that cannot be achieved using the endoscope alone. While this additional visualization certainly aids in surgical performance, current systems lack visual-motor compatibility with respect to the additional camera views. We propose a new system that overcomes this limitation.

In this paper, we introduce a novel design of an additional "pickup" camera that can be integrated into the da Vinci Surgical System. We also introduce a solution to work comfortably in the various arbitrary views this camera provides by eliminating visual-motor misalignment. This is done by changing the working frame of the surgical instruments to work with respect to the coordinate system at the "pickup" camera instead of the endoscope.

Human user trials ([Formula: see text]) were conducted to evaluate the effect of visual-motor alignment with respect to the "pickup" camera on surgical performance. An inanimate surgical peg transfer task from the validated Fundamentals of Laparoscopic Surgery (FLS) Training Curriculum was used, and an improvement of 73% in task completion time and 80% in accuracy was observed with the visual-motor alignment over the case without it.

Our study shows that there is a requirement to achieve visual-motor alignment when utilizing views from external cameras in current clinical surgical robotics setups. We introduce a complete system that provides additional camera views with visual-motor aligned control. Such a system would be useful in existing surgical procedures and could also impact surgical planning and navigation.
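The visual-motor alignment described in this abstract amounts to re-expressing the surgeon's motion commands in the "pickup" camera's coordinate frame rather than the endoscope's. A minimal sketch of that frame change is below; all function and variable names are hypothetical, and the rotation values are illustrative, not taken from the paper's actual controller.

```python
# Sketch of visual-motor alignment: remap a master-side motion command so
# the instruments move relative to the pickup camera's axes instead of the
# endoscope's. Hypothetical names; rotations are illustrative only.
import numpy as np

def remap_command(v_master, R_world_endoscope, R_world_pickup):
    """Re-express a velocity command given in the endoscope frame
    in the pickup camera's frame."""
    # Endoscope frame -> world frame
    v_world = R_world_endoscope @ v_master
    # World frame -> pickup-camera frame (rotation inverse = transpose)
    return R_world_pickup.T @ v_world

# Example: pickup camera rotated 90 degrees about the world z-axis
# relative to an endoscope aligned with the world frame.
R_endo = np.eye(3)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
v = np.array([1.0, 0.0, 0.0])        # "move right" as seen in the endoscope
print(remap_command(v, R_endo, Rz))  # → [ 0. -1.  0.]
```

The same hand motion maps to a different instrument direction depending on which camera's frame is active, which is why the frame change matters for operating comfortably under arbitrary pickup-camera views.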
DOI: 10.1007/s11548-019-01955-9
Web of Science ID: 000471635000010
PubMed ID: 31056727
- Play Me Back: A Unified Training Platform for Robotic and Laparoscopic Surgery
IEEE ROBOTICS AND AUTOMATION LETTERS
2019; 4 (2): 554-561
DOI: 10.1109/LRA.2018.2890209
Web of Science ID: 000457917800001
- Orientation Matters: 6-DoF Autonomous Camera Movement for Video-based Skill Assessment in Robot-Assisted Surgery
IEEE. 2022
DOI: 10.1109/BIOROB52689.2022.9925374
Web of Science ID: 000920393600050
- A multi-camera, multi-view system for training and skill assessment for robot-assisted surgery
INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY
2020; 15 (8): 1369-1377
Abstract
This paper introduces the concept of using an additional intracorporeal camera for the specific goal of training and skill assessment and explores the benefits of such an approach. This additional camera can provide an additional view of the surgical scene, and we hypothesize that this additional view would improve surgical training and skill assessment in robot-assisted surgery.

We developed a multi-camera, multi-view system, and we conducted two user studies ([Formula: see text]) to evaluate its effectiveness for training and skill assessment. In the training user study, subjects were divided into two groups: a single-view group and a dual-view group. The skill assessment study was a within-subject study, in which every subject was shown single- and dual-view recorded videos of a surgical training task, and the goal was to count the number of errors committed in each video.

The results show the effectiveness of using an additional intracorporeal camera view for training and skill assessment. The benefits of this view are modest for skill assessment, as it improves the assessment accuracy by approximately 9%. For training, the additional camera view is clearly more effective. Indeed, the dual-view group is 57% more accurate than the single-view group in a retention test. In addition, the dual-view group is 35% more accurate and 25% faster than the single-view group in a transfer test.

A multi-camera, multi-view system has the potential to significantly improve training and moderately improve skill assessment in robot-assisted surgery. One application of our work is to include an additional camera view in existing virtual reality surgical training simulators to realize its benefits in training. The views from the additional intracorporeal camera can also be used to improve on existing surgical skill assessment criteria used in training systems for robot-assisted surgery.
DOI: 10.1007/s11548-020-02176-1
Web of Science ID: 000534201600001
PubMed ID: 32430693
- Evaluation of Increasing Camera Baseline on Depth Perception in Surgical Robotics
IEEE. 2020: 5509-5515
Web of Science ID: 000712319503119
- Multimodal Training by Demonstration for Robot-Assisted Surgery
ASSOC COMPUTING MACHINERY. 2020: 549-551
DOI: 10.1145/3371382.3377448
Web of Science ID: 000643728500181
- Event-based Control as a Cloud Service
IEEE. 2017: 1017-1023
Web of Science ID: 000427033301011
- LOST Highway: a Multiple-Lane Ant-Trail Algorithm to Reduce Congestion in Large-Population Multi-Robot Systems
IEEE. 2017: 161-167
DOI: 10.1109/CRV.2017.24
Web of Science ID: 000425894100022