Bio


Alaa Eldin Abdelaal is a postdoctoral scholar at the Collaborative Haptics and Robotics in Medicine Lab at Stanford University, working with Prof. Allison Okamura and Prof. Jeannette Bohg. He received his PhD in Electrical and Computer Engineering from the University of British Columbia (UBC) in December 2022. He was also a visiting graduate scholar at the Computational Interaction and Robotics Lab at Johns Hopkins University. During his PhD, he was co-advised by Prof. Tim Salcudean and Prof. Gregory Hager. He holds an M.Sc. in Computing Science from Simon Fraser University and a B.Sc. in Computer and Systems Engineering from Mansoura University in Egypt. His research interests are at the intersection of automation and human-robot interaction for human skill augmentation and decision support, with applications to surgical robotics. His research has been recognized with the Best Bench-to-Bedside Paper Award at the International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) 2019. His research has been funded by a Vanier Canada Graduate Scholarship, an NSERC Postdoctoral Fellowship, Intuitive Surgical Inc., and the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University.

Honors & Awards


  • RSS Pioneer, Robotics: Science and Systems (RSS) Conference (2023)
  • Stanford Emerging Technology Review Fellow, Stanford University (2023)
  • NSERC Postdoctoral Fellowship, Natural Sciences and Engineering Research Council of Canada (2023-2024)
  • Canada Graduate Scholarships – Michael Smith Foreign Study Supplements, Natural Sciences and Engineering Research Council of Canada (2022)
  • Vanier Canada Graduate Scholarship, Natural Sciences and Engineering Research Council of Canada (2019-2022)
  • HRI Pioneer, The ACM/IEEE International Conference on Human-Robot Interaction (2020)
  • Best Bench-to-Bedside Paper Award, The International Conference on Information Processing in Computer-Assisted Interventions (IPCAI) (2019)

Professional Education


  • PhD, University of British Columbia, Electrical and Computer Engineering (2022)
  • MSc, Simon Fraser University, Computing Science (2017)
  • BSc, Mansoura University, Computer and Control Systems Engineering (2012)

All Publications


  • Parallelism in Autonomous Robotic Surgery IEEE ROBOTICS AND AUTOMATION LETTERS Abdelaal, A., Liu, J., Hong, N., Hager, G. D., Salcudean, S. E. 2021; 6 (2): 1824-1831
  • Robotics In Vivo: A Perspective on Human-Robot Interaction in Surgical Robotics ANNUAL REVIEW OF CONTROL, ROBOTICS, AND AUTONOMOUS SYSTEMS Abdelaal, A., Mathur, P., Salcudean, S. E., Leonard, N. E. 2020; 3: 221-242
  • Play Me Back: A Unified Training Platform for Robotic and Laparoscopic Surgery IEEE ROBOTICS AND AUTOMATION LETTERS Abdelaal, A., Sakr, M., Avinash, A., Mohammed, S. K., Bajwa, A., Sahni, M., Hor, S., Fels, S., Salcudean, S. E. 2019; 4 (2): 554-561
  • Head motion-corrected eye gaze tracking with the da Vinci surgical system INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY Banks, A., Abdelaal, A., Salcudean, S. 2024

    Abstract

    To facilitate the integration of point of gaze (POG) as an input modality for robot-assisted surgery, we introduce a robust head movement compensation gaze tracking system for the da Vinci Surgical System. Previous surgical eye gaze trackers require multiple recalibrations and suffer from accuracy loss when users move from the calibrated position. We investigate whether eye corner detection can reduce gaze estimation error in a robotic surgery context. A polynomial regressor is first used to estimate POG after an 8-point calibration, and then, using another regressor, the POG error from head movement is estimated from the shift in 2D eye corner location. Eye corners are computed by first detecting regions of interest using the You Only Look Once (YOLO) object detector trained on 1600 annotated eye images (open dataset included). Contours are then extracted from the bounding boxes, and a derivative-based curvature detector refines the eye corner. Through a user study (n = 24), our corner-contingent head compensation algorithm showed an error reduction in degrees of visual angle of 1.20° (p = 0.037) for the left eye and 1.26° (p = 0.079) for the right eye compared to the previous gold-standard POG error correction method. In addition, the eye corner pipeline showed a root-mean-squared error of 3.57 (SD = 1.92) pixels in detecting eye corners over 201 annotated frames. We introduce an effective method of using eye corners to correct for eye gaze estimation, enabling the practical acquisition of POG in robotic surgery.

    View details for DOI 10.1007/s11548-024-03173-4

    View details for Web of Science ID 001250319900003

    View details for PubMedID 38888820

  • Haptics: The Science of Touch as a Foundational Pathway to Precision Education and Assessment. Academic Medicine: Journal of the Association of American Medical Colleges Perrone, K., Abdelaal, A. E., Pugh, C., Okamura, A. 2023

    Abstract

    Clinical touch is the cornerstone of the doctor-patient relationship and can impact patient experience and outcomes. In the current era, driven by an ever-increasing infusion of point-of-care technologies, physical exam skills have become undervalued. Moreover, touch and hands-on skills have been difficult to teach due to inaccurate assessments and difficulty with learning transfer through observation. In this article, the authors argue that haptics, the science of touch, provides a unique opportunity to explore new pathways to facilitate touch training. Furthermore, haptics can dramatically increase the density of touch-based assessments without increasing human rater burden, which is essential for realizing precision assessment. The science of haptics is reviewed, including the benefits of using haptics-informed language for objective structured clinical examinations. The authors describe how haptic devices and haptic language have been and can be used to facilitate learning, communication, documentation, and a much-needed reinvigoration of physical examination and touch excellence at the point of care. The synergy of haptic devices, artificial intelligence, and virtual reality environments is discussed. The authors conclude with challenges of scaling haptic technology in medical education, such as cost and translational needs, and opportunities to achieve wider adoption of this transformative approach to precision education.

    View details for DOI 10.1097/ACM.0000000000005607

    View details for PubMedID 38109654

  • Orientation Matters: 6-DoF Autonomous Camera Movement for Video-based Skill Assessment in Robot-Assisted Surgery Abdelaal, A., Hong, N., Avinash, A., Budihal, D., Sakr, M., Hager, G. D., Salcudean, S. E. IEEE. 2022
  • A multi-camera, multi-view system for training and skill assessment for robot-assisted surgery INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY Abdelaal, A., Avinash, A., Kalia, M., Hager, G. D., Salcudean, S. E. 2020; 15 (8): 1369-1377

    Abstract

    This paper introduces the concept of using an additional intracorporeal camera for the specific goal of training and skill assessment and explores the benefits of such an approach. This additional camera can provide an additional view of the surgical scene, and we hypothesize that this additional view would improve surgical training and skill assessment in robot-assisted surgery. We developed a multi-camera, multi-view system, and we conducted two user studies ([Formula: see text]) to evaluate its effectiveness for training and skill assessment. In the training user study, subjects were divided into two groups: a single-view group and a dual-view group. The skill assessment study was a within-subject study, in which every subject was shown single- and dual-view recorded videos of a surgical training task, and the goal was to count the number of errors committed in each video. The results show the effectiveness of using an additional intracorporeal camera view for training and skill assessment. The benefits of this view are modest for skill assessment, as it improves the assessment accuracy by approximately 9%. For training, the additional camera view is clearly more effective. Indeed, the dual-view group is 57% more accurate than the single-view group in a retention test. In addition, the dual-view group is 35% more accurate and 25% faster than the single-view group in a transfer test. A multi-camera, multi-view system has the potential to significantly improve training and moderately improve skill assessment in robot-assisted surgery. One application of our work is to include an additional camera view in existing virtual reality surgical training simulators to realize its benefits in training. The views from the additional intracorporeal camera can also be used to improve on existing surgical skill assessment criteria used in training systems for robot-assisted surgery.

    View details for DOI 10.1007/s11548-020-02176-1

    View details for Web of Science ID 000534201600001

    View details for PubMedID 32430693

  • Evaluation of Increasing Camera Baseline on Depth Perception in Surgical Robotics Avinash, A., Abdelaal, A., Salcudean, S. E. IEEE. 2020: 5509-5515
  • Multimodal Training by Demonstration for Robot-Assisted Surgery Abdelaal, A., Hager, G. D., Salcudean, S. E. ACM. 2020: 549-551
  • Event-based Control as a Cloud Service Abdelaal, A., Hegazy, T., Hefeeda, M. IEEE. 2017: 1017-1023
  • LOST Highway: a Multiple-Lane Ant-Trail Algorithm to Reduce Congestion in Large-Population Multi-Robot Systems Abdelaal, A., Sakr, M., Vaughan, R. IEEE. 2017: 161-167