Soheil Hor
Ph.D. Student in Electrical Engineering, admitted Autumn 2017
All Publications
-
A Data-Driven Waveform Adaptation Method for mm-Wave Gait Classification at the Edge
IEEE Signal Processing Letters
2022; 29: 26–30
DOI: 10.1109/LSP.2021.3122355
Web of Science ID: 000745491900006
-
Single-Snapshot Pedestrian Gait Recognition at the Edge: A Deep Learning Approach to High-Resolution mmWave Sensing
IEEE Radar Conference (RadarConf22). 2022
DOI: 10.1109/RADARCONF2248738.2022.9764196
Web of Science ID: 000821555200057
-
A Real-Time, Frame-Level Platform Vibration Compensation Approach for mmWave Radar Systems
IEEE. 2021: 181–184
Web of Science ID: 000838709300045
-
A partial augmented reality system with live ultrasound and registered preoperative MRI for guiding robot-assisted radical prostatectomy
Medical Image Analysis
2019; 60: 101588
Abstract
We propose an image guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgery scene is displayed underneath the endoscope's feed on the surgeon's console. This scene consists of an annotated preoperative Magnetic Resonance Image (MRI) registered to intraoperative 3D transrectal ultrasound (TRUS), as well as real-time sagittal 2D TRUS images of the prostate, and 3D models of the prostate, the surgical instrument, and the TRUS transducer. We display these components with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct parameters of the camera, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. This system was tested in the laboratory on custom-made agar prostate phantoms. We achieved an average total registration accuracy of 3.2 ± 1.3 mm. We also report on the successful application of this system in the operating room in 12 patients. The average registration error between the TRUS and the da Vinci system for the last 8 patients was 1.4 ± 0.3 mm, with an average target registration error of 2.1 ± 0.8 mm, resulting in an in vivo overall robot system to MRI mean registration error of 3.5 mm or less, which is consistent with our laboratory studies.
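As an illustration of the error metric quoted in this abstract: a mean target registration error (TRE) is the average Euclidean distance between corresponding target points after registration. The following is a minimal sketch under that assumption; the function and variable names are ours, not the paper's code, and the point pairs are toy values.

```python
# Hypothetical sketch: mean target registration error (TRE) between
# corresponding 3D points after registration. Inputs are toy values.
import math

def mean_tre(targets_fixed, targets_registered):
    """Mean Euclidean distance (e.g. in mm) between corresponding points."""
    dists = [math.dist(p, q) for p, q in zip(targets_fixed, targets_registered)]
    return sum(dists) / len(dists)

# Example: three target points with small residual offsets after registration
fixed = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
moved = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (0.0, 10.0, 2.0)]
print(round(mean_tre(fixed, moved), 3))  # 1.667
```

A reported value like "2.1 ± 0.8 mm" would then be the mean and standard deviation of such distances across targets or patients.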
DOI: 10.1016/j.media.2019.101588
PubMedID: 31739281
-
A New Wireless Power-Transfer Circuit for Retinal Prosthesis
IEEE Transactions on Power Electronics
2019; 34 (7): 6438–52
DOI: 10.1109/TPEL.2018.2872844
Web of Science ID: 000466924800038
-
Play Me Back: A Unified Training Platform for Robotic and Laparoscopic Surgery
IEEE Robotics and Automation Letters
2019; 4 (2): 554–61
DOI: 10.1109/LRA.2018.2890209
Web of Science ID: 000457917800001
-
A New Wireless Power and Data Transmission Circuit for Cochlear Implants
IEEE. 2019: 16–19
Web of Science ID: 000569531100004
-
Automatic grading of prostate cancer in digitized histopathology images: Learning from multiple experts
Medical Image Analysis
2018; 50: 167–180
Abstract
Prostate cancer (PCa) is a heterogeneous disease that is manifested in a diverse range of histologic patterns, and its grading is therefore associated with an inter-observer variability among pathologists, which may lead to an under- or over-treatment of patients. In this work, we develop a computer-aided diagnosis system for automatic grading of PCa in digitized histopathology images using supervised learning methods. Our pipeline comprises extraction of multi-scale features that include glandular, cellular, and image-based features. A number of novel features are proposed based on intra- and inter-nuclei properties; these features are shown to be among the most important ones for classification. We train our classifiers on 333 tissue microarray (TMA) cores that were sampled from 231 radical prostatectomy patients and annotated in detail by six pathologists for different Gleason grades. We also demonstrate the TMA-trained classifier's performance on an additional 230 whole-mount slides of 56 patients, independent of the training dataset, by examining the automatic grading on manually marked lesions and a randomly sampled 10% of the benign tissue. For the first time, we incorporate a probabilistic approach for supervised learning by multiple experts to account for the inter-observer grading variability. Through cross-validation experiments, the overall grading agreement of the classifier with the pathologists was found to be an unweighted kappa of 0.51, while the overall agreements between each pathologist and the others ranged from 0.45 to 0.62. These results suggest that our classifier's performance is within the inter-observer grading variability levels across the pathologists in our study, which are also consistent with those reported in the literature.
DOI: 10.1016/j.media.2018.09.005
Web of Science ID: 000449896900012
PubMedID: 30340027
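The unweighted Cohen's kappa quoted in this abstract (0.51 classifier-vs-pathologists, 0.45–0.62 between pathologists) can be sketched as follows. This is a generic implementation of the standard statistic, not the paper's code; the rater label sequences are toy values.

```python
# Minimal sketch of unweighted Cohen's kappa between two raters'
# label sequences (e.g. Gleason grades). Toy data, not from the paper.
from collections import Counter

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # chance agreement from each rater's marginal label frequencies
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

rater1 = ["G3", "G3", "G4", "G4", "G5", "G3", "G4", "G5"]
rater2 = ["G3", "G4", "G4", "G4", "G5", "G3", "G3", "G5"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.619
```

Kappa of 1.0 means perfect agreement and 0 means agreement no better than chance, which is why values around 0.5 are read as moderate agreement in this grading context.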
-
Learning in data-limited multimodal scenarios: Scandent decision forests and tree-based features
Medical Image Analysis (Elsevier). 2016: 30–41
Abstract
Incomplete and inconsistent datasets often pose difficulties in multimodal studies. We introduce the concept of scandent decision trees to tackle these difficulties. Scandent trees are decision trees that optimally mimic the partitioning of the data determined by another decision tree, and crucially, use only a subset of the feature set. We show how scandent trees can be used to enhance the performance of decision forests trained on a small number of multimodal samples when we have access to larger datasets with vastly incomplete feature sets. Additionally, we introduce the concept of tree-based feature transforms in the decision forest paradigm. When combined with scandent trees, the tree-based feature transforms enable us to train a classifier on a rich multimodal dataset, and use it to classify samples with only a subset of features of the training data. Using this methodology, we build a model trained on MRI and PET images of the ADNI dataset, and then test it on cases with only MRI data. We show that this is significantly more effective in staging of cognitive impairments compared to a similar decision forest model trained and tested on MRI only, or one that uses other kinds of feature transform applied to the MRI data.
DOI: 10.1016/j.media.2016.07.012
Web of Science ID: 000385320800004
PubMedID: 27498016
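The core idea in this abstract, a tree on a reduced feature set trained to mimic the data partition induced by a tree on the richer feature set, can be sketched at the level of a single split. The one-level simplification and all names below are ours, not the paper's code.

```python
# Illustrative one-split sketch of the scandent-tree idea: a "host" split on
# a rich feature (e.g. PET) partitions the samples; a mimic split on a
# surviving feature (e.g. MRI) is chosen to best reproduce that partition.
# Names and the single-split simplification are assumptions for illustration.

def host_partition(samples, rich_feature, threshold):
    """Partition labels (0/1) induced by a split of the full-feature tree."""
    return [int(s[rich_feature] > threshold) for s in samples]

def best_mimic_split(samples, feature, partition):
    """Threshold on `feature` that best reproduces `partition`."""
    best = (None, -1)
    for t in sorted({s[feature] for s in samples}):
        pred = [int(s[feature] > t) for s in samples]
        agree = max(  # either side of the mimic split may play either role
            sum(p == q for p, q in zip(pred, partition)),
            sum(p != q for p, q in zip(pred, partition)),
        )
        if agree > best[1]:
            best = (t, agree)
    return best  # (threshold, number of samples partitioned identically)

# Toy data: "pet" is only available at training time; "mri" always is.
data = [{"pet": 0.1, "mri": 1.0}, {"pet": 0.2, "mri": 1.5},
        {"pet": 0.8, "mri": 3.0}, {"pet": 0.9, "mri": 3.2}]
part = host_partition(data, "pet", 0.5)      # [0, 0, 1, 1]
print(best_mimic_split(data, "mri", part))   # (1.5, 4): exact mimic
```

At test time, samples lacking the rich modality can then be routed through the mimic splits, which is what lets a forest trained on MRI+PET classify MRI-only cases.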
-
Scandent Tree: A Random Forest Learning Method for Incomplete Multimodal Datasets
Springer International Publishing (MICCAI 2015). 2015: 694–701
DOI: 10.1007/978-3-319-24553-9_85
Web of Science ID: 000366205700085