Education & Certifications
Semester abroad, Charité – Universitätsmedizin Berlin (2019)
Research Year, Stanford University (2020)
Medical Doctor, Aarhus University (2021)
Bridging the gap between prostate radiology and pathology through machine learning.
Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non-invasive imaging modality for visualizing, detecting, and localizing prostate cancer, and it is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited by high rates of false positives and false negatives as well as low inter-reader agreement.
Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture but also in the ground truth labeling strategies used for model training. We compared different labeling strategies and their effects on the performance of different machine learning models for prostate cancer detection on MRI.
Four deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients who underwent radical prostatectomy, and evaluated using 40 patients who underwent radical prostatectomy and 275 patients who underwent targeted biopsy. Each deep learning model was trained with four label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform.
Radiologist labels missed cancers (ROC-AUC: 0.75-0.84), had lower lesion volumes (~68% of pathology lesions), and had lower Dice overlaps (0.24-0.28) compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared with models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97-1, lesion Dice: 0.75-0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91-0.94) and generalizable, comparable performance to pathologist label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87-0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type.
Machine learning models for prostate MRI interpretation trained with digital pathologist labels showed higher or comparable performance relative to pathologist label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
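The Dice overlaps quoted above compare binary label masks on MRI. For readers unfamiliar with the metric, a minimal sketch of the standard Dice similarity coefficient (the function name and mask shapes are illustrative, not from the paper):

```python
import numpy as np

def dice_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary label masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice of 0.24-0.28 between radiologist and pathology labels thus indicates that barely a quarter of the labeled area agrees.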
View details for DOI 10.1002/mp.15777
View details for PubMedID 35633505
Correlation of 68Ga-RM2 PET with Post-Surgery Histopathology Findings in Patients with Newly Diagnosed Intermediate- or High-Risk Prostate Cancer.
Journal of nuclear medicine : official publication, Society of Nuclear Medicine
Rationale: 68Ga-RM2 targets gastrin-releasing peptide receptors (GRPR), which are overexpressed in prostate cancer (PC). Here, we compared pre-operative 68Ga-RM2 PET to post-surgery histopathology in patients with newly diagnosed intermediate- or high-risk PC. Methods: Forty-one men (64.0±6.7 years old) were prospectively enrolled. PET images were acquired 42-72 (median±SD, 52.5±6.5) minutes after injection of 118.4-247.9 (median±SD, 138.0±22.2) MBq of 68Ga-RM2. PET findings were compared to pre-operative mpMRI (n = 36) and 68Ga-PSMA11 PET (n = 17), and correlated to post-prostatectomy whole-mount histopathology (n = 32) and time to biochemical recurrence. Nine participants decided to undergo radiation therapy after study enrollment. Results: All participants had intermediate- (n = 17) or high-risk (n = 24) PC and were scheduled for prostatectomy. Prostate-specific antigen (PSA) was 8.8±77.4 (range 2.5-504) ng/mL, and 7.6±5.3 (range 2.5-28.0) ng/mL when excluding participants who ultimately underwent radiation treatment. Pre-operative 68Ga-RM2 PET identified 70 intraprostatic foci of uptake in 40/41 patients. Post-prostatectomy histopathology was available in 32 patients, in whom 68Ga-RM2 PET identified 50/54 intraprostatic lesions (detection rate = 93%). 68Ga-RM2 uptake was recorded in 19 non-enlarged pelvic lymph nodes in 6 patients. Pathology confirmed lymph node metastases in 16 lesions, and follow-up imaging confirmed nodal metastases in 2 lesions. 68Ga-PSMA11 and 68Ga-RM2 PET identified 27 and 26 intraprostatic lesions, respectively, and 5 pelvic lymph nodes each in 17 patients. Concordance between 68Ga-RM2 and 68Ga-PSMA11 PET was found in 18 prostatic lesions in 11 patients and in 4 lymph nodes in 2 patients. Non-congruent findings were observed in 6 patients (intraprostatic lesions in 4 patients and nodal lesions in 2 patients).
Both 68Ga-RM2 and 68Ga-PSMA11 had higher sensitivity and accuracy (98% and 89%, and 95% and 89%, respectively) than mpMRI (77% and 77%). Specificity was highest for mpMRI at 75%, followed by 68Ga-PSMA11 (67%) and 68Ga-RM2 (65%). Conclusion: 68Ga-RM2 PET accurately detects intermediate- and high-risk primary PC with a detection rate of 93%. In addition, it showed significantly higher sensitivity and accuracy compared with mpMRI and similar performance to 68Ga-PSMA11 PET. These findings need to be confirmed in larger studies to identify which patients will benefit from one, the other, or both radiopharmaceuticals.
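The sensitivity, specificity, and accuracy figures above follow the standard confusion-matrix definitions. A small illustrative helper (the counts in the usage note are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard per-lesion diagnostic metrics from confusion counts:
    true/false positives (tp/fp) and true/false negatives (tn/fn)."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of true lesions detected
        "specificity": tn / (tn + fp),   # fraction of benign findings correctly cleared
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, `diagnostic_metrics(9, 3, 7, 1)` yields a sensitivity of 0.9, a specificity of 0.7, and an accuracy of 0.8.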
View details for DOI 10.2967/jnumed.122.263971
View details for PubMedID 35552245
- Dual X-ray Absorptiometry Screening for Men Receiving Androgen Deprivation Therapy-Hiding in Plain (Film) Sight. JAMA network open 2022; 5 (4): e225439
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy
JOURNAL OF UROLOGY
2021; 206 (3): 605-612
View details for Web of Science ID 000711819100035
Reply to the Editorial Comment on: Using an Automated Electronic Health Record Score To Estimate Life Expectancy In Men Diagnosed With Prostate Cancer In The Veterans Health Administration. Urology. 2021.
OBJECTIVES: To determine if an automatically calculated electronic health record score can estimate intermediate-term life expectancy in men with prostate cancer to provide guideline-concordant care.
METHODS: We identified all men (n = 36,591) diagnosed with prostate cancer in 2013-2015 in the VHA. Of the 36,591, 35,364 (96.6%) had an available Care Assessment Needs (CAN) score (range: 0-99), automatically calculated in the 30 days prior to the date of diagnosis and designed to estimate short-term risks of hospitalization and mortality. We fit unadjusted and multivariable Cox proportional hazards regression models to determine the association between the CAN score and overall survival among men with prostate cancer. We compared CAN score performance to two established comorbidity measures: the Charlson Comorbidity Index and the Prostate Cancer Comorbidity Index (PCCI).
RESULTS: Among 35,364 men, the CAN score correlated with overall stage, with mean scores of 46.5 (±22.4), 58.0 (±24.4), and 68.1 (±24.3) in localized, locally advanced, and metastatic disease, respectively. In both unadjusted and adjusted models for prostate cancer risk, the CAN score was independently associated with survival (HR = 1.23, 95% CI 1.22-1.24; adjusted HR = 1.17, 95% CI 1.16-1.18 per 5-unit change). The CAN score (overall C-index 0.74) yielded better discrimination (AUC = 0.76) than the PCCI (AUC = 0.65) or the Charlson Comorbidity Index (AUC = 0.66) for 5-year survival.
CONCLUSIONS: The CAN score is strongly associated with intermediate-term survival following a prostate cancer diagnosis. The CAN score is an example of how learning health care systems can implement multi-dimensional tools to provide fully automated life expectancy estimates to facilitate patient-centered cancer care.
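The C-index reported above is Harrell's concordance index for survival data. A naive O(n²) sketch of how it is computed from follow-up times, event indicators, and risk scores (illustrative only; production analyses would use a survival library):

```python
import numpy as np

def harrell_c_index(times, events, risk_scores) -> float:
    """Harrell's concordance index: the fraction of comparable patient pairs
    in which the patient with the higher risk score fails earlier.
    A pair (i, j) is comparable when patient i has an observed event
    strictly before patient j's follow-up time."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    risk = np.asarray(risk_scores, dtype=float)
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored patients cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

A C-index of 0.74 means the score correctly orders about three in four comparable patient pairs by survival.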
View details for DOI 10.1016/j.urology.2021.05.056
View details for PubMedID 34139251
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy.
The Journal of urology
PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic.
MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and to radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests.
RESULTS: ProGNet (DSC = 0.92) outperformed U-Net (DSC = 0.85, p < 0.0001), HED (DSC = 0.80, p < 0.0001), and radiology technicians (DSC = 0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC = 0.93) outperformed radiology technicians (DSC = 0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically usable segmentation file.
CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
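The paired t-tests above compare per-case DSCs between two methods on the same cases. A small sketch using SciPy, with simulated Dice scores standing in for the study's data (the means loosely mirror the reported 0.92 vs. 0.85 comparison):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
# Simulated per-case Dice scores for two segmentation methods on 100 shared
# cases (stand-ins for ProGNet vs. a baseline; not the study's data).
dsc_model = np.clip(rng.normal(0.92, 0.03, size=100), 0.0, 1.0)
dsc_baseline = np.clip(rng.normal(0.85, 0.05, size=100), 0.0, 1.0)

# Paired test: each case contributes one difference, which controls for
# case difficulty in a way an unpaired test would not.
t_stat, p_value = ttest_rel(dsc_model, dsc_baseline)
print(f"mean DSC {dsc_model.mean():.3f} vs {dsc_baseline.mean():.3f}, p = {p_value:.2g}")
```

Pairing per case is what lets a difference of a few Dice points reach p < 0.0001 on ~100 cases.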
View details for DOI 10.1097/JU.0000000000001783
View details for PubMedID 33878887
Removing Race from eGFR calculations: Implications for Urologic Care.
Equations estimating the glomerular filtration rate (eGFR) are important clinical tools in detecting and managing kidney disease, and urologists use them extensively in clinical decision making. For example, the estimated glomerular filtration rate is used when considering the type of urinary diversion following cystectomy, selecting systemic chemotherapy in managing urologic cancers, and deciding the type of cross-sectional imaging in diagnosing or staging urologic conditions. However, these equations, while widely accepted, are imprecise and adjust for race, which is a social, not a biological, construct. The recent killings of unarmed Black Americans in the US have amplified the discussion of racism in healthcare and have prompted institutions to reconsider the role of race in eGFR equations and race-based medicine. Urologists should be aware of the consequences of removing race from these equations, of potential alternatives, and of how these changes may affect Black patients receiving urologic care.
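For concreteness, a sketch of the 2021 race-free CKD-EPI creatinine equation referenced by this discussion. The coefficients below are quoted from memory of the published equation and should be verified against the original report before any use; this is an illustration, not a clinical tool:

```python
import math

def egfr_ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """2021 race-free CKD-EPI creatinine equation, eGFR in mL/min/1.73 m^2.
    Coefficients quoted from memory; verify against the published equation."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine threshold
    alpha = -0.241 if female else -0.302  # exponent below the threshold
    ratio = scr_mg_dl / kappa
    egfr = (142.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr
```

For a 50-year-old man with a serum creatinine of 0.9 mg/dL this yields roughly 104 mL/min/1.73 m², with no race term anywhere in the calculation.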
View details for DOI 10.1016/j.urology.2021.03.018
View details for PubMedID 33798557
Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging.
PURPOSE: While multiparametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy.
METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated at the pixel and lesion levels in 322 patients, including 6 patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared its performance to that of radiologists.
RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall its sensitivity and specificity in detecting clinically significant cancer approached those of radiologists.
CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
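The lesion-level areas under the ROC curve reported above can be computed from per-lesion ground truth labels and per-lesion model scores. A minimal sketch with made-up values (the labels and scores below are illustrative, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical lesion-level evaluation: y_true = 1 for clinically
# significant lesions, y_score = the model's aggressive-cancer
# probability aggregated per lesion (e.g. the max pixel probability).
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.78, 0.65, 0.42, 0.30, 0.83, 0.72, 0.21, 0.70, 0.48])

auc = roc_auc_score(y_true, y_score)  # probability a random significant
                                      # lesion outscores a random benign one
```

Here the AUC is 0.92: of the 25 significant/benign lesion pairs, 23 are ranked correctly by the score.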
View details for DOI 10.1002/mp.14855
View details for PubMedID 33760269
- Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration SPIE-INT SOC OPTICAL ENGINEERING. 2021
- ProGNet: Prostate Gland Segmentation on MRI with Deep Learning SPIE-INT SOC OPTICAL ENGINEERING. 2021
- Weakly Supervised Registration of Prostate MRI and Histopathology Images SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 98-107
3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction.
Medical image analysis
2021; 69: 101957
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improving cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enable the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI.
We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence.
When compared to interpolation methods, our super-resolution reconstruction resulted in the highest peak signal-to-noise ratio (PSNR) relative to clinical 3D MRI (32.15 dB vs. 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes reconstructed using nearest neighbor, linear, or BSpline interpolation.
The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
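The PSNR comparison above (32.15 dB vs. 30.16 dB) uses the standard definition of peak signal-to-noise ratio between a reference volume and a reconstruction. A minimal sketch:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a reference image or
    volume and its reconstruction; data_range is the maximum possible
    intensity span (e.g. 255 for 8-bit data)."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical volumes
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Because the scale is logarithmic, the ~2 dB gap reported above corresponds to roughly a 37% reduction in mean squared error.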
View details for DOI 10.1016/j.media.2021.101957
View details for PubMedID 33550008
- AUTHOR REPLY. Urology 2021; 155: 76
Registration of pre-surgical MRI and histopathology images from radical prostatectomy via RAPSODI.
PURPOSE: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with pre-operative MRI and accurately map the extent of cancer from histopathology images onto MRI. We sought to develop an open-source, easy-to-use platform to align pre-surgical MRI and histopathology images of resected prostates from patients who underwent radical prostatectomy to create accurate cancer labels on MRI.
METHODS: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a 3D reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the pre-operative MRI.
RESULTS: We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients who underwent radical prostatectomy at three institutions with very different pathology processing and scanning. RAPSODI was evaluated on 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97±0.01 for the prostate, a Hausdorff distance of 1.99±0.70 mm for the prostate boundary, a urethra deviation of 3.09±1.45 mm, and a landmark deviation of 2.80±0.59 mm between registered histopathology images and MRI.
CONCLUSION: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
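The Hausdorff distance reported for the prostate boundary is the symmetric max-min distance between two point sets. A sketch using SciPy, treating each boundary as an (N, 2) array of points in mm (the representation here is an assumption for illustration):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) point sets,
    e.g. prostate boundary points (in mm) before and after registration.
    Takes the worse of the two directed distances."""
    d_ab = directed_hausdorff(contour_a, contour_b)[0]
    d_ba = directed_hausdorff(contour_b, contour_a)[0]
    return max(d_ab, d_ba)
```

Unlike Dice, which averages over the whole region, the Hausdorff distance reports the single worst boundary mismatch, so the 1.99 mm figure bounds the largest registration error along the prostate outline.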
View details for DOI 10.1002/mp.14337
View details for PubMedID 32564359
CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis
Medical Image Computing and Computer Assisted Intervention
View details for DOI 10.1007/978-3-030-59713-9_31
ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate.
Medical image analysis
2020; 68: 101919
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
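The final pipeline step, projecting cancer labels through an estimated transform, can be illustrated for the affine case. The matrix, offset, and mask below are made up for illustration; ProsRegNet itself estimates both affine and deformable transforms with neural networks:

```python
import numpy as np
from scipy.ndimage import affine_transform

# Hypothetical binary cancer label on a histopathology slice.
label = np.zeros((64, 64), dtype=np.uint8)
label[20:40, 20:40] = 1

# Assumed affine mapping from MRI-space coordinates back to
# histopathology-space coordinates (scaling plus translation).
matrix = np.array([[0.9, 0.0],
                   [0.0, 0.9]])
offset = np.array([3.0, 3.0])

# order=0 (nearest-neighbor) keeps the resampled label strictly binary;
# higher-order interpolation would blur the mask edges.
label_on_mri = affine_transform(label, matrix, offset=offset, order=0)
```

Nearest-neighbor resampling is the standard choice for label maps, since interpolated fractional values would have no meaning as cancer/no-cancer labels.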
View details for DOI 10.1016/j.media.2020.101919
View details for PubMedID 33385701
Risk of Depression after 5 Alpha Reductase Inhibitor Medication: Meta-Analysis.
The world journal of men's health
Although five-alpha reductase inhibitors (5-ARIs) are a standard treatment for benign prostatic hyperplasia (BPH) and alopecia, potential complications of 5-ARI therapy have recently become a concern. This study aimed to investigate the risk of depression after taking a 5-ARI and to quantify that risk using meta-analysis.
A total of 209,940 patients from five studies, including 207,798 in 5-ARI treatment groups and 110,118 in control groups, were included in the final analysis. Inclusion criteria for the final analysis required clinical outcomes regarding depression risk in BPH or alopecia patients. Overall hazard ratios (HRs) and odds ratios (ORs) for depression were analyzed. Moderator and sensitivity analyses were performed to determine whether the HR or OR was affected by any variables, including number of patients, age, study type, and control type.
The pooled overall HR for 5-ARI medication was 1.23 (95% confidence interval [CI], 0.99-1.54) in a random-effects model. The pooled overall OR was 1.19 (95% CI, 0.95-1.49), also in a random-effects model. Sub-group analysis showed that non-cohort studies had higher HR and OR values than cohort studies. Moderator analysis using meta-regression identified no variables that significantly affected the HR or OR outcomes. However, in sensitivity analysis, the HR increased significantly with age (p = 0.040).
Overall, the risk of depression after 5-ARI use was not significantly elevated; however, its clinical importance needs validation in further studies. These quantitative results may provide useful information for both clinicians and patients.
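Pooled random-effects hazard ratios like the one above are commonly computed with DerSimonian-Laird weighting on the log scale. A sketch of that computation (illustrative, not the study's code; inputs are per-study log HRs and their variances):

```python
import numpy as np

def pooled_log_hr(log_hrs, variances):
    """DerSimonian-Laird random-effects pooling of per-study log hazard
    ratios. Returns the pooled HR and its 95% confidence interval."""
    log_hrs = np.asarray(log_hrs, dtype=float)
    v = np.asarray(variances, dtype=float)

    # Fixed-effect estimate, used to measure between-study heterogeneity Q.
    w_fixed = 1.0 / v
    mu_fixed = np.sum(w_fixed * log_hrs) / np.sum(w_fixed)
    q = np.sum(w_fixed * (log_hrs - mu_fixed) ** 2)
    df = len(log_hrs) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)  # between-study variance (DL estimator)

    # Random-effects weights add tau^2 to each study's variance.
    w = 1.0 / (v + tau2)
    mu = np.sum(w * log_hrs) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se))
    return np.exp(mu), ci
```

A pooled 95% CI that crosses 1 (as with the 0.99-1.54 interval above) is exactly why the depression risk was reported as not significantly elevated.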
View details for DOI 10.5534/wjmh.190046
View details for PubMedID 31190484