Honors & Awards


  • Rising Stars in EECS, UC Berkeley (Nov. 2020)
  • MICCAI NIH Award, MICCAI (Oct. 2020)
  • Founders' Award of Excellence, Rensselaer Polytechnic Institute (Oct. 2018)

Professional Education


  • Bachelor of Engineering, Jadavpur University, India, Electrical Engineering (2011)
  • Master of Science, Rensselaer Polytechnic Institute, Electrical Engineering (2016)
  • Doctor of Philosophy, Rensselaer Polytechnic Institute, Electrical Engineering (2019)

Current Research and Scholarly Interests


My current research interests include medical image processing and multimodal data fusion and analysis for disease diagnosis.

All Publications


  • Bridging the gap between prostate radiology and pathology through machine learning. Medical physics Bhattacharya, I., Lim, D. S., Aung, H. L., Liu, X., Seetharaman, A., Kunder, C. A., Shao, W., Soerensen, S. J., Fan, R. E., Ghanouni, P., To'o, K. J., Brooks, J. D., Sonn, G. A., Rusu, M. 2022

    Abstract

    Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, Magnetic Resonance Imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients who underwent radical prostatectomy, and evaluated using 40 patients who underwent radical prostatectomy and 275 patients who underwent targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Radiologist labels missed cancers (ROC-AUC: 0.75 - 0.84), had lower lesion volumes (~68% of pathology lesions), and had lower Dice overlaps (0.24 - 0.28) when compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97 - 1, lesion Dice: 0.75 - 0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91 - 0.94), and had generalizable and comparable performance to pathologist-label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87 - 0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed performance higher than or comparable to that of pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.

    View details for DOI 10.1002/mp.15777

    View details for PubMedID 35633505
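
    The label comparison above hinges on lesion volumes and Dice overlaps between binary label masks. As a minimal sketch of the overlap metric (illustrative masks and names, not the paper's code), the Dice coefficient can be computed as:

      import numpy as np

      def dice_overlap(mask_a, mask_b):
          """Dice coefficient of two binary masks: 2*|A and B| / (|A| + |B|)."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          denom = a.sum() + b.sum()
          if denom == 0:
              return 1.0  # both masks empty; treat as perfect agreement
          return 2.0 * np.logical_and(a, b).sum() / denom

      # Hypothetical radiologist vs. pathology label masks on one MRI slice
      radiologist = np.zeros((256, 256), dtype=bool)
      pathology = np.zeros((256, 256), dtype=bool)
      radiologist[100:140, 100:140] = True
      pathology[110:160, 105:150] = True
      print(f"Dice: {dice_overlap(radiologist, pathology):.2f}")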

  • Detailed Analysis of MRI Concordance with Prostatectomy Histopathology Using Deep Learning-Based Digital Pathology. Hockman, L., Fan, R., Schmidt, B., Bhattacharya, I., Rusu, M., Sonn, G. Lippincott Williams & Wilkins. 2021: E813-E814
  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. Journal of Urology Soerensen, S., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B., To'o, K. J., Rusu, M., Sonn, G. A. 2021; 206 (3): 605-612
  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy. The Journal of urology Soerensen, S. J., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B. I., To'o, K. J., Rusu, M., Sonn, G. A. 2021

    Abstract

    PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic. MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p < 0.0001), HED (DSC=0.80, p < 0.0001), and radiology technicians (DSC=0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.

    View details for DOI 10.1097/JU.0000000000001783

    View details for PubMedID 33878887
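
    The comparisons above rest on per-case Dice similarity coefficients tested with paired t-tests. A minimal sketch of that statistical comparison, using hypothetical stand-in scores rather than the study's data:

      import numpy as np
      from scipy import stats

      # Hypothetical per-case Dice scores (stand-ins, not the study's data)
      rng = np.random.default_rng(0)
      dsc_prognet = np.clip(rng.normal(0.92, 0.03, 100), 0.0, 1.0)
      dsc_technician = np.clip(rng.normal(0.89, 0.05, 100), 0.0, 1.0)

      # Paired t-test on per-case DSCs, mirroring the evaluation above
      t_stat, p_value = stats.ttest_rel(dsc_prognet, dsc_technician)
      print(f"mean DSC: ProGNet={dsc_prognet.mean():.3f}, "
            f"technicians={dsc_technician.mean():.3f}, p={p_value:.1e}")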

  • Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging. Medical physics Seetharaman, A., Bhattacharya, I., Chen, L. C., Kunder, C. A., Shao, W., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., To'o, K. J., Sonn, G. A., Rusu, M. 2021

    Abstract

    PURPOSE: While multi-parametric Magnetic Resonance Imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients that underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients that underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including: 6 patients with normal MRI and no cancer, 23 patients that underwent radical prostatectomy, and 293 patients that underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.

    View details for DOI 10.1002/mp.14855

    View details for PubMedID 33760269
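
    SPCNet, as described above, classifies each MRI pixel as normal tissue, indolent cancer, or aggressive cancer. A toy fully convolutional sketch of per-pixel three-class training (illustrative only; it does not reproduce the SPCNet architecture):

      import torch
      import torch.nn as nn

      class ToyPerPixelNet(nn.Module):
          """Toy network emitting 3-class per-pixel logits
          (normal / indolent / aggressive). Not the SPCNet architecture."""
          def __init__(self, in_channels=3, n_classes=3):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_channels, 32, 3, padding=1),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(32, n_classes, 1),
              )

          def forward(self, x):
              return self.net(x)  # (B, 3, H, W) logits

      # Stand-in multi-parametric MRI input (e.g. T2, ADC, DWI channels)
      mri = torch.randn(2, 3, 128, 128)
      labels = torch.randint(0, 3, (2, 128, 128))  # per-pixel class indices
      loss = nn.CrossEntropyLoss()(ToyPerPixelNet()(mri), labels)
      loss.backward()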

  • Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration. Bolous, A., Seetharaman, A., Bhattacharya, I., Fan, R. E., Soerensen, S., Chen, L., Ghanouni, P., Sonn, G. A., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE. 2021

    View details for DOI 10.1117/12.2581557

    View details for Web of Science ID 000672800100052
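
    Image context restoration, named in the title above, is a self-supervised pretext task: random patch pairs are swapped to corrupt an image's spatial context, and a network is pretrained to restore the original. A minimal sketch of the corruption step (illustrative, not the paper's implementation):

      import random
      import torch

      def swap_patches(img, n_swaps=10, patch=16):
          """Swap random patch pairs: the intensity distribution is
          preserved, but spatial context is broken."""
          out = img.clone()
          h, w = out.shape[-2:]
          for _ in range(n_swaps):
              y1, x1 = random.randrange(h - patch), random.randrange(w - patch)
              y2, x2 = random.randrange(h - patch), random.randrange(w - patch)
              first = out[..., y1:y1 + patch, x1:x1 + patch].clone()
              out[..., y1:y1 + patch, x1:x1 + patch] = out[..., y2:y2 + patch, x2:x2 + patch]
              out[..., y2:y2 + patch, x2:x2 + patch] = first
          return out

      # Pretext task: pretrain a network to restore the original slice, e.g.
      #   restored = net(swap_patches(mri)); loss = mse_loss(restored, mri)
      mri = torch.randn(1, 1, 128, 128)
      corrupted = swap_patches(mri)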

  • ProGNet: Prostate Gland Segmentation on MRI with Deep Learning. Soerensen, S., Fan, R., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Borre, M., Chung, B., To'o, K., Sonn, G., Rusu, M., Isgum, Landman, B. A. SPIE. 2021

    View details for DOI 10.1117/12.2580448

    View details for Web of Science ID 000672800200091

  • Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Medical image analysis Bhattacharya, I., Seetharaman, A., Kunder, C., Shao, W., Chen, L. C., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M. 2021; 75: 102288

    Abstract

    Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason pattern ≥ 4) and indolent (Gleason pattern 3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men that underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men that underwent radical prostatectomy, 275 men that underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.

    View details for DOI 10.1016/j.media.2021.102288

    View details for PubMedID 34784540
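
    CorrSigNIA, as described above, learns features that are correlated between registered MRI and histopathology images. Classical canonical correlation analysis gives a simple, hypothetical analogue of that idea (the paper's feature learning pipeline is not reproduced here):

      import numpy as np
      from sklearn.cross_decomposition import CCA

      # Stand-in per-patch feature matrices from registered MRI and
      # histopathology images (hypothetical values, not the paper's features)
      rng = np.random.default_rng(0)
      mri_feats = rng.normal(size=(500, 20))
      path_feats = rng.normal(size=(500, 30))

      # CCA finds paired projections whose components are maximally
      # correlated across modalities -- a simple "correlated features" analogue
      cca = CCA(n_components=4)
      mri_c, path_c = cca.fit_transform(mri_feats, path_feats)
      print(mri_c.shape, path_c.shape)  # (500, 4) (500, 4)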

  • Weakly Supervised Registration of Prostate MRI and Histopathology Images. Shao, W., Bhattacharya, I., Soerensen, S. C., Kunder, C. A., Wang, J. B., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M., DeBruijne, M., Cattin, P. C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. Springer International Publishing AG. 2021: 98-107
  • Intensity Normalization of Prostate MRIs using Conditional Generative Adversarial Networks for Cancer Detection. DeSilvio, T., Moroianu, S., Bhattacharya, I., Seetharaman, A., Sonn, G., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE. 2021

    View details for DOI 10.1117/12.2582297

    View details for Web of Science ID 000672800100016

  • CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis. Medical Image Computing and Computer Assisted Intervention Bhattacharya, I., et al. 2020