Bio


Dr. Rusu is an Assistant Professor in the Department of Radiology and, by courtesy, the Departments of Urology and Biomedical Data Science at Stanford University, where she leads the Personalized Integrative Medicine Laboratory (PIMed). The PIMed Laboratory is multi-disciplinary and focuses on developing analytic methods for biomedical data integration, with a particular interest in radiology-pathology fusion to facilitate radiology image labeling. Radiology-pathology fusion allows the creation of detailed spatial labels that can later serve as input for advanced machine learning methods, such as deep learning. The lab's recent focus has been on applying deep learning to detect and differentiate aggressive from indolent prostate cancers on MRI using pathology information (both labels and image content), work recently published in the journals Medical Physics and Medical Image Analysis. The lab is also extending these approaches to ultrasound images.

Dr. Rusu received a Master of Engineering in Bioinformatics from the National Institute of Applied Sciences in Lyon, France. She continued her training at the University of Texas Health Science Center at Houston, where she received Master of Science and PhD degrees in Health Informatics for her work on integrating biomolecular structural data from cryo-electron micrographs and X-ray crystallography models.

During her postdoctoral training at Rutgers and Case Western Reserve University, Dr. Rusu developed computational tools for the integration and interpretation of multi-modal medical imaging data, with a focus on prostate and lung cancers. Prior to joining Stanford, she was a Lead Engineer and Medical Image Analysis Scientist at GE Global Research in Niskayuna, NY, where she developed analytic methods to characterize biological samples in microscopy images and pathologic conditions on MRI and CT.

Honors & Awards


  • R37 MERIT Award, NIH NCI (2022-2026)
  • Above and Beyond (6), GE Global Research (2015-2017)
  • School of Engineering Innovation Award, Case Western Reserve University (2014)
  • Postdoctoral Award for poster presentation at the Research ShowCASE, Case Western Reserve University (2013)
  • Winner, Grand Challenge Automated SEgmentation of Prostate Structures, NCI-ISBI (2013)
  • James T. and Nancy Beamer Willerson Endowed Scholarship, University of Texas Health Science Center at Houston (2010)
  • Paul Boyle Award for Excellence in Student Research, University of Texas Health Science Center at Houston (2007)
  • Undergraduate Research Fellowship, Keck Center for Computational and Structural Biology, Houston (2006)
  • International Mobility Fellowship, Rhône-Alpes Region, France (2005)

Professional Education


  • PhD, University of Texas Health Science Center at Houston, Health Informatics | Structural Bioinformatics (2011)
  • MS, University of Texas Health Science Center at Houston, Health Informatics | Structural Bioinformatics (2008)
  • Master of Engineering, National Institute of Applied Sciences, BioSciences | Bioinformatics and Modeling (2006)

Patents


  • Anant Madabhushi, Mirabela Rusu. "Disease characterization from fused pathology and radiology data", United States Patent US9767555B2, Case Western Reserve University

Current Research and Scholarly Interests


Dr. Mirabela Rusu focuses on developing analytic methods for biomedical data integration, with a particular interest in radiology-pathology fusion. Such integrative methods may be applied to create comprehensive multi-scale representations of biomedical processes and pathological conditions, thus enabling their in-depth characterization.

All Publications


  • PViT-AIR: Puzzling vision transformer-based affine image registration for multi histopathology and faxitron images of breast tissue. Medical image analysis Golestani, N., Wang, A., Moallem, G., Bean, G. R., Rusu, M. 2024; 99: 103356

    Abstract

    Breast cancer is a significant global public health concern, with various treatment options available based on tumor characteristics. Pathological examination of excision specimens after surgery provides essential information for treatment decisions. However, the manual selection of representative sections for histological examination is laborious and subjective, leading to potential sampling errors and variability, especially in carcinomas that have been previously treated with chemotherapy. Furthermore, the accurate identification of residual tumors presents significant challenges, emphasizing the need for systematic or assisted methods to address this issue. In order to enable the development of deep-learning algorithms for automated cancer detection on radiology images, it is crucial to perform radiology-pathology registration, which ensures the generation of accurately labeled ground truth data. The alignment of radiology and histopathology images plays a critical role in establishing reliable cancer labels for training deep-learning algorithms on radiology images. However, aligning these images is challenging due to their content and resolution differences, tissue deformation, artifacts, and imprecise correspondence. We present a novel deep learning-based pipeline for the affine registration of faxitron images, the x-ray representations of macrosections of ex-vivo breast tissue, and their corresponding histopathology images of tissue segments. The proposed model combines convolutional neural networks and vision transformers, allowing it to effectively capture both local and global information from the entire tissue macrosection as well as its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzling-based mechanism. To address the limitations of multi-modal ground truth data, we tackle the problem by training the model using synthetic mono-modal data in a weakly supervised manner. The trained model demonstrated successful performance in multi-modal registration, yielding registration results with an average landmark error of 1.51 mm (±2.40), and stitching distance of 1.15 mm (±0.94). The results indicate that the model performs significantly better than existing baselines, including both deep learning-based and iterative models, and it is also approximately 200 times faster than the iterative approach. This work bridges the gap in the current research and clinical workflow and has the potential to improve efficiency and accuracy in breast cancer evaluation and streamline pathology workflow.

    View details for DOI 10.1016/j.media.2024.103356

    View details for PubMedID 39378568
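
    The registration accuracy above is summarized by a mean landmark error, i.e., the average distance between corresponding landmarks after applying the estimated affine transform. Below is a minimal NumPy sketch of that metric; the transform and landmark pairs are made up for illustration, not taken from the paper.

```python
import numpy as np

def apply_affine(points_mm, A, t):
    """Map Nx2 landmark coordinates (in mm) through an affine transform x -> A @ x + t."""
    return points_mm @ A.T + t

def mean_landmark_error(moving_pts, fixed_pts, A, t):
    """Mean Euclidean distance (mm) between warped moving landmarks and fixed landmarks."""
    warped = apply_affine(moving_pts, A, t)
    return np.linalg.norm(warped - fixed_pts, axis=1).mean()

# Toy example: generate landmark pairs related by a small rotation + translation.
rng = np.random.default_rng(0)
fixed = rng.uniform(0, 50, size=(10, 2))            # landmarks on the faxitron image (mm)
theta = np.deg2rad(3.0)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, -0.5])
moving = apply_affine(fixed, A, t)                   # corresponding histopathology landmarks

# Recover the inverse transform and check the residual error is ~0.
A_inv = np.linalg.inv(A)
t_inv = -A_inv @ t
print(mean_landmark_error(moving, fixed, A_inv, t_inv))
```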

  • Trends in pre-biopsy MRI usage for prostate cancer detection, 2007-2022. Prostate cancer and prostatic diseases Soerensen, S. J., Li, S., Langston, M. E., Fan, R. E., Rusu, M., Sonn, G. A. 2024

    Abstract

    Clinical guidelines favor MRI before prostate biopsy due to proven benefits. However, adoption patterns across the US are unclear. This study used the Merative™ Marketscan® Commercial & Medicare Databases to analyze 872,829 prostate biopsies in 726,663 men from 2007-2022. Pre-biopsy pelvic MRI within 90 days was the primary outcome. Descriptive statistics and generalized estimating equations assessed changes over time, urban-rural differences, and state-level variation. Pre-biopsy MRI utilization increased significantly from 0.5% in 2007 to 35.5% in 2022, with faster adoption in urban areas (36.1% in 2022) versus rural areas (28.3% in 2022). Geographic disparities were notable, with higher utilization in California, New York, and Minnesota, and lower rates in the Southeast and Mountain West. The study reveals a paradigm shift in prostate cancer diagnostics towards MRI-guided approaches, influenced by evolving guidelines and clinical evidence. Disparities in access, particularly in rural areas and specific regions, highlight the need for targeted interventions to ensure equitable access to advanced diagnostic techniques.

    View details for DOI 10.1038/s41391-024-00896-y

    View details for PubMedID 39306635

    View details for PubMedCentralID 9084630
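
    The trend analysis above uses generalized estimating equations (GEEs) to account for repeated biopsies in the same patient. Here is a minimal statsmodels sketch of such a model on synthetic data; the variable names and effect sizes are illustrative, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic data: repeated biopsies per patient, MRI use rising by year.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "patient_id": rng.integers(0, 800, n),   # clustering unit: repeat biopsies per man
    "year": rng.integers(2007, 2023, n),
    "urban": rng.integers(0, 2, n),
})
logit = -4 + 0.25 * (df.year - 2007) + 0.4 * df.urban
df["pre_biopsy_mri"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# GEE logistic model with an exchangeable working correlation, so repeated
# observations within the same patient are not treated as independent.
model = sm.GEE.from_formula(
    "pre_biopsy_mri ~ year + urban", groups="patient_id", data=df,
    family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable(),
)
res = model.fit()
print(res.summary())
print("odds ratio per year:", np.exp(res.params["year"]))
```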

  • Intraprocedural Diffusion-weighted Imaging for Predicting Ablation Zone during MRI-guided Focused Ultrasound of Prostate Cancer. Radiology. Imaging cancer Bitton, R. R., Shao, W., Chodakeiwitz, Y., Brunsing, R. L., Sonn, G., Rusu, M., Ghanouni, P. 2024; 6 (5): e240009

    Abstract

    Purpose To compare diffusion-weighted imaging (DWI) with thermal dosimetry as a noncontrast method to predict ablation margins in individuals with prostate cancer treated with MRI-guided focused ultrasound (MRgFUS) ablation. Materials and Methods This secondary analysis of a prospective trial (ClinicalTrials.gov no. NCT01657942) included 17 participants (mean age, 64 years ± 6 [SD]; all male) who were treated for prostate cancer using MRgFUS in whom DWI was performed immediately after treatment. Ablation contours from computed thermal dosimetry and DWI as drawn by two blinded radiologists were compared against the reference standard of ablation assessment, posttreatment contrast-enhanced nonperfused volume (NPV) contours. The ability of each method to predict the ablation zone was analyzed quantitatively using Dice similarity coefficients (DSCs) and mean Hausdorff distances (mHDs). Results DWI revealed a hyperintense rim at the margin of the ablation zone. While DWI accurately helped predict treatment margins, thermal dose contours underestimated the extent of the ablation zone compared with the T1-weighted NPV imaging reference standard. Quantitatively, contour assessment between methods showed that DWI-drawn contours matched postcontrast NPV contours (mean DSC = 0.84 ± 0.05 for DWI, mHD = 0.27 mm ± 0.13) better than the thermal dose contours did (mean DSC = 0.64 ± 0.12, mHD = 1.53 mm ± 1.20) (P < .001). Conclusion This study demonstrates that DWI, which can visualize the ablation zone directly, is a promising noncontrast method that is robust to treatment-related bulk motion compared with thermal dosimetry and correlates better than thermal dosimetry with the reference standard T1-weighted NPV. Keywords: Interventional-Body, Ultrasound-High-Intensity Focused (HIFU), Genital/Reproductive, Prostate, Oncology, Imaging Sequences, MRI-guided Focused Ultrasound, MR Thermometry, Diffusion-weighted Imaging, Prostate Cancer ClinicalTrials.gov Identifier no. NCT01657942

    View details for DOI 10.1148/rycan.240009

    View details for PubMedID 39212524
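
    The contour comparison above relies on the Dice similarity coefficient and a mean Hausdorff distance. A small sketch of both metrics on toy 2D masks follows; conventions for "mean Hausdorff" vary, and this version symmetrically averages nearest-boundary distances.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def boundary_points_mm(mask, spacing):
    """Coordinates (in mm) of the one-pixel boundary of a 2D mask."""
    edge = mask & ~binary_erosion(mask)
    return np.argwhere(edge) * np.asarray(spacing)

def mean_hausdorff_mm(a, b, spacing=(1.0, 1.0)):
    """Symmetric mean of nearest-boundary distances between two masks, in mm."""
    pa, pb = boundary_points_mm(a, spacing), boundary_points_mm(b, spacing)
    return 0.5 * (cKDTree(pb).query(pa)[0].mean() + cKDTree(pa).query(pb)[0].mean())

# Toy contours: a reference NPV disk vs. a slightly shifted DWI-drawn disk.
yy, xx = np.mgrid[:64, :64]
npv = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
dwi = (yy - 33) ** 2 + (xx - 31) ** 2 < 15 ** 2
print(f"Dice = {dice(npv, dwi):.3f}, mHD = {mean_hausdorff_mm(npv, dwi):.2f} mm")
```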

  • Inter-reader Agreement for Prostate Cancer Detection Using Micro-ultrasound: A Multi-institutional Study. European Urology Open Science Zhou, S. R., Choi, M., Vesal, S., Kinnaird, A., Brisbane, W. G., Lughezzani, G., Maffei, D., Fasulo, V., Albers, P., Zhang, L., Kornberg, Z., Fan, R. E., Shao, W., Rusu, M., Sonn, G. A. 2024; 66: 93-100
  • External validation of an artificial intelligence model for Gleason grading of prostate cancer on prostatectomy specimens. BJU international Schmidt, B., Soerensen, S. J., Bhambhvani, H. P., Fan, R. E., Bhattacharya, I., Choi, M. H., Kunder, C. A., Kao, C. S., Higgins, J., Rusu, M., Sonn, G. A. 2024

    Abstract

    To externally validate the performance of the DeepDx Prostate artificial intelligence (AI) algorithm (Deep Bio Inc., Seoul, South Korea) for Gleason grading on whole-mount prostate histopathology, considering potential variations observed when applying AI models trained on biopsy samples to radical prostatectomy (RP) specimens due to inherent differences in tissue representation and sample size. The commercially available DeepDx Prostate AI algorithm is an automated Gleason grading system that was previously trained using 1133 prostate core biopsy images and validated on 700 biopsy images from two institutions. We assessed the AI algorithm's performance, which outputs Gleason patterns (3, 4, or 5), on 500 1-mm2 tiles created from 150 whole-mount RP specimens from a third institution. These patterns were then grouped into grade groups (GGs) for comparison with expert pathologist assessments. The reference standard was the International Society of Urological Pathology GG as established by two experienced uropathologists with a third expert to adjudicate discordant cases. We defined the main metric as the agreement with the reference standard, using Cohen's kappa. The agreement between the two experienced pathologists in determining GGs at the tile level had a quadratically weighted Cohen's kappa of 0.94. The agreement between the AI algorithm and the reference standard in differentiating cancerous vs non-cancerous tissue had an unweighted Cohen's kappa of 0.91. Additionally, the AI algorithm's agreement with the reference standard in classifying tiles into GGs had a quadratically weighted Cohen's kappa of 0.89. In distinguishing cancerous vs non-cancerous tissue, the AI algorithm achieved a sensitivity of 0.997 and specificity of 0.88; in classifying GG ≥2 vs GG 1 and non-cancerous tissue, it demonstrated a sensitivity of 0.98 and specificity of 0.85. The DeepDx Prostate AI algorithm had excellent agreement with expert uropathologists and performance in cancer identification and grading on RP specimens, despite being trained on biopsy specimens from an entirely different patient population.

    View details for DOI 10.1111/bju.16464

    View details for PubMedID 38989669
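
    Agreement in this validation is quantified with Cohen's kappa, quadratically weighted for the ordinal grade groups and unweighted for the binary cancer task. A minimal scikit-learn sketch on hypothetical tile-level labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical tile-level grade groups (0 = benign, 1-5 = ISUP GG) from the
# reference-standard pathologists and from an AI algorithm.
reference = [0, 1, 2, 2, 3, 4, 5, 1, 0, 2, 3, 3]
ai_output = [0, 1, 2, 3, 3, 4, 4, 1, 0, 2, 3, 2]

# Quadratically weighted kappa penalises large grade disagreements more heavily
# than adjacent-grade ones, matching the ordinal agreement metric in the study.
print(cohen_kappa_score(reference, ai_output, weights="quadratic"))

# Unweighted kappa for the binary cancer vs. non-cancer task.
cancer_ref = [int(g > 0) for g in reference]
cancer_ai = [int(g > 0) for g in ai_output]
print(cohen_kappa_score(cancer_ref, cancer_ai))
```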

  • Artificial intelligence and radiologists in prostate cancer detection on MRI (PI-CAI): an international, paired, non-inferiority, confirmatory study. The Lancet. Oncology Saha, A., Bosma, J. S., Twilt, J. J., van Ginneken, B., Bjartell, A., Padhani, A. R., Bonekamp, D., Villeirs, G., Salomon, G., Giannarini, G., Kalpathy-Cramer, J., Barentsz, J., Maier-Hein, K. H., Rusu, M., Rouvière, O., van den Bergh, R., Panebianco, V., Kasivisvanathan, V., Obuchowski, N. A., Yakar, D., Elschot, M., Veltman, J., Fütterer, J. J., de Rooij, M., Huisman, H. 2024

    Abstract

    Artificial intelligence (AI) systems can potentially aid the diagnostic pathway of prostate cancer by alleviating the increasing workload, preventing overdiagnosis, and reducing the dependence on experienced radiologists. We aimed to investigate the performance of AI systems at detecting clinically significant prostate cancer on MRI in comparison with radiologists using the Prostate Imaging-Reporting and Data System version 2.1 (PI-RADS 2.1) and the standard of care in multidisciplinary routine practice at scale. In this international, paired, non-inferiority, confirmatory study, we trained and externally validated an AI system (developed within an international consortium) for detecting Gleason grade group 2 or greater cancers using a retrospective cohort of 10 207 MRI examinations from 9129 patients. Of these examinations, 9207 cases from three centres (11 sites) based in the Netherlands were used for training and tuning, and 1000 cases from four centres (12 sites) based in the Netherlands and Norway were used for testing. In parallel, we facilitated a multireader, multicase observer study with 62 radiologists (45 centres in 20 countries; median 7 [IQR 5-10] years of experience in reading prostate MRI) using PI-RADS (2.1) on 400 paired MRI examinations from the testing cohort. Primary endpoints were the sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC) of the AI system in comparison with that of all readers using PI-RADS (2.1) and in comparison with that of the historical radiology readings made during multidisciplinary routine practice (ie, the standard of care with the aid of patient history and peer consultation). Histopathology and at least 3 years (median 5 [IQR 4-6] years) of follow-up were used to establish the reference standard. The statistical analysis plan was prespecified with a primary hypothesis of non-inferiority (considering a margin of 0·05) and a secondary hypothesis of superiority towards the AI system, if non-inferiority was confirmed. This study was registered at ClinicalTrials.gov, NCT05489341. Of the 10 207 examinations included from Jan 1, 2012, through Dec 31, 2021, 2440 cases had histologically confirmed Gleason grade group 2 or greater prostate cancer. In the subset of 400 testing cases in which the AI system was compared with the radiologists participating in the reader study, the AI system showed a statistically superior and non-inferior AUROC of 0·91 (95% CI 0·87-0·94; p<0·0001), in comparison to the pool of 62 radiologists with an AUROC of 0·86 (0·83-0·89), with a lower boundary of the two-sided 95% Wald CI for the difference in AUROC of 0·02. At the mean PI-RADS 3 or greater operating point of all readers, the AI system detected 6·8% more cases with Gleason grade group 2 or greater cancers at the same specificity (57·7%, 95% CI 51·6-63·3), or 50·4% fewer false-positive results and 20·0% fewer cases with Gleason grade group 1 cancers at the same sensitivity (89·4%, 95% CI 85·3-92·9). In all 1000 testing cases where the AI system was compared with the radiology readings made during multidisciplinary practice, non-inferiority was confirmed, as the AI system showed lower specificity (68·9% [95% CI 65·3-72·4] vs 69·0% [65·5-72·5]) at the same sensitivity (96·1%, 94·0-98·2) as the PI-RADS 3 or greater operating point. The lower boundary of the two-sided 95% Wald CI for the difference in specificity (-0·04) was greater than the non-inferiority margin (-0·05) and a p value below the significance threshold was reached (p<0·001). An AI system was superior to radiologists using PI-RADS (2.1), on average, at detecting clinically significant prostate cancer and comparable to the standard of care. Such a system shows the potential to be a supportive tool within a primary diagnostic setting, with several associated benefits for patients and radiologists. Prospective validation is needed to test clinical applicability of this system. Funding: Health~Holland and EU Horizon 2020.

    View details for DOI 10.1016/S1470-2045(24)00220-1

    View details for PubMedID 38876123
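
    The primary hypothesis above is a non-inferiority test on the paired AUROC difference with a 0.05 margin. Below is a simplified sketch of that logic using a bootstrap Wald-style interval; the study's prespecified analysis is more involved, and the data here are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def noninferiority_auc(y, score_ai, score_rad, margin=0.05, n_boot=2000, seed=0):
    """Bootstrap the paired AUROC difference (AI - radiologists) and apply a
    Wald-style CI: non-inferiority holds if the lower 95% bound exceeds -margin."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # AUC needs both classes in the resample
            continue
        diffs.append(roc_auc_score(y[idx], score_ai[idx]) -
                     roc_auc_score(y[idx], score_rad[idx]))
    diffs = np.asarray(diffs)
    est, se = diffs.mean(), diffs.std(ddof=1)
    lo, hi = est - 1.96 * se, est + 1.96 * se
    return est, (lo, hi), lo > -margin   # True => non-inferiority established

# Toy paired scores on the same 400 cases.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 400)
ai = y * 0.8 + rng.normal(0, 0.45, 400)
rad = y * 0.7 + rng.normal(0, 0.5, 400)
print(noninferiority_auc(y, ai, rad))
```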

  • Using Machine Learning Models to Identify Factors Associated With 30-Day Readmissions After Posterior Cervical Fusions: A Longitudinal Cohort Study. Neurospine Gonzalez-Suarez, A. D., Rezaii, P. G., Herrick, D., Tigchelaar, S. S., Ratliff, J. K., Rusu, M., Scheinker, D., Jeon, I., Desai, A. M. 2024

    Abstract

    Readmission rates after posterior cervical fusion (PCF) significantly impact patients and healthcare, with complication rates at 15%-5% and up to 12% 90-day readmission rates. In this study, we aim to test whether machine learning (ML) models that capture interfactorial interactions outperform traditional logistic regression (LR) in identifying readmission-associated factors. The Optum Clinformatics Data Mart database was used to identify patients who underwent PCF between 2004-2017. To determine factors associated with 30-day readmissions, 5 ML models were generated and evaluated, including a multivariate LR (MLR) model. Then, the best-performing model, Gradient Boosting Machine (GBM), was compared to the LACE (Length patient stay in the hospital, Acuity of admission of patient in the hospital, Comorbidity, and Emergency visit) index regarding potential cost savings from algorithm implementation. This study included 4,130 patients, 874 of which were readmitted within 30 days. When analyzed and scaled, we found that patient discharge status, comorbidities, and number of procedure codes were factors that influenced MLR, while patient discharge status, billed admission charge, and length of stay influenced the GBM model. The GBM model significantly outperformed MLR in predicting unplanned readmissions (mean area under the receiver operating characteristic curve, 0.846 vs. 0.829; p<0.001), while also projecting an average cost savings of 50% more than the LACE index. Five models (GBM, XGBoost [extreme gradient boosting], RF [random forest], LASSO [least absolute shrinkage and selection operator], and MLR) were evaluated, among which the GBM model exhibited superior predictive performance, robustness, and accuracy. Factors associated with readmissions impact LR and GBM models differently, suggesting that these models can be used complementarily. When analyzing PCF procedures, the GBM model resulted in greater predictive performance and was associated with higher theoretical cost savings for readmissions associated with PCF complications.

    View details for DOI 10.14245/ns.2347340.670

    View details for PubMedID 38768945
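
    The model comparison above pits a gradient boosting machine against multivariate logistic regression on AUROC. A compact scikit-learn sketch of that comparison on synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the claims-derived features (discharge status,
# length of stay, prior admissions, comorbidities, ...), with ~20% readmissions.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUROC = {auc.mean():.3f} +/- {auc.std():.3f}")
```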

  • PREDICTORS OF TREATMENT FAILURE AFTER FOCAL HIGH-INTENSITY FOCUSED ULTRASOUND (HIFU) OF LOCALIZED PROSTATE CANCER Soerensen, S., Sommer, E. R., Zhou, S. R., Rusu, M., Fan, R. E., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2024: E411-E412
  • ARTIFICIAL INTELLIGENCE-ASSISTED PROSTATE CANCER DETECTION ON B-MODE TRANSRECTAL ULTRASOUND IMAGES Bhattacharya, I., Vesal, S., Jahanandish, H., Choi, M., Zhou, S., Kornberg, Z., Sommer, E., Fan, R. E., Brooks, J. D., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2024: E511
  • AI VS. UROLOGISTS: A COMPARATIVE ANALYSIS FOR PROSTATE CANCER DETECTION ON TRANSRECTAL B-MODE ULTRASOUND Vesal, S., Bhattacharya, I., Jahanandish, H., Choi, M., Zhou, S., Kornberg, Z., Sommer, E., Fan, R. E., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2024: E1056
  • RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate. Computers in biology and medicine Shao, W., Vesal, S., Soerensen, S. J., Bhattacharya, I., Golestani, N., Yamashita, R., Kunder, C. A., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M. 2024; 173: 108318

    Abstract

    Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.

    View details for DOI 10.1016/j.compbiomed.2024.108318

    View details for PubMedID 38522253
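
    A core step in pipelines like RAPHIA is mapping histopathology cancer labels onto the MRI grid through an estimated deformation. Here is a minimal, illustrative sketch of label warping with a dense displacement field; RAPHIA's networks estimate the field itself, which this toy example simply hard-codes.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_label(label, disp):
    """Warp a 2D binary label through a dense displacement field.
    disp has shape (2, H, W): per-pixel (row, col) offsets into the moving image."""
    h, w = label.shape
    rows, cols = np.mgrid[:h, :w].astype(float)
    coords = np.stack([rows + disp[0], cols + disp[1]])
    # order=0 keeps the mask binary (nearest-neighbour sampling).
    return map_coordinates(label.astype(float), coords, order=0)

# Toy example: a square cancer label shifted 3 pixels down by the 'registration'.
label = np.zeros((64, 64))
label[20:30, 20:30] = 1
disp = np.zeros((2, 64, 64))
disp[0] = -3.0   # sampling 3 rows up moves the content down in the output
warped = warp_label(label, disp)
print(label.sum(), warped.sum())   # label area is preserved by the shift
```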

  • Improving Automated Prostate Cancer Detection and Classification Accuracy with Multi-scale Cancer Information Li, C., Bhattacharya, I., Vesal, S., Saunders, S., Soerensen, S., Fan, R. E., Sonn, G. A., Rusu, M., Cao, Xu, Rekik, Cui, Z., Ouyang SPRINGER INTERNATIONAL PUBLISHING AG. 2024: 341-350
  • A deep learning framework to assess the feasibility of localizing prostate cancer on b-mode transrectal ultrasound images Jahanandish, H., Vesal, S., Bhattacharya, I., Li, C., Fan, R. E., Sonn, G. A., Rusu, M., Boehm, B., Bottenus, N. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3008819

    View details for Web of Science ID 001223524400023

  • Deep Learning for Prostate and Central Gland Segmentation on Micro-Ultrasound Images Zhang, L., Zhou, S., Choi, M., Fan, R. E., Sang, S., Sonn, G. A., Rusu, M., Boehm, B., Bottenus, N. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3008845

    View details for Web of Science ID 001223524400005

  • SwinTransformer-Based Affine Registration of MRI and Ultrasound Images of the Prostate Sang, S., Jahanandish, H., Li, X., Vesal, S., Bhattacharya, I., Zhang, L., Fan, R. E., Sonn, G., Rusu, M., Boehm, B., Bottenus, N. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3008797

    View details for Web of Science ID 001223524400006

  • Assessing breast cancer chemotherapy response in radiology and pathology reports via a Large Language Model Dodhia, P., Meepagala, S., Moallem, G., Rubin, D., Bean, G., Rusu, M., Yoshida, H., Wu, S. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3006495

    View details for Web of Science ID 001219280700001

  • Automated Labeling of Spondylolisthesis Cases through Spinal MRI Radiology Report Interpretation using ChatGPT Moallem, G., Gonzalez, A., Desai, A., Rusu, M., Chen, W., Astley, S. M. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3006999

    View details for Web of Science ID 001208134600098

  • ArtHiFy: Artificial Histopathology-style Features for Improving MRI-Based Prostate Cancer Detection Bhattacharya, I., Shao, W., Li, X., Soerensen, S. C., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M., Chen, W., Astley, S. M. SPIE-INT SOC OPTICAL ENGINEERING. 2024

    View details for DOI 10.1117/12.3006879

    View details for Web of Science ID 001208134600061

  • Prediction and Mapping of Intraprostatic Tumor Extent with Artificial Intelligence. European urology open science Priester, A., Fan, R. E., Shubert, J., Rusu, M., Vesal, S., Shao, W., Khandwala, Y. S., Marks, L. S., Natarajan, S., Sonn, G. A. 2023; 54: 20-27

    Abstract

    Background: Magnetic resonance imaging (MRI) underestimation of prostate cancer extent complicates the definition of focal treatment margins. Objective: To validate focal treatment margins produced by an artificial intelligence (AI) model. Design setting and participants: Testing was conducted retrospectively in an independent dataset of 50 consecutive patients who had radical prostatectomy for intermediate-risk cancer. An AI deep learning model incorporated multimodal imaging and biopsy data to produce three-dimensional cancer estimation maps and margins. AI margins were compared with conventional MRI regions of interest (ROIs), 10-mm margins around ROIs, and hemigland margins. The AI model also furnished predictions of negative surgical margin probability, which were assessed for accuracy. Outcome measurements and statistical analysis: Comparing AI with conventional margins, sensitivity was evaluated using Wilcoxon signed-rank tests and negative margin rates using chi-square tests. Predicted versus observed negative margin probability was assessed using linear regression. Clinically significant prostate cancer (International Society of Urological Pathology grade ≥2) delineated on whole-mount histopathology served as ground truth. Results and limitations: The mean sensitivity for cancer-bearing voxels was higher for AI margins (97%) than for conventional ROIs (37%, p<0.001), 10-mm ROI margins (93%, p=0.24), and hemigland margins (94%, p<0.001). For index lesions, AI margins were more often negative (90%) than conventional ROIs (0%, p<0.001), 10-mm ROI margins (82%, p=0.24), and hemigland margins (66%, p=0.004). Predicted and observed negative margin probabilities were strongly correlated (R2=0.98, median error=4%). Limitations include a validation dataset derived from a single institution's prostatectomy population. Conclusions: The AI model was accurate and effective in an independent test set. This approach could improve and standardize treatment margin definition, potentially reducing cancer recurrence rates. Furthermore, an accurate assessment of negative margin probability could facilitate informed decision-making for patients and physicians. Patient summary: Artificial intelligence was used to predict the extent of tumors in surgically removed prostate specimens. It predicted tumor margins more accurately than conventional methods.

    View details for DOI 10.1016/j.euros.2023.05.018

    View details for PubMedID 37545845
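
    Margin quality above is assessed via voxel-level sensitivity and negative-margin status. A small sketch of both computations on toy 3D masks (the helper names are hypothetical, not from the paper):

```python
import numpy as np

def margin_sensitivity(cancer, margin):
    """Fraction of ground-truth cancer voxels contained inside a treatment margin."""
    cancer, margin = cancer.astype(bool), margin.astype(bool)
    return (cancer & margin).sum() / cancer.sum()

def is_negative_margin(cancer, margin):
    """A margin is 'negative' if no significant cancer voxel lies outside it."""
    return not np.any(cancer & ~margin)

# Toy volume: a cancer blob and a margin that misses a small part of it.
cancer = np.zeros((32, 32, 32), bool)
cancer[10:20, 10:20, 10:20] = True
margin = np.zeros((32, 32, 32), bool)
margin[9:19, 9:21, 9:21] = True
print(margin_sensitivity(cancer, margin), is_negative_margin(cancer, margin))
```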

  • Identification of Factors Associated With 30-Day Readmissions After Posterior Lumbar Fusion Using Machine Learning and Traditional Models: A National Longitudinal Database Study. Spine Rezaii, P. G., Herrick, D., Ratliff, J. K., Rusu, M., Scheinker, D., Desai, A. M. 2023

    Abstract

    STUDY DESIGN: Retrospective cohort study. OBJECTIVE: To identify factors associated with readmissions after PLF using machine learning and logistic regression (LR) models. SUMMARY OF BACKGROUND DATA: Readmissions following posterior lumbar fusion (PLF) place significant health and financial burden on the patient and overall healthcare system. METHODS: The Optum Clinformatics Data Mart database was used to identify patients who underwent posterior lumbar laminectomy, fusion, and instrumentation between 2004 and 2017. Four machine learning models and a multivariable LR model were used to assess factors most closely associated with 30-day readmission. These models were also evaluated in terms of ability to predict unplanned 30-day readmissions. The top performing model (Gradient Boosting Machine; GBM) was then compared to the validated LACE index in terms of potential cost savings associated with implementation of the model. RESULTS: A total of 18,981 patients were included, of which 3,080 (16.2%) were readmitted within 30 days of initial admission. Discharge status, prior admission, and geographic division were most influential for the LR model, while discharge status, length of stay, and prior admissions had greatest relevance for the GBM model. GBM outperformed LR in predicting unplanned 30-day readmission (mean AUC 0.865 vs. 0.850, P<0.0001). Use of GBM also achieved a projected 80% decrease in readmission-associated costs relative to those achieved by the LACE index model. CONCLUSIONS: Factors associated with readmission vary in terms of predictive influence based on standard logistic regression and machine learning models used, highlighting the complementary roles these models have in identifying relevant factors for prediction of 30-day readmissions. For posterior lumbar fusion procedures, Gradient Boosting Machine yielded greatest predictive ability and associated cost savings for readmission. LEVEL OF EVIDENCE: 3.

    View details for DOI 10.1097/BRS.0000000000004664

    View details for PubMedID 37027190

  • DETECTION OF CLINICALLY SIGNIFICANT PROSTATE CANCER ON MRI: A COMPARISON OF AN ARTIFICIAL INTELLIGENCE MODEL VERSUS RADIOLOGISTS Soerensen, S., Fan, R. E., Bhattacharya, I., Lim, D. S., Ahmadi, S., Li, X., Vesal, S., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2023: E103
  • IMPROVING PROSTATE CANCER DETECTION ON MRI WITH DEEP LEARNING, CLINICAL VARIABLES, AND RADIOMICS Saunders, S., Li, X., Vesal, S., Bhattacharya, I., Soerensen, S. C., Fan, R. E., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2023: E665
  • Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Transactions on Medical Imaging Hering, A., Hansen, L., Mok, T. W., Chung, A. S., Siebert, H., Hager, S., Lange, A., Kuckertz, S., Heldmann, S., Shao, W., Vesal, S., Rusu, M., Sonn, G., Estienne, T., Vakalopoulou, M., Han, L., Huang, Y., Yap, P., Brudfors, M., Balbastre, Y., Joutard, S., Modat, M., Lifshitz, G., Raviv, D., Lv, J., Li, Q., Jaouen, V., Visvikis, D., Fourcade, C., Rubeaux, M., Pan, W., Xu, Z., Jian, B., De Benetti, F., Wodzinski, M., Gunnarsson, N., Sjolund, J., Grzech, D., Qiu, H., Li, Z., Thorley, A., Duan, J., Grossbroehmer, C., Hoopes, A., Reinertsen, I., Xiao, Y., Landman, B., Huo, Y., Murphy, K., Lessmann, N., van Ginneken, B., Dalca, A. V., Heinrich, M. P. 2023; 42 (3): 697-712

    Abstract

    Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.

    View details for DOI 10.1109/TMI.2022.3213983

    View details for Web of Science ID 000971629600011

    View details for PubMedID 36264729

  • MIC-CUSP: Multimodal Image Correlations for Ultrasound-Based Prostate Cancer Detection Bhattacharya, I., Vesal, S., Jahanandish, H., Choi, M., Zhou, S., Kornberg, Z., Sommer, E., Fan, R., Brooks, J., Sonn, G., Rusu, M., Kainz, B., Noble, A., Schnabel, J., Khanal, B., Muller, J. P., Day, T. SPRINGER INTERNATIONAL PUBLISHING AG. 2023: 121-131
  • BreastRegNet: A Deep Learning Framework for Registration of Breast Faxitron and Histopathology Images Golestani, N., Wang, A., Bean, G. R., Rusu, M., Hering, A., Woo, J., Silva, W., Li, Fu, H., Liu, Xing, F., Purushotham, S., Mathai, T. S., Mukherjee, P., DeGrauw, M., Tan, R. B., Corbetta, Kotter, E., Reyes, M., Baumgartner, C. F., Li, Q., Leahy, R., Dong, B., Chen, H., Huo, Y., Lv, J., Xu, Li, Mahapatra, D., Cheng, L., Petitjean, C., Presles, B. SPRINGER INTERNATIONAL PUBLISHING AG. 2023: 182-192
  • The Association of Tissue Change and Treatment Success During High-intensity Focused Ultrasound Focal Therapy for Prostate Cancer. European urology focus Khandwala, Y. S., Soerensen, S. J., Morisetty, S., Ghanouni, P., Fan, R. E., Vesal, S., Rusu, M., Sonn, G. A. 2022

    Abstract

    BACKGROUND: Tissue preservation strategies have been increasingly used for the management of localized prostate cancer. Focal ablation using ultrasound-guided high-intensity focused ultrasound (HIFU) has demonstrated promising short and medium-term oncological outcomes. Advancements in HIFU therapy such as the introduction of tissue change monitoring (TCM) aim to further improve treatment efficacy. OBJECTIVE: To evaluate the association between intraoperative TCM during HIFU focal therapy for localized prostate cancer and oncological outcomes 12 mo afterward. DESIGN, SETTING, AND PARTICIPANTS: Seventy consecutive men at a single institution with prostate cancer were prospectively enrolled. Men with prior treatment, metastases, or pelvic radiation were excluded to obtain a final cohort of 55 men. INTERVENTION: All men underwent HIFU focal therapy followed by magnetic resonance (MR)-fusion biopsy 12 mo later. Tissue change was quantified intraoperatively by measuring the backscatter of ultrasound waves during ablation. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Gleason grade group (GG) ≥2 cancer on postablation biopsy was the primary outcome. Secondary outcomes included GG ≥1 cancer, Prostate Imaging Reporting and Data System (PI-RADS) scores ≥3, and evidence of tissue destruction on post-treatment magnetic resonance imaging (MRI). A Student's t-test analysis was performed to evaluate the mean TCM scores and efficacy of ablation measured by histopathology. Multivariate logistic regression was also performed to identify the odds of residual cancer for each unit increase in the TCM score. RESULTS AND LIMITATIONS: A lower mean TCM score within the region of the tumor (0.70 vs 0.97, p=0.02) was associated with the presence of persistent GG ≥2 cancer after HIFU treatment. Adjusting for initial prostate-specific antigen, PI-RADS score, Gleason GG, positive cores, and age, each incremental increase of TCM was associated with an 89% reduction in the odds (odds ratio: 0.11, confidence interval: 0.01-0.97) of having residual GG ≥2 cancer on postablation biopsy. Men with higher mean TCM scores (0.99 vs 0.72, p=0.02) at the time of treatment were less likely to have abnormal MRI (PI-RADS ≥3) at 12 mo postoperatively. Cases with high TCM scores also had greater tissue destruction measured on MRI and fewer visible lesions on postablation MRI. CONCLUSIONS: Tissue change measured using TCM values during focal HIFU of the prostate was associated with histopathology and radiological outcomes 12 mo after the procedure. PATIENT SUMMARY: In this report, we looked at how well ultrasound changes of the prostate during focal high-intensity focused ultrasound (HIFU) therapy for the treatment of prostate cancer predict patient outcomes. We found that greater tissue change measured by the HIFU device was associated with less residual cancer at 1 yr. This tool should be used to ensure optimal ablation of the cancer and may improve focal therapy outcomes in the future.

    View details for DOI 10.1016/j.euf.2022.10.010

    View details for PubMedID 36372735
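
    The 89% odds reduction per unit TCM quoted above comes from exponentiating a logistic regression coefficient. A minimal statsmodels sketch of that calculation on synthetic data (covariates and effect sizes are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in: residual GG >= 2 cancer as a function of the mean TCM
# score, adjusting for covariates (the study adjusted for PSA, PI-RADS, GG, ...).
rng = np.random.default_rng(0)
n = 55
df = pd.DataFrame({"tcm": rng.uniform(0.4, 1.3, n),
                   "psa": rng.lognormal(2, 0.4, n),
                   "age": rng.normal(68, 6, n)})
logit = 2.0 - 2.2 * df.tcm + 0.02 * (df.psa - 8)
df["residual"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["tcm", "psa", "age"]])
res = sm.Logit(df["residual"], X).fit(disp=0)
or_tcm = np.exp(res.params["tcm"])            # odds ratio per unit TCM increase
ci = np.exp(res.conf_int().loc["tcm"])
print(f"OR per unit TCM = {or_tcm:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```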

  • A review of artificial intelligence in prostate cancer detection on imaging. Therapeutic advances in urology Bhattacharya, I., Khandwala, Y. S., Vesal, S., Shao, W., Yang, Q., Soerensen, S. J., Fan, R. E., Ghanouni, P., Kunder, C. A., Brooks, J. D., Hu, Y., Rusu, M., Sonn, G. A. 2022; 14: 17562872221128791

    Abstract

    A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.

    View details for DOI 10.1177/17562872221128791

    View details for PubMedID 36249889

    View details for PubMedCentralID PMC9554123

  • Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Medical image analysis Vesal, S., Gayo, I., Bhattacharya, I., Natarajan, S., Marks, L. S., Barratt, D. C., Fan, R. E., Hu, Y., Sonn, G. A., Rusu, M. 2022; 82: 102620

    Abstract

    Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7mm and Dice: 82.0±0.03; HD95: 7.1mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.

    View details for DOI 10.1016/j.media.2022.102620

    View details for PubMedID 36148705
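
    The knowledge distillation loss mentioned above penalizes divergence between the finetuned model and a frozen copy trained on the original institution, which preserves previously learned knowledge. Below is a minimal PyTorch sketch of one common soft-label formulation; the paper's exact loss and weighting may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label knowledge distillation: KL divergence between temperature-
    softened teacher and student segmentation logits (shape: B x C x H x W)."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# During finetuning on a new institution, the total loss mixes the supervised
# segmentation loss on the new data with a distillation term that anchors the
# model to the frozen copy trained on the original institution.
student = torch.randn(2, 2, 64, 64, requires_grad=True)   # new-model logits
teacher = torch.randn(2, 2, 64, 64)                        # frozen original model
labels = torch.randint(0, 2, (2, 64, 64))
loss = F.cross_entropy(student, labels) + 0.5 * distillation_loss(student, teacher)
loss.backward()
print(loss.item())
```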

  • Evaluation of post-ablation mpMRI as a predictor of residual prostate cancer after focal high intensity focused ultrasound (HIFU) ablation. Urologic oncology Khandwala, Y. S., Morisetty, S., Ghanouni, P., Fan, R. E., Soerensen, S. J., Rusu, M., Sonn, G. A. 2022

    Abstract

    PURPOSE: To evaluate the performance of multiparametric magnetic resonance imaging (mpMRI) and PSA testing in follow-up after high intensity focused ultrasound (HIFU) focal therapy for localized prostate cancer. METHODS: A total of 73 men with localized prostate cancer were prospectively enrolled and underwent focal HIFU followed by per-protocol PSA and mpMRI with systematic plus targeted biopsies at 12 months after treatment. We evaluated the association between post-treatment mpMRI and PSA with disease persistence on the post-ablation biopsy. We also assessed post-treatment functional and oncological outcomes. RESULTS: Median age was 69 years (Interquartile Range (IQR): 66-74) and median PSA was 6.9 ng/dL (IQR: 5.3-9.9). Of 19 men with persistent GG ≥ 2 disease, 58% (11 men) had no visible lesions on MRI. In the 14 men with PIRADS 4 or 5 lesions, 7 (50%) had either no cancer or GG 1 cancer at biopsy. Men with false negative mpMRI findings had higher PSA density (0.16 vs. 0.07 ng/mL2, P = 0.01). No change occurred in the mean Sexual Health Inventory for Men (SHIM) survey scores (17.0 at baseline vs. 17.7 post-treatment, P = 0.75) or International Prostate Symptom Score (IPSS) (8.1 at baseline vs. 7.7 at 24 months, P = 0.81) after treatment. CONCLUSIONS: Persistent GG ≥ 2 cancer may occur after focal HIFU. mpMRI alone without confirmatory biopsy may be insufficient to rule out residual cancer, especially in patients with higher PSA density. Our study also validates previously published studies demonstrating preservation of urinary and sexual function after HIFU treatment.

    View details for DOI 10.1016/j.urolonc.2022.07.017

    View details for PubMedID 36058811

  • Deep learning-based pseudo-mass spectrometry imaging analysis for precision medicine. Briefings in bioinformatics Shen, X., Shao, W., Wang, C., Liang, L., Chen, S., Zhang, S., Rusu, M., Snyder, M. P. 2022

    Abstract

    Liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomics provides systematic profiling of metabolites. Yet, its applications in precision medicine (disease diagnosis) have been limited by several challenges, including metabolite identification, information loss and low reproducibility. Here, we present the deep-learning-based Pseudo-Mass Spectrometry Imaging (deepPseudoMSI) project (https://www.deeppseudomsi.org/), which converts LC-MS raw data to pseudo-MS images and then processes them by deep learning for precision medicine, such as disease diagnosis. Extensive tests based on real data demonstrated the superiority of deepPseudoMSI over traditional approaches and the capacity of our method to achieve an accurate individualized diagnosis. Our framework lays the foundation for future metabolic-based precision medicine.

    View details for DOI 10.1093/bib/bbac331

    View details for PubMedID 35947990
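
    The key preprocessing idea above is rasterising an LC-MS run into a fixed-size pseudo-image that a CNN can consume. Here is a minimal NumPy sketch of one such conversion; the binning and normalisation choices are illustrative assumptions, not deepPseudoMSI's actual procedure.

```python
import numpy as np

def pseudo_ms_image(rt, mz, intensity, shape=(224, 224),
                    rt_range=(0, 1200), mz_range=(70, 1000)):
    """Rasterise LC-MS peaks (retention time x m/z, weighted by intensity)
    into a fixed-size 2D 'pseudo-image' suitable for a CNN."""
    img, _, _ = np.histogram2d(rt, mz, bins=shape,
                               range=[rt_range, mz_range], weights=intensity)
    img = np.log1p(img)                    # compress the huge dynamic range
    return img / img.max() if img.max() > 0 else img

# Toy peak list standing in for raw LC-MS data.
rng = np.random.default_rng(0)
rt = rng.uniform(0, 1200, 5000)            # retention times (s)
mz = rng.uniform(70, 1000, 5000)           # mass-to-charge values
inten = rng.lognormal(5, 2, 5000)          # peak intensities
print(pseudo_ms_image(rt, mz, inten).shape)
```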

  • Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning. Cancers Moroianu, S. L., Bhattacharya, I., Seetharaman, A., Shao, W., Kunder, C. A., Sharma, A., Ghanouni, P., Fan, R. E., Sonn, G. A., Rusu, M. 2022; 14 (12)

    Abstract

    The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operating characteristic curve of 0.72 at the patient-level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient-level compared to 50.0% sensitivity and 76.9% specificity for the radiologists. To account for spatial location of predictions during evaluation, we also computed results at the sextant-level, where the prostate was divided into sextants according to standard systematic 12-core biopsy procedure. At the sextant-level, EPENet achieved mean sensitivity 61.1% and mean specificity 58.3%. Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.

    View details for DOI 10.3390/cancers14122821

    View details for PubMedID 35740487
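
    The second stage above turns cancer probability maps into an EPE call using anatomical knowledge. Below is a simplified sketch of one plausible rule, flagging predicted cancer in a thin band just outside the prostate capsule; EPENet's actual post-processing is more elaborate, and the threshold and band width here are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def detect_epe(cancer_prob, prostate_mask, prob_thresh=0.5, band_vox=2):
    """Flag extraprostatic extension when predicted cancer crosses the capsule:
    cancer voxels outside the prostate but within a thin band around its boundary."""
    cancer = cancer_prob >= prob_thresh
    band = binary_dilation(prostate_mask, iterations=band_vox) & ~prostate_mask
    epe_voxels = cancer & band
    return epe_voxels.any(), epe_voxels

# Toy slice: a prostate disk with predicted cancer bulging past its boundary.
yy, xx = np.mgrid[:96, :96]
prostate = (yy - 48) ** 2 + (xx - 48) ** 2 < 30 ** 2
prob = np.zeros((96, 96))
prob[(yy - 48) ** 2 + (xx - 76) ** 2 < 8 ** 2] = 0.9   # lesion crossing the capsule
print(detect_epe(prob, prostate)[0])
```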

  • Bridging the gap between prostate radiology and pathology through machine learning. Medical physics Bhattacharya, I., Lim, D. S., Aung, H. L., Liu, X., Seetharaman, A., Kunder, C. A., Shao, W., Soerensen, S. J., Fan, R. E., Ghanouni, P., To'o, K. J., Brooks, J. D., Sonn, G. A., Rusu, M. 2022

    Abstract

    Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, Magnetic Resonance Imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreements. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (previously validated deep learning algorithm on histopathology images to predict pixel-level Gleason patterns) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Radiologist labels missed cancers (ROC-AUC: 0.75 - 0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24 - 0.28) when compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97 - 1, lesion Dice: 0.75 - 0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91 - 0.94), and had generalizable and comparable performance to pathologist label trained-models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87 - 0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed higher or comparable performance with pathologist label-trained models in both radical prostatectomy and targeted biopsy cohort. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.

    View details for DOI 10.1002/mp.15777

    View details for PubMedID 35633505

  • Correlation of 68Ga-RM2 PET with Post-Surgery Histopathology Findings in Patients with Newly Diagnosed Intermediate- or High-Risk Prostate Cancer. Journal of nuclear medicine : official publication, Society of Nuclear Medicine Duan, H., Baratto, L., Fan, R. E., Soerensen, S. J., Liang, T., Chung, B. I., Thong, A. E., Gill, H., Kunder, C., Stoyanova, T., Rusu, M., Loening, A. M., Ghanouni, P., Davidzon, G. A., Moradi, F., Sonn, G. A., Iagaru, A. 2022

    Abstract

    Rationale: 68Ga-RM2 targets gastrin-releasing peptide receptors (GRPR), which are overexpressed in prostate cancer (PC). Here, we compared pre-operative 68Ga-RM2 PET to post-surgery histopathology in patients with newly diagnosed intermediate- or high-risk PC. Methods: Forty-one men, aged 64.0 ± 6.7 years, were prospectively enrolled. PET images were acquired 42 - 72 (median ± SD 52.5 ± 6.5) minutes after injection of 118.4 - 247.9 (median ± SD 138.0 ± 22.2) MBq of 68Ga-RM2. PET findings were compared to pre-operative mpMRI (n = 36) and 68Ga-PSMA11 PET (n = 17) and correlated to post-prostatectomy whole-mount histopathology (n = 32) and time to biochemical recurrence. Results: All participants had intermediate (n = 17) or high-risk (n = 24) PC and were scheduled for prostatectomy. Prostate specific antigen (PSA) was 8.8 ± 77.4 (range 2.5 - 504) ng/mL, and 7.6 ± 5.3 (range 2.5 - 28.0) ng/mL when excluding participants who ultimately underwent radiation treatment. Pre-operative 68Ga-RM2 PET identified 70 intraprostatic foci of uptake in 40/41 patients. Post-prostatectomy histopathology was available in 32 patients in which 68Ga-RM2 PET identified 50/54 intraprostatic lesions (detection rate = 93%). 68Ga-RM2 uptake was recorded in 19 non-enlarged pelvic lymph nodes in 6 patients. Pathology confirmed lymph node metastases in 16 lesions, and follow-up imaging confirmed nodal metastases in 2 lesions. 68Ga-PSMA11 and 68Ga-RM2 PET identified 27 and 26 intraprostatic lesions, respectively, and 5 pelvic lymph nodes each in 17 patients. Concordance between 68Ga-RM2 and 68Ga-PSMA11 PET was found in 18 prostatic lesions in 11 patients, and 4 lymph nodes in 2 patients. Non-congruent findings were observed in 6 patients (intraprostatic lesions in 4 patients and nodal lesions in 2 patients). Both 68Ga-RM2 and 68Ga-PSMA11 had higher sensitivity and accuracy rates with 98%, 89%, and 95%, 89%, respectively, compared to mpMRI at 77% and 77%. Specificity was highest for mpMRI with 75% followed by 68Ga-PSMA11 (67%), and 68Ga-RM2 (65%). Conclusion: 68Ga-RM2 PET accurately detects intermediate- and high-risk primary PC with a detection rate of 93%. In addition, it showed significantly higher specificity and accuracy compared to mpMRI and similar performance to 68Ga-PSMA11 PET. These findings need to be confirmed in larger studies to identify which patients will benefit from one or the other or both radiopharmaceuticals.

    View details for DOI 10.2967/jnumed.122.263971

    View details for PubMedID 35552245

  • Image quality assessment for machine learning tasks using meta-reinforcement learning. Medical image analysis Saeed, S. U., Fu, Y., Stavrinides, V., Baum, Z. M., Yang, Q., Rusu, M., Fan, R. E., Sonn, G. A., Noble, J. A., Barratt, D. C., Hu, Y. 2022; 78: 102427

    Abstract

    In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.

    View details for DOI 10.1016/j.media.2022.102427

    View details for PubMedID 35344824
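
    As a rough illustration of the task-amenability idea described above (not the authors' implementation, whose controller architecture, reward definition, and meta-training loop differ), the sketch below trains a toy image-selection controller with a REINFORCE-style update; the controller network, image size, and reward function are all assumptions.

    ```python
    import torch
    import torch.nn as nn

    # Toy IQA controller: scores each image, samples a keep/reject mask, and is
    # rewarded by the downstream task predictor's validation performance.
    controller = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))
    optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)

    def reinforce_step(images, task_reward):
        """images: (N, 1, 64, 64); task_reward(selected) -> scalar in [0, 1],
        e.g. validation Dice of a task predictor fine-tuned on the selected
        images (a hypothetical stand-in for the paper's reward)."""
        probs = torch.sigmoid(controller(images).squeeze(1))   # P(keep image)
        mask = torch.bernoulli(probs)                          # sampled selection
        reward = task_reward(images[mask.bool()])
        log_prob = mask * torch.log(probs + 1e-8) + (1 - mask) * torch.log(1 - probs + 1e-8)
        loss = -(reward * log_prob).sum()                      # REINFORCE estimator
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return reward
    ```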

  • Collaborative Quantization Embeddings for Intra-subject Prostate MR Image Registration Shen, Z., Yang, Q., Shen, Y., Giganti, F., Stavrinides, V., Fan, R., Moore, C., Rusu, M., Sonn, G., Torr, P., Barratt, D., Hu, Y., Wang, L., Dou, Q., Fletcher, P. T., Speidel, S., Li, S. SPRINGER INTERNATIONAL PUBLISHING AG. 2022: 237-247
  • The Learn2Reg 2021 MICCAI Grand Challenge (PIMed Team) Shao, W., Vesal, S., Lim, D., Li, C., Golestani, N., Alsinan, A., Fan, R., Sonn, G., Rusu, M. 2022
  • Integrating zonal priors and pathomic MRI biomarkers for improved aggressive prostate cancer detection on MRI Bhattacharya, I., Shao, W., Soerensen, S. C., Fan, R. E., Wang, J. B., Kunder, C., Ghanouni, P., Sonn, G. A., Rusu, M., Drukker, K., Iftekharuddin, K. M. SPIE-INT SOC OPTICAL ENGINEERING. 2022

    View details for DOI 10.1117/12.2612433

    View details for Web of Science ID 000838048600024

  • EXTERNAL VALIDATION OF AN ARTIFICIAL INTELLIGENCE ALGORITHM FOR PROSTATE CANCER GLEASON GRADING AND TUMOR QUANTIFICATION Schmidt, B., Bhambhvani, H. P., Fan, R. E., Kunder, C., Kao, C., Higgins, J. P., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2021: E1004
  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy JOURNAL OF UROLOGY Soerensen, S., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B., To'o, K. J., Rusu, M., Sonn, G. A. 2021; 206 (3): 605-612
  • DETAILED ANALYSIS OF MRI CONCORDANCE WITH PROSTATECTOMY HISTOPATHOLOGY USING DEEP LEARNING-BASED DIGITAL PATHOLOGY Hockman, L., Fan, R., Schmidt, B., Bhattacharya, I., Rusu, M., Sonn, G. LIPPINCOTT WILLIAMS & WILKINS. 2021: E813-E814
  • Geodesic density regression for correcting 4DCT pulmonary respiratory motion artifacts. Medical image analysis Shao, W., Pan, Y., Durumeric, O. C., Reinhardt, J. M., Bayouth, J. E., Rusu, M., Christensen, G. E. 2021; 72: 102140

    Abstract

    Pulmonary respiratory motion artifacts are common in four-dimensional computed tomography (4DCT) of lungs and are caused by missing, duplicated, and misaligned image data. This paper presents a geodesic density regression (GDR) algorithm to correct motion artifacts in 4DCT by correcting artifacts in one breathing phase with artifact-free data from corresponding regions of other breathing phases. The GDR algorithm estimates an artifact-free lung template image and a smooth, dense, 4D (space plus time) vector field that deforms the template image to each breathing phase to produce an artifact-free 4DCT scan. Correspondences are estimated by accounting for the local tissue density change associated with air entering and leaving the lungs, and using binary artifact masks to exclude regions with artifacts from image regression. The artifact-free lung template image is generated by mapping the artifact-free regions of each phase volume to a common reference coordinate system using the estimated correspondences and then averaging. This procedure generates a fixed view of the lung with an improved signal-to-noise ratio. The GDR algorithm was evaluated and compared to a state-of-the-art geodesic intensity regression (GIR) algorithm using simulated CT time-series and 4DCT scans with clinically observed motion artifacts. The simulation shows that the GDR algorithm has achieved significantly more accurate Jacobian images and sharper template images, and is less sensitive to data dropout than the GIR algorithm. We also demonstrate that the GDR algorithm is more effective than the GIR algorithm for removing clinically observed motion artifacts in treatment planning 4DCT scans. Our code is freely available at https://github.com/Wei-Shao-Reg/GDR.

    View details for DOI 10.1016/j.media.2021.102140

    View details for PubMedID 34214957
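
    A core ingredient described above is accounting for the local tissue density change as air enters and leaves the lungs. Below is a minimal numpy sketch of a density-preserving warp, in which warped intensities are scaled by the Jacobian determinant of the deformation; this is an illustrative 2D simplification, not the released GDR code (linked above), and the displacement-field convention is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def density_preserving_warp(image, disp):
        """Warp a 2D CT-like image by displacement field disp (2, H, W),
        scaling intensities by the Jacobian determinant of the deformation
        so that total tissue 'mass' is conserved."""
        H, W = image.shape
        ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        phi_y, phi_x = ys + disp[0], xs + disp[1]
        warped = map_coordinates(image, [phi_y, phi_x], order=1, mode="nearest")
        # Jacobian determinant: (dphi_y/dy)(dphi_x/dx) - (dphi_y/dx)(dphi_x/dy)
        dyy, dyx = np.gradient(phi_y)
        dxy, dxx = np.gradient(phi_x)
        jacobian = dyy * dxx - dyx * dxy
        return warped * jacobian
    ```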

  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy. The Journal of urology Soerensen, S. J., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B. I., To'o, K. J., Rusu, M., Sonn, G. A. 2021

    Abstract

    PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic. MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC = 0.92) outperformed U-Net (DSC = 0.85, p < 0.0001), HED (DSC = 0.80, p < 0.0001), and radiology technicians (DSC = 0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC = 0.93) outperformed radiology technicians (DSC = 0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.

    View details for DOI 10.1097/JU.0000000000001783

    View details for PubMedID 33878887
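
    The abstract above scores segmentations with the Dice similarity coefficient and compares models with paired t-tests. A minimal sketch of both, with hypothetical per-case scores:

    ```python
    import numpy as np
    from scipy import stats

    def dice(a, b):
        """Dice similarity coefficient between two binary segmentation masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Hypothetical per-case DSCs for two models against expert segmentations;
    # a paired t-test compares them case by case.
    dsc_model_a = np.array([0.93, 0.91, 0.94, 0.92])
    dsc_model_b = np.array([0.86, 0.84, 0.88, 0.85])
    t_stat, p_value = stats.ttest_rel(dsc_model_a, dsc_model_b)
    ```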

  • Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging. Medical physics Seetharaman, A., Bhattacharya, I., Chen, L. C., Kunder, C. A., Shao, W., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., To'o, K. J., Sonn, G. A., Rusu, M. 2021

    Abstract

    PURPOSE: While multi-parametric Magnetic Resonance Imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including: 6 patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached those of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.

    View details for DOI 10.1002/mp.14855

    View details for PubMedID 33760269
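
    Lesion-level performance above is summarized with the area under the receiver operating characteristic curve. A minimal sketch with hypothetical lesion labels and model scores:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Hypothetical lesion-level ground truth (1 = clinically significant cancer)
    # and per-lesion model scores; the ROC-AUC summarizes detection performance.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_score = np.array([0.81, 0.30, 0.65, 0.90, 0.42, 0.15, 0.55, 0.48])
    print(f"lesion-level ROC-AUC = {roc_auc_score(y_true, y_score):.2f}")
    ```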

  • 3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Medical image analysis Sood, R. R., Shao, W., Kunder, C., Teslovich, N. C., Wang, J. B., Soerensen, S. J., Madhuripan, N., Jawahar, A., Brooks, J. D., Ghanouni, P., Fan, R. E., Sonn, G. A., Rusu, M. 2021; 69: 101957

    Abstract

    The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.

    View details for DOI 10.1016/j.media.2021.101957

    View details for PubMedID 33550008
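
    Reconstruction quality above is reported as PSNR relative to clinical 3D MRI. A minimal sketch of the metric, assuming the peak is taken as the reference volume's maximum intensity:

    ```python
    import numpy as np

    def psnr(reference, reconstructed):
        """Peak signal-to-noise ratio in dB between a reference volume
        (e.g. clinical 3D MRI) and a reconstruction (e.g. super-resolved MRI)."""
        ref = reference.astype(np.float64)
        mse = np.mean((ref - reconstructed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(ref.max() ** 2 / mse)
    ```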

  • ProGNet: Prostate Gland Segmentation on MRI with Deep Learning Soerensen, S., Fan, R., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Borre, M., Chung, B., To'o, K., Sonn, G., Rusu, M., Isgum, I., Landman, B. A. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2580448

    View details for Web of Science ID 000672800200091

  • Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Medical image analysis Bhattacharya, I., Seetharaman, A., Kunder, C., Shao, W., Chen, L. C., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M. 2021; 75: 102288

    Abstract

    Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men who underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.

    View details for DOI 10.1016/j.media.2021.102288

    View details for PubMedID 34784540
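
    CorrSigNIA feeds correlated radiology-pathology features alongside MRI intensities into a convolutional network. A minimal sketch of that kind of channel-wise fusion; the channel counts and tensor shapes below are assumptions, not the paper's architecture:

    ```python
    import torch

    # Hypothetical shapes: three MRI channels (e.g. T2, ADC, DWI) fused with
    # learned radiology-pathology correlated feature maps along the channel
    # axis before entering a detection CNN.
    mri = torch.randn(1, 3, 224, 224)
    corr_features = torch.randn(1, 4, 224, 224)
    fused_input = torch.cat([mri, corr_features], dim=1)   # (1, 7, 224, 224)
    ```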

  • Weakly Supervised Registration of Prostate MRI and Histopathology Images Shao, W., Bhattacharya, I., Soerensen, S. C., Kunder, C. A., Wang, J. B., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M., DeBruijne, M., Cattin, P. C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 98-107
  • Adaptable Image Quality Assessment Using Meta-Reinforcement Learning of Task Amenability Saeed, S. U., Fu, Y., Stavrinides, V., Baum, Z. C., Yang, Q., Rusu, M., Fan, R. E., Sonn, G. A., Noble, J., Barratt, D. C., Hu, Y., Noble, J. A., Aylward, S., Grimwood, A., Min, Z., Lee, S. L., Hu, Y. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 191-201
  • Detecting Invasive Breast Carcinoma on Dynamic Contrast-Enhanced MRI Moroianu, S. L., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2580989

    View details for Web of Science ID 000672800100012

  • Intensity Normalization of Prostate MRIs using Conditional Generative Adversarial Networks for Cancer Detection DeSilvio, T., Moroianu, S., Bhattacharya, I., Seetharaman, A., Sonn, G., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2582297

    View details for Web of Science ID 000672800100016

  • Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration Bolous, A., Seetharaman, A., Bhattacharya, I., Fan, R. E., Soerensen, S., Chen, L., Ghanouni, P., Sonn, G. A., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581557

    View details for Web of Science ID 000672800100052

  • Registration of pre-surgical MRI and histopathology images from radical prostatectomy via RAPSODI. Medical physics Rusu, M., Shao, W., Kunder, C. A., Wang, J. B., Soerensen, S. J., Teslovich, N. C., Sood, R. R., Chen, L. C., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A. 2020

    Abstract

    PURPOSE: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with pre-operative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align pre-surgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI. METHODS: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a 3D reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the pre-operative MRI. RESULTS: We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients from three institutions who underwent radical prostatectomy, with very different pathology processing and scanning. RAPSODI was evaluated in 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97±0.01 for the prostate, a Hausdorff distance of 1.99±0.70 mm for the prostate boundary, a urethra deviation of 3.09±1.45 mm, and a landmark deviation of 2.80±0.59 mm between registered histopathology images and MRI. CONCLUSION: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.

    View details for DOI 10.1002/mp.14337

    View details for PubMedID 32564359
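
    For the slice-wise registration step described above, a classical intensity-based affine registration is one reasonable building block. A minimal SimpleITK sketch, assuming 2D single-slice inputs and hypothetical file names; RAPSODI's actual cost function and parameter choices are not reproduced here:

    ```python
    import SimpleITK as sitk

    # Load one corresponding slice pair (hypothetical file names).
    fixed = sitk.ReadImage("mri_slice.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("histology_slice.nii.gz", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(
        sitk.CenteredTransformInitializer(
            fixed, moving, sitk.AffineTransform(2),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

    affine = reg.Execute(fixed, moving)
    # The optimized transform can also be applied to cancer outlines drawn on
    # the histology slice (with nearest-neighbor interpolation) to project
    # them onto the MRI slice.
    warped = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)
    ```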

  • ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Medical image analysis Shao, W., Banh, L., Kunder, C. A., Fan, R. E., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Madhuripan, N., Jawahar, A., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M. 2020; 68: 101919

    Abstract

    Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.

    View details for DOI 10.1016/j.media.2020.101919

    View details for PubMedID 33385701
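
    The final step described above, mapping cancer labels from histopathology onto MRI with an estimated transformation, can be sketched in PyTorch with an affine sampling grid; the identity transform, label map, and re-thresholding scheme below are illustrative assumptions, not the ProsRegNet code (linked above):

    ```python
    import torch
    import torch.nn.functional as F

    def warp_labels(labels, grid):
        """Map a histopathology cancer label map into MRI space given a
        sampling grid (N, H, W, 2) in [-1, 1] coordinates, e.g. produced by
        an affine or deformable transform estimated by a network. The binary
        mask is warped bilinearly and re-thresholded."""
        warped = F.grid_sample(labels.float(), grid, mode="bilinear", align_corners=False)
        return (warped > 0.5).float()

    # Example: identity affine grid for a 1x1x256x256 label map (hypothetical).
    labels = torch.zeros(1, 1, 256, 256)
    labels[:, :, 100:140, 90:130] = 1
    theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])  # 2x3 affine
    grid = F.affine_grid(theta, labels.size(), align_corners=False)
    mri_space_labels = warp_labels(labels, grid)
    ```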

  • Multiscale, multimodal analysis of tumor heterogeneity in IDH1 mutant vs wild-type diffuse gliomas PLOS ONE Berens, M. E., Sood, A., Barnholtz-Sloan, J. S., Graf, J. F., Cho, S., Kim, S., Kiefer, J., Byron, S. A., Halperin, R. F., Nasser, S., Adkins, J., Cuyugan, L., Devine, K., Ostrom, Q., Couce, M., Wolansky, L., McDonough, E., Schyberg, S., Dinn, S., Sloan, A. E., Prados, M., Phillips, J. J., Nelson, S. J., Liang, W. S., Al-Kofahi, Y., Rusu, M., Zavodszky, M., Ginty, F. 2019; 14 (12): e0219724

    Abstract

    Glioma is recognized to be a highly heterogeneous CNS malignancy, whose diverse cellular composition and cellular interactions have not been well characterized. To gain new clinical and biological insights into the genetically bifurcated IDH1 mutant (mt) vs wildtype (wt) forms of glioma, we integrated protein, genomic, and MR imaging data from 20 treatment-naïve glioma cases and 16 recurrent GBM cases. Multiplexed immunofluorescence (MxIF) was used to generate single-cell data for 43 protein markers representing all cancer hallmarks. Genomic sequencing (exome and RNA, from normal and tumor tissue) and quantitative magnetic resonance imaging (MRI) features (T1 post-contrast, FLAIR, and ADC protocols) from whole tumor, peritumoral edema, and enhancing core versus equivalent normal regions were also collected from patients. Based on MxIF analysis, 85,767 cells (glioma cases) and 56,304 cells (GBM cases) were used to generate cell-level data for 24 biomarkers. K-means clustering was used to generate 7 distinct groups of cells with divergent biomarker profiles, and deconvolution was used to assign RNA data into three classes. Spatial and molecular heterogeneity metrics were generated for the cell data. All features were compared between IDHmt and IDHwt patients and were finally combined to provide a holistic/integrated comparison. Protein expression by hallmark was generally lower in the IDHmt vs wt patients. Molecular and spatial heterogeneity scores for angiogenesis and cell invasion also differed between IDHmt and wt gliomas irrespective of prior treatment and tumor grade; these differences also persisted in the MR imaging features of peritumoral edema and contrast enhancement volumes. A coherent picture of enhanced angiogenesis in IDHwt tumors was derived from multiple platforms (genomic, proteomic, and imaging) and scales, from individual proteins to cell clusters and heterogeneity, as well as bulk tumor RNA and imaging features. Longer overall survival for IDH1mt glioma patients may reflect mutation-driven alterations in cellular, molecular, and spatial heterogeneity that manifest as discernible radiological features.

    View details for DOI 10.1371/journal.pone.0219724

    View details for Web of Science ID 000515089200003

    View details for PubMedID 31881020

    View details for PubMedCentralID PMC6934292
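
    The cell-level analysis above groups cells by biomarker profile with k-means. A minimal sketch on a hypothetical standardized cell-by-biomarker matrix (random placeholder data):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical single-cell matrix: rows are cells, columns are 24 biomarkers.
    rng = np.random.default_rng(0)
    X = StandardScaler().fit_transform(rng.random((1000, 24)))
    cell_groups = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X)
    ```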

  • AUTOMATED DETECTION OF PROSTATE CANCER ON MULTIPARAMETRIC MRI USING DEEP NEURAL NETWORKS TRAINED ON SPATIAL COORDINATES AND PATHOLOGY OF BIOPSY CORES Chen, L., Bien, N., Fan, R., Cheong, R., Rajpurkar, P., Thong, A., Wang, N., Ahmadi, S., Rusu, M., Brooks, J., Ng, A., Sonn, G. LIPPINCOTT WILLIAMS & WILKINS. 2019: E1098
  • ANISOTROPIC SUPER RESOLUTION IN PROSTATE MRI USING SUPER RESOLUTION GENERATIVE ADVERSARIAL NETWORKS Sood, R., Rusu, M., IEEE. 2019: 1688–91
  • Spatial integration of radiology and pathology images to characterize breast cancer aggressiveness on pre-surgical MRI Rusu, M., Daniel, B., West, R., Angelini, E. D., Landman, B. A. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2512670

    View details for Web of Science ID 000483012700032

  • Framework for the co-registration of MRI and Histology Images in Prostate Cancer Patients with Radical Prostatectomy Rusu, M., Kunder, C., Fan, R., Ghanouni, P., West, R., Sonn, G., Brooks, J., Angelini, E. D., Landman, B. A. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2513099

    View details for Web of Science ID 000483012700057

  • A deep learning-based algorithm for 2-D cell segmentation in microscopy images BMC BIOINFORMATICS Al-Kofahi, Y., Zaltsman, A., Graves, R., Marshall, W., Rusu, M. 2018; 19: 365

    Abstract

    Automatic and reliable characterization of cells in cell cultures is key to several applications such as cancer research and drug discovery. Given the recent advances in light microscopy and the need for accurate and high-throughput analysis of cells, automated algorithms have been developed for segmenting and analyzing the cells in microscopy images. Nevertheless, accurate, generic, and robust whole-cell segmentation is still a persisting need to precisely quantify their morphological properties, phenotypes, and sub-cellular dynamics. We present a single-channel whole-cell segmentation algorithm. We use markers that stain the whole cell, but with less staining in the nucleus, and without using a separate nuclear stain. We show the utility of our approach in microscopy images of cell cultures in a wide variety of conditions. Our algorithm uses a deep learning approach to learn and predict locations of the cells and their nuclei, and combines that with thresholding and watershed-based segmentation. We trained and validated our approach using different sets of images, containing cells stained with various markers and imaged at different magnifications. Our approach achieved an 86% similarity to ground truth segmentation when identifying and separating cells. The proposed algorithm is able to automatically segment cells from single-channel images using a variety of markers and magnifications.

    View details for PubMedID 30285608
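
    The abstract above combines deep-learning location prediction with thresholding and watershed-based segmentation. A minimal sketch of the classical thresholding-plus-watershed stage on a hypothetical probability map (the deep-learning stage is omitted):

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu
    from skimage.segmentation import watershed

    def split_touching_cells(prob_map):
        """Threshold a cell-probability map (hypothetical network output) and
        split touching cells with a distance-transform watershed."""
        mask = prob_map > threshold_otsu(prob_map)
        distance = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-distance, markers, mask=mask)
    ```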

  • An Application of Generative Adversarial Networks for Super Resolution Medical Imaging Sood, R., Topiwala, B., Choutagunta, K., Sood, R., Rusu, M., Wani, M. A., Kantardzic, M., Sayedmouchaweh, M., Gama, J., Lughofer, E. IEEE. 2018: 326–31
  • Co-registration of pre-operative CT with ex vivo surgically excised ground glass nodules to define spatial extent of invasive adenocarcinoma on in vivo imaging: a proof-of-concept study. European radiology Rusu, M., Rajiah, P., Gilkeson, R., Yang, M., Donatelli, C., Thawani, R., Jacono, F. J., Linden, P., Madabhushi, A. 2017

    Abstract

    To develop an approach for radiology-pathology fusion of ex vivo histology of surgically excised pulmonary nodules with pre-operative CT, to radiologically map spatial extent of the invasive adenocarcinomatous component of the nodule. Six subjects (age: 75 ± 11 years) with pre-operative CT and surgically excised ground-glass nodules (size: 22.5 ± 5.1 mm) with a significant invasive adenocarcinomatous component (>5 mm) were included. The pathologist outlined disease extent on digitized histology specimens; two radiologists and a pulmonary critical care physician delineated the entire nodule on CT (in-plane resolution: <0.8 mm, inter-slice distance: 1-5 mm). We introduced a novel reconstruction approach to localize histology slices in 3D relative to each other while using the CT scan as a spatial constraint. This enabled the spatial mapping of the extent of tumour invasion from histology onto CT. Good overlap of the 3D reconstructed histology and the nodule outlined on CT was observed (65.9 ± 5.2%). Reduction in 3D misalignment of corresponding anatomical landmarks on histology and CT was observed (1.97 ± 0.42 mm). Moreover, the CT attenuation (HU) distributions were different when comparing invasive and in situ regions. This proof-of-concept study suggests that our fusion method can enable the spatial mapping of the invasive adenocarcinomatous component from 2D histology slices onto in vivo CT. Key points: • 3D reconstructions are generated from 2D histology specimens of ground glass nodules. • The reconstruction methodology used pre-operative in vivo CT as 3D spatial constraint. • The methodology maps adenocarcinoma extent from digitized histology onto in vivo CT. • The methodology potentially facilitates the discovery of CT signature of invasive adenocarcinoma.

    View details for DOI 10.1007/s00330-017-4813-0

    View details for PubMedID 28386717

    View details for PubMedCentralID PMC5630490

  • Computational imaging reveals shape differences between normal and malignant prostates on MRI SCIENTIFIC REPORTS Rusu, M., Purysko, A. S., Verma, S., Kiechle, J., Gollamudi, J., Ghose, S., Herrmann, K., Gulani, V., Paspulati, R., Ponsky, L., Bohm, M., Haynes, A., Moses, D., Shnier, R., Delprado, W., Thompson, J., Stricker, P., Madabhushi, A. 2017; 7

    Abstract

    We seek to characterize differences in the shape of the prostate and the central gland (combined central and transitional zones) between men with biopsy-confirmed prostate cancer and men identified as not having prostate cancer, either because of a negative biopsy or because they underwent pelvic imaging for a non-prostate malignancy. T2w MRI from 70 men was acquired at three institutions. The cancer positive group (PCa+) comprised 35 biopsy positive (Bx+) subjects from three institutions (Gleason scores: 6-9, Stage: T1-T3). The negative group (PCa-) combined 24 biopsy negative (Bx-) subjects from two institutions and 11 subjects diagnosed with rectal cancer but with no clinical or MRI indications of prostate cancer (Cl-). The boundaries of the prostate and central gland were delineated on T2w MRI by two expert raters and were used to construct statistical shape atlases for the PCa+, Bx- and Cl- prostates. An atlas comparison was performed via per-voxel statistical tests to localize shape differences (significance assessed at p < 0.05). The atlas comparison revealed central gland hypertrophy in the Bx- subpopulation, resulting in significant volume and posterior side shape differences relative to the PCa+ group. Significant differences in the corresponding prostate shapes were noted at the apex when comparing the Cl- and PCa+ prostates.

    View details for DOI 10.1038/srep41261

    View details for Web of Science ID 000393299000001

    View details for PubMedID 28145532

    View details for PubMedCentralID PMC5286513
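
    The atlas comparison above localizes shape differences with per-voxel statistical tests. A minimal sketch, assuming voxel-wise shape measurements already mapped into a common atlas space (the data below are random placeholders):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical voxel-wise shape representations in a common atlas space:
    # rows are subjects, columns are voxels (e.g. signed surface distances).
    rng = np.random.default_rng(0)
    pca_pos = rng.standard_normal((35, 5000))   # PCa+ group
    pca_neg = rng.standard_normal((35, 5000))   # PCa- group
    t_map, p_map = stats.ttest_ind(pca_pos, pca_neg, axis=0)
    significant_voxels = p_map < 0.05           # localized shape differences
    ```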

  • Prostate shapes on pre-treatment MRI between prostate cancer patients who do and do not undergo biochemical recurrence are different: Preliminary Findings Sci Rep Ghose, S., Shiradkar, R., Rusu, M., Mitra, J., Thawani, R., Feldman, M., Gupta, A., Ponsky, L., Purysko, A., Madabhushi, A. 2017; 7 (1): 15829
  • Field Effect Induced Organ Distension (FOrge) Features Predicting Biochemical Recurrence from Pre-treatment Prostate MRI Medical Image Computing and Computer-Assisted Intervention (MICCAI) Ghose, S., Shiradkar, R., Rusu, M., Mitra, J., Thawani, R., Feldman, M., Gupta, A., Purysko, A., Ponsky, L., Madabhushi, A. 2017: 442-449
  • Co-Registration of ex vivo Surgical Histopathology and in vivo T2 weighted MRI of the Prostate via multi-scale spectral embedding representation Sci. Rep Li, L., Pahwa, S., Penzias, G., Rusu, M., Gollamudi, J., Viswanath, S., Madabhushi, A. 2017; 7: 8717
  • Identifying in vivo DCE MRI markers associated with microvessel architecture and gleason grades of prostate cancer JOURNAL OF MAGNETIC RESONANCE IMAGING Singanamalli, A., Rusu, M., Sparks, R. E., Shih, N. N., Ziober, A., Wang, L., Tomaszewski, J., Rosen, M., Feldman, M., Madabhushi, A. 2016; 43 (1): 149-158

    Abstract

    To identify computer extracted in vivo dynamic contrast enhanced (DCE) MRI markers associated with quantitative histomorphometric (QH) characteristics of microvessels and Gleason scores (GS) in prostate cancer. This study considered retrospective data from 23 biopsy confirmed prostate cancer patients who underwent 3 Tesla multiparametric MRI before radical prostatectomy (RP). Representative slices from RP specimens were stained with vascular marker CD31. Tumor extent was mapped from RP sections onto DCE MRI using nonlinear registration methods. Seventy-seven microvessel QH features and 18 DCE MRI kinetic features were extracted and evaluated for their ability to distinguish low from intermediate and high GS. The effect of temporal sampling on kinetic features was assessed and correlations between those robust to temporal resolution and microvessel features discriminative of GS were examined. A total of 12 microvessel architectural features were discriminative of low and intermediate/high grade tumors with area under the receiver operating characteristic curve (AUC) > 0.7. These features were most highly correlated with mean washout gradient (WG) (max rho = -0.62). Independent analysis revealed WG to be moderately robust to temporal resolution (intraclass correlation coefficient [ICC] = 0.63) and WG variance, which was poorly correlated with microvessel features, to be predictive of low grade tumors (AUC = 0.77). Enhancement ratio was the most robust (ICC = 0.96) and discriminative (AUC = 0.78) kinetic feature but was moderately correlated with microvessel features (max rho = -0.52). Computer extracted features of prostate DCE MRI appear to be correlated with microvessel architecture and may be discriminative of low versus intermediate and high GS.

    View details for DOI 10.1002/jmri.24975

    View details for Web of Science ID 000368741400013

    View details for PubMedID 26110513

    View details for PubMedCentralID PMC4691230
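
    Among the kinetic features above, the washout gradient can be read as the slope of the post-peak portion of a DCE time-intensity curve. A minimal sketch under that simple straight-line-fit definition, which may differ in detail from the paper's feature extraction:

    ```python
    import numpy as np

    def washout_gradient(times, intensities):
        """Slope of a straight-line fit to the post-peak portion of a DCE MRI
        time-intensity curve; a negative slope indicates contrast washout."""
        peak = int(np.argmax(intensities))
        if peak >= len(times) - 1:
            return 0.0  # no post-peak samples to fit
        slope, _ = np.polyfit(times[peak:], intensities[peak:], 1)
        return slope
    ```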

  • Radiomics Analysis on FLT-PET/MRI for Characterization of Early Treatment Response in Renal Cell Carcinoma: A Proof-of-Concept Study Transl Oncol Antunes, J., Viswanath, S., Rusu, M., Valls, L., Hoimes, C., Avril, N., Madabhushi, A. 2016; 9 (2): 155-162
  • AutoStitcher: An Automated Program for Efficient and Robust Reconstruction of Digitized Whole Histological Sections from Tissue Fragments Sci Rep Penzias, G., Janowczyk, A., Singanamalli, A., Rusu, M., Shih, N., Feldman, M., Stricker, P. D., Delprado, W., Tiwari, S., Böhm, M., Haynes, A., Ponsky, L., Viswanath, S., Madabhushi, A. 2016; 6: 29906

    View details for DOI 10.1038/srep29906

  • Framework for 3D histologic reconstruction and fusion with in vivo MRI: Preliminary results of characterizing pulmonary inflammation in a mouse model MEDICAL PHYSICS Rusu, M., Golden, T., Wang, H., Gow, A., Madabhushi, A. 2015; 42 (8): 4822-4832

    Abstract

    Pulmonary inflammation is associated with a variety of diseases. Assessing pulmonary inflammation on in vivo imaging may facilitate the early detection and treatment of lung diseases. Although routinely used in thoracic imaging, computed tomography has thus far not been compellingly shown to characterize inflammation in vivo. Alternatively, magnetic resonance imaging (MRI) is a nonionizing radiation technique to better visualize and characterize pulmonary tissue. Prior to routine adoption of MRI for early characterization of inflammation in humans, a rigorous and quantitative characterization of the utility of MRI to identify inflammation is required. Such characterization may be achieved by considering ex vivo histology as the ground truth, since it enables the definitive spatial assessment of inflammation. In this study, the authors introduce a novel framework to integrate 2D histology, ex vivo and in vivo imaging to enable the mapping of the extent of disease from ex vivo histology onto in vivo imaging, with the goal of facilitating computerized feature analysis and interrogation of disease appearance on in vivo imaging. The authors' framework was evaluated in a preclinical preliminary study aimed to identify computer extracted features on in vivo MRI associated with chronic pulmonary inflammation. The authors' image analytics framework first involves reconstructing the histologic volume in 3D from individual histology slices. Second, the authors map the disease ground truth onto in vivo MRI via coregistration with 3D histology using the ex vivo lung MRI as a conduit. Finally, computerized feature analysis of the disease extent is performed to identify candidate in vivo imaging signatures of disease presence and extent. The authors evaluated the framework by assessing the quality of the 3D histology reconstruction and the histology-MRI fusion, in the context of an initial use case involving characterization of chronic inflammation in a mouse model. The authors' evaluation considered three mice, two with an inflammation phenotype and one control. The authors' iterative 3D histology reconstruction yielded a 70.1% ± 2.7% overlap with the ex vivo MRI volume. Across a total of 17 anatomic landmarks manually delineated at the division of airways, the target registration error between the ex vivo MRI and 3D histology reconstruction was 0.85 ± 0.44 mm, suggesting that a good alignment of the ex vivo 3D histology and ex vivo MRI had been achieved. The 3D histology-in vivo MRI coregistered volumes resulted in an overlap of 73.7% ± 0.9%. Preliminary computerized feature analysis was performed on an additional four control mice, for a total of seven mice considered in this study. Gabor texture filters appeared to best capture differences between the inflamed and noninflamed regions on MRI. The authors' 3D histology reconstruction and multimodal registration framework were successfully employed to reconstruct the histology volume of the lung and fuse it with in vivo MRI to create a ground truth map for inflammation on in vivo MRI. The analytic platform presented here lays the framework for a rigorous validation of the identified imaging features for chronic lung inflammation on MRI in a large prospective cohort.

    View details for DOI 10.1118/1.4923161

    View details for Web of Science ID 000358933000039

    View details for PubMedID 26233209

    View details for PubMedCentralID PMC4522013
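
    Registration quality above is assessed with a target registration error over anatomic landmarks. A minimal sketch, with hypothetical landmark coordinates in millimetres:

    ```python
    import numpy as np

    def target_registration_error(landmarks_fixed, landmarks_moving):
        """Mean Euclidean distance (in mm, assuming mm coordinates) between
        corresponding anatomic landmarks after registration."""
        diffs = np.asarray(landmarks_fixed) - np.asarray(landmarks_moving)
        return np.linalg.norm(diffs, axis=1).mean()

    # Hypothetical example: three landmarks at airway divisions (x, y, z in mm).
    a = np.array([[10.2, 33.1, 5.0], [22.7, 18.4, 9.1], [15.0, 25.6, 12.3]])
    b = np.array([[10.9, 33.5, 5.2], [23.1, 19.0, 9.8], [15.6, 26.0, 12.9]])
    print(f"TRE = {target_registration_error(a, b):.2f} mm")
    ```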

  • Prostatome: A combined anatomical and disease based MRI atlas of the prostate MEDICAL PHYSICS Rusu, M., Bloch, B. N., Jaffe, C. C., Genega, E. M., Lenkinski, R. E., Rofsky, N. M., Feleppa, E., Madabhushi, A. 2014; 41 (7)

    Abstract

    In this work, the authors introduce a novel framework, the anatomically constrained registration (AnCoR) scheme, and apply it to create a fused anatomic-disease atlas of the prostate, which the authors refer to as the prostatome. The prostatome combines an MRI based anatomic and a histology based disease atlas. Statistical imaging atlases allow for the integration of information across multiple scales and imaging modalities into a single canonical representation, in turn enabling a fused anatomical-disease representation which may facilitate the characterization of disease appearance relative to anatomic structures. While statistical atlases have been extensively developed and studied for the brain, approaches that have attempted to combine pathology and imaging data for study of prostate pathology are not extant. This work seeks to address this gap. The AnCoR framework optimizes a scoring function composed of two surface (prostate and central gland) misalignment measures and one intensity-based similarity term. This ensures the correct mapping of anatomic regions into the atlas, even when regional MRI intensities are inconsistent or highly variable between subjects. The framework allows for creation of an anatomic imaging and a disease atlas, while enabling their fusion into the anatomic imaging-disease atlas. The atlas presented here was constructed using 83 subjects with biopsy confirmed cancer who had pre-operative MRI (collected at two institutions) followed by radical prostatectomy. The imaging atlas results from mapping the in vivo MRI into the canonical space, while the anatomic regions serve as domain constraints. Elastic co-registration of MRI and corresponding ex vivo histology provides "ground truth" mapping of cancer extent on in vivo imaging for 23 subjects. AnCoR was evaluated relative to alternative construction strategies that use either MRI intensities or the prostate surface alone for registration. The AnCoR framework yielded a central gland Dice similarity coefficient (DSC) of 90%, and prostate DSC of 88%, while the misalignment of the urethra and verumontanum was found to be 3.45 mm and 4.73 mm, respectively, which were measured to be significantly smaller compared to the alternative strategies. As might have been anticipated from our limited cohort of biopsy confirmed cancers, the disease atlas showed that most of the tumor extent was limited to the peripheral zone. Moreover, central gland tumors were typically larger in size, possibly because they are only discernible at a much later stage. The authors presented the AnCoR framework to explicitly model anatomic constraints for the construction of a fused anatomic imaging-disease atlas. The framework was applied to constructing a preliminary version of an anatomic-disease atlas of the prostate, the prostatome. The prostatome could facilitate the quantitative characterization of gland morphology and imaging features of prostate cancer. These techniques may be applied on a large sample size data set to create a fully developed prostatome that could serve as a spatial prior for targeted biopsies by urologists. Additionally, the AnCoR framework could allow for incorporation of complementary imaging and molecular data, thereby enabling their careful correlation for population based radio-omics studies.

    View details for DOI 10.1118/1.4881515

    View details for Web of Science ID 000339009800034

    View details for PubMedID 24989400

    View details for PubMedCentralID PMC4187363
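
    The surface misalignment terms above can be quantified with measures such as the Hausdorff distance between boundary point clouds. A minimal sketch using a KD-tree; treating surfaces as point clouds is a simplifying assumption:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff_distance(surface_a, surface_b):
        """Symmetric Hausdorff distance between two surfaces represented as
        point clouds (N x 3 arrays, e.g. prostate or central gland boundaries)."""
        a_to_b = cKDTree(surface_b).query(surface_a)[0].max()
        b_to_a = cKDTree(surface_a).query(surface_b)[0].max()
        return max(a_to_b, b_to_a)
    ```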

  • Identifying Quantitative In Vivo Multi-Parametric MRI Features For Treatment Related Changes after Laser Interstitial Thermal Therapy of Prostate Cancer Neurocomputing Viswanath, S., Toth, R., Rusu, M., Sperling, D., Madabhushi, A. 2014; 144: 13-23
  • Anisotropic Smoothing Regularization (AnSR) in Thirion's Demons Registration Evaluates Brain MRI Tissue Changes Post-Laser Ablation IEEE Engineering in Medicine and Biology Sciences Hwuang, E., Danish, S., Rusu, M., Sparks, R., Toth, R., Madabhushi, A. 2013: 4006-4009
  • Automated tracing of filaments in 3D electron tomography reconstructions using Sculptor and Situs JOURNAL OF STRUCTURAL BIOLOGY Rusu, M., Starosolski, Z., Wahle, M., Rigort, A., Wriggers, W. 2012; 178 (2): 121-128

    Abstract

    The molecular graphics program Sculptor and the command-line suite Situs are software packages for the integration of biophysical data across spatial resolution scales. Herein, we provide an overview of recently developed tools relevant to cryo-electron tomography (cryo-ET), with an emphasis on functionality supported by Situs 2.7.1 and Sculptor 2.1.1. We describe a work flow for automatically segmenting filaments in cryo-ET maps including denoising, local normalization, feature detection, and tracing. Tomograms of cellular actin networks exhibit both cross-linked and bundled filament densities. Such filamentous regions in cryo-ET data sets can then be segmented using a stochastic template-based search, VolTrac. The approach combines a genetic algorithm and a bidirectional expansion with a tabu search strategy to localize and characterize filamentous regions. The automated filament segmentation by VolTrac compares well to a manual one performed by expert users, and it allows an efficient and reproducible analysis of large data sets. The software is free, open source, and can be used on Linux, Macintosh or Windows computers.

    View details for DOI 10.1016/j.jsb.2012.03.001

    View details for Web of Science ID 000304287400007

    View details for PubMedID 22433493

    View details for PubMedCentralID PMC3440181
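
    The filament work flow above includes denoising and local normalization before feature detection. A minimal sketch of local normalization, subtracting the local mean and dividing by the local standard deviation; the window size is an assumption:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_normalize(volume, size=9, eps=1e-6):
        """Locally normalize a tomogram: subtract the local mean and divide by
        the local standard deviation, both computed in a size^3 window."""
        vol = volume.astype(float)
        mean = uniform_filter(vol, size)
        sq_mean = uniform_filter(vol * vol, size)
        std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0)) + eps
        return (vol - mean) / std
    ```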

  • Evolutionary bidirectional expansion for the tracing of alpha helices in cryo-electron microscopy reconstructions JOURNAL OF STRUCTURAL BIOLOGY Rusu, M., Wriggers, W. 2012; 177 (2): 410-419

    Abstract

    Cryo-electron microscopy (cryo-EM) enables the imaging of macromolecular complexes in near-native environments at resolutions that often permit the visualization of secondary structure elements. For example, alpha helices frequently show consistent patterns in volumetric maps, exhibiting rod-like structures of high density. Here, we introduce VolTrac (Volume Tracer) - a novel technique for the annotation of alpha-helical density in cryo-EM data sets. VolTrac combines a genetic algorithm and a bidirectional expansion with a tabu search strategy to trace helical regions. Our method takes advantage of the stochastic search by using a genetic algorithm to identify optimal placements for a short cylindrical template, avoiding exploration of already characterized tabu regions. These placements are then utilized as starting positions for the adaptive bidirectional expansion that characterizes the curvature and length of the helical region. The method reliably predicted helices with seven or more residues in experimental and simulated maps at intermediate (4-10Å) resolution. The observed success rates, ranging from 70.6% to 100%, depended on the map resolution and validation parameters. For successful predictions, the helical axes were located within 2Å from known helical axes of atomic structures.

    View details for DOI 10.1016/j.jsb.2011.11.029

    View details for Web of Science ID 000300755400026

    View details for PubMedID 22155667

    View details for PubMedCentralID PMC3288247

  • An assembly model of Rift Valley fever virus. Frontiers in microbiology Rusu, M., Bonneau, R., Holbrook, M. R., Watowich, S. J., Birmanns, S., Wriggers, W., Freiberg, A. N. 2012; 3: 254

    Abstract

    Rift Valley fever virus (RVFV) is a bunyavirus endemic to Africa and the Arabian Peninsula that infects humans and livestock. The virus encodes two glycoproteins, Gn and Gc, which represent the major structural antigens and are responsible for host cell receptor binding and fusion. Both glycoproteins are organized on the virus surface as cylindrical hollow spikes that cluster into distinct capsomers with the overall assembly exhibiting an icosahedral symmetry. Currently, no experimental three-dimensional structure for any entire bunyavirus glycoprotein is available. Using fold recognition, we generated molecular models for both RVFV glycoproteins and found significant structural matches between the RVFV Gn protein and the influenza virus hemagglutinin protein and a separate match between RVFV Gc protein and Sindbis virus envelope protein E1. Using these models, the potential interaction and arrangement of both glycoproteins in the RVFV particle was analyzed, by modeling their placement within the cryo-electron microscopy density map of RVFV. We identified four possible arrangements of the glycoproteins in the virion envelope. Each assembly model proposes that the ectodomain of Gn forms the majority of the protruding capsomer and that Gc is involved in formation of the capsomer base. Furthermore, Gc is suggested to facilitate intercapsomer connections. The proposed arrangement of the two glycoproteins on the RVFV surface is similar to that described for the alphavirus E1-E2 proteins. Our models will provide guidance to better understand the assembly process of phleboviruses and such structural studies can also contribute to the design of targeted antivirals.

    View details for DOI 10.3389/fmicb.2012.00254

    View details for PubMedID 22837754

    View details for PubMedCentralID PMC3400131

  • Developing a denoising filter for electron microscopy and tomography data in the cloud Biophysical Reviews Starosolski, Z., Szczepanski, M., Wahle, M., Rusu, M., Wriggers, W. 2012: 1-7
  • Evolutionary tabu search strategies for the simultaneous registration of multiple atomic structures in cryo-EM reconstructions JOURNAL OF STRUCTURAL BIOLOGY Rusu, M., Birmanns, S. 2010; 170 (1): 164-171

    Abstract

    A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources - for instance, by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models as it introduces additional spatial constraints to facilitate the native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method based on genetic algorithms, for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search and parallel computing strategies and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions.

    View details for DOI 10.1016/j.jsb.2009.12.028

    View details for Web of Science ID 000276329600020

    View details for PubMedID 20056148

    View details for PubMedCentralID PMC2872094
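
    The registration strategy above builds on a genetic algorithm. A minimal sketch of the genetic search alone, evolving the pose parameters of one rigid component against a user-supplied scoring function (e.g. cross-correlation with the map); the tabu list, multi-body coupling, and parallelization of the paper are omitted:

    ```python
    import numpy as np

    def genetic_search(score, bounds, pop_size=50, generations=100, n_elite=5,
                       sigma=0.05, seed=0):
        """Evolve candidate placements (e.g. translations and rotation angles)
        to maximize score(params); bounds is a list of (low, high) pairs."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        population = rng.uniform(lo, hi, size=(pop_size, lo.size))
        for _ in range(generations):
            fitness = np.array([score(p) for p in population])
            elite = population[np.argsort(fitness)[::-1][:n_elite]]
            parents = elite[rng.integers(n_elite, size=pop_size - n_elite)]
            # Gaussian mutation scaled to the parameter range
            children = parents + rng.normal(0.0, sigma, parents.shape) * (hi - lo)
            population = np.vstack([elite, np.clip(children, lo, hi)])
        fitness = np.array([score(p) for p in population])
        return population[np.argmax(fitness)]
    ```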

  • Using Sculptor and Situs for simultaneous assembly of atomic components into low-resolution shapes Journal of Structural Biology Birmanns, S., Rusu, M., Wriggers, W. 2010; 173: 428-435
  • Biomolecular pleiomorphism probed by spatial interpolation of coarse models BIOINFORMATICS Rusu, M., Birmanns, S., Wriggers, W. 2008; 24 (21): 2460-2466

    Abstract

    In low resolution structures of biological assemblies one can often observe conformational deviations that require a flexible rearrangement of structural domains fitted at the atomic level. We are evaluating interpolation methods for the flexible alignment of atomic models based on coarse models. Spatial interpolation is well established in image-processing and visualization to describe the overall deformation or warping of an object or an image. Combined with a coarse representation of the biological system by feature vectors, such methods can provide a flexible approximation of the molecular structure. We have compared three well-known interpolation techniques and evaluated the results by comparing them with constrained molecular dynamics. One method, inverse distance weighting interpolation, consistently produced models that were nearly indistinguishable on the alpha carbon level from the molecular dynamics results. The method is simple to apply and enables flexing of structures by non-expert modelers. This is useful for the basic interpretation of volumetric data in biological applications such as electron microscopy. The method can be used as a general interpretation tool for sparsely sampled motions derived from coarse models.

    View details for DOI 10.1093/bioinformatics/btn461

    View details for Web of Science ID 000260381200007

    View details for PubMedID 18757874

    View details for PubMedCentralID PMC2732278
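
    The inverse distance weighting scheme highlighted above can be sketched directly: each atom's displacement is a distance-weighted average of the displacements of the coarse feature vectors. The power parameter and epsilon guard below are conventional choices, not necessarily the paper's:

    ```python
    import numpy as np

    def idw_displacements(points, anchors, anchor_disp, power=2.0, eps=1e-12):
        """Inverse distance weighting: interpolate the displacement of each
        atom (points, N x 3) from the displacements of coarse feature vectors
        (anchors, M x 3 -> anchor_disp, M x 3)."""
        d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)          # closer anchors weigh more
        w /= w.sum(axis=1, keepdims=True)     # normalize weights per atom
        return w @ anchor_disp                # (N, 3) interpolated displacements
    ```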

  • VITA - An Interactive 3-D Visualization System to Enhance Student Understanding of Mathematical Concepts in Medical Decision-making IEEE Computer-Based Medical Systems Iyengar, M., Svirbely, J., Rusu, M., Smith, J. 2008

    View details for DOI 10.1109/CBMS.2008.35

  • A mammalian microRNA expression atlas based on small RNA library sequencing CELL Landgraf, P., Rusu, M., Sheridan, R., Sewer, A., Iovino, N., Aravin, A., Pfeffer, S., Rice, A., Kamphorst, A. O., Landthaler, M., Lin, C., Socci, N. D., Hermida, L., Fulci, V., Chiaretti, S., Foa, R., Schliwka, J., Fuchs, U., Novosel, A., Mueller, R., Schermer, B., Bissels, U., Inman, J., Phan, Q., Chien, M., Weir, D. B., Choksi, R., De Vita, G., Frezzetti, D., Trompeter, H., Hornung, V., Teng, G., Hartmann, G., Palkovits, M., Di Lauro, R., Wernet, P., Macino, G., Rogler, C. E., Nagle, J. W., Ju, J., Papavasiliou, F. N., Benzing, T., Lichter, P., Tam, W., Brownstein, M. J., Bosio, A., Borkhardt, A., Russo, J. J., Sander, C., Zavolan, M., Tuschl, T. 2007; 129 (7): 1401-1414

    Abstract

    MicroRNAs (miRNAs) are small noncoding regulatory RNAs that reduce stability and/or translation of fully or partially sequence-complementary target mRNAs. In order to identify miRNAs and to assess their expression patterns, we sequenced over 250 small RNA libraries from 26 different organ systems and cell types of human and rodents that were enriched in neuronal as well as normal and malignant hematopoietic cells and tissues. We present expression profiles derived from clone count data and provide computational tools for their analysis. Unexpectedly, a relatively small set of miRNAs, many of which are ubiquitously expressed, account for most of the differences in miRNA profiles between cell lineages and tissues. This broad survey also provides detailed and accurate information about mature sequences, precursors, genome locations, maturation processes, inferred transcriptional units, and conservation patterns. We also propose a subclassification scheme for miRNAs for assisting future experimental and computational functional analyses.

    View details for DOI 10.1016/j.cell.2007.04.040

    View details for Web of Science ID 000247911400024

    View details for PubMedID 17604727

    View details for PubMedCentralID PMC2681231
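
    The expression profiles above are derived from clone count data. A minimal sketch of one simple normalization, converting per-library clone counts into relative frequencies (the counts below are placeholders, not the atlas data):

    ```python
    import numpy as np

    # Hypothetical miRNA clone counts: rows are miRNAs, columns are libraries.
    counts = np.array([[120, 3, 0],
                       [45, 60, 9],
                       [2, 10, 30]])
    # Per-library relative frequencies, a basic expression profile from clone counts.
    frequencies = counts / counts.sum(axis=0, keepdims=True)
    ```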