Dr. Rusu is currently an Assistant Professor in the Department of Radiology at Stanford University, where she leads the Personalized Integrative Medicine Laboratory (PIMed). The PIMed Laboratory is multi-disciplinary and focuses on developing analytic methods for biomedical data integration, with a particular interest in radiology-pathology fusion to facilitate radiology image labeling. Radiology-pathology fusion allows the creation of detailed spatial labels that can later be used as input for advanced machine learning methods, such as deep learning. The lab's recent focus has been on applying deep learning methods to detect and differentiate aggressive from indolent prostate cancers on MRI using pathology information (both labels and image content), work recently published in the journals Medical Physics and Medical Image Analysis.
Dr. Rusu received a Master of Engineering in Bioinformatics from the National Institute of Applied Sciences in Lyon, France. She continued her training at the University of Texas Health Science Center in Houston, where she received Master of Science and PhD degrees in Health Informatics for her work on the integration of biomolecular structural data from cryo-electron micrographs and X-ray crystallography models.
During her postdoctoral training at Case Western Reserve University, Dr. Rusu developed computational tools for the integration and interpretation of multi-modal medical imaging data, focusing on prostate and lung cancers. Prior to joining Stanford, Dr. Rusu was a Lead Engineer and Medical Image Analysis Scientist at GE Global Research in Niskayuna, NY, where she was involved in the development of analytic methods to characterize biological samples in microscopy images and pathologic conditions on MRI and CT.
Honors & Awards
Above and Beyond (6), GE Global Research (2015-2017)
School of Engineering Innovation Award, Case Western Reserve University (2014)
Postdoctoral Award for poster presentation at the Research ShowCASE, Case Western Reserve University (2013)
Winner, Grand Challenge: Automated Segmentation of Prostate Structures, NCI-ISBI (2013)
James T. and Nancy Beamer Willerson Endowed Scholarship, University of Texas Health Science Center in Houston (2010)
Paul Boyle Award for Excellence in Student Research, University of Texas Health Science Center in Houston (2007)
Undergraduate Research Fellowship, Keck Center for Computational and Structural Biology, Houston (2006)
International Mobility Fellowship, Rhône-Alpes Region, France (2005)
PhD, University of Texas Health Science Center at Houston, Health Informatics | Structural Bioinformatics (2011)
MS, University of Texas Health Science Center at Houston, Health Informatics | Structural Bioinformatics (2008)
Master of Engineering, National Institute of Applied Sciences, BioSciences | Bioinformatics and Modeling (2006)
Anant Madabhushi, Mirabela Rusu. "Disease characterization from fused pathology and radiology data," U.S. Patent US9767555B2, Case Western Reserve University
Current Research and Scholarly Interests
Dr. Mirabela Rusu focuses on developing analytic methods for biomedical data integration, with a particular interest in radiology-pathology fusion. Such integrative methods may be applied to create comprehensive multi-scale representations of biomedical processes and pathological conditions, thus enabling their in-depth characterization.
- Computational Methods for Biomedical Image Analysis and Interpretation
BIOMEDIN 260, BMP 260, RAD 260 (Spr)
- Independent Studies (5)
Prior Year Courses
- Computational Methods for Biomedical Image Analysis and Interpretation
BIOMEDIN 260, CS 235, RAD 260 (Spr)
Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study.
Medical image analysis
2022; 82: 102620
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7mm and Dice: 82.0±0.03; HD95: 7.1mm). 
We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
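The knowledge distillation loss described above, which preserves previously learned knowledge while finetuning on a new institution's data, can be illustrated in a few lines. The snippet below is a minimal numpy sketch of a temperature-softened KL-divergence distillation term, not the paper's exact loss; the temperature `T` and the per-pixel averaging are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0, axis=-1):
    # Temperature-scaled softmax over the class axis.
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    per-pixel class distributions, averaged over pixels."""
    p = softmax(teacher_logits, T)  # teacher "soft labels"
    q = softmax(student_logits, T)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=-1)
    return float(kl.mean()) * T * T  # T^2 keeps the gradient scale comparable
```

During finetuning, a term like this would be added to the supervised segmentation loss so the updated model stays close to the original model's predictions on the source domain.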
View details for DOI 10.1016/j.media.2022.102620
View details for PubMedID 36148705
Evaluation of post-ablation mpMRI as a predictor of residual prostate cancer after focal high intensity focused ultrasound (HIFU) ablation.
PURPOSE: To evaluate the performance of multiparametric magnetic resonance imaging (mpMRI) and PSA testing in follow-up after high intensity focused ultrasound (HIFU) focal therapy for localized prostate cancer. METHODS: A total of 73 men with localized prostate cancer were prospectively enrolled and underwent focal HIFU followed by per-protocol PSA and mpMRI with systematic plus targeted biopsies at 12 months after treatment. We evaluated the association between post-treatment mpMRI and PSA with disease persistence on the post-ablation biopsy. We also assessed post-treatment functional and oncological outcomes. RESULTS: Median age was 69 years (interquartile range (IQR): 66-74) and median PSA was 6.9 ng/dL (IQR: 5.3-9.9). Of 19 men with persistent GG ≥ 2 disease, 58% (11 men) had no visible lesions on MRI. In the 14 men with PIRADS 4 or 5 lesions, 7 (50%) had either no cancer or GG 1 cancer at biopsy. Men with false negative mpMRI findings had higher PSA density (0.16 vs. 0.07 ng/mL², P = 0.01). No change occurred in the mean Sexual Health Inventory for Men (SHIM) survey scores (17.0 at baseline vs. 17.7 post-treatment, P = 0.75) or International Prostate Symptom Score (IPSS) (8.1 at baseline vs. 7.7 at 24 months, P = 0.81) after treatment. CONCLUSIONS: Persistent GG ≥ 2 cancer may occur after focal HIFU. mpMRI alone without confirmatory biopsy may be insufficient to rule out residual cancer, especially in patients with higher PSA density. Our study also validates previously published studies demonstrating preservation of urinary and sexual function after HIFU treatment.
View details for DOI 10.1016/j.urolonc.2022.07.017
View details for PubMedID 36058811
Deep learning-based pseudo-mass spectrometry imaging analysis for precision medicine.
Briefings in bioinformatics
Liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomics provides systematic profiling of metabolites. Yet, its applications in precision medicine (disease diagnosis) have been limited by several challenges, including metabolite identification, information loss and low reproducibility. Here, we present the deep-learning-based Pseudo-Mass Spectrometry Imaging (deepPseudoMSI) project (https://www.deeppseudomsi.org/), which converts LC-MS raw data to pseudo-MS images and then processes them by deep learning for precision medicine, such as disease diagnosis. Extensive tests based on real data demonstrated the superiority of deepPseudoMSI over traditional approaches and the capacity of our method to achieve an accurate individualized diagnosis. Our framework lays the foundation for future metabolic-based precision medicine.
View details for DOI 10.1093/bib/bbac331
View details for PubMedID 35947990
Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning.
Cancers
2022; 14 (12)
The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operating characteristic curve of 0.72 at the patient-level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient-level compared to 50.0% sensitivity and 76.9% specificity for the radiologists. To account for spatial location of predictions during evaluation, we also computed results at the sextant-level, where the prostate was divided into sextants according to standard systematic 12-core biopsy procedure. At the sextant-level, EPENet achieved mean sensitivity 61.1% and mean specificity 58.3%.
Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.
View details for DOI 10.3390/cancers14122821
View details for PubMedID 35740487
Bridging the gap between prostate radiology and pathology through machine learning.
Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, Magnetic Resonance Imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (previously validated deep learning algorithm on histopathology images to predict pixel-level Gleason patterns) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Radiologist labels missed cancers (ROC-AUC: 0.75 - 0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24 - 0.28) when compared with pathology labels.
Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97 - 1, lesion Dice: 0.75 - 0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91 - 0.94), and had generalizable and comparable performance to pathologist-label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87 - 0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed higher or comparable performance with pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
View details for DOI 10.1002/mp.15777
View details for PubMedID 35633505
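The lesion- and patient-level ROC-AUCs reported in the studies above reduce to the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney statistic). A small numpy sketch, with purely illustrative scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # compare every positive score against every negative score
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A model that scores every cancer case above every benign case attains an AUC of 1.0; chance-level ranking gives 0.5.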
Correlation of 68Ga-RM2 PET with Post-Surgery Histopathology Findings in Patients with Newly Diagnosed Intermediate- or High-Risk Prostate Cancer.
Journal of nuclear medicine : official publication, Society of Nuclear Medicine
Rationale: 68Ga-RM2 targets gastrin-releasing peptide receptors (GRPR), which are overexpressed in prostate cancer (PC). Here, we compared pre-operative 68Ga-RM2 PET to post-surgery histopathology in patients with newly diagnosed intermediate- or high-risk PC. Methods: Forty-one men, aged 64.0 ± 6.7 years, were prospectively enrolled. PET images were acquired 42 - 72 (median ± SD 52.5 ± 6.5) minutes after injection of 118.4 - 247.9 (median ± SD 138.0 ± 22.2) MBq of 68Ga-RM2. PET findings were compared to pre-operative mpMRI (n = 36) and 68Ga-PSMA11 PET (n = 17) and correlated to post-prostatectomy whole-mount histopathology (n = 32) and time to biochemical recurrence. Nine participants decided to undergo radiation therapy after study enrollment. Results: All participants had intermediate (n = 17) or high-risk (n = 24) PC and were scheduled for prostatectomy. Prostate specific antigen (PSA) was 8.8 ± 77.4 (range 2.5 - 504) ng/mL, and 7.6 ± 5.3 (range 2.5 - 28.0) ng/mL when excluding participants who ultimately underwent radiation treatment. Pre-operative 68Ga-RM2 PET identified 70 intraprostatic foci of uptake in 40/41 patients. Post-prostatectomy histopathology was available in 32 patients, in which 68Ga-RM2 PET identified 50/54 intraprostatic lesions (detection rate = 93%). 68Ga-RM2 uptake was recorded in 19 non-enlarged pelvic lymph nodes in 6 patients. Pathology confirmed lymph node metastases in 16 lesions, and follow-up imaging confirmed nodal metastases in 2 lesions. 68Ga-PSMA11 and 68Ga-RM2 PET identified 27 and 26 intraprostatic lesions, respectively, and 5 pelvic lymph nodes each in 17 patients. Concordance between 68Ga-RM2 and 68Ga-PSMA11 PET was found in 18 prostatic lesions in 11 patients, and 4 lymph nodes in 2 patients. Non-congruent findings were observed in 6 patients (intraprostatic lesions in 4 patients and nodal lesions in 2 patients).
Both 68Ga-RM2 and 68Ga-PSMA11 had higher sensitivity and accuracy (98% and 89%, and 95% and 89%, respectively) than mpMRI (77% and 77%). Specificity was highest for mpMRI (75%), followed by 68Ga-PSMA11 (67%) and 68Ga-RM2 (65%). Conclusion: 68Ga-RM2 PET accurately detects intermediate- and high-risk primary PC with a detection rate of 93%. In addition, it showed significantly higher specificity and accuracy compared to mpMRI and similar performance to 68Ga-PSMA11 PET. These findings need to be confirmed in larger studies to identify which patients will benefit from one or the other or both radiopharmaceuticals.
View details for DOI 10.2967/jnumed.122.263971
View details for PubMedID 35552245
Image quality assessment for machine learning tasks using meta-reinforcement learning.
Medical image analysis
2022; 78: 102427
In this paper, we consider image quality assessment (IQA) as a measure of how images are amenable with respect to a given downstream task, or task amenability. When the task is performed using machine learning algorithms, such as a neural-network-based task predictor for image classification or segmentation, the performance of the task predictor provides an objective estimate of task amenability. In this work, we use an IQA controller to predict the task amenability which, itself being parameterised by neural networks, can be trained simultaneously with the task predictor. We further develop a meta-reinforcement learning framework to improve the adaptability for both IQA controllers and task predictors, such that they can be fine-tuned efficiently on new datasets or meta-tasks. We demonstrate the efficacy of the proposed task-specific, adaptable IQA approach, using two clinical applications for ultrasound-guided prostate intervention and pneumonia detection on X-ray images.
View details for DOI 10.1016/j.media.2022.102427
View details for PubMedID 35344824
- Integrating zonal priors and pathomic MRI biomarkers for improved aggressive prostate cancer detection on MRI SPIE-INT SOC OPTICAL ENGINEERING. 2022
The Learn2Reg 2021 MICCAI Grand Challenge (PIMed Team)
View details for DOI 10.1007/978-3-030-97281-3_24
EXTERNAL VALIDATION OF AN ARTIFICIAL INTELLIGENCE ALGORITHM FOR PROSTATE CANCER GLEASON GRADING AND TUMOR QUANTIFICATION
LIPPINCOTT WILLIAMS & WILKINS. 2021: E1004
View details for Web of Science ID 000693689000506
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy
JOURNAL OF UROLOGY
2021; 206 (3): 605-612
View details for Web of Science ID 000711819100035
DETAILED ANALYSIS OF MRI CONCORDANCE WITH PROSTATECTOMY HISTOPATHOLOGY USING DEEP LEARNING-BASED DIGITAL PATHOLOGY
LIPPINCOTT WILLIAMS & WILKINS. 2021: E813-E814
View details for Web of Science ID 000693689000126
Geodesic density regression for correcting 4DCT pulmonary respiratory motion artifacts.
Medical image analysis
2021; 72: 102140
Pulmonary respiratory motion artifacts are common in four-dimensional computed tomography (4DCT) of lungs and are caused by missing, duplicated, and misaligned image data. This paper presents a geodesic density regression (GDR) algorithm to correct motion artifacts in 4DCT by correcting artifacts in one breathing phase with artifact-free data from corresponding regions of other breathing phases. The GDR algorithm estimates an artifact-free lung template image and a smooth, dense, 4D (space plus time) vector field that deforms the template image to each breathing phase to produce an artifact-free 4DCT scan. Correspondences are estimated by accounting for the local tissue density change associated with air entering and leaving the lungs, and using binary artifact masks to exclude regions with artifacts from image regression. The artifact-free lung template image is generated by mapping the artifact-free regions of each phase volume to a common reference coordinate system using the estimated correspondences and then averaging. This procedure generates a fixed view of the lung with an improved signal-to-noise ratio. The GDR algorithm was evaluated and compared to a state-of-the-art geodesic intensity regression (GIR) algorithm using simulated CT time-series and 4DCT scans with clinically observed motion artifacts. The simulation shows that the GDR algorithm has achieved significantly more accurate Jacobian images and sharper template images, and is less sensitive to data dropout than the GIR algorithm. We also demonstrate that the GDR algorithm is more effective than the GIR algorithm for removing clinically observed motion artifacts in treatment planning 4DCT scans. Our code is freely available at https://github.com/Wei-Shao-Reg/GDR.
View details for DOI 10.1016/j.media.2021.102140
View details for PubMedID 34214957
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy.
The Journal of urology
PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic. MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC = 0.92) outperformed U-Net (DSC = 0.85, p < 0.0001), HED (DSC = 0.80, p < 0.0001), and radiology technicians (DSC = 0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC = 0.93) outperformed radiology technicians (DSC = 0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
View details for DOI 10.1097/JU.0000000000001783
View details for PubMedID 33878887
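The Dice similarity coefficient used throughout these segmentation comparisons measures the overlap between two binary masks: twice the intersection divided by the sum of the mask sizes. A self-contained numpy sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * inter / denom
```

A DSC of 0.92, as reported for ProGNet, means the predicted and expert prostate contours overlap almost completely; the empty-mask convention above is one common choice, not mandated by the paper.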
Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging.
PURPOSE: While multi-parametric Magnetic Resonance Imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients that underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients that underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including: 6 patients with normal MRI and no cancer, 23 patients that underwent radical prostatectomy, and 293 patients that underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer.
Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
View details for DOI 10.1002/mp.14855
View details for PubMedID 33760269
3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction.
Medical image analysis
2021; 69: 101957
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. 
The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
View details for DOI 10.1016/j.media.2021.101957
View details for PubMedID 33550008
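The PSNR figures quoted above (32.15 dB vs. 30.16 dB) follow the standard peak signal-to-noise ratio definition. A minimal numpy version; taking the data range from the reference volume is an assumption for illustration:

```python
import numpy as np

def psnr(reference, reconstruction, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image/volume
    and a reconstruction; higher means a more faithful reconstruction."""
    reference = np.asarray(reference, dtype=np.float64)
    reconstruction = np.asarray(reconstruction, dtype=np.float64)
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float('inf')  # identical inputs
    if data_range is None:
        # assume the reference spans the full intensity range
        data_range = reference.max() - reference.min()
    return 10.0 * np.log10((data_range ** 2) / mse)
```

A gain of roughly 2 dB, as between the super-resolution and BSpline reconstructions above, corresponds to a ~37% reduction in mean squared error.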
- ProGNet: Prostate Gland Segmentation on MRI with Deep Learning SPIE-INT SOC OPTICAL ENGINEERING. 2021
Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework.
Medical image analysis
2021; 75: 102288
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern ≥ 4) and indolent (Gleason Pattern = 3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men that underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men that underwent radical prostatectomy, 275 men that underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts.
In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
View details for DOI 10.1016/j.media.2021.102288
View details for PubMedID 34784540
- Weakly Supervised Registration of Prostate MRI and Histopathology Images SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 98-107
- Adaptable Image Quality Assessment Using Meta-Reinforcement Learning of Task Amenability SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 191-201
- Detecting Invasive Breast Carcinoma on Dynamic Contrast-Enhanced MRI SPIE-INT SOC OPTICAL ENGINEERING. 2021
- Intensity Normalization of Prostate MRIs using Conditional Generative Adversarial Networks for Cancer Detection SPIE-INT SOC OPTICAL ENGINEERING. 2021
- Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration SPIE-INT SOC OPTICAL ENGINEERING. 2021
Registration of pre-surgical MRI and histopathology images from radical prostatectomy via RAPSODI.
PURPOSE: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with pre-operative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align pre-surgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI.
METHODS: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a 3D reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the pre-operative MRI.
RESULTS: We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients from three institutions who underwent radical prostatectomy and whose pathology processing and scanning differed substantially. RAPSODI was evaluated in 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97±0.01 for the prostate, a Hausdorff distance of 1.99±0.70 mm for the prostate boundary, a urethra deviation of 3.09±1.45 mm, and a landmark deviation of 2.80±0.59 mm between registered histopathology images and MRI.
CONCLUSION: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
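The Dice coefficient and Hausdorff distance used to evaluate RAPSODI are standard overlap and boundary metrics. A minimal sketch over voxel-coordinate sets (helper names and toy coordinates are ours, not part of the RAPSODI code):

```python
import math

def dice(mask_a, mask_b):
    """Dice coefficient between two sets of voxel coordinates: 2|A∩B| / (|A|+|B|)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two boundary point sets."""
    def directed(src, dst):
        # Farthest point in src from its nearest neighbor in dst.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))
```

In practice these would be computed on segmented prostate masks and boundary point clouds; the brute-force nearest-neighbor search here is only meant to make the definitions concrete.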
View details for DOI 10.1002/mp.14337
View details for PubMedID 32564359
ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate.
Medical image analysis
2020; 68: 101919
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
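The last pipeline step, mapping cancer labels through an estimated transform, reduces in the affine case to applying x' = A x + t to each label coordinate. A toy 2D sketch under that assumption (names are ours and unrelated to the released ProsRegNet code, which also estimates deformable transforms):

```python
def apply_affine(points, A, t):
    """Map 2D label coordinates through an affine transform x' = A @ x + t."""
    out = []
    for (x, y) in points:
        out.append((A[0][0] * x + A[0][1] * y + t[0],
                    A[1][0] * x + A[1][1] * y + t[1]))
    return out
```

In the paper the matrices come from the deep networks' predictions; here they are simply supplied by the caller.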
View details for DOI 10.1016/j.media.2020.101919
View details for PubMedID 33385701
Multiscale, multimodal analysis of tumor heterogeneity in IDH1 mutant vs wild-type diffuse gliomas
2019; 14 (12): e0219724
Glioma is recognized to be a highly heterogeneous CNS malignancy, whose diverse cellular composition and cellular interactions have not been well characterized. To gain new clinical and biological insights into the genetically bifurcated IDH1 mutant (mt) vs wildtype (wt) forms of glioma, we integrated protein, genomic, and MR imaging data from 20 treatment-naïve glioma cases and 16 recurrent GBM cases. Multiplexed immunofluorescence (MxIF) was used to generate single-cell data for 43 protein markers representing all cancer hallmarks. Genomic sequencing (exome and RNA, from normal and tumor tissue) and quantitative magnetic resonance imaging (MRI) features (T1 post-contrast, FLAIR, and ADC protocols) from the whole tumor, peritumoral edema, and enhancing core vs an equivalent normal region were also collected from patients. Based on MxIF analysis, 85,767 cells (glioma cases) and 56,304 cells (GBM cases) were used to generate cell-level data for 24 biomarkers. K-means clustering was used to generate 7 distinct groups of cells with divergent biomarker profiles, and deconvolution was used to assign RNA data into three classes. Spatial and molecular heterogeneity metrics were generated for the cell data. All features were compared between IDHmt and IDHwt patients and were finally combined to provide a holistic/integrated comparison. Protein expression by hallmark was generally lower in the IDHmt vs wt patients. Molecular and spatial heterogeneity scores for angiogenesis and cell invasion also differed between IDHmt and wt gliomas irrespective of prior treatment and tumor grade; these differences also persisted in the MR imaging features of peritumoral edema and contrast enhancement volumes. A coherent picture of enhanced angiogenesis in IDHwt tumors was derived from multiple platforms (genomic, proteomic, and imaging) and scales, from individual proteins to cell clusters and heterogeneity, as well as bulk tumor RNA and imaging features.
Longer overall survival for IDH1mt glioma patients may reflect mutation-driven alterations in cellular, molecular, and spatial heterogeneity which manifest as discernible radiological features.
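The k-means step that grouped cells into divergent biomarker profiles can be sketched with plain Lloyd iterations over feature vectors (a dependency-free toy; the study itself clustered tens of thousands of cells over 24 biomarkers):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means over feature vectors (lists of floats); returns centers."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Move each center to the mean of its assigned points.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = [sum(col) / len(c) for col in zip(*c)]
    return centers
```

For cell data, each point would be a cell's biomarker-expression vector and k the desired number of phenotype groups (7 in the study).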
View details for DOI 10.1371/journal.pone.0219724
View details for Web of Science ID 000515089200003
View details for PubMedID 31881020
View details for PubMedCentralID PMC6934292
AUTOMATED DETECTION OF PROSTATE CANCER ON MULTIPARAMETRIC MRI USING DEEP NEURAL NETWORKS TRAINED ON SPATIAL COORDINATES AND PATHOLOGY OF BIOPSY CORES
LIPPINCOTT WILLIAMS & WILKINS. 2019: E1098
View details for Web of Science ID 000473345203470
ANISOTROPIC SUPER RESOLUTION IN PROSTATE MRI USING SUPER RESOLUTION GENERATIVE ADVERSARIAL NETWORKS
IEEE. 2019: 1688–91
View details for Web of Science ID 000485040000360
- Spatial integration of radiology and pathology images to characterize breast cancer aggressiveness on pre-surgical MRI SPIE-INT SOC OPTICAL ENGINEERING. 2019
- Framework for the co-registration of MRI and Histology Images in Prostate Cancer Patients with Radical Prostatectomy SPIE-INT SOC OPTICAL ENGINEERING. 2019
A deep learning-based algorithm for 2-D cell segmentation in microscopy images
2018; 19: 365
Automatic and reliable characterization of cells in cell cultures is key to several applications such as cancer research and drug discovery. Given the recent advances in light microscopy and the need for accurate and high-throughput analysis of cells, automated algorithms have been developed for segmenting and analyzing the cells in microscopy images. Nevertheless, accurate, generic, and robust whole-cell segmentation remains a persisting need to precisely quantify cells' morphological properties, phenotypes, and sub-cellular dynamics. We present a single-channel whole-cell segmentation algorithm. We use markers that stain the whole cell, but with less staining in the nucleus, and without using a separate nuclear stain. We show the utility of our approach in microscopy images of cell cultures in a wide variety of conditions. Our algorithm uses a deep learning approach to learn and predict locations of the cells and their nuclei, and combines that with thresholding and watershed-based segmentation. We trained and validated our approach using different sets of images, containing cells stained with various markers and imaged at different magnifications. Our approach achieved an 86% similarity to ground truth segmentation when identifying and separating cells. The proposed algorithm is able to automatically segment cells from single-channel images using a variety of markers and magnifications.
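The classical post-processing half of such a pipeline, thresholding a probability or intensity map and then grouping foreground pixels into objects, can be illustrated with a simple threshold-plus-connected-components pass (the paper additionally applies watershed splitting, which we omit; names are ours):

```python
from collections import deque

def label_components(img, thresh):
    """Threshold a 2D grayscale image and label 4-connected foreground regions."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and labels[y][x] == 0:
                count += 1                    # start a new component
                labels[y][x] = count
                q = deque([(y, x)])
                while q:                      # BFS flood fill over foreground pixels
                    cy, cx = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                                img[ny][nx] >= thresh and labels[ny][nx] == 0:
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count
```

Watershed would then split components where two cells touch, using the predicted nucleus locations as seeds.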
View details for PubMedID 30285608
- An Application of Generative Adversarial Networks for Super Resolution Medical Imaging IEEE. 2018: 326–31
Co-registration of pre-operative CT with ex vivo surgically excised ground glass nodules to define spatial extent of invasive adenocarcinoma on in vivo imaging: a proof-of-concept study.
To develop an approach for radiology-pathology fusion of ex vivo histology of surgically excised pulmonary nodules with pre-operative CT, to radiologically map the spatial extent of the invasive adenocarcinomatous component of the nodule. Six subjects (age: 75 ± 11 years) with pre-operative CT and surgically excised ground-glass nodules (size: 22.5 ± 5.1 mm) with a significant invasive adenocarcinomatous component (>5 mm) were included. The pathologist outlined disease extent on digitized histology specimens; two radiologists and a pulmonary critical care physician delineated the entire nodule on CT (in-plane resolution: <0.8 mm, inter-slice distance: 1-5 mm). We introduced a novel reconstruction approach to localize histology slices in 3D relative to each other while using the CT scan as a spatial constraint. This enabled the spatial mapping of the extent of tumour invasion from histology onto CT. Good overlap of the 3D reconstructed histology and the nodule outlined on CT was observed (65.9 ± 5.2%). Reduction in 3D misalignment of corresponding anatomical landmarks on histology and CT was observed (1.97 ± 0.42 mm). Moreover, the CT attenuation (HU) distributions were different when comparing invasive and in situ regions. This proof-of-concept study suggests that our fusion method can enable the spatial mapping of the invasive adenocarcinomatous component from 2D histology slices onto in vivo CT.
• 3D reconstructions are generated from 2D histology specimens of ground glass nodules.
• The reconstruction methodology used pre-operative in vivo CT as a 3D spatial constraint.
• The methodology maps adenocarcinoma extent from digitized histology onto in vivo CT.
• The methodology potentially facilitates the discovery of a CT signature of invasive adenocarcinoma.
View details for DOI 10.1007/s00330-017-4813-0
View details for PubMedID 28386717
View details for PubMedCentralID PMC5630490
Computational imaging reveals shape differences between normal and malignant prostates on MRI
We seek to characterize differences in the shape of the prostate and the central gland (combined central and transitional zones) between men with biopsy-confirmed prostate cancer and men identified as not having prostate cancer, either on account of a negative biopsy or because pelvic imaging was performed for a non-prostate malignancy. T2w MRI scans from 70 men were acquired at three institutions. The cancer positive group (PCa+) comprised 35 biopsy positive (Bx+) subjects from three institutions (Gleason scores: 6-9, Stage: T1-T3). The negative group (PCa-) combined 24 biopsy negative (Bx-) subjects from two institutions and 11 subjects diagnosed with rectal cancer but with no clinical or MRI indications of prostate cancer (Cl-). The boundaries of the prostate and central gland were delineated on T2w MRI by two expert raters and were used to construct statistical shape atlases for the PCa+, Bx- and Cl- prostates. An atlas comparison was performed via per-voxel statistical tests to localize shape differences (significance assessed at p < 0.05). The atlas comparison revealed central gland hypertrophy in the Bx- subpopulation, resulting in significant volume and posterior side shape differences relative to the PCa+ group. Significant differences in the corresponding prostate shapes were noted at the apex when comparing the Cl- and PCa+ prostates.
View details for DOI 10.1038/srep41261
View details for Web of Science ID 000393299000001
View details for PubMedID 28145532
View details for PubMedCentralID PMC5286513
Prostate shapes on pre-treatment MRI between prostate cancer patients who do and do not undergo biochemical recurrence are different: Preliminary Findings
2017; 7 (1): 15829
View details for DOI 10.1038/s41598-017-13443-8
- Field Effect Induced Organ Distension (FOrge) Features Predicting Biochemical Recurrence from Pre-treatment Prostate MRI Medical Image Computing and Computer Assisted Intervention. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017: 442-449
Co-Registration of ex vivo Surgical Histopathology and in vivo T2 weighted MRI of the Prostate via multi-scale spectral embedding representation
2017; 7: 8717
View details for DOI 10.1038/s41598-017-08969-w
Identifying in vivo DCE MRI markers associated with microvessel architecture and gleason grades of prostate cancer
JOURNAL OF MAGNETIC RESONANCE IMAGING
2016; 43 (1): 149-158
To identify computer-extracted in vivo dynamic contrast enhanced (DCE) MRI markers associated with quantitative histomorphometric (QH) characteristics of microvessels and Gleason scores (GS) in prostate cancer. This study considered retrospective data from 23 biopsy-confirmed prostate cancer patients who underwent 3 Tesla multiparametric MRI before radical prostatectomy (RP). Representative slices from RP specimens were stained with the vascular marker CD31. Tumor extent was mapped from RP sections onto DCE MRI using nonlinear registration methods. Seventy-seven microvessel QH features and 18 DCE MRI kinetic features were extracted and evaluated for their ability to distinguish low from intermediate and high GS. The effect of temporal sampling on kinetic features was assessed, and correlations between those robust to temporal resolution and microvessel features discriminative of GS were examined. A total of 12 microvessel architectural features were discriminative of low and intermediate/high grade tumors with area under the receiver operating characteristic curve (AUC) > 0.7. These features were most highly correlated with mean washout gradient (WG) (max rho = -0.62). Independent analysis revealed WG to be moderately robust to temporal resolution (intraclass correlation coefficient [ICC] = 0.63) and WG variance, which was poorly correlated with microvessel features, to be predictive of low grade tumors (AUC = 0.77). Enhancement ratio was the most robust (ICC = 0.96) and discriminative (AUC = 0.78) kinetic feature but was moderately correlated with microvessel features (max rho = -0.52). Computer-extracted features of prostate DCE MRI appear to be correlated with microvessel architecture and may be discriminative of low versus intermediate and high GS.
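Among the kinetic features, the washout gradient is commonly defined as the post-peak slope of the signal-intensity-vs-time curve. A least-squares sketch under that common definition (the paper's exact feature formulas may differ; names are ours):

```python
def washout_gradient(times, signal):
    """Least-squares slope of signal intensity vs. time after the enhancement peak."""
    peak = max(range(len(signal)), key=signal.__getitem__)
    t, s = times[peak:], signal[peak:]
    n = len(t)
    if n < 2:
        return 0.0                     # peak at the last time point: no washout phase
    mt, ms = sum(t) / n, sum(s) / n
    num = sum((ti - mt) * (si - ms) for ti, si in zip(t, s))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den
```

A strongly negative value indicates rapid contrast washout, one of the kinetic behaviors the study relates to microvessel architecture.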
View details for DOI 10.1002/jmri.24975
View details for Web of Science ID 000368741400013
View details for PubMedID 26110513
View details for PubMedCentralID PMC4691230
Radiomics Analysis on FLT-PET/MRI for Characterization of Early Treatment Response in Renal Cell Carcinoma: A Proof-of-Concept Study
2016; 9 (2): 155-162
View details for DOI 10.1016/j.tranon.2016.01.008
AutoStitcher: An Automated Program for Efficient and Robust Reconstruction of Digitized Whole Histological Sections from Tissue Fragments
2016; 6: 29906
View details for DOI 10.1038/srep29906
Framework for 3D histologic reconstruction and fusion with in vivo MRI: Preliminary results of characterizing pulmonary inflammation in a mouse model
2015; 42 (8): 4822-4832
Pulmonary inflammation is associated with a variety of diseases. Assessing pulmonary inflammation on in vivo imaging may facilitate the early detection and treatment of lung diseases. Although routinely used in thoracic imaging, computed tomography has thus far not been compellingly shown to characterize inflammation in vivo. Alternatively, magnetic resonance imaging (MRI) is a nonionizing radiation technique to better visualize and characterize pulmonary tissue. Prior to routine adoption of MRI for early characterization of inflammation in humans, a rigorous and quantitative characterization of the utility of MRI to identify inflammation is required. Such characterization may be achieved by considering ex vivo histology as the ground truth, since it enables the definitive spatial assessment of inflammation. In this study, the authors introduce a novel framework to integrate 2D histology, ex vivo and in vivo imaging to enable the mapping of the extent of disease from ex vivo histology onto in vivo imaging, with the goal of facilitating computerized feature analysis and interrogation of disease appearance on in vivo imaging. The authors' framework was evaluated in a preclinical preliminary study aimed to identify computer-extracted features on in vivo MRI associated with chronic pulmonary inflammation. The authors' image analytics framework first involves reconstructing the histologic volume in 3D from individual histology slices. Second, the authors map the disease ground truth onto in vivo MRI via coregistration with 3D histology using the ex vivo lung MRI as a conduit. Finally, computerized feature analysis of the disease extent is performed to identify candidate in vivo imaging signatures of disease presence and extent. The authors evaluated the framework by assessing the quality of the 3D histology reconstruction and the histology-MRI fusion, in the context of an initial use case involving characterization of chronic inflammation in a mouse model.
The authors' evaluation considered three mice, two with an inflammation phenotype and one control. The authors' iterative 3D histology reconstruction yielded a 70.1% ± 2.7% overlap with the ex vivo MRI volume. Across a total of 17 anatomic landmarks manually delineated at the division of airways, the target registration error between the ex vivo MRI and 3D histology reconstruction was 0.85 ± 0.44 mm, suggesting that a good alignment of the ex vivo 3D histology and ex vivo MRI had been achieved. The 3D histology-in vivo MRI coregistered volumes resulted in an overlap of 73.7% ± 0.9%. Preliminary computerized feature analysis was performed on an additional four control mice, for a total of seven mice considered in this study. Gabor texture filters appeared to best capture differences between the inflamed and noninflamed regions on MRI. The authors' 3D histology reconstruction and multimodal registration framework were successfully employed to reconstruct the histology volume of the lung and fuse it with in vivo MRI to create a ground truth map for inflammation on in vivo MRI. The analytic platform presented here lays the framework for a rigorous validation of the identified imaging features for chronic lung inflammation on MRI in a large prospective cohort.
View details for DOI 10.1118/1.4923161
View details for Web of Science ID 000358933000039
View details for PubMedID 26233209
View details for PubMedCentralID PMC4522013
Prostatome: A combined anatomical and disease based MRI atlas of the prostate
2014; 41 (7)
In this work, the authors introduce a novel framework, the anatomically constrained registration (AnCoR) scheme, and apply it to create a fused anatomic-disease atlas of the prostate, which the authors refer to as the prostatome. The prostatome combines an MRI-based anatomic and a histology-based disease atlas. Statistical imaging atlases allow for the integration of information across multiple scales and imaging modalities into a single canonical representation, in turn enabling a fused anatomical-disease representation which may facilitate the characterization of disease appearance relative to anatomic structures. While statistical atlases have been extensively developed and studied for the brain, approaches that attempt to combine pathology and imaging data for the study of prostate pathology are not extant. This work seeks to address this gap. The AnCoR framework optimizes a scoring function composed of two surface (prostate and central gland) misalignment measures and one intensity-based similarity term. This ensures the correct mapping of anatomic regions into the atlas, even when regional MRI intensities are inconsistent or highly variable between subjects. The framework allows for creation of an anatomic imaging and a disease atlas, while enabling their fusion into the anatomic imaging-disease atlas. The atlas presented here was constructed using 83 subjects with biopsy-confirmed cancer who had pre-operative MRI (collected at two institutions) followed by radical prostatectomy. The imaging atlas results from mapping the in vivo MRI into the canonical space, while the anatomic regions serve as domain constraints. Elastic co-registration of MRI and corresponding ex vivo histology provides "ground truth" mapping of cancer extent on in vivo imaging for 23 subjects. AnCoR was evaluated relative to alternative construction strategies that use either MRI intensities or the prostate surface alone for registration.
The AnCoR framework yielded a central gland Dice similarity coefficient (DSC) of 90% and a prostate DSC of 88%, while the misalignment of the urethra and verumontanum was found to be 3.45 mm and 4.73 mm, respectively, both measured to be significantly smaller compared to the alternative strategies. As might have been anticipated from our limited cohort of biopsy-confirmed cancers, the disease atlas showed that most of the tumor extent was limited to the peripheral zone. Moreover, central gland tumors were typically larger in size, possibly because they are only discernible at a much later stage. The authors presented the AnCoR framework to explicitly model anatomic constraints for the construction of a fused anatomic imaging-disease atlas. The framework was applied to constructing a preliminary version of an anatomic-disease atlas of the prostate, the prostatome. The prostatome could facilitate the quantitative characterization of gland morphology and imaging features of prostate cancer. These techniques may be applied to a larger dataset to create a fully developed prostatome that could serve as a spatial prior for targeted biopsies by urologists. Additionally, the AnCoR framework could allow for incorporation of complementary imaging and molecular data, thereby enabling their careful correlation for population-based radio-omics studies.
View details for DOI 10.1118/1.4881515
View details for Web of Science ID 000339009800034
View details for PubMedID 24989400
View details for PubMedCentralID PMC4187363
Identifying Quantitative In Vivo Multi-Parametric MRI Features For Treatment Related Changes after Laser Interstitial Thermal Therapy of Prostate Cancer
2014; 144: 13-23
View details for DOI 10.1016/j.neucom.2014.03.065
Anisotropic Smoothing Regularization (AnSR) in Thirion's Demons Registration Evaluates Brain MRI Tissue Changes Post-Laser Ablation
IEEE Engineering in Medicine and Biology Sciences
View details for DOI 10.1109/EMBC.2013.6610423
Automated tracing of filaments in 3D electron tomography reconstructions using Sculptor and Situs
JOURNAL OF STRUCTURAL BIOLOGY
2012; 178 (2): 121-128
The molecular graphics program Sculptor and the command-line suite Situs are software packages for the integration of biophysical data across spatial resolution scales. Herein, we provide an overview of recently developed tools relevant to cryo-electron tomography (cryo-ET), with an emphasis on functionality supported by Situs 2.7.1 and Sculptor 2.1.1. We describe a work flow for automatically segmenting filaments in cryo-ET maps including denoising, local normalization, feature detection, and tracing. Tomograms of cellular actin networks exhibit both cross-linked and bundled filament densities. Such filamentous regions in cryo-ET data sets can then be segmented using a stochastic template-based search, VolTrac. The approach combines a genetic algorithm and a bidirectional expansion with a tabu search strategy to localize and characterize filamentous regions. The automated filament segmentation by VolTrac compares well to a manual one performed by expert users, and it allows an efficient and reproducible analysis of large data sets. The software is free, open source, and can be used on Linux, Macintosh or Windows computers.
View details for DOI 10.1016/j.jsb.2012.03.001
View details for Web of Science ID 000304287400007
View details for PubMedID 22433493
View details for PubMedCentralID PMC3440181
Evolutionary bidirectional expansion for the tracing of alpha helices in cryo-electron microscopy reconstructions
JOURNAL OF STRUCTURAL BIOLOGY
2012; 177 (2): 410-419
Cryo-electron microscopy (cryo-EM) enables the imaging of macromolecular complexes in near-native environments at resolutions that often permit the visualization of secondary structure elements. For example, alpha helices frequently show consistent patterns in volumetric maps, exhibiting rod-like structures of high density. Here, we introduce VolTrac (Volume Tracer) - a novel technique for the annotation of alpha-helical density in cryo-EM data sets. VolTrac combines a genetic algorithm and a bidirectional expansion with a tabu search strategy to trace helical regions. Our method takes advantage of the stochastic search by using a genetic algorithm to identify optimal placements for a short cylindrical template, avoiding exploration of already characterized tabu regions. These placements are then utilized as starting positions for the adaptive bidirectional expansion that characterizes the curvature and length of the helical region. The method reliably predicted helices with seven or more residues in experimental and simulated maps at intermediate (4-10 Å) resolution. The observed success rates, ranging from 70.6% to 100%, depended on the map resolution and validation parameters. For successful predictions, the helical axes were located within 2 Å from known helical axes of atomic structures.
View details for DOI 10.1016/j.jsb.2011.11.029
View details for Web of Science ID 000300755400026
View details for PubMedID 22155667
View details for PubMedCentralID PMC3288247
An assembly model of Rift Valley fever virus.
Frontiers in microbiology
2012; 3: 254
Rift Valley fever virus (RVFV) is a bunyavirus endemic to Africa and the Arabian Peninsula that infects humans and livestock. The virus encodes two glycoproteins, Gn and Gc, which represent the major structural antigens and are responsible for host cell receptor binding and fusion. Both glycoproteins are organized on the virus surface as cylindrical hollow spikes that cluster into distinct capsomers with the overall assembly exhibiting an icosahedral symmetry. Currently, no experimental three-dimensional structure for any entire bunyavirus glycoprotein is available. Using fold recognition, we generated molecular models for both RVFV glycoproteins and found significant structural matches between the RVFV Gn protein and the influenza virus hemagglutinin protein and a separate match between RVFV Gc protein and Sindbis virus envelope protein E1. Using these models, the potential interaction and arrangement of both glycoproteins in the RVFV particle was analyzed, by modeling their placement within the cryo-electron microscopy density map of RVFV. We identified four possible arrangements of the glycoproteins in the virion envelope. Each assembly model proposes that the ectodomain of Gn forms the majority of the protruding capsomer and that Gc is involved in formation of the capsomer base. Furthermore, Gc is suggested to facilitate intercapsomer connections. The proposed arrangement of the two glycoproteins on the RVFV surface is similar to that described for the alphavirus E1-E2 proteins. Our models will provide guidance to better understand the assembly process of phleboviruses and such structural studies can also contribute to the design of targeted antivirals.
View details for DOI 10.3389/fmicb.2012.00254
View details for PubMedID 22837754
View details for PubMedCentralID PMC3400131
Developing a denoising filter for electron microscopy and tomography data in the cloud
View details for DOI 10.1007/s12551-012-0083-x
Evolutionary tabu search strategies for the simultaneous registration of multiple atomic structures in cryo-EM reconstructions
JOURNAL OF STRUCTURAL BIOLOGY
2010; 170 (1): 164-171
A structural characterization of multi-component cellular assemblies is essential to explain the mechanisms governing biological function. Macromolecular architectures may be revealed by integrating information collected from various biophysical sources - for instance, by interpreting low-resolution electron cryomicroscopy reconstructions in relation to the crystal structures of the constituent fragments. A simultaneous registration of multiple components is beneficial when building atomic models as it introduces additional spatial constraints to facilitate the native placement inside the map. The high-dimensional nature of such a search problem prevents the exhaustive exploration of all possible solutions. Here we introduce a novel method, based on genetic algorithms, for the efficient exploration of the multi-body registration search space. The classic scheme of a genetic algorithm was enhanced with new genetic operations, tabu search, and parallel computing strategies and validated on a benchmark of synthetic and experimental cryo-EM datasets. Even at a low level of detail, for example 35-40 Å, the technique successfully registered multiple component biomolecules, measuring accuracies within one order of magnitude of the nominal resolutions of the maps. The algorithm was implemented using the Sculptor molecular modeling framework, which also provides a user-friendly graphical interface and enables an instantaneous, visual exploration of intermediate solutions.
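The combination of a genetic search with a tabu list, skipping candidate placements that have already been explored, can be sketched on a toy one-dimensional scoring landscape (this illustrates the general strategy only, not the Sculptor implementation; all names are ours):

```python
import random

def ga_tabu(score, init, mutate, generations=100, pop_size=20, seed=0):
    """Toy genetic search that keeps a tabu set of already-visited candidates."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    tabu = set(pop)
    best = max(pop, key=score)
    for _ in range(generations):
        # Selection: keep the better half as parents.
        parents = sorted(pop, key=score, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            child = mutate(rng.choice(parents), rng)
            for _ in range(5):         # retry a few times to escape tabu regions
                if child not in tabu:
                    break
                child = mutate(rng.choice(parents), rng)
            tabu.add(child)
            children.append(child)
        pop = children
        best = max(pop + [best], key=score)
    return best
```

In the multi-body registration setting, `score` would be a map-fitting cross-correlation and each candidate a set of rigid-body placements; here a quadratic toy function stands in.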
View details for DOI 10.1016/j.jsb.2009.12.028
View details for Web of Science ID 000276329600020
View details for PubMedID 20056148
View details for PubMedCentralID PMC2872094
Using Sculptor and Situs for simultaneous assembly of atomic components into low-resolution shapes
Journal of Structural Biology
2010; 173: 428-435
View details for DOI 10.1016/j.jsb.2010.11.002
Biomolecular pleiomorphism probed by spatial interpolation of coarse models
2008; 24 (21): 2460-2466
In low resolution structures of biological assemblies one can often observe conformational deviations that require a flexible rearrangement of structural domains fitted at the atomic level. We are evaluating interpolation methods for the flexible alignment of atomic models based on coarse models. Spatial interpolation is well established in image-processing and visualization to describe the overall deformation or warping of an object or an image. Combined with a coarse representation of the biological system by feature vectors, such methods can provide a flexible approximation of the molecular structure. We have compared three well-known interpolation techniques and evaluated the results by comparing them with constrained molecular dynamics. One method, inverse distance weighting interpolation, consistently produced models that were nearly indistinguishable on the alpha carbon level from the molecular dynamics results. The method is simple to apply and enables flexing of structures by non-expert modelers. This is useful for the basic interpretation of volumetric data in biological applications such as electron microscopy. The method can be used as a general interpretation tool for sparsely sampled motions derived from coarse models.
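Inverse distance weighting (Shepard interpolation), the method the study found nearly indistinguishable from molecular dynamics at the alpha-carbon level, weights each control point by an inverse power of its distance to the query point. A minimal scalar-valued sketch (in the paper the interpolated quantities are 3D displacements of feature vectors; names are ours):

```python
def idw(known_pts, known_vals, query, power=2.0):
    """Inverse distance weighting: Shepard interpolation at a query point."""
    num = den = 0.0
    for p, v in zip(known_pts, known_vals):
        d2 = sum((a - b) ** 2 for a, b in zip(p, query))
        if d2 == 0.0:
            return v                   # query coincides with a control point
        w = d2 ** (-power / 2.0)       # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```

Applied per coordinate of the displacement field, this yields the smooth warp used to flex an atomic model toward a deformed coarse model.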
View details for DOI 10.1093/bioinformatics/btn461
View details for Web of Science ID 000260381200007
View details for PubMedID 18757874
View details for PubMedCentralID PMC2732278
VITA - An Interactive 3-D Visualization System to Enhance Student Understanding of Mathematical Concepts in Medical Decision-making
IEEE Computer-Based Medical Systems
View details for DOI 10.1109/CBMS.2008.35
A mammalian microRNA expression atlas based on small RNA library sequencing
2007; 129 (7): 1401-1414
MicroRNAs (miRNAs) are small noncoding regulatory RNAs that reduce stability and/or translation of fully or partially sequence-complementary target mRNAs. In order to identify miRNAs and to assess their expression patterns, we sequenced over 250 small RNA libraries from 26 different organ systems and cell types of human and rodents that were enriched in neuronal as well as normal and malignant hematopoietic cells and tissues. We present expression profiles derived from clone count data and provide computational tools for their analysis. Unexpectedly, a relatively small set of miRNAs, many of which are ubiquitously expressed, account for most of the differences in miRNA profiles between cell lineages and tissues. This broad survey also provides detailed and accurate information about mature sequences, precursors, genome locations, maturation processes, inferred transcriptional units, and conservation patterns. We also propose a subclassification scheme for miRNAs for assisting future experimental and computational functional analyses.
View details for DOI 10.1016/j.cell.2007.04.040
View details for Web of Science ID 000247911400024
View details for PubMedID 17604727
View details for PubMedCentralID PMC2681231