Honors & Awards
First Place in the CTVIE19 Grand Challenge, The American Association of Physicists in Medicine (2019)
Ballard Seashore Dissertation Fellowship, University of Iowa (2019)
China National Scholarship (acceptance rate 0.2%), Ministry of Education of the People’s Republic of China (2012)
First Prize, Jiangsu Province Undergraduate Electronic Design Contest (TI Cup), Jiangsu Provincial Department of Education, Jiangsu Province, China (2012)
First Prize, Jiangsu Province Undergraduate Mathematics Contest, Jiangsu Provincial Department of Education, Jiangsu Province, China (2012)
First Prize, National Senior High School Mathematics Contest (acceptance rate 0.04%), Jiangsu Mathematical Society, Jiangsu Province, China (2009)
Education
Doctor of Philosophy, University of Iowa, Electrical and Computer Engineering (2019)
Master of Science, University of Iowa, Mathematics (2018)
Master of Science, University of Iowa, Electrical and Computer Engineering (2016)
Bachelor of Engineering, Soochow University, Electronics and Information Engineering (2014)
Mirabela Rusu, Postdoctoral Faculty Sponsor
Publications
Geodesic density regression for correcting 4DCT pulmonary respiratory motion artifacts.
Medical Image Analysis
2021; 72: 102140
Pulmonary respiratory motion artifacts are common in four-dimensional computed tomography (4DCT) of the lungs and are caused by missing, duplicated, and misaligned image data. This paper presents a geodesic density regression (GDR) algorithm that corrects motion artifacts in one breathing phase using artifact-free data from corresponding regions of other breathing phases. The GDR algorithm estimates an artifact-free lung template image and a smooth, dense, 4D (space plus time) vector field that deforms the template image to each breathing phase to produce an artifact-free 4DCT scan. Correspondences are estimated by accounting for the local tissue density change associated with air entering and leaving the lungs, and binary artifact masks are used to exclude regions with artifacts from the image regression. The artifact-free lung template image is generated by mapping the artifact-free regions of each phase volume to a common reference coordinate system using the estimated correspondences and then averaging. This procedure generates a fixed view of the lung with an improved signal-to-noise ratio. The GDR algorithm was evaluated and compared to a state-of-the-art geodesic intensity regression (GIR) algorithm using simulated CT time series and 4DCT scans with clinically observed motion artifacts. The simulations show that the GDR algorithm produces significantly more accurate Jacobian images and sharper template images, and is less sensitive to data dropout, than the GIR algorithm. We also demonstrate that the GDR algorithm is more effective than the GIR algorithm at removing clinically observed motion artifacts in treatment planning 4DCT scans. Our code is freely available at https://github.com/Wei-Shao-Reg/GDR.
View details for DOI 10.1016/j.media.2021.102140
View details for PubMedID 34214957
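The density action at the heart of the GDR algorithm conserves tissue mass: when a deformation locally compresses the lung, CT density scales by the Jacobian determinant of the transform. A toy 1-D pure-Python sketch of this idea (the function name and example values are illustrative, not from the paper):

```python
def density_action(template, phi_inverse, jacobian):
    """Deform a 1-D density image under a transform phi:
    I_new(x) = |Dphi^-1(x)| * I(phi^-1(x)).
    phi_inverse[i] is the template voxel that output voxel i maps back to;
    jacobian[i] is the local volume ratio |Dphi^-1| at output voxel i."""
    return [jacobian[i] * template[phi_inverse[i]] for i in range(len(phi_inverse))]

# Compressing two template voxels of density 50 into one output voxel
# doubles the local density (Jacobian 2) while conserving total mass.
template = [50, 50, 50, 50]                # total mass 200
deformed = density_action(template, [0, 1, 2], [1, 1, 2])
print(deformed, sum(deformed))             # [50, 50, 100] 200
```

A plain intensity action would drop the Jacobian factor, leaving the compressed voxel at density 50 and losing 50 units of mass, which is why density regression better models air leaving the lung.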
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy.
The Journal of Urology
PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic. MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p < 0.0001), HED (DSC=0.80, p < 0.0001), and radiology technicians (DSC=0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
View details for DOI 10.1097/JU.0000000000001783
View details for PubMedID 33878887
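The Dice similarity coefficient used to score the segmentations above has a simple closed form; a minimal pure-Python version over sets of foreground voxel indices (illustrative, not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentations,
    each given as a set of foreground voxel indices."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

print(dice({1, 2, 3, 4}, {3, 4, 5, 6}))  # 0.5: overlap of 2, 8 voxels total
```

DSC counts the overlap twice, so it is more forgiving than intersection-over-union: the same example has IoU 2/6 ≈ 0.33.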
Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging.
PURPOSE: While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including 6 patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
View details for DOI 10.1002/mp.14855
View details for PubMedID 33760269
3D Registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction.
Medical Image Analysis
2021; 69: 101957
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
View details for DOI 10.1016/j.media.2021.101957
View details for PubMedID 33550008
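PSNR, the reconstruction metric reported above (32.15 dB vs 30.16 dB), is derived from the mean squared error against the reference volume; a minimal pure-Python sketch over flattened voxel sequences (illustrative only):

```python
import math

def psnr(reference, reconstructed, max_value):
    """Peak signal-to-noise ratio in dB between a reference image/volume and
    a reconstruction, both flattened to equal-length voxel sequences."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_value) - 10 * math.log10(mse)

# An off-by-one error on every 8-bit voxel gives MSE = 1, about 48.13 dB.
print(round(psnr([255, 255], [254, 254], 255), 2))  # 48.13
```

Because the scale is logarithmic, the roughly 2 dB gap reported above corresponds to about a 1.6x difference in mean squared error.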
Registration of pre-surgical MRI and histopathology images from radical prostatectomy via RAPSODI.
PURPOSE: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with pre-operative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align pre-surgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI. METHODS: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a 3D reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the pre-operative MRI. RESULTS: We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients who underwent radical prostatectomy at three institutions with very different pathology processing and scanning protocols. RAPSODI was evaluated on 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97±0.01 for the prostate, a Hausdorff distance of 1.99±0.70 mm for the prostate boundary, a urethra deviation of 3.09±1.45 mm, and a landmark deviation of 2.80±0.59 mm between registered histopathology images and MRI. CONCLUSION: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
View details for DOI 10.1002/mp.14337
View details for PubMedID 32564359
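The Hausdorff distance reported above measures the worst-case boundary disagreement between registered contours; a minimal pure-Python version over 2-D point sets (illustrative, not the RAPSODI code):

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets, e.g. prostate
    boundary contours from MRI and from a registered histopathology slice."""
    def directed(src, dst):
        # worst over src of the distance to its nearest neighbor in dst
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

print(hausdorff([(0, 0), (1, 0)], [(0, 0), (3, 0)]))  # 2.0
```

Unlike the Dice coefficient, which averages agreement over the whole region, Hausdorff distance is driven entirely by the single worst-matched boundary point, which is why both metrics are reported together.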
Modeling the impact of out-of-phase ventilation on normal lung tissue response to radiation dose
To create a dose-response model that predicts lung ventilation change following radiation therapy, and to examine the effects of out-of-phase ventilation. The dose-response model was built using 27 human subjects who underwent radiation therapy (RT) in an IRB-approved trial. For each four-dimensional computed tomography scan, two ventilation maps were created by calculating the N-phase local expansion ratio (LERN) using most or all breathing phases and the 2-phase LER (LER2) using only the end-inspiration and end-expiration breathing phases. A polynomial regression model was created using the pre-RT and post-RT LERN ventilation maps and dose distributions for each subject, and cross-validated with a leave-one-out method. Further validation of the model was performed using 15 additional human subjects using common statistical operating characteristics and gamma pass rates. For voxels receiving 20 Gy or greater, there was a significant increase from 52% to 59% (P = 0.03) in the gamma pass rates comparing the LERN model's predicted post-RT Jacobian maps to the actual post-RT Jacobian maps, relative to the LER2 model. Additionally, accuracy significantly increased (P = 0.03) from 68% to 75% using the LERN model, relative to the LER2 model. The LERN model was significantly more accurate than the LER2 model at predicting post-RT ventilation maps. More accurate post-RT ventilation maps will aid in producing higher quality functional avoidance treatment plans, allowing for potentially better normal tissue sparing.
View details for DOI 10.1002/mp.14146
View details for Web of Science ID 000526995800001
View details for PubMedID 32187683
Quantifying Regional Lung Deformation Using Four-Dimensional Computed Tomography: A Comparison of Conventional and Oscillatory Ventilation
Frontiers in Physiology
2020; 11: 14
Mechanical ventilation strategies that reduce the heterogeneity of regional lung stress and strain may reduce the risk of ventilator-induced lung injury (VILI). In this study, we used registration of four-dimensional computed tomographic (4DCT) images to assess regional lung aeration and deformation in 10 pigs under baseline conditions and following acute lung injury induced with oleic acid. CT images were obtained via dynamic axial imaging (Siemens SOMATOM Force) during conventional pressure-controlled mechanical ventilation (CMV), as well as during high-frequency and multi-frequency oscillatory ventilation modalities (HFOV and MFOV, respectively). Our results demonstrate that, compared to conventional ventilation, oscillatory modalities reduce both intratidal strain throughout the lung and the spatial gradients of dynamic strain along the dorsal-ventral axis. Harmonic distortion of parenchymal deformation was observed during HFOV with a single discrete sinusoid delivered at the airway opening, suggesting inherent mechanical nonlinearity of the lung tissues. MFOV may therefore provide improved lung-protective ventilation by reducing strain magnitudes and spatial gradients of strain compared to either CMV or HFOV.
View details for DOI 10.3389/fphys.2020.00014
View details for Web of Science ID 000518925200001
View details for PubMedID 32153417
View details for PubMedCentralID PMC7044245
N-Phase Local Expansion Ratio for Characterizing Out-of-Phase Lung Ventilation.
IEEE Transactions on Medical Imaging
2020; 39 (6): 2025–34
Out-of-phase ventilation occurs when local regions of the lung reach their maximum or minimum volumes at breathing phases other than the global end-inhalation or end-exhalation phases. This paper presents the N-phase local expansion ratio (LERN) as a surrogate for lung ventilation. A common approach to estimating lung ventilation is to use image registration to align the end-exhalation and end-inhalation 3DCT images and then analyze the resulting correspondence map. This 2-phase local expansion ratio (LER2) is limited because it ignores out-of-phase ventilation and thus may underestimate local lung ventilation. To overcome this limitation, LERN measures the maximum ratio of local expansion and contraction over the entire breathing cycle. Comparing LER2 to LERN provides a means for detecting and characterizing locations of the lung that experience out-of-phase ventilation. We present a novel in-phase/out-of-phase ventilation (IOV) function plot to visualize and measure the amount of high-function IOV that occurs during a breathing cycle. Treatment planning 4DCT scans collected during coached breathing from 32 human subjects with lung cancer were analyzed in this study. Results show that out-of-phase breathing occurred in all subjects and that the spatial distribution of out-of-phase ventilation varied from subject to subject. For the 32 subjects analyzed, 50% of the out-of-phase regions on average were mislabeled as low-function by LER2 (high-function threshold of 1.1, IOV threshold of 1.05). 4DCT and xenon-enhanced CT of four sheep showed that LER8 (LERN with N = 8) is more accurate than LER2 for measuring lung ventilation.
View details for DOI 10.1109/TMI.2019.2963083
View details for PubMedID 31899418
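The difference between LER2 and LERN above comes down to which breathing phases enter the max/min; a pure-Python sketch for a single voxel, assuming per-phase Jacobian determinants computed relative to a common reference phase (values illustrative, not from the paper):

```python
def ler_n(jacobians):
    """N-phase local expansion ratio for one voxel: max-to-min ratio of local
    volume (Jacobian determinant) over all N breathing phases."""
    return max(jacobians) / min(jacobians)

def ler_2(jacobians):
    """2-phase ratio using only the global end-exhale (first) and end-inhale
    (last) phases, ignoring any mid-cycle extremum."""
    lo, hi = jacobians[0], jacobians[-1]
    return max(lo, hi) / min(lo, hi)

# A voxel that peaks mid-cycle: LERN sees the true 20% expansion, while LER2
# sees only 5% and would mislabel the voxel low-function (threshold 1.1).
jac = [1.00, 1.20, 1.05]  # end-exhale, mid-cycle, end-inhale
print(ler_n(jac), ler_2(jac))  # 1.2 1.05
```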
ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate.
Medical Image Analysis
2020; 68: 101919
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline achieves more accurate registration and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
View details for DOI 10.1016/j.media.2020.101919
View details for PubMedID 33385701
Radiation Dose Response Model for Ventilation Change Using All Phases of 4DCT
WILEY. 2019: E352
View details for Web of Science ID 000471277702234
Improving the Accuracy of 4DCT-Based Ventilation Measurements Using Multiple Phases
WILEY. 2019: E378
View details for Web of Science ID 000471277702330
Longitudinal Changes in Lung Tissue Elasticity Following Radiation Therapy
WILEY. 2019: E378
View details for Web of Science ID 000471277702331
Deep Neural Networks and Kernel Density Estimation for Detecting Human Activity Patterns from Geo-Tagged Images: A Case Study of Birdwatching on Flickr
ISPRS International Journal of Geo-Information
2019; 8 (1)
Multi-Frequency Oscillatory Ventilation Minimizes Spatial Gradients of Regional Strain Using 4D CT Image Registration in Porcine Lung Injury
AMER THORACIC SOC. 2019
View details for Web of Science ID 000466776705159
Nondestructive Measurement of Conformal Coating Thickness on Printed Circuit Board With Ultra-High Resolution Optical Coherence Tomography
IEEE Access
2019; 7: 18138–45
Quantifying ventilation change due to radiation therapy using 4DCT Jacobian calculations
Medical Physics
2018; 45 (10): 4483–92
Regional ventilation and its response to radiation dose can be estimated using four-dimensional computed tomography (4DCT) and image registration. This study investigated the impact of radiation therapy (RT) on ventilation and the dependence of radiation-induced ventilation change on pre-RT ventilation derived from 4DCT. Three 4DCT scans were acquired from each of 12 subjects: two scans before RT and one scan 3 months after RT. The 4DCT datasets were used to generate the pre-RT and post-RT ventilation maps by registering the inhale phase image to the exhale phase image and computing the Jacobian determinant of the resulting transformation. The ventilation change between pre-RT and post-RT was calculated by taking a ratio of the post-RT Jacobian map to the pre-RT Jacobian map. The voxel-wise ventilation change between pre- and post-RT was investigated as a function of dose and pre-RT ventilation. Lung regions receiving over 20 Gy exhibited a significant decrease in function (3.3%, P < 0.01) compared to those receiving less than 20 Gy. When the voxels were stratified into high and low pre-RT function by thresholding the Jacobian map at 10% volume expansion (Jacobian = 1.1), high-function voxels exhibited 4.8% reduction in function for voxels receiving over 20 Gy, a significantly greater decline (P = 0.037) than the 2.4% reduction in function for low-function voxels. Ventilation decreased linearly with dose in both high-function and low-function regions. High-function regions showed a significantly larger decline in ventilation (P ≪ 0.001) as dose increased (1.4% ventilation reduction/10 Gy) compared to low-function regions (0.3% ventilation reduction/10 Gy). With further stratification of pre-RT ventilation, voxels exhibited increasing dose-dependent ventilation reduction with increasing pre-RT ventilation, with the largest pre-RT Jacobian bin (pre-RT Jacobian between 1.5 and 1.6) exhibiting a ventilation reduction of 4.8% per 10 Gy. Significant ventilation reductions were measured after radiation therapy treatments, and were dependent on the dose delivered to the tissue and the pre-RT ventilation of the tissue. For a fixed radiation dose, lung tissue with high pre-RT ventilation experienced larger decreases in post-RT ventilation than lung tissue with low pre-RT ventilation.
View details for DOI 10.1002/mp.13105
View details for Web of Science ID 000446995000031
View details for PubMedID 30047588
View details for PubMedCentralID PMC6220845
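The per-10-Gy ventilation decline rates quoted above are slopes of voxel-wise ventilation change against delivered dose; a minimal ordinary-least-squares sketch in pure Python (the function name and toy data are illustrative, not the study's values pipeline):

```python
def slope_per_10gy(doses, ventilation_changes):
    """Ordinary least-squares slope of ventilation change (%) versus dose (Gy),
    scaled to percent change per 10 Gy."""
    n = len(doses)
    mean_d = sum(doses) / n
    mean_v = sum(ventilation_changes) / n
    cov = sum((d - mean_d) * (v - mean_v)
              for d, v in zip(doses, ventilation_changes))
    var = sum((d - mean_d) ** 2 for d in doses)
    return 10 * cov / var

# Perfectly linear toy data falling 1.4% per 10 Gy, matching the slope
# reported for high-function regions.
print(round(slope_per_10gy([0, 10, 20, 30], [0.0, -1.4, -2.8, -4.2]), 6))  # -1.4
```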
Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer
SPIE-INT SOC OPTICAL ENGINEERING. 2018
Population Shape Collapse in Large Deformation Registration of MR Brain Images
IEEE. 2016: 549–57