All Publications


  • Deep Learning-Based Water-Fat Separation from Dual-Echo Chemical Shift-Encoded Imaging. Bioengineering (Basel, Switzerland) Wu, Y., Alley, M., Li, Z., Datta, K., Wen, Z., Sandino, C., Syed, A., Ren, H., Xing, L., Lustig, M., Pauly, J., Vasanawala, S. 2022; 9 (10)

    Abstract

    Conventional water-fat separation approaches suffer from long computational times and are prone to water/fat swaps. To solve these problems, we propose a deep learning-based dual-echo water-fat separation method. With IRB approval, raw data from 68 pediatric clinically indicated dual-echo scans were analyzed, corresponding to 19382 contrast-enhanced images. A densely connected hierarchical convolutional network was constructed, in which dual-echo images and corresponding echo times were used as input and water/fat images obtained using the projected power method were regarded as references. Models were trained and tested using knee images with 8-fold cross validation and validated on out-of-distribution data from the ankle, foot, and arm. Using the proposed method, the average computational time for a volumetric dataset with ~400 slices was reduced from 10 min to under one minute. High fidelity was achieved (correlation coefficient of 0.9969, l1 error of 0.0381, SSIM of 0.9740, pSNR of 58.6876) and water/fat swaps were mitigated. It is of particular interest that metal artifacts were substantially reduced, even when the training set contained no images with metallic implants. Using the models trained with only contrast-enhanced images, water/fat images were predicted from non-contrast-enhanced images with high fidelity. The proposed water-fat separation method has been demonstrated to be fast and robust, with the added capability of compensating for metal artifacts.
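
    For context, the classical two-point Dixon decomposition underlying dual-echo water-fat separation can be sketched as follows. This is a textbook baseline operating on in-phase/opposed-phase echoes, not the authors' network or the projected power method used to generate the reference images:

    ```python
    import numpy as np

    def two_point_dixon(s1, s2, b0_phase=None):
        """Classical two-point Dixon water/fat separation (textbook baseline).

        s1       : complex in-phase echo image (water and fat aligned)
        s2       : complex opposed-phase echo image (water minus fat)
        b0_phase : optional B0-induced phase accrued between the echoes;
                   if given, it is demodulated from s2 before decomposition.
        """
        if b0_phase is not None:
            s2 = s2 * np.exp(-1j * b0_phase)  # remove field-map phase
        water = 0.5 * np.abs(s1 + s2)         # water component magnitude
        fat = 0.5 * np.abs(s1 - s2)           # fat component magnitude
        return water, fat
    ```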

    View details for DOI 10.3390/bioengineering9100579

    View details for PubMedID 36290546

  • Deep learning-augmented radioluminescence imaging for radiotherapy dose verification. Medical physics Jia, M., Yang, Y., Wu, Y., Li, X., Xing, L., Wang, L. 2021

    Abstract

    PURPOSE: We developed a novel dose verification method using a camera-based radioluminescence imaging system (CRIS) combined with a deep learning-based signal processing technique. METHODS: The CRIS consists of a cylindrical chamber coated with scintillator material on the inner surface of the cylinder, coupled with a hemispherical mirror and a digital camera at the two ends. The deep learning model was trained using a set of captured radioluminescence images and the corresponding dose maps from the clinical treatment planning system (TPS), which keeps data collection practical. To overcome the latent error and inconsistency between the TPS calculation and the corresponding measurement, the model was trained in an unsupervised manner. After training, the model performs image-to-dose conversion, providing absolute dose predictions at multiple depths of a specific water phantom from a single CRIS image, under the assumption of good consistency between the TPS settings and the actual beam energy. Validation experiments were performed on five square fields (ranging from 2 × 2 cm2 to 10 × 10 cm2) and three clinical IMRT cases. The results were compared to the TPS calculations in terms of the gamma index at 1.5 cm, 5 cm, and 10 cm depths. RESULTS: The mean 2%/2 mm gamma pass rates were 100% for the square fields and 97.2% (range: 95.5% to 99.5%) for the IMRT fields. Further validations were performed by comparing the CRIS results with measurements on various regular fields. The results show a mean gamma pass rate of 91% (1%/1 mm) for cross-profiles and a mean percentage deviation of 1.15% for percentage depth doses (PDDs). CONCLUSIONS: The system is capable of converting the irradiated radioluminescence image to corresponding water-based dose maps at multiple depths with a spatial resolution comparable to the TPS calculations.
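
    The gamma analysis referenced above is a standard comparison metric. A minimal brute-force 2D implementation, shown below, is an illustrative sketch (global normalization, reference-point convention), not the code used in the study:

    ```python
    import numpy as np

    def gamma_pass_rate(ref, evl, spacing_mm, dd=0.02, dta_mm=2.0, cutoff=0.1):
        """Brute-force global 2D gamma analysis (e.g. 2%/2 mm).

        ref, evl   : 2-D dose arrays on the same grid
        spacing_mm : pixel size in mm
        dd         : dose-difference criterion, fraction of max(ref)
        dta_mm     : distance-to-agreement criterion in mm
        cutoff     : ignore points below this fraction of max(ref)
        """
        norm = dd * ref.max()
        r = int(np.ceil(dta_mm / spacing_mm))   # search window half-width
        ny, nx = ref.shape
        gammas = []
        for j in range(ny):
            for i in range(nx):
                if ref[j, i] < cutoff * ref.max():
                    continue                     # exclude low-dose region
                best = np.inf
                for dj in range(-r, r + 1):
                    for di in range(-r, r + 1):
                        jj, ii = j + dj, i + di
                        if not (0 <= jj < ny and 0 <= ii < nx):
                            continue
                        dist2 = (dj**2 + di**2) * spacing_mm**2
                        ddiff2 = (evl[jj, ii] - ref[j, i])**2
                        best = min(best, ddiff2 / norm**2 + dist2 / dta_mm**2)
                gammas.append(np.sqrt(best))
        gammas = np.asarray(gammas)
        return 100.0 * np.mean(gammas <= 1.0)    # percent of points passing
    ```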

    View details for DOI 10.1002/mp.15229

    View details for PubMedID 34523131

  • Deep learning-enabled EPID-based 3D dosimetry for dose verification of step-and-shoot radiotherapy. Medical physics Jia, M., Wu, Y., Yang, Y., Wang, L., Chuang, C., Han, B., Xing, L. 2021

    Abstract

    PURPOSE: The study aims at a novel dosimetry methodology to reconstruct a 3D dose distribution as imparted to a virtual cylindrical phantom using an electronic portal imaging device (EPID). METHODS: A deep learning-based signal processing strategy, referred to as 3DosiNet, is utilized to learn a mapping from an EPID image to planar dose distributions at given depths. The network was trained with the volumetric dose exported from the clinical treatment planning system (TPS). Given the latent inconsistency between measurements and corresponding TPS calculations, unsupervised learning is formulated in 3DosiNet to capture abstract image features that are less sensitive to the potential variations. RESULTS: Validation experiments were performed using five regular fields and three clinical IMRT cases. The measured dose profiles and percentage depth dose (PDD) curves were compared with those measured using standard tools in terms of the 1D gamma index. The mean gamma pass rates (2%/2 mm) over the regular fields are 100% and 97.3% for the dose profile and PDD measurements, respectively. The measured volumetric dose was compared to the corresponding TPS calculation in terms of the 3D gamma index. The mean 2%/2 mm gamma pass rates are 97.9% for the square fields and 94.9% for the IMRT fields. CONCLUSIONS: The system promises to be a practical 3D dosimetric tool for pre-treatment patient-specific quality assurance and can be further developed for in-treatment patient dose monitoring.
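
    The abstract does not detail 3DosiNet's architecture. Purely to illustrate the stated input/output mapping (one EPID image in, planar doses at several depths out), a toy PyTorch stand-in could look like this; all layer choices here are hypothetical:

    ```python
    import torch
    import torch.nn as nn

    class ToyDepthDoseNet(nn.Module):
        """Toy stand-in for the EPID-to-planar-dose mapping (NOT 3DosiNet).

        Input : (B, 1, H, W) EPID image
        Output: (B, K, H, W) dose planes, one channel per reconstruction depth.
        """
        def __init__(self, n_depths=16, width=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, n_depths, 1),  # one output channel per depth
            )

        def forward(self, epid):
            return self.net(epid)

    model = ToyDepthDoseNet()
    planes = model(torch.randn(1, 1, 128, 128))  # -> torch.Size([1, 16, 128, 128])
    ```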

    View details for DOI 10.1002/mp.15218

    View details for PubMedID 34519365

  • Fine-grained similarity fusion for Multi-view Spectral Clustering. Information Sciences Yu, X., Liu, H., Wu, Y., Zhang, C. 2021; 568: 350-368
  • Deep learning-augmented radiotherapy visualization with a cylindrical radioluminescence system. Physics in medicine and biology Jia, M., Li, X., Wu, Y., Yang, Y., Kasimbeg, P., Skinner, L. B., Wang, L., Xing, L. 2020

    Abstract

    This study aims to demonstrate a low-cost camera-based radioluminescence imaging system (CRIS) for high-quality beam visualization that enables accurate pre-treatment verification of radiation delivery in external beam radiotherapy. To ameliorate the optical image, which suffers from mirror glare and edge blurring caused by photon scattering, a deep learning model is proposed and trained to learn from an on-board electronic portal imaging device (EPID). Beyond the typical purposes of an on-board EPID, the developed system maintains independent measurement with co-planar detection ability by involving a cylindrical receptor. Three task-aware modules are integrated into the network design to enhance its robustness against the artifacts that exist in an EPID running in cine mode for efficient image acquisition. The training data consist of various designed beam fields modulated via the multi-leaf collimator (MLC). Validation experiments are performed for five regular fields ranging from 2 × 2 cm2 to 10 × 10 cm2 and three clinical IMRT cases. The captured CRIS images are compared to high-quality images collected from an EPID running in integration mode, in terms of the gamma index and other typical similarity metrics. The mean 2%/2 mm gamma pass rates are 99.14% (range: 98.6% to 100%) for the regular fields and 97.1% (range: 96.3% to 97.9%) for the IMRT cases. The CRIS is further applied as a tool for MLC leaf-end position verification. A rectangular field with introduced leaf displacement is designed, and the measurements using CRIS and EPID agree within 0.100 mm ± 0.072 mm, with a maximum of 0.292 mm. Coupled with its simple system design and low-cost nature, the technique promises to provide a viable choice for routine quality assurance in radiation oncology practice.
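
    A common convention for the leaf-end check described above is to locate each field edge at the 50%-of-maximum crossing of an intensity profile. The sketch below implements that convention with sub-pixel interpolation; the paper's exact analysis may differ:

    ```python
    import numpy as np

    def field_edge_mm(profile, spacing_mm, from_left=True):
        """Locate a field edge as the 50%-of-maximum crossing of a 1-D
        profile, with linear sub-pixel interpolation."""
        half = 0.5 * profile.max()
        idx = range(len(profile) - 1) if from_left else range(len(profile) - 2, -1, -1)
        for i in idx:
            lo, hi = profile[i], profile[i + 1]
            if (lo - half) * (hi - half) <= 0 and lo != hi:  # bracketed crossing
                frac = (half - lo) / (hi - lo)               # linear interpolation
                return (i + frac) * spacing_mm
        raise ValueError("no 50% crossing found")

    # Leaf-end displacement between two systems would then be, e.g.:
    # field_edge_mm(cris_profile, dx) - field_edge_mm(epid_profile, dx)
    ```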

    View details for DOI 10.1088/1361-6560/abd673

    View details for PubMedID 33361563

  • Accelerating quantitative MR imaging with the incorporation of B1 compensation using deep learning. Magnetic resonance imaging Wu, Y., Ma, Y., Du, J., Xing, L. 2020

    Abstract

    Quantitative magnetic resonance imaging (MRI) has attracted attention for its support of quantitative image analysis and data-driven medicine. However, the application of quantitative MRI is severely limited by the long data acquisition time required by repetitive image acquisition and field-map measurement. Inspired by recent developments in artificial intelligence, we propose a deep learning strategy to accelerate the acquisition of quantitative MRI, in which every quantitative T1 map is derived from two highly undersampled variable-contrast images with radiofrequency field inhomogeneity automatically compensated. In a multi-step framework, variable-contrast images are first jointly reconstructed from incoherently undersampled images using convolutional neural networks; then the T1 map and B1 map are predicted from the reconstructed images using deep learning. The acceleration thus includes undersampling in every input image, a reduction in the number of variable-contrast images, and elimination of the B1 map measurement. The strategy is validated in T1 mapping of cartilage. Acquired with a consistent imaging protocol, 1224 image sets from 51 subjects are used for training the prediction models, and 288 image sets from 12 subjects are used for testing. A high degree of acceleration is achieved with image fidelity well maintained. The proposed method can be broadly applied to quantify other tissue properties (e.g., T2, T1rho) as well.
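
    For reference, the conventional variable-flip-angle (DESPOT1-style) T1 fit that such a learned mapping replaces, with the B1 map applied as a multiplicative flip-angle correction, can be sketched as follows. This is a classical baseline, not the paper's method:

    ```python
    import numpy as np

    def vfa_t1(signals, flip_deg, tr_ms, b1=1.0):
        """DESPOT1-style T1 estimate from variable-flip-angle SPGR signals.

        signals  : array of shape (n_flips,) for one voxel
        flip_deg : nominal flip angles in degrees, same length
        b1       : relative B1 (alpha_actual = b1 * alpha_nominal)
        """
        alpha = np.deg2rad(np.asarray(flip_deg)) * b1  # B1-corrected flip angles
        y = signals / np.sin(alpha)                    # linearized DESPOT1 axes
        x = signals / np.tan(alpha)
        slope = np.polyfit(x, y, 1)[0]                 # slope = exp(-TR/T1)
        return -tr_ms / np.log(slope)

    # Round-trip check with a synthetic voxel (T1 = 1200 ms, TR = 20 ms):
    t1, tr, flips = 1200.0, 20.0, [5.0, 15.0]
    e1 = np.exp(-tr / t1)
    a = np.deg2rad(flips)
    s = np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))
    print(vfa_t1(s, flips, tr))                        # ~1200.0
    ```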

    View details for DOI 10.1016/j.mri.2020.06.011

    View details for PubMedID 32610065

  • Deciphering tissue relaxation parameters from a single MR image using deep learning. SPIE Medical Imaging Wu, Y., Ma, Y., Du, J., Xing, L. 2020

    View details for DOI 10.1117/12.2546025

  • Deriving new soft tissue contrasts from conventional MR images using deep learning. Magnetic resonance imaging Wu, Y., Li, D., Xing, L., Gold, G. 2020

    Abstract

    Versatile soft tissue contrast is a unique advantage of magnetic resonance imaging. However, this versatility is not fully exploited. In this study, we propose a deep learning-based strategy to derive additional soft tissue contrasts from conventional MR images obtained in standard clinical MRI. Two types of experiments are performed. First, MR images corresponding to different pulse sequences are predicted from one or more images already acquired. As an example, we predict a T1ρ-weighted knee image from a T2-weighted image and/or a T1-weighted image. Furthermore, we estimate images corresponding to alternative imaging parameter values. In a representative case, variable flip angle images are predicted from a single T1-weighted image, and their accuracy is further validated via the quantitative T1 map subsequently derived. To accomplish these tasks, images are retrospectively collected from 56 subjects, and self-attention convolutional neural network models are trained using 1104 knee images from 46 subjects and tested using 240 images from 10 other subjects. High accuracy has been achieved in the resultant qualitative images as well as in the quantitative T1 maps. The proposed deep learning method can be broadly applied to obtain more versatile soft tissue contrasts without additional scans, or used to normalize MR data that were inconsistently acquired for quantitative analysis.
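
    As an illustration of the contrast-translation setup, one supervised training step could look like the sketch below. The stand-in network and the L1 loss are assumptions for illustration; the paper's self-attention model and training details are not reproduced here:

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical stand-in generator; the paper uses a self-attention CNN.
    net = nn.Sequential(
        nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)

    # One training step: predict a T1rho-weighted image from stacked
    # T1- and T2-weighted inputs, supervised with an (assumed) L1 loss.
    t1w, t2w = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)
    t1rho_ref = torch.randn(4, 1, 128, 128)        # acquired reference image
    pred = net(torch.cat([t1w, t2w], dim=1))
    loss = nn.functional.l1_loss(pred, t1rho_ref)
    opt.zero_grad(); loss.backward(); opt.step()
    ```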

    View details for DOI 10.1016/j.mri.2020.09.014

    View details for PubMedID 32956805

  • Superpixel Region Merging based on Deep Network for Medical Image Segmentation. ACM Transactions on Intelligent Systems and Technology Liu, H., Wang, H., Wu, Y., Xing, L. 2020; 11 (4)

    View details for DOI 10.1145/3386090

  • Self-Attention Convolutional Neural Network for Improved MR Image Reconstruction. Information sciences Wu, Y., Ma, Y., Liu, J., Du, J., Xing, L. 2019; 490: 317-328

    Abstract

    MRI is an advanced imaging modality with the unfortunate disadvantage of long data acquisition times. To accelerate MR image acquisition while maintaining high image quality, extensive investigations have been conducted on image reconstruction of sparsely sampled MRI. Recently, deep convolutional neural networks have achieved promising results, yet the local receptive field in convolutional neural networks raises concerns regarding signal synthesis and artifact compensation. In this study, we proposed a deep learning-based reconstruction framework to provide improved image fidelity for accelerated MRI. We integrated the self-attention mechanism, which captures long-range dependencies across image regions, into a volumetric hierarchical deep residual convolutional neural network. Specifically, a self-attention module was integrated into every convolutional layer, where the signal at a position was calculated as a weighted sum of the features at all positions. Furthermore, relatively dense shortcut connections were employed, and data consistency was enforced. The proposed network, referred to as SAT-Net, was applied to cartilage MRI acquired using an ultrashort TE sequence and retrospectively undersampled in a pseudo-random Cartesian pattern. The network was trained using 336 three-dimensional images (each containing 32 slices) and tested with 24 images, yielding improved outcomes. The framework is generic and can be extended to various applications.
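
    A self-attention block matching the abstract's description (the response at each position is a weighted sum of features at all positions) can be sketched in 2D as below. SAT-Net itself is volumetric, so this is illustrative only:

    ```python
    import torch
    import torch.nn as nn

    class SelfAttention2d(nn.Module):
        """Non-local self-attention block: the output at each position is a
        weighted sum of the value features at all positions (2-D sketch)."""
        def __init__(self, ch, reduction=8):
            super().__init__()
            self.query = nn.Conv2d(ch, ch // reduction, 1)
            self.key = nn.Conv2d(ch, ch // reduction, 1)
            self.value = nn.Conv2d(ch, ch, 1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (B, HW, C')
            k = self.key(x).flatten(2)                    # (B, C', HW)
            attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
            v = self.value(x).flatten(2)                  # (B, C, HW)
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return x + self.gamma * out                   # residual connection
    ```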

    View details for DOI 10.1016/j.ins.2019.03.080

    View details for PubMedID 32817993

    View details for PubMedCentralID PMC7430761

  • Incorporating prior knowledge via volumetric deep residual network to optimize the reconstruction of sparsely sampled MRI. Magnetic resonance imaging Wu, Y., Ma, Y., Capaldi, D. P., Liu, J., Zhao, W., Du, J., Xing, L. 2019

    Abstract

    For sparse sampling that accelerates magnetic resonance (MR) image acquisition, non-linear reconstruction algorithms have been developed that incorporate patient-specific a priori information. More generic a priori information can be acquired via deep learning and utilized for image reconstruction. In this study, we developed a volumetric hierarchical deep residual convolutional neural network, referred to as T-Net, to provide a data-driven end-to-end mapping from sparsely sampled MR images to fully sampled MR images, where cartilage MR images were acquired using an ultrashort TE sequence and retrospectively undersampled using pseudo-random Cartesian and radial acquisition schemes. The network had a hierarchical architecture that promoted the sparsity of feature maps and increased the receptive field, both of which are valuable for signal synthesis and artifact suppression. Relatively dense local connections and global shortcuts were established to facilitate residual learning and compensate for details lost in hierarchical processing. Additionally, volumetric processing was adopted to fully exploit spatial continuity in three-dimensional space, and data consistency was further enforced. The network was trained with 336 three-dimensional images (each consisting of 32 slices) and tested with 24 images. The incorporation of a priori information acquired via deep learning facilitated high acceleration factors (as high as 8) while maintaining high image fidelity (quantitatively evaluated using the structural similarity index measurement). The proposed T-Net showed improved performance compared to several state-of-the-art networks.
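
    One standard way to enforce the data consistency mentioned above is a hard k-space replacement step, sketched here for Cartesian sampling; the paper's exact formulation is not given in the abstract:

    ```python
    import numpy as np

    def data_consistency(recon, kspace_acq, mask):
        """Hard data-consistency step for Cartesian undersampling: re-insert
        the acquired k-space samples into the network's image estimate.

        recon      : complex image estimate from the network
        kspace_acq : acquired (zero-filled) k-space, same shape
        mask       : boolean sampling mask, True where k-space was acquired
        """
        k = np.fft.fft2(recon)
        k[mask] = kspace_acq[mask]   # trust the measured samples exactly
        return np.fft.ifft2(k)
    ```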

    View details for PubMedID 30880112

  • Automatic marker-free target positioning and tracking for image-guided radiotherapy and interventions. Zhao, W., Shen, L., Wu, Y., Han, B., Yang, Y., Xing, L., Fei, B., Linte, C. A. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2512166

    View details for Web of Science ID 000483683500010

  • Learning deconvolutional deep neural network for high resolution medical image reconstruction. Information Sciences Liu, H., Xu, J., Wu, Y., Guo, Q., Ibragimov, B., Xing, L. 2018; 468: 142-154