All Publications


  • Real-Time Volumetric MRI for 3D Motion Tracking via Geometry-Informed Deep Learning. Medical physics Liu, L., Shen, L., Johansson, A., Balter, J. M., Cao, Y., Chang, D., Xing, L. 2022

    Abstract

    To develop a geometry-informed deep learning framework for volumetric MRI with sub-second acquisition time in support of 3D motion tracking, which is highly desirable for improved radiotherapy precision but hindered by the long image acquisition time. A 2D-3D deep learning network with an explicitly defined geometry module that embeds geometric priors of the k-space encoding pattern was investigated, where a 2D generation network first augmented the sparsely sampled image dataset by generating new 2D representations of the underlying 3D subject. A geometry module then unfolded the 2D representations to the volumetric space. Finally, a 3D refinement network took the unfolded 3D data and output high-resolution volumetric images. Patient-specific models were trained for 7 abdominal patients to reconstruct volumetric MRI from both orthogonal cine slices and sparse radial samples. To evaluate the robustness of the proposed method to longitudinal patient anatomy and position changes, we tested the trained model on separate datasets acquired more than one month later and evaluated 3D target motion tracking accuracy using the model-reconstructed images by deforming a reference MRI with gross tumor volume (GTV) contours to a 5-min time series of both ground truth and model-reconstructed volumetric images with a temporal resolution of 340 ms. Across the 7 patients evaluated, the median distances between model-predicted and ground truth GTV centroids in the superior-inferior direction were 0.4±0.3 mm and 0.5±0.4 mm for cine and radial acquisitions, respectively. The 95th-percentile Hausdorff distances between model-predicted and ground truth GTV contours were 4.7±1.1 mm and 3.2±1.5 mm for cine and radial acquisitions, which are of the same scale as the cross-plane image resolution. Incorporating geometric priors into a deep learning model enables volumetric imaging with high spatial and temporal resolution, which is particularly valuable for 3D motion tracking and has the potential to greatly improve MRI-guided radiotherapy precision.

    View details for DOI 10.1002/mp.15822

    View details for PubMedID 35766221
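
    The three-stage pipeline described in the abstract (2D generation network, geometry module, 3D refinement network) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the module depths, channel counts, and the channel-as-slice unfolding in geometry_unfold are placeholders for the paper's k-space-derived geometry priors.

      import torch
      import torch.nn as nn

      class GenerationNet2D(nn.Module):
          """Augments sparse 2D inputs into a stack of new 2D representations."""
          def __init__(self, in_ch=2, out_ch=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, out_ch, 3, padding=1))
          def forward(self, x):            # x: (B, in_ch, H, W) orthogonal cine slices
              return self.net(x)

      def geometry_unfold(feats):
          # Geometry-module stand-in: re-interpret each generated channel as one
          # slice along the through-plane axis, yielding a coarse volume.
          return feats.unsqueeze(1)        # (B, 1, D=out_ch, H, W)

      class RefinementNet3D(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv3d(16, 1, 3, padding=1))
          def forward(self, v):
              return self.net(v)

      x = torch.randn(1, 2, 128, 128)      # two sparsely sampled cine slices
      vol = RefinementNet3D()(geometry_unfold(GenerationNet2D()(x)))
      print(vol.shape)                     # torch.Size([1, 1, 64, 128, 128])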

  • A geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction. Computers in biology and medicine Shen, L., Zhao, W., Capaldi, D., Pauly, J., Xing, L. 2022: 105710

    Abstract

    Deep learning affords enormous opportunities to augment the armamentarium of biomedical imaging. However, the pure data-driven nature of deep learning models may limit the model generalizability and application scope. Here we establish a geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction. We introduce a novel mechanism for integrating geometric priors of the imaging system. We demonstrate that the seamless inclusion of known priors is essential to enhance the performance of 3D volumetric computed tomography imaging with ultra-sparse sampling. The study opens new avenues for data-driven biomedical imaging and promises to provide substantially improved imaging tools for various clinical imaging and image-guided interventions.

    View details for DOI 10.1016/j.compbiomed.2022.105710

    View details for PubMedID 35715260

  • NeRP: Implicit Neural Representation Learning With Prior Embedding for Sparsely Sampled Image Reconstruction. IEEE transactions on neural networks and learning systems Shen, L., Pauly, J., Xing, L. 2022; PP

    Abstract

    Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurements. Sparsely sampled image reconstruction poses additional challenges due to limited measurements. In this work, we propose a methodology of implicit Neural Representation learning with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale data is required to train the NeRP except for a prior image and the sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). We also show that NeRP can robustly capture the subtle yet significant image changes required for assessing tumor progression.

    View details for DOI 10.1109/TNNLS.2022.3177134

    View details for PubMedID 35657845
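
    A minimal sketch of the NeRP idea for the MRI case, under assumptions not taken from the paper: a plain ReLU MLP, a random image standing in for the prior, and a masked 2D FFT standing in for the sparse k-space forward model. The two loops correspond to the two stages the abstract describes: embedding the prior into the network weights, then fine-tuning against the sparse measurements.

      import torch
      import torch.nn as nn

      class CoordMLP(nn.Module):
          def __init__(self, hidden=256):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(2, hidden), nn.ReLU(),
                  nn.Linear(hidden, hidden), nn.ReLU(),
                  nn.Linear(hidden, 1))
          def forward(self, xy):               # (N, 2) coordinates in [-1, 1]
              return self.net(xy)              # (N, 1) intensities

      H = W = 64
      ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                              torch.linspace(-1, 1, W), indexing="ij")
      coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
      prior = torch.rand(H, W)                 # stand-in for the prior image
      mask = torch.rand(H, W) < 0.1            # stand-in sparse k-space sampling mask
      target_k = torch.fft.fft2(prior)         # stand-in "measurements" for the demo

      model = CoordMLP()
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)

      # Stage 1: prior embedding -- fit the network weights to the prior image.
      for _ in range(200):
          loss = ((model(coords).reshape(H, W) - prior) ** 2).mean()
          opt.zero_grad(); loss.backward(); opt.step()

      # Stage 2: fine-tune against the sparse measurements through the forward model.
      for _ in range(200):
          k = torch.fft.fft2(model(coords).reshape(H, W))
          loss = (torch.abs(k - target_k)[mask] ** 2).mean()
          opt.zero_grad(); loss.backward(); opt.step()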

  • A Geometry-Informed Deep Learning Framework for Ultra-Sparse 3D Tomographic Image Reconstruction Shen, L., Zhao, W., Pauly, J., Xing, L. ELSEVIER IRELAND LTD. 2022: S287-S288
  • Implicit neural representation for radiation therapy dose distribution. Physics in medicine and biology Vasudevan, V., Shen, L., Huang, C., Chuang, C. F., Islam, M. T., Ren, H., Yang, Y., Dong, P., Xing, L. 2022

    Abstract

    OBJECTIVE: Dose distribution data plays a pivotal role in radiotherapy treatment planning. The data is typically represented using voxel grids, and its size ranges from 10^6 to 10^8 voxels. A concise representation of the treatment plan is of great value in facilitating treatment planning and downstream applications. This work aims to develop an implicit neural representation of 3D dose distribution data. APPROACH: Instead of storing the dose values at each voxel, in the proposed approach, the weights of a multilayer perceptron (MLP) are employed to characterize the dosimetric data for plan representation and subsequent applications. We train a coordinate-based MLP with sinusoidal activations to map the voxel spatial coordinates to the corresponding dose values. We identify the best architecture for a given parameter budget and use that to train a model for each patient. The trained MLP is evaluated at each voxel location to reconstruct the dose distribution. We perform extensive experiments on dose distributions of prostate, spine, and head and neck tumor cases to evaluate the quality of the proposed representation. We also study the change in representation quality by varying model size and activation function. MAIN RESULTS: Using coordinate-based MLPs with sinusoidal activations, we can learn implicit representations that achieve a mean-squared error of 10^-6 and a peak signal-to-noise ratio greater than 50 dB at a target bitrate of ~1 across all the datasets, with a compression ratio of ~32. Our results also show that model sizes with a bitrate of 1-2 achieve optimal accuracy. For smaller bitrates, performance starts to drop significantly. SIGNIFICANCE: The proposed model provides a low-dimensional, implicit, and continuous representation of 3D dose data. In summary, given a dose distribution, we systematically show how to find a compact model to fit the data accurately. This study lays the groundwork for future applications of neural representations of dose data in radiation oncology.

    View details for DOI 10.1088/1361-6560/ac6b10

    View details for PubMedID 35477171
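
    The coordinate-based MLP with sinusoidal activations named in the abstract can be sketched as below. The layer sizes, the sine frequency w0, the toy dose grid, and the bits-per-voxel reading of "bitrate" (32-bit weights divided by voxel count) are illustrative assumptions, not values from the paper.

      import torch
      import torch.nn as nn

      class Sine(nn.Module):
          def __init__(self, w0=30.0):
              super().__init__()
              self.w0 = w0
          def forward(self, x):
              return torch.sin(self.w0 * x)

      class SirenDose(nn.Module):
          """Coordinate-based MLP with sinusoidal activations: (x, y, z) -> dose."""
          def __init__(self, hidden=64, depth=3):
              super().__init__()
              layers, d = [], 3
              for _ in range(depth):
                  layers += [nn.Linear(d, hidden), Sine()]
                  d = hidden
              layers.append(nn.Linear(d, 1))
              self.net = nn.Sequential(*layers)
          def forward(self, xyz):
              return self.net(xyz)

      D = H = W = 32
      dose = torch.rand(D, H, W)               # toy stand-in for a dose grid
      zs, ys, xs = torch.meshgrid(*(torch.linspace(-1, 1, s) for s in (D, H, W)),
                                  indexing="ij")
      coords = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)

      model = SirenDose()
      n_params = sum(p.numel() for p in model.parameters())
      print("bits per voxel:", 32 * n_params / dose.numel())  # float32 weights

      opt = torch.optim.Adam(model.parameters(), lr=1e-4)
      for _ in range(500):
          loss = ((model(coords).reshape(D, H, W) - dose) ** 2).mean()
          opt.zero_grad(); loss.backward(); opt.step()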

  • Novel-view X-ray projection synthesis through geometry-integrated deep learning. Medical image analysis Shen, L., Yu, L., Zhao, W., Pauly, J., Xing, L. 2022; 77: 102372

    Abstract

    X-ray imaging is a widely used approach to view the internal structure of a subject for clinical diagnosis, image-guided interventions and decision-making. The X-ray projections acquired at different view angles provide complementary information of the patient's anatomy and are required for stereoscopic or volumetric imaging of the subject. In reality, obtaining multiple-view projections inevitably increases radiation dose and complicates clinical workflow. Here we investigate a strategy of obtaining the X-ray projection image at a novel view angle from a given projection image at a specific view angle to alleviate the need for actual projection measurement. Specifically, a Deep Learning-based Geometry-Integrated Projection Synthesis (DL-GIPS) framework is proposed for the generation of novel-view X-ray projections. The proposed deep learning model extracts geometry and texture features from a source-view projection, and then conducts a geometry transformation on the geometry features to accommodate the change of view angle. At the final stage, the X-ray projection in the target view is synthesized from the transformed geometry and the shared texture features via an image generator. The feasibility and potential impact of the proposed DL-GIPS model are demonstrated using lung imaging cases. The proposed strategy can be generalized to the synthesis of multiple projections from multiple input views, and potentially provides a new paradigm for various stereoscopic and volumetric imaging applications with substantially reduced efforts in data acquisition.

    View details for DOI 10.1016/j.media.2022.102372

    View details for PubMedID 35131701
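
    The extract-transform-generate structure described in the abstract can be sketched as below. Only that structure is from the paper; the layer choices are placeholders, and a real geometry transform would be conditioned on the source and target view angles rather than a fixed convolution.

      import torch
      import torch.nn as nn

      class Encoder(nn.Module):
          """Splits a source-view projection into geometry and texture features."""
          def __init__(self, ch=32):
              super().__init__()
              self.geom = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
              self.tex = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
          def forward(self, x):
              return self.geom(x), self.tex(x)

      class GeometryTransform(nn.Module):
          """Stand-in for the view-angle transformation of geometry features."""
          def __init__(self, ch=32):
              super().__init__()
              self.net = nn.Conv2d(ch, ch, 3, padding=1)
          def forward(self, g):
              return self.net(g)

      class Generator(nn.Module):
          def __init__(self, ch=32):
              super().__init__()
              self.net = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(ch, 1, 3, padding=1))
          def forward(self, g, t):
              return self.net(torch.cat([g, t], dim=1))

      src = torch.randn(1, 1, 256, 256)        # source-view X-ray projection
      enc, xform, gen = Encoder(), GeometryTransform(), Generator()
      g, t = enc(src)
      novel = gen(xform(g), t)                 # synthesized target-view projection
      print(novel.shape)                       # torch.Size([1, 1, 256, 256])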

  • Attention-guided deep learning for gestational age prediction using fetal brain MRI. Scientific reports Shen, L., Zheng, J., Lee, E. H., Shpanskaya, K., McKenna, E. S., Atluri, M. G., Plasto, D., Mitchell, C., Lai, L. M., Guimaraes, C. V., Dahmoush, H., Chueh, J., Halabi, S. S., Pauly, J. M., Xing, L., Lu, Q., Oztekin, O., Kline-Fath, B. M., Yeom, K. W. 2022; 12 (1): 1408

    Abstract

    Magnetic resonance imaging offers unrivaled visualization of the fetal brain, forming the basis for establishing age-specific morphologic milestones. However, gauging age-appropriate neural development remains a difficult task due to the constantly changing appearance of the fetal brain, variable image quality, and frequent motion artifacts. Here we present an end-to-end, attention-guided deep learning model that predicts gestational age with R2 score of 0.945, mean absolute error of 6.7 days, and concordance correlation coefficient of 0.970. The convolutional neural network was trained on a heterogeneous dataset of 741 developmentally normal fetal brain images ranging from 19 to 39 weeks in gestational age. We also demonstrate model performance and generalizability using independent datasets from four academic institutions across the U.S. and Turkey with R2 scores of 0.81-0.90 after minimal fine-tuning. The proposed regression algorithm provides an automated machine-enabled tool with the potential to better characterize in utero neurodevelopment and guide real-time gestational age estimation after the first trimester.

    View details for DOI 10.1038/s41598-022-05468-5

    View details for PubMedID 35082346
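
    Only "attention-guided" and the regression objective are taken from the abstract; the sketch below is a generic soft-attention pooling regressor, not the authors' architecture, and the 2D input standing in for the fetal brain MRI is an assumption.

      import torch
      import torch.nn as nn

      class AttnAgeRegressor(nn.Module):
          """CNN regressor with a soft spatial attention map over its features."""
          def __init__(self, ch=32):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
              self.attn = nn.Conv2d(ch, 1, 1)    # 1-channel attention logits
              self.head = nn.Linear(ch, 1)       # regress gestational age (weeks)
          def forward(self, x):
              f = self.features(x)               # (B, ch, H', W')
              logits = self.attn(f)              # (B, 1, H', W')
              a = torch.softmax(logits.flatten(2), dim=-1).reshape(logits.shape)
              pooled = (f * a).sum(dim=(2, 3))   # attention-weighted pooling
              return self.head(pooled)

      model = AttnAgeRegressor()
      age = model(torch.randn(4, 1, 128, 128))   # e.g., a batch of brain slices
      print(age.shape)                           # torch.Size([4, 1])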

  • Artificial intelligence in image-guided radiotherapy: a review of treatment target localization. Quantitative imaging in medicine and surgery Zhao, W., Shen, L., Islam, M. T., Qin, W., Zhang, Z., Liang, X., Zhang, G., Xu, S., Li, X. 2021; 11 (12): 4881-4894

    Abstract

    Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing the normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT), and discuss the implications of these applications for the future clinical practice of radiotherapy. The benefits, limitations and some important trends in research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty and improve treatment precision. In particular, these techniques allow more healthy tissue to be spared while keeping tumor coverage the same or even better.

    View details for DOI 10.21037/qims-21-199

    View details for PubMedID 34888196

    View details for PubMedCentralID PMC8611462

  • Deep Neural Network With Consistency Regularization of Multi-Output Channels for Improved Tumor Detection and Delineation. IEEE TRANSACTIONS ON MEDICAL IMAGING Seo, H., Yu, L., Ren, H., Li, X., Shen, L., Xing, L. 2021; 40 (12): 3369-3378

    Abstract

    Deep learning is becoming an indispensable tool for imaging applications, such as image segmentation, classification, and detection. In this work, we reformulate a standard deep learning problem into a new neural network architecture with multi-output channels, which reflects different facets of the objective, and apply the deep neural network to improve the performance of image segmentation. By adding one or more interrelated auxiliary-output channels, we impose an effective consistency regularization for the main task of pixelated classification (i.e., image segmentation). Specifically, multi-output-channel consistency regularization is realized by residual learning via additive paths that connect main-output channel and auxiliary-output channels in the network. The method is evaluated on the detection and delineation of lung and liver tumors with public data. The results clearly show that multi-output-channel consistency implemented by residual learning improves the standard deep neural network. The proposed framework is quite broad and should find widespread applications in various deep learning problems.

    View details for DOI 10.1109/TMI.2021.3084748

    View details for Web of Science ID 000724511900011

    View details for PubMedID 34048339
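
    The additive-path coupling the abstract describes can be sketched as below. The additive connection between main and auxiliary output channels follows the abstract; the backbone, the choice of auxiliary target, and the loss weight are illustrative assumptions.

      import torch
      import torch.nn as nn

      class MultiOutputSeg(nn.Module):
          """Backbone with a main segmentation channel and an auxiliary channel
          tied to it through an additive (residual) path."""
          def __init__(self, ch=16):
              super().__init__()
              self.backbone = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
              self.main_head = nn.Conv2d(ch, 1, 1)      # main mask logits
              self.residual_head = nn.Conv2d(ch, 1, 1)  # learns aux - main residual
          def forward(self, x):
              f = self.backbone(x)
              main = self.main_head(f)
              aux = main + self.residual_head(f)        # additive path couples outputs
              return main, aux

      model = MultiOutputSeg()
      x = torch.randn(2, 1, 64, 64)
      mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
      main, aux = model(x)
      bce = nn.BCEWithLogitsLoss()
      # Supervising both channels regularizes the main segmentation task through
      # the shared residual path.
      loss = bce(main, mask) + 0.5 * bce(aux, mask)
      loss.backward()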

  • Multi-Domain Image Completion for Random Missing Input Data. IEEE TRANSACTIONS ON MEDICAL IMAGING Shen, L., Zhu, W., Wang, X., Xing, L., Pauly, J. M., Turkbey, B., Harmon, S., Sanford, T., Mehralivand, S., Choyke, P. L., Wood, B. J., Xu, D. 2021; 40 (4): 1113-1122

    Abstract

    Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources in practice, which makes it challenging to build a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the random missing domain(s) data in real applications. Specifically, we develop a novel multi-domain image completion method that utilizes a generative adversarial network (GAN) with a representational disentanglement scheme to extract shared content encoding and separate style encoding across multiple domains. We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image completion and segmentation with a shared content encoder. The experiments demonstrate consistent performance improvement on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion respectively.

    View details for DOI 10.1109/TMI.2020.3046444

    View details for Web of Science ID 000637532800002

    View details for PubMedID 33351753
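
    The content/style disentanglement described in the abstract can be sketched as below: a content encoder shared across domains, a per-domain style encoder, and a decoder that recombines them to complete a missing domain. The adversarial and segmentation losses are omitted, and all module sizes are assumptions.

      import torch
      import torch.nn as nn

      class ContentEncoder(nn.Module):       # shared across all domains
          def __init__(self, ch=32):
              super().__init__()
              self.net = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
          def forward(self, x):
              return self.net(x)

      class StyleEncoder(nn.Module):         # one per domain
          def __init__(self, dim=8):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))
          def forward(self, x):
              return self.net(x)

      class Decoder(nn.Module):
          def __init__(self, ch=32, dim=8):
              super().__init__()
              self.fc = nn.Linear(dim, ch)
              self.net = nn.Conv2d(ch, 1, 3, padding=1)
          def forward(self, content, style):
              s = self.fc(style)[:, :, None, None]   # broadcast style over space
              return self.net(content + s)

      # Complete a missing domain-B image from an available domain-A image.
      enc_c, enc_s_b, dec = ContentEncoder(), StyleEncoder(), Decoder()
      x_a = torch.randn(1, 1, 128, 128)      # available domain-A image (e.g., T1 MRI)
      x_b_ref = torch.randn(1, 1, 128, 128)  # any reference image from domain B
      fake_b = dec(enc_c(x_a), enc_s_b(x_b_ref))
      print(fake_b.shape)                    # torch.Size([1, 1, 128, 128])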

  • GLoRIA: A Multimodal Global-Local Representation Learning Framework for Label-efficient Medical Image Recognition Huang, S., Shen, L., Lungren, M. P., Yeung, S. IEEE. 2021: 3922-3931
  • Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nature biomedical engineering Shen, L., Zhao, W., Xing, L. 2019

    Abstract

    Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.

    View details for DOI 10.1038/s41551-019-0466-4

    View details for PubMedID 31659306
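
    One common reading of the single-view-to-volume mapping described in the abstract is a 2D encoder whose features are reshaped into a 3D grid and then decoded by 3D transposed convolutions. The sketch below follows that reading; all layer sizes and the reshape scheme are assumptions, not the published network.

      import torch
      import torch.nn as nn

      class Proj2Vol(nn.Module):
          """Encodes a single 2D projection, re-interprets the features as a 3D
          tensor, and decodes them into a volume."""
          def __init__(self):
              super().__init__()
              self.enc2d = nn.Sequential(
                  nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 256, 3, stride=2, padding=1), nn.ReLU())
              self.dec3d = nn.Sequential(
                  nn.ConvTranspose3d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
                  nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1))
          def forward(self, x):              # x: (B, 1, 128, 128) projection
              f = self.enc2d(x)              # (B, 256, 32, 32)
              v = f.view(-1, 8, 32, 32, 32)  # 2D features reshaped to a 3D grid
              return self.dec3d(v)           # (B, 1, 128, 128, 128) volume

      model = Proj2Vol()
      vol = model(torch.randn(1, 1, 128, 128))
      print(vol.shape)                       # torch.Size([1, 1, 128, 128, 128])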

  • Markerless pancreatic tumor target localization enabled by deep learning. International journal of radiation oncology, biology, physics Zhao, W., Shen, L., Han, B., Yang, Y., Cheng, K., Toesca, D. A., Koong, A. C., Chang, D. T., Xing, L. 2019

    Abstract

    To estimate the impact of radiotherapy (RT) on non-breast second malignant neoplasms (SMNs) in young women survivors of stage I-IIIA breast cancer. Women aged 20-44 years diagnosed with stage I-IIIA breast cancer (1988-2008) were identified in Surveillance, Epidemiology, and End Results (SEER) 9 registries. A bootstrapping approach and competing-risk proportional hazards models were used to evaluate the effect of RT on non-breast SMN risk. The analysis was repeated in racial subgroups. Radio-tolerance score (RTS) analysis of normal airway epithelium was performed using Gene Expression Omnibus (GEO) datasets. Within records of 30,003 women with primary breast cancer, 20,516 eligible patients were identified (including 2,183 African Americans [AAs] and 16,009 Caucasians). The 25-year cumulative incidences of SMN were 5.2% and 3.6% (RT vs. no-RT) for AAs with 12.8-year and 17.4-year (RT vs. no-RT) median follow-up (HR=1.81, 95% bootstrapping confidence intervals [BCIs] [1.02, 2.50], P < 0.05); and 6.4% and 5.9% (RT vs. no-RT) for Caucasians with 14.3-year and 18.1-year (RT vs. no-RT) median follow-up (HR=1.10, 95% BCI [0.61, 1.40], P > 0.05). The largest portion of excess RT-related SMN risk was lung cancer (AA: HR=2.08, 95% BCI [1.02, 5.39], P < 0.05; Caucasian: HR=1.50, 95% BCI [0.84, 5.38], P > 0.05). STEPP analysis revealed higher post-RT non-breast SMN risk essentially throughout the entire age range of 20-44 years, with a larger HR for RT in AAs. The RTS of normal airway epithelium from young AA women was significantly lower than that from young Caucasian women (P = 0.038). With a projected 25-year follow-up, RT is associated with an elevated risk of non-breast SMNs, particularly second lung cancer, in young women survivors of stage I-IIIA breast cancer, with the risk being higher in AA women than in Caucasian women.

    View details for DOI 10.1016/j.ijrobp.2019.05.071

    View details for PubMedID 31201892

  • Harnessing the power of deep learning for volumetric CT imaging with single or limited number of projections Shen, L., Zhao, W., Xing, L., Schmidt, T. G., Chen, G. H., Bosmans, H. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2513032

    View details for Web of Science ID 000483585700072

  • Automatic marker-free target positioning and tracking for image-guided radiotherapy and interventions Zhao, W., Shen, L., Wu, Y., Han, B., Yang, Y., Xing, L., Fei, B., Linte, C. A. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2512166

    View details for Web of Science ID 000483683500010

  • A deep learning approach for dual-energy CT imaging using a single-energy CT data Zhao, W., Lv, T., Gao, P., Shen, L., Dai, X., Cheng, K., Jia, M., Chen, Y., Xing, L., Matej, S., Metzler, S. D. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2534433

    View details for Web of Science ID 000535354300073

  • Scaling Human-Object Interaction Recognition through Zero-Shot Learning Shen, L., Yeung, S., Hoffman, J., Mori, G., Li Fei-Fei. IEEE. 2018: 1568-1576
  • Learning to Learn from Noisy Web Videos Yeung, S., Ramanathan, V., Russakovsky, O., Shen, L., Mori, G., Li Fei-Fei. IEEE. 2017: 7455-7463