All Publications

  • Multi-Domain Image Completion for Random Missing Input Data IEEE Transactions on Medical Imaging Shen, L., Zhu, W., Wang, X., Xing, L., Pauly, J. M., Turkbey, B., Harmon, S., Sanford, T., Mehralivand, S., Choyke, P. L., Wood, B. J., Xu, D. 2021; 40 (4): 1113–22


    Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources in practice, which makes it challenging to build a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the random missing domain(s) data in real applications. Specifically, we develop a novel multi-domain image completion method that utilizes a generative adversarial network (GAN) with a representational disentanglement scheme to extract shared content encoding and separate style encoding across multiple domains. We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image completion and segmentation with a shared content encoder. The experiments demonstrate consistent performance improvement on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion respectively.

    View details for DOI 10.1109/TMI.2020.3046444

    View details for Web of Science ID 000637532800002

    View details for PubMedID 33351753
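    The disentanglement idea above, a content encoder shared across domains plus a per-domain style encoder, with a missing domain synthesized by pairing observed content with the target domain's style, can be caricatured as a data-flow sketch. The linear maps and dimensions below are hypothetical stand-ins for the paper's GAN encoders and decoder; this illustrates only the plumbing, not the trained method.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy sizes, chosen only for illustration.
    IMG, CONTENT, STYLE, N_DOMAINS = 64, 16, 4, 3

    # Random linear maps stand in for the paper's convolutional networks.
    W_content = rng.standard_normal((CONTENT, IMG))          # shared across domains
    W_style = rng.standard_normal((N_DOMAINS, STYLE, IMG))   # one per domain
    W_decode = rng.standard_normal((IMG, CONTENT + STYLE))

    def encode(x, domain):
        """Split an image into a shared content code and a domain-specific style code."""
        return W_content @ x, W_style[domain] @ x

    def decode(content, style):
        """Reassemble an image from a content code plus a target domain's style code."""
        return W_decode @ np.concatenate([content, style])

    # Complete a missing domain: content from an observed domain-0 image,
    # style from a domain-1 reference image.
    x_obs = rng.standard_normal(IMG)
    content, _ = encode(x_obs, domain=0)
    _, style_1 = encode(rng.standard_normal(IMG), domain=1)
    x_completed = decode(content, style_1)   # synthesized domain-1 image
    ```

    The shared content encoder is also what the unified completion-plus-segmentation framework reuses: the same `content` code would feed a segmentation head.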

  • Artificial intelligence in image-guided radiotherapy: a review of treatment target localization. Quantitative Imaging in Medicine and Surgery Zhao, W., Shen, L., Islam, M. T., Qin, W., Zhang, Z., Liang, X., Zhang, G., Xu, S., Li, X. 2021; 11 (12): 4881–94


    Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing the normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT) and discuss their implications for the future clinical practice of radiotherapy. The benefits, limitations, and important trends in research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty, and improve treatment precision. In particular, they allow more healthy tissue to be spared while keeping tumor coverage the same or even better.

    View details for DOI 10.21037/qims-21-199

    View details for PubMedID 34888196

    View details for PubMedCentralID PMC8611462
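    As a point of reference for the markerless target localization the review surveys, a classical (non-deep-learning) baseline is template matching by normalized cross-correlation; the sketch below is that baseline on a synthetic frame, not any algorithm from the review, and all sizes and names are made up for illustration.

    ```python
    import numpy as np

    def localize(frame, template):
        """Return (row, col) of the best normalized cross-correlation match."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-8)
        best, best_pos = -np.inf, (0, 0)
        for r in range(frame.shape[0] - th + 1):
            for c in range(frame.shape[1] - tw + 1):
                patch = frame[r:r + th, c:c + tw]
                p = (patch - patch.mean()) / (patch.std() + 1e-8)
                score = float((p * t).sum())
                if score > best:
                    best, best_pos = score, (r, c)
        return best_pos

    # Synthetic example: plant a smooth blob in noise and recover its position.
    rng = np.random.default_rng(1)
    frame = rng.normal(0.0, 0.1, (40, 40))
    w = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
    blob = np.outer(w, w)                 # 5x5 "target" template
    frame[12:17, 20:25] += blob
    pos = localize(frame, blob)
    ```

    Deep-learning localizers replace this hand-crafted matching with learned features, which is what gives them robustness to deformation and contrast changes in fluoroscopic or cone-beam images.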

  • Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning. Nature Biomedical Engineering Shen, L., Zhao, W., Xing, L. 2019


    Tomographic imaging using penetrating waves generates cross-sectional views of the internal anatomy of a living subject. For artefact-free volumetric imaging, projection views from a large number of angular positions are required. Here we show that a deep-learning model trained to map projection radiographs of a patient to the corresponding 3D anatomy can subsequently generate volumetric tomographic X-ray images of the patient from a single projection view. We demonstrate the feasibility of the approach with upper-abdomen, lung, and head-and-neck computed tomography scans from three patients. Volumetric reconstruction via deep learning could be useful in image-guided interventional procedures such as radiation therapy and needle biopsy, and might help simplify the hardware of tomographic imaging systems.

    View details for DOI 10.1038/s41551-019-0466-4

    View details for PubMedID 31659306
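    The core mapping this paper learns, one 2D projection radiograph in, one 3D volume out, can be sketched as a 2D-to-latent-to-3D pipeline. The toy sizes and random linear maps below are hypothetical stand-ins for the paper's trained encoder and 3D decoder; only the shape of the transformation is illustrated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy sizes: a 16x16 projection mapped to an 8x8x8 volume.
    H = W = 16
    D = 8
    LATENT = 32

    # Random linear maps stand in for the learned 2D encoder and 3D decoder.
    W_enc = rng.standard_normal((LATENT, H * W)) * 0.01
    W_dec = rng.standard_normal((D * D * D, LATENT)) * 0.01

    def reconstruct(projection):
        """Map a single 2D projection to a 3D volume (toy linear stand-in)."""
        z = np.tanh(W_enc @ projection.ravel())   # encode projection to a latent code
        voxels = W_dec @ z                        # decode latent code to voxels
        return voxels.reshape(D, D, D)

    volume = reconstruct(rng.standard_normal((H, W)))
    ```

    In the real model the patient-specific training data is what makes a single view sufficient: the network amortizes prior anatomical knowledge that conventional tomography must instead obtain from many angular views.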

  • Markerless pancreatic tumor target localization enabled by deep learning. International Journal of Radiation Oncology, Biology, Physics Zhao, W., Shen, L., Han, B., Yang, Y., Cheng, K., Toesca, D. A., Koong, A. C., Chang, D. T., Xing, L. 2019


    To estimate the impact of radiotherapy (RT) on non-breast second malignant neoplasms (SMNs) in young women survivors of stage I-IIIA breast cancer. Women aged 20-44 years diagnosed with stage I-IIIA breast cancer (1988-2008) were identified in Surveillance, Epidemiology, and End Results (SEER) 9 registries. A bootstrapping approach and competing-risk proportional hazards models were used to evaluate the effect of RT on non-breast SMN risk. The analysis was repeated in racial subgroups. Radio-tolerance score (RTS) analysis of normal airway epithelium was performed using Gene Expression Omnibus (GEO) datasets. Within the records of 30,003 women with primary breast cancer, 20,516 eligible patients were identified (including 2,183 African Americans [AAs] and 16,009 Caucasians). The 25-year cumulative incidences of SMN were 5.2% and 3.6% (RT vs. no-RT) for AAs with 12.8-year and 17.4-year (RT vs. no-RT) median follow-up (HR = 1.81, 95% bootstrapping confidence interval [BCI] [1.02, 2.50], P < 0.05), and 6.4% and 5.9% (RT vs. no-RT) for Caucasians with 14.3-year and 18.1-year (RT vs. no-RT) median follow-up (HR = 1.10, 95% BCI [0.61, 1.40], P > 0.05). The largest portion of the excess RT-related SMN risk was lung cancer (AA: HR = 2.08, 95% BCI [1.02, 5.39], P < 0.05; Caucasian: HR = 1.50, 95% BCI [0.84, 5.38], P > 0.05). STEPP analysis revealed higher post-RT non-breast SMN risk essentially throughout the entire 20-44-year age range, with a larger HR for RT in AAs. The RTS of normal airway epithelium from young AA women was significantly lower than that from young Caucasian women (P = 0.038). With a projected 25-year follow-up, RT is associated with an elevated risk of non-breast SMNs, particularly second lung cancer, in young women survivors of stage I-IIIA breast cancer, with the risk higher in AA women than in Caucasian women.

    View details for DOI 10.1016/j.ijrobp.2019.05.071

    View details for PubMedID 31201892
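    The abstract above reports bootstrapped confidence intervals around risk estimates. A minimal percentile-bootstrap sketch for a plain risk ratio, not the competing-risk proportional hazards model the study actually fits, and with made-up event counts, looks like:

    ```python
    import numpy as np

    def bootstrap_ci(events_rt, n_rt, events_ctrl, n_ctrl,
                     n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for a crude risk ratio (RT vs. no-RT)."""
        rng = np.random.default_rng(seed)
        # 1 = developed an SMN, 0 = did not (ignores follow-up time,
        # unlike the competing-risk models used in the paper).
        rt = np.r_[np.ones(events_rt), np.zeros(n_rt - events_rt)]
        ctrl = np.r_[np.ones(events_ctrl), np.zeros(n_ctrl - events_ctrl)]
        ratios = []
        for _ in range(n_boot):
            r = rng.choice(rt, size=n_rt).mean()      # resample with replacement
            c = rng.choice(ctrl, size=n_ctrl).mean()
            if c > 0:
                ratios.append(r / c)
        lo, hi = np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    # Hypothetical counts, for illustration only.
    lo, hi = bootstrap_ci(events_rt=52, n_rt=1000, events_ctrl=36, n_ctrl=1000)
    ```

    The percentile interval is read off the empirical distribution of the resampled ratios; the study's hazard-ratio BCIs follow the same resampling logic but refit the survival model on each bootstrap sample.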

  • Harnessing the power of deep learning for volumetric CT imaging with single or limited number of projections Shen, L., Zhao, W., Xing, L., Schmidt, T. G., Chen, G. H., Bosmans, H. SPIE. 2019

    View details for DOI 10.1117/12.2513032

    View details for Web of Science ID 000483585700072

  • Automatic marker-free target positioning and tracking for image-guided radiotherapy and interventions Zhao, W., Shen, L., Wu, Y., Han, B., Yang, Y., Xing, L., Fei, B., Linte, C. A. SPIE. 2019

    View details for DOI 10.1117/12.2512166

    View details for Web of Science ID 000483683500010

  • A deep learning approach for dual-energy CT imaging using a single-energy CT data Zhao, W., Lv, T., Gao, P., Shen, L., Dai, X., Cheng, K., Jia, M., Chen, Y., Xing, L., Matej, S., Metzler, S. D. SPIE. 2019

    View details for DOI 10.1117/12.2534433

    View details for Web of Science ID 000535354300073

  • Scaling Human-Object Interaction Recognition through Zero-Shot Learning Shen, L., Yeung, S., Hoffman, J., Mori, G., Fei-Fei, L. IEEE. 2018: 1568–76
  • Learning to Learn from Noisy Web Videos Yeung, S., Ramanathan, V., Russakovsky, O., Shen, L., Mori, G., Fei-Fei, L. IEEE. 2017: 7455–63