Bio


Liyan Sun is a postdoctoral scholar in the Department of Radiation Oncology at Stanford University, Stanford, CA, USA. He was born in Xuchang, China. He received the B.E. degree in communication engineering from Zhengzhou University, Zhengzhou, China, and the Ph.D. degree in signal and information processing from Xiamen University, Xiamen, China. He then completed one year of postdoctoral training in biomedical imaging at Xiamen University before joining Stanford University. His main research interests include machine learning and its applications in biomedical imaging and radiation treatment.

Honors & Awards


  • Winning Team Award, Grand Challenge on MR Brain Segmentation 2018 (2018)
  • Student Travel Grant, IEEE Global Conference on Signal and Information Processing (2015)

Boards, Advisory Committees, Professional Organizations


  • Student Member, IEEE Signal Processing Society (2015 - 2016)

Professional Education


  • B.E., Zhengzhou University, Communication Engineering (2014)
  • Ph.D., Xiamen University, Signal and Information Processing (2021)

Stanford Advisors


  • Wu Liu, Postdoctoral Faculty Sponsor

Current Research and Scholarly Interests


(1) Compressed sensing MRI with deep learning models optimized under a unified framework. The low-level reconstruction task and the high-level analysis task regularize each other, and deep learning makes their joint, end-to-end optimization possible.
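
A minimal sketch of this idea in PyTorch, assuming hypothetical ReconNet and SegNet toy modules (not any specific published model): the reconstruction loss and the segmentation loss are summed, so gradients from the high-level task also shape the low-level reconstruction.

```python
# Sketch: joint optimization of CS-MRI reconstruction and segmentation.
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Toy image-domain reconstruction network (zero-filled input -> refined image)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)            # residual refinement

class SegNet(nn.Module):
    """Toy segmentation head operating on the reconstructed image."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1))
    def forward(self, x):
        return self.body(x)

recon, seg = ReconNet(), SegNet()
opt = torch.optim.Adam(list(recon.parameters()) + list(seg.parameters()), lr=1e-4)

zero_filled = torch.randn(2, 1, 64, 64)         # undersampled (zero-filled) input
target_img  = torch.randn(2, 1, 64, 64)         # fully sampled reference image
target_seg  = torch.randint(0, 4, (2, 64, 64))  # per-pixel tissue labels

pred_img = recon(zero_filled)
loss = nn.functional.mse_loss(pred_img, target_img) \
     + 0.1 * nn.functional.cross_entropy(seg(pred_img), target_seg)
loss.backward()                                 # segmentation gradients reach ReconNet too
opt.step()
```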

(2) Segmentation of medical images suffers from scarce data resources, owing to low data acquisition efficiency, the expertise required for annotation, and other factors. Developing deep learning algorithms capable of addressing these challenges is an important open problem.

(3) Medical image generation from external modalities or subjects provides complementary information. Supervised and unsupervised deep learning models enable flexible, high-quality image synthesis under both paired and unpaired data conditions.
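
For the unpaired case, a common device is a cycle-consistency constraint between two translation networks. A minimal, generic sketch (the generators G_ab and G_ba are toy stand-ins, and the adversarial terms are omitted):

```python
# Sketch: cycle-consistency loss for unpaired cross-modality synthesis.
import torch
import torch.nn as nn

def toy_generator():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))

G_ab, G_ba = toy_generator(), toy_generator()   # modality A -> B and B -> A

a = torch.randn(4, 1, 64, 64)                   # unpaired images from modality A
b = torch.randn(4, 1, 64, 64)                   # unpaired images from modality B

cycle_loss = nn.functional.l1_loss(G_ba(G_ab(a)), a) \
           + nn.functional.l1_loss(G_ab(G_ba(b)), b)
# In practice this term is combined with adversarial losses on G_ab(a) and G_ba(b);
# with paired data, a direct pixel-wise loss against the target modality suffices.
```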

(4) Time-series data offer longitudinal information on a patient's disease progression. By leveraging temporal correlations, deep neural networks can predict the future health state of a target patient.
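
A minimal sketch of one-step-ahead prediction from longitudinal measurements with an LSTM; the network and variable names are illustrative assumptions, not a specific published model.

```python
# Sketch: predicting the next time point of a longitudinal health indicator.
import torch
import torch.nn as nn

class NextStepPredictor(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)
    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict the next observation

model = NextStepPredictor()
history = torch.randn(16, 10, 8)           # 10 past visits, 8 measurements each
next_visit = model(history)                # shape: (16, 8)
```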

(5) PET imaging combined with radiation treatment offers new possibilities for better treating cancer patients. Improving real-time PET image quality with deep learning methods could potentially lead to better treatment outcomes.
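
As one generic illustration (not the specific method pursued in this work), real-time or low-count PET frames can be cleaned up with a small residual CNN denoiser:

```python
# Sketch: residual 3D CNN denoiser for low-quality PET volumes (generic example).
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Small 3D CNN: predicts a correction that is added back to the noisy input."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv3d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv3d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)
    def forward(self, x):
        return x + self.body(x)

noisy_pet = torch.randn(1, 1, 32, 64, 64)   # toy low-count PET volume
denoised = ResidualDenoiser()(noisy_pet)    # trained against higher-quality references
```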

All Publications


  • Harmonizing Pathological and Normal Pixels for Pseudo-Healthy Synthesis IEEE TRANSACTIONS ON MEDICAL IMAGING Zhang, Y., Lin, X., Zhuang, Y., Sun, L., Huang, Y., Ding, X., Wang, G., Yang, L., Yu, Y. 2022; 41 (9): 2457-2468

    Abstract

    Synthesizing a subject-specific pathology-free image from a pathological image is valuable for algorithm development and clinical practice. In recent years, several approaches based on the Generative Adversarial Network (GAN) have achieved promising results in pseudo-healthy synthesis. However, the discriminator (i.e., a classifier) in the GAN cannot accurately identify lesions, which further hampers the generation of admirable pseudo-healthy images. To address this problem, we present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images. Then, we apply the generated images to medical image enhancement and utilize the enhanced results to cope with the low contrast problem existing in medical image segmentation. Furthermore, a reliable metric is proposed by utilizing two attributes of label noise to measure the health of synthetic images. Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods. The method achieves better performance than the existing methods with only 30% of the training data. The effectiveness of the proposed method is also demonstrated on the LiTS and the T1 modality of BraTS. The code and the pre-trained model of this study are publicly available at https://github.com/Au3C2/Generator-Versus-Segmentor.

    View details for DOI 10.1109/TMI.2022.3164095

    View details for Web of Science ID 000848274700020

    View details for PubMedID 35363612
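
    A minimal sketch of the generator-versus-segmentor idea summarized above, assuming toy networks rather than the released code at the GitHub link: the segmentor tries to find residual lesions in the synthesized image, while the generator tries to leave none and to preserve the healthy regions.

```python
# Sketch: adversarial training with a segmentor in place of a classifier-discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(out_ch):
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1))

generator = small_cnn(1)       # pathological image -> pseudo-healthy image
segmentor = small_cnn(2)       # image -> per-pixel {healthy, lesion} logits

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
s_opt = torch.optim.Adam(segmentor.parameters(), lr=1e-4)

x = torch.randn(2, 1, 64, 64)                    # pathological input
lesion_mask = torch.randint(0, 2, (2, 64, 64))   # ground-truth lesion labels

# 1) Segmentor step: learn to find the lesion in the (detached) synthetic image.
fake = generator(x).detach()
s_loss = F.cross_entropy(segmentor(fake), lesion_mask)
s_opt.zero_grad(); s_loss.backward(); s_opt.step()

# 2) Generator step: fool the segmentor into predicting "healthy" everywhere,
#    while staying close to the input outside the lesion.
fake = generator(x)
healthy_everywhere = torch.zeros_like(lesion_mask)
keep = (1 - lesion_mask).unsqueeze(1).float()
g_loss = F.cross_entropy(segmentor(fake), healthy_everywhere) \
       + F.l1_loss(fake * keep, x * keep)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```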

  • A teacher-student framework for liver and tumor segmentation under mixed supervision from abdominal CT scans NEURAL COMPUTING & APPLICATIONS Sun, L., Wu, J., Ding, X., Huang, Y., Chen, Z., Wang, G., Yu, Y. 2022; 34 (19): 16547-16561
  • Few-shot medical image segmentation using a global correlation network with discriminative embedding COMPUTERS IN BIOLOGY AND MEDICINE Sun, L., Li, C., Ding, X., Huang, Y., Chen, Z., Wang, G., Yu, Y., Paisley, J. 2022; 140: 105067

    Abstract

    Despite impressive developments in deep convolutional neural networks for medical imaging, the paradigm of supervised learning requires numerous annotations in training to avoid overfitting. In clinical cases, massive semantic annotations are difficult to acquire where biomedical expert knowledge is required. Moreover, it is common that only a few annotated classes are available. In this study, we proposed a new approach to few-shot medical image segmentation, which enables a segmentation model to quickly generalize to an unseen class with few training images. We constructed a few-shot image segmentation mechanism using a deep convolutional network trained episodically. Motivated by the spatial consistency and regularity in medical images, we developed an efficient global correlation module to model the correlation between a support and query image and incorporate it into the deep network. We enhanced the discrimination ability of the deep embedding scheme to encourage clustering of feature domains belonging to the same class while keeping feature domains of different organs far apart. We experimented using anatomical abdomen images from both CT and MRI modalities.

    View details for DOI 10.1016/j.compbiomed.2021.105067

    View details for Web of Science ID 000731789000006

    View details for PubMedID 34920364
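
    A minimal sketch of a global correlation module between support and query feature maps, in the spirit of the abstract above; this is an illustrative reimplementation under assumed shapes, not the paper's code.

```python
# Sketch: dense correlation between support and query feature maps.
import torch
import torch.nn.functional as F

def global_correlation(support_feat, query_feat):
    """support_feat, query_feat: (B, C, H, W) features from a shared encoder.
    Returns a (B, H*W, H, W) volume: similarity of every query location
    to every support location."""
    B, C, H, W = query_feat.shape
    s = F.normalize(support_feat.flatten(2), dim=1)   # (B, C, H*W)
    q = F.normalize(query_feat.flatten(2), dim=1)     # (B, C, H*W)
    corr = torch.bmm(s.transpose(1, 2), q)            # (B, H*W_support, H*W_query)
    return corr.view(B, H * W, H, W)

support = torch.randn(2, 64, 32, 32)
query = torch.randn(2, 64, 32, 32)
corr_volume = global_correlation(support, query)      # (2, 1024, 32, 32)
# The correlation volume can then be fed, together with the query features,
# into a decoder that predicts the segmentation of the unseen class.
```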

  • Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation NEURAL COMPUTING & APPLICATIONS Li, C., Ma, W., Sun, L., Ding, X., Huang, Y., Wang, G., Yu, Y. 2022; 34 (4): 3151-3164
  • Enhanced Deep Blind Hyperspectral Image Fusion IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS Wang, W., Fu, X., Zeng, W., Sun, L., Zhan, R., Huang, Y., Ding, X. 2021

    Abstract

    The goal of hyperspectral image fusion (HIF) is to reconstruct high spatial resolution hyperspectral images (HR-HSI) via fusing low spatial resolution hyperspectral images (LR-HSI) and high spatial resolution multispectral images (HR-MSI) without loss of spatial and spectral information. Most existing HIF methods are designed based on the assumption that the observation models are known, which is unrealistic in many scenarios. To address this blind HIF problem, we propose a deep learning-based method that optimizes the observation model and fusion processes iteratively and alternately during the reconstruction to enforce bidirectional data consistency, which leads to better spatial and spectral accuracy. However, a general deep neural network inherently suffers from information loss, preventing us from achieving this bidirectional data consistency. To settle this problem, we enhance the blind HIF algorithm by making part of the deep neural network invertible via applying a slightly modified spectral normalization to the weights of the network. Furthermore, in order to reduce spatial distortion and feature redundancy, we introduce a Content-Aware ReAssembly of FEatures module and an SE-ResBlock model to our network. The former module helps to boost the fusion performance, while the latter makes our model more compact. Experiments demonstrate that our model performs favorably against compared methods in terms of both nonblind HIF fusion and semiblind HIF fusion.

    View details for DOI 10.1109/TNNLS.2021.3105543

    View details for Web of Science ID 000733226500001

    View details for PubMedID 34460396
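
    Two of the building blocks named above, squeeze-and-excitation and spectral normalization, can be sketched generically as follows (a toy block under assumed channel sizes, not the paper's exact module):

```python
# Sketch: a residual block with squeeze-and-excitation and spectral-normalized convs.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SEResBlock(nn.Module):
    def __init__(self, ch=64, reduction=8):
        super().__init__()
        self.conv1 = spectral_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(ch, ch, 3, padding=1))
        self.se = nn.Sequential(                      # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
    def forward(self, x):
        h = self.conv2(torch.relu(self.conv1(x)))
        return x + h * self.se(h)                     # reweight channels, add skip

y = SEResBlock()(torch.randn(1, 64, 32, 32))          # shape preserved: (1, 64, 32, 32)
```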

  • Triple-D network for efficient undersampled magnetic resonance images reconstruction MAGNETIC RESONANCE IMAGING Li, Z., Bao, Q., Yang, C., Chen, F., Wu, G., Sun, L., Zhang, Z., Liu, C. 2021; 77: 44-56

    Abstract

    Compressed sensing (CS) theory can help accelerate magnetic resonance imaging (MRI) by sampling partial k-space measurements. However, conventional optimization-based CS-MRI methods are often time-consuming and are based on fixed transforms or shallow image dictionaries, which limits modeling capabilities. Recently, deep learning models have been used to solve the CS-MRI problem. However, recent research has focused on modeling in the image domain, and the potential of k-space modeling has not been fully exploited. In this paper, we propose a deep model called the Dual Domain Dense network (Triple-D network), which consists of k-space and image-domain sub-networks. These sub-networks are connected with dense connections, which can utilize feature maps at different levels to enhance performance. To further promote model capabilities, we use two strategies: a multi-supervision strategy, which avoids loss of supervision information, and a channel-wise attention (CA) layer, which adaptively adjusts the weights of the feature maps. Experimental results show that the proposed Triple-D network provides promising performance in CS-MRI, and it can effectively work on different sampling trajectories and noisy settings.

    View details for DOI 10.1016/j.mri.2020.11.010

    View details for Web of Science ID 000617137300007

    View details for PubMedID 33242592
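
    A minimal sketch of one alternating k-space/image-domain stage with a data-consistency step, the general pattern behind dual-domain CS-MRI networks such as the one above; the sub-network definitions are toy assumptions, not the Triple-D code.

```python
# Sketch: one k-space/image-domain refinement stage with data consistency.
import torch
import torch.nn as nn

def small_cnn(ch_in, ch_out):
    return nn.Sequential(
        nn.Conv2d(ch_in, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, ch_out, 3, padding=1))

k_net   = small_cnn(2, 2)      # refines stacked real/imaginary k-space channels
img_net = small_cnn(1, 1)      # refines the magnitude image

def dual_domain_stage(k_sampled, mask):
    """k_sampled: (B, H, W) complex undersampled k-space; mask: (B, H, W) in {0, 1}."""
    k_in = torch.stack([k_sampled.real, k_sampled.imag], dim=1)
    k_ref = k_net(k_in)
    k_ref = torch.complex(k_ref[:, 0], k_ref[:, 1])
    k_dc = k_sampled * mask + k_ref * (1 - mask)    # data consistency: keep measured samples
    img = torch.fft.ifft2(k_dc).abs().unsqueeze(1)  # back to the image domain
    return img + img_net(img)                       # image-domain refinement

mask = (torch.rand(2, 64, 64) < 0.3).float()
k = torch.fft.fft2(torch.randn(2, 64, 64)) * mask   # toy undersampled measurements
recon = dual_domain_stage(k, mask)                  # (2, 1, 64, 64)
```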

  • Fast Magnetic Resonance Imaging on Regions of Interest: From Sensing to Reconstruction Sun, L., Huang, H., Ding, X., Huang, Y., Liu, X., Yu, Y., deBruijne, M., Cattin, P. C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 97-106
  • Generator Versus Segmentor: Pseudo-healthy Synthesis Zhang, Y., Li, C., Lin, X., Sun, L., Zhuang, Y., Huang, Y., Ding, X., Liu, X., Yu, Y., deBruijne, M., Cattin, P. C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 150-160
  • An Adversarial Learning Approach to Medical Image Synthesis for Lesion Detection IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS Sun, L., Wang, J., Huang, Y., Ding, X., Greenspan, H., Paisley, J. 2020; 24 (8): 2303-2314

    Abstract

    The identification of lesions within medical image data is necessary for diagnosis, treatment and prognosis. Segmentation and classification approaches are mainly based on supervised learning with well-paired image-level or voxel-level labels. However, labeling lesions in medical images is laborious and requires highly specialized knowledge. We propose a medical image synthesis model named abnormal-to-normal translation generative adversarial network (ANT-GAN) to generate a normal-looking medical image based on its abnormal-looking counterpart without the need for paired training data. Unlike typical GANs, whose aim is to generate realistic samples with variations, our more restrictive model aims at producing a normal-looking image corresponding to one containing lesions, and thus requires a special design. Being able to provide a "normal" counterpart to a medical image can provide useful side information for medical imaging tasks like lesion segmentation or classification, as validated by our experiments. On the other hand, the ANT-GAN model is also capable of producing a highly realistic lesion-containing image corresponding to the healthy one, which shows its potential in data augmentation, as verified in our experiments.

    View details for DOI 10.1109/JBHI.2020.2964016

    View details for Web of Science ID 000557358500017

    View details for PubMedID 31905155
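
    A minimal sketch of the "side information" use mentioned above: given a trained abnormal-to-normal generator (here a hypothetical to_normal stand-in), the residual between an image and its pseudo-normal counterpart gives a rough lesion saliency map.

```python
# Sketch: using a pseudo-normal translation as side information for lesion localization.
import torch
import torch.nn as nn

to_normal = nn.Sequential(                 # hypothetical (already trained) generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

abnormal = torch.randn(1, 1, 128, 128)     # image containing a lesion
with torch.no_grad():
    pseudo_normal = to_normal(abnormal)

residual = (abnormal - pseudo_normal).abs()   # large where the lesion was removed
lesion_prior = residual / residual.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)
# `lesion_prior` can be concatenated with the image as an extra input channel
# for a downstream segmentation or classification network.
```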

  • A dual-domain deep lattice network for rapid MRI reconstruction NEUROCOMPUTING Sun, L., Wu, Y., Shu, B., Ding, X., Cai, C., Huang, Y., Paisley, J. 2020; 397: 94-107
  • A 3D Spatially Weighted Network for Segmentation of Brain Tissue From MRI IEEE TRANSACTIONS ON MEDICAL IMAGING Sun, L., Ma, W., Ding, X., Huang, Y., Liang, D., Paisley, J. 2020; 39 (4): 898-909

    Abstract

    The segmentation of brain tissue in MRI is valuable for extracting brain structure to aid diagnosis, treatment and tracking the progression of different neurologic diseases. Medical image data are volumetric and some neural network models for medical image segmentation have addressed this using a 3D convolutional architecture. However, this volumetric spatial information has not been fully exploited to enhance the representative ability of deep networks, and these networks have not fully addressed the practical issues facing the analysis of multimodal MRI data. In this paper, we propose a spatially-weighted 3D network (SW-3D-UNet) for brain tissue segmentation of single-modality MRI, and extend it using multimodality MRI data. We validate our model on the MRBrainS13 and MALC12 datasets. This unpublished model ranked first on the leaderboard of the MRBrainS13 Challenge.

    View details for DOI 10.1109/TMI.2019.2937271

    View details for Web of Science ID 000525265800008

    View details for PubMedID 31449009
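
    A minimal sketch of a 3D spatial weighting module, the general pattern suggested by the abstract above (an illustrative toy module, not the published SW-3D-UNet code):

```python
# Sketch: a 3D spatial weighting module for volumetric feature maps.
import torch
import torch.nn as nn

class SpatialWeight3D(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.weight = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid())                  # one weight per voxel in [0, 1]
    def forward(self, feat):               # feat: (B, C, D, H, W)
        return feat * self.weight(feat)    # emphasize informative spatial locations

feat = torch.randn(1, 32, 16, 64, 64)
weighted = SpatialWeight3D(32)(feat)       # same shape, spatially re-weighted
```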

  • A deep error correction network for compressed sensing MRI. BMC biomedical engineering Sun, L., Wu, Y., Fan, Z., Ding, X., Huang, Y., Paisley, J. 2020; 2: 4

    Abstract

    CS-MRI (compressed sensing for magnetic resonance imaging) exploits image sparsity properties to reconstruct MRI from very few Fourier k-space measurements. Due to imperfect modelings in the inverse imaging, state-of-the-art CS-MRI methods tend to leave structural reconstruction errors. Compensating such errors in the reconstruction could help further improve the reconstruction quality. In this work, we propose a DECN (deep error correction network) for CS-MRI. The DECN model consists of three parts, which we refer to as modules: a guide, or template, module, an error correction module, and a data fidelity module. Existing CS-MRI algorithms can serve as the template module for guiding the reconstruction. Using this template as a guide, the error correction module learns a CNN (convolutional neural network) to map the k-space data in a way that adjusts for the reconstruction error of the template image. Our experimental results show the proposed DECN CS-MRI reconstruction framework can considerably improve upon existing inversion algorithms by supplementing them with an error-correcting CNN. In the proposed deep error correction framework, any off-the-shelf CS-MRI algorithm can be used for template generation; a deep neural network is then used to compensate for the remaining reconstruction errors. The promising experimental results validate the effectiveness and utility of the proposed framework.

    View details for DOI 10.1186/s42490-020-0037-5

    View details for PubMedID 32903379

    View details for PubMedCentralID PMC7422575
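
    A minimal sketch of the template-plus-error-correction pattern described above, simplified so that the correction acts on the guide image and the data-fidelity module is omitted; template_recon is a placeholder for any off-the-shelf CS-MRI algorithm.

```python
# Sketch: correcting an off-the-shelf CS-MRI reconstruction with a learned residual.
import torch
import torch.nn as nn

def template_recon(k_sampled):
    """Placeholder for any off-the-shelf CS-MRI algorithm (here: zero-filled IFFT)."""
    return torch.fft.ifft2(k_sampled).abs().unsqueeze(1)

correction_net = nn.Sequential(             # learns the template's reconstruction error
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

def decn_like(k_sampled):
    guide = template_recon(k_sampled)       # guide/template module
    return guide + correction_net(guide)    # error-correction module (simplified)

mask = (torch.rand(1, 64, 64) < 0.3).float()
k = torch.fft.fft2(torch.randn(1, 64, 64)) * mask
recon = decn_like(k)                        # trained against fully sampled references
```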

  • A Deep Information Sharing Network for Multi-Contrast Compressed Sensing MRI Reconstruction IEEE TRANSACTIONS ON IMAGE PROCESSING Sun, L., Fan, Z., Fu, X., Huang, Y., Ding, X., Paisley, J. 2019; 28 (12): 6141-6153

    Abstract

    Compressed sensing (CS) theory can accelerate multi-contrast magnetic resonance imaging (MRI) by sampling fewer measurements within each contrast. However, conventional optimization-based reconstruction models suffer several limitations, including a strict assumption of shared sparse support, time-consuming optimization, and "shallow" models with difficulties in encoding the patterns contained in massive MRI data. In this paper, we propose the first deep learning model for multi-contrast CS-MRI reconstruction. We achieve information sharing through feature sharing units, which significantly reduces the number of model parameters. The feature sharing unit combines with a data fidelity unit to comprise an inference block, which are then cascaded with dense connections, allowing for efficient information transmission across different depths of the network. Experiments on various multi-contrast MRI datasets show that the proposed model outperforms both state-of-the-art single-contrast and multi-contrast MRI methods in accuracy and efficiency. We demonstrate that improved reconstruction quality can bring benefits to subsequent medical image analysis. Furthermore, the robustness of the proposed model to misregistration shows its potential in real MRI applications.

    View details for DOI 10.1109/TIP.2019.2925288

    View details for Web of Science ID 000575374700008

    View details for PubMedID 31295112
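
    A minimal sketch of a feature-sharing unit between two contrast branches, the core idea described above (an illustrative toy block, not the published network):

```python
# Sketch: sharing features between two MRI-contrast branches inside one block.
import torch
import torch.nn as nn

class FeatureSharingBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.branch1 = nn.Conv2d(ch, ch, 3, padding=1)   # e.g., T1 branch
        self.branch2 = nn.Conv2d(ch, ch, 3, padding=1)   # e.g., T2 branch
        self.shared = nn.Conv2d(2 * ch, ch, 1)           # fuses both contrasts
    def forward(self, f1, f2):
        shared = torch.relu(self.shared(torch.cat([f1, f2], dim=1)))
        f1 = torch.relu(self.branch1(f1) + shared)       # each branch keeps its own path
        f2 = torch.relu(self.branch2(f2) + shared)       # but reuses the shared features
        return f1, f2

f1, f2 = torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64)
g1, g2 = FeatureSharingBlock(32)(f1, f2)
# Blocks like this can be cascaded with data-fidelity units and dense connections.
```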

  • Region-of-interest undersampled MRI reconstruction: A deep convolutional neural network approach MAGNETIC RESONANCE IMAGING Sun, L., Fan, Z., Ding, X., Huang, Y., Paisley, J. 2019; 63: 185-192

    Abstract

    Compressive sensing enables fast magnetic resonance imaging (MRI) reconstruction with undersampled k-space data. However, in most existing MRI reconstruction models, the whole MR image is targeted and reconstructed without taking specific tissue regions into consideration. This may fail to emphasize reconstruction accuracy on important region-of-interest (ROI) tissues for diagnosis. In some ROI-based MRI reconstruction models, the ROI mask is extracted by human experts in advance, which is laborious when the MRI datasets are too large. In this paper, we propose a deep neural network architecture for ROI MRI reconstruction called ROIRecNet to improve the reconstruction accuracy of ROIs in under-sampled MRI. In the model, we obtain the ROI masks by feeding an initially reconstructed MRI from a pre-trained MRI reconstruction network (RecNet) to a pre-trained MRI segmentation network (ROINet). Then we fine-tune the RecNet with a binary weighted ℓ2 loss function using the produced ROI mask. The resulting ROIRecNet can offer more focus on the ROI. We test the model on the MRBrainS13 dataset with different brain tissues being ROIs. The experiments show that the proposed ROIRecNet can significantly improve the reconstruction quality of the region of interest.

    View details for DOI 10.1016/j.mri.2019.07.010

    View details for Web of Science ID 000500653000023

    View details for PubMedID 31352015
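
    A minimal sketch of a binary-weighted ℓ2 loss of the kind used for the fine-tuning described above, given a reconstruction, a reference, and an ROI mask; the function and variable names are assumptions.

```python
# Sketch: binary-weighted L2 loss that up-weights reconstruction error inside the ROI.
import torch

def roi_weighted_l2(pred, target, roi_mask, roi_weight=5.0):
    """pred, target: (B, 1, H, W); roi_mask: (B, 1, H, W) with 1 inside the ROI."""
    weights = 1.0 + (roi_weight - 1.0) * roi_mask   # 1 outside the ROI, roi_weight inside
    return (weights * (pred - target) ** 2).mean()

pred = torch.randn(2, 1, 64, 64)
target = torch.randn(2, 1, 64, 64)
roi = (torch.rand(2, 1, 64, 64) < 0.2).float()      # e.g., mask from a segmentation network
loss = roi_weighted_l2(pred, target, roi)           # used to fine-tune the reconstruction net
```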

  • A divide-and-conquer approach to compressed sensing MRI MAGNETIC RESONANCE IMAGING Sun, L., Fan, Z., Ding, X., Cai, C., Huang, Y., Paisley, J. 2019; 63: 37-48

    Abstract

    Compressed sensing (CS) theory assures us that we can accurately reconstruct magnetic resonance images using fewer k-space measurements than the Nyquist sampling rate requires. In traditional CS-MRI inversion methods, the fact that the energy within the Fourier measurement domain is distributed non-uniformly is often neglected during reconstruction. As a result, more densely sampled low frequency information tends to dominate penalization schemes for reconstructing MRI at the expense of high frequency details. In this paper, we propose a new framework for CS-MRI inversion in which we decompose the observed k-space data into "subspaces" via sets of filters in a lossless way, and reconstruct the images in these various spaces individually using off-the-shelf algorithms. We then fuse the results to obtain the final reconstruction. In this way, we are able to focus reconstruction on frequency information within the entire k-space more equally, preserving both high and low frequency details. We demonstrate that the proposed framework is competitive with state-of-the-art methods in CS-MRI in terms of quantitative performance, and often improves an algorithm's results qualitatively compared with its direct application to k-space.

    View details for DOI 10.1016/j.mri.2019.06.014

    View details for Web of Science ID 000500653000005

    View details for PubMedID 31306732
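
    A minimal sketch of a lossless k-space split into complementary low- and high-frequency bands, separate reconstruction, and fusion; the two-band disk split and the placeholder reconstruct function are assumptions, since the paper uses general filter sets and off-the-shelf CS-MRI algorithms.

```python
# Sketch: split k-space into complementary bands, reconstruct each, then fuse.
import torch

def band_masks(h, w, radius=8):
    """Lossless two-band split: a low-frequency disk and its complement."""
    ky = torch.fft.fftfreq(h).view(-1, 1) * h
    kx = torch.fft.fftfreq(w).view(1, -1) * w
    low = ((ky ** 2 + kx ** 2).sqrt() <= radius).float()
    return low, 1.0 - low                            # the two masks sum to 1 everywhere

def reconstruct(k_band):
    """Placeholder for any off-the-shelf CS-MRI algorithm applied to one band."""
    return torch.fft.ifft2(k_band)

k = torch.fft.fft2(torch.randn(64, 64))              # toy fully sampled k-space
low, high = band_masks(64, 64)
img_low, img_high = reconstruct(k * low), reconstruct(k * high)
fused = (img_low + img_high).abs()                   # exact here because the split is lossless
```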

  • Joint CS-MRI Reconstruction and Segmentation with a Unified Deep Network Sun, L., Fan, Z., Ding, X., Huang, Y., Paisley, J., Chung, A. C., Gee, J. C., Yushkevich, P. A., Bao, S. SPRINGER INTERNATIONAL PUBLISHING AG. 2019: 492-504
  • A Deep Ensemble Network for Compressed Sensing MRI Wu, H., Wu, Y., Sun, L., Cai, C., Huang, Y., Ding, X., Cheng, L., Leung, A. C., Ozawa, S. SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 162-171
  • Compressed Sensing MRI Using a Recursive Dilated Network Thirty-Second AAAI Conference on Artificial Intelligence Sun, L., Fan, Z., Huang, Y., Ding, X., Paisley, J. Association for the Advancement of Artificial Intelligence. 2018

    View details for DOI 10.1609/aaai.v32i1.11869

  • A Segmentation-Aware Deep Fusion Network for Compressed Sensing MRI Fan, Z., Sun, L., Ding, X., Huang, Y., Cai, C., Paisley, J., Ferrari, Hebert, M., Sminchisescu, C., Weiss, Y. SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 55-70
  • Compressed Sensing MRI Using Total Variation Regularization with K-Space Decomposition Sun, L., Huang, Y., Cai, C., Ding, X. IEEE. 2017: 3061-3065
  • A novel nonlocal MRI reconstruction algorithm with patch-based low rank regularization Sun, L., Chen, J., Zeng, D., Ding, X. IEEE. 2015: 398-402
  • Patch-based nonlocal dynamic MRI reconstruction with low-rank prior IEEE 17th International Workshop on Multimedia Signal Processing Sun, L., Chen, J., Zhang, X., Ding, X. IEEE. 2015