Bio


Dr. Liu is a clinical assistant professor and a board-certified medical physicist in the Department of Radiation Oncology. She received her PhD in electrical engineering from the University of Michigan and completed her residency training at Michigan Medicine. At Stanford, she is involved with the radiosurgery program, the MR-guided radiotherapy program, and the general external beam radiotherapy program. Her research focuses on optimizing radiotherapy workflow through AI, including sparse medical imaging, medical image synthesis, and radiotherapy beam data modeling.

Administrative Appointments


  • Associate Editor, Medical Physics (2023 - Present)

Honors & Awards


  • Seed Grant, Stanford Institute for Human-Centered Artificial Intelligence (2021)
  • Summa Cum Laude Merit Award, International Society for Magnetic Resonance in Medicine (ISMRM) (2018)
  • Barbour Scholarship, University of Michigan (2017)

Boards, Advisory Committees, Professional Organizations


  • Member, Women in Radiation Oncology Work Groups, ASTRO (American Society for Radiation Oncology) (2024 - Present)
  • Member, AAPM (American Association of Physicists in Medicine) (2018 - Present)

Professional Education


  • PhD, University of Michigan, Electrical Engineering (2018)
  • Physics Resident, University of Michigan Health System, Therapeutic Medical Physics (2020)

Patents


  • James Balter, Yue Cao, Lianli Liu, Adam Johansson. "Hierarchical Motion Modeling from Dynamic Magnetic Resonance Imaging." United States.

Current Research and Scholarly Interests


Dr. Liu's research focuses on optimizing radiotherapy workflow through AI. Specifically, she is interested in:

1. Optimizing medical imaging for image-guided radiotherapy, including:
  • Sparse imaging for real-time monitoring of patient dynamics;
  • Accelerated longitudinal imaging for efficient post-treatment patient follow-up;
  • High-quality functional imaging for treatment response prediction and treatment plan adaptation;
  • Medical image synthesis for reduced imaging modalities and costs.

2. Optimizing clinical workflow for radiation beam commissioning and quality assurance, including:
  • Sparse beam dosimetry through beam data modeling;
  • Model-based radiation measurement error detection;
  • Longitudinal prediction of radiation beam changes;
  • Monte Carlo phase space modeling for efficient data representation and fast dose calculation.

All Publications


  • Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning. Medical Physics. Liu, L., Shen, L., Johansson, A., Balter, J. M., Cao, Y., Vitzthum, L., Xing, L. 2023

    Abstract

    Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve magnetic resonance (MR)-guided radiation treatment precision. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which is time-consuming and challenging to acquire in clinical settings. This work investigated volumetric reconstruction of MRI from sparse samples of two orthogonal slices, aided by sparse priors of two static 3D MRI through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy.

    A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset, where the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRI with different breathing motion states, prior information of patient breathing motion was embedded into network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices, and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired using a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRI with inhale and exhale states respectively were selected from the first 30 s of the time series for prior embedding and augmentation. The remaining 4.5-min time series was used for volumetric reconstruction evaluation, where we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability of supporting 3D target motion tracking.

    Across the seven patients evaluated, the peak signal-to-noise ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structural similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min time period, gross tumor volume (GTV) motion estimated by deforming a reference state MRI to model-reconstructed and ground truth MRI showed good consistency. The 95-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, which is less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions.

    A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI were needed to train the model to learn prior information of patient breathing motion for sparse image reconstruction. The proposed model has the potential of supporting 3D motion tracking during MR-guided radiotherapy for improved treatment precision and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.

    View details for DOI 10.1002/mp.16845

    View details for PubMedID 38014764
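    The core of the NeRP approach in the abstract above — a multilayer perceptron that maps 4D coordinates (voxel location plus motion state) to voxel intensity, trained per patient and then queried at arbitrary coordinates — can be sketched in miniature. The code below is a hypothetical toy, not the paper's implementation: it fits a tiny two-layer perceptron to a synthetic smooth "volume" and queries it at unseen coordinates, standing in for the prior-embedding and querying steps.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "volume": intensity is a smooth function of (x, y, z, s), where s is
    # a breathing motion-state coordinate (a stand-in for the paper's 4D input).
    def toy_intensity(coords):
        x, y, z, s = coords.T
        return np.sin(np.pi * (x + 0.2 * s)) * np.cos(np.pi * y) + 0.5 * z

    # Tiny two-layer perceptron: 4D coordinate -> scalar intensity.
    W1 = rng.normal(0, 1.0, (4, 64)); b1 = np.zeros(64)
    W2 = rng.normal(0, 0.1, (64, 1)); b2 = np.zeros(1)

    def forward(c):
        h = np.tanh(c @ W1 + b1)
        return (h @ W2 + b2).ravel(), h

    # Fit the network to sampled coordinates by plain full-batch gradient descent.
    lr = 0.05
    coords = rng.uniform(-1, 1, (2048, 4))
    target = toy_intensity(coords)
    for _ in range(2000):
        pred, h = forward(coords)
        err = pred - target                       # (N,)
        gW2 = h.T @ err[:, None] / len(err)
        gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2.T) * (1 - h**2)   # backprop through tanh
        gW1 = coords.T @ dh / len(err)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    # "Query" the trained representation at unseen coordinates, including
    # interpolated motion states, as the prior-augmentation step does.
    test = rng.uniform(-1, 1, (256, 4))
    mse = float(np.mean((forward(test)[0] - toy_intensity(test)) ** 2))
    ```

    Once trained, the network itself is the image representation: reconstruction at any resolution or motion state reduces to evaluating `forward` on a grid of coordinates.
    
    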

  • Adaptive Region-Specific Loss for Improved Medical Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. Chen, Y., Yu, L., Wang, J., Panjwani, N., Obeid, J., Liu, W., Liu, L., Kovalchuk, N., Gensheimer, M. F., Vitzthum, L. K., Beadle, B. M., Chang, D. T., Le, Q., Han, B., Xing, L. 2023; PP

    Abstract

    Defining the loss function is an important part of neural network design and critically determines the success of deep learning modeling. A significant shortcoming of conventional loss functions is that they weight all regions in the input image volume equally, despite the fact that the system is known to be heterogeneous (i.e., some regions can achieve high prediction performance more easily than others). Here, we introduce a region-specific loss to lift the implicit assumption of homogeneous weighting for better learning. We divide the entire volume into multiple sub-regions, each with an individualized loss constructed for optimal local performance. Effectively, this scheme imposes higher weightings on the sub-regions that are more difficult to segment, and vice versa. Furthermore, the regional false positive and false negative errors are computed for each input image during a training step and the regional penalty is adjusted accordingly to enhance the overall accuracy of the prediction. Using different public and in-house medical image datasets, we demonstrate that the proposed regionally adaptive loss paradigm outperforms conventional methods in multi-organ segmentation, without any modification to the neural network architecture or additional data preparation.

    View details for DOI 10.1109/TPAMI.2023.3289667

    View details for PubMedID 37363838
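    The region-specific loss idea above — split the volume into sub-regions, compute a loss per sub-region, and weight harder sub-regions more heavily — can be illustrated with a minimal sketch. This is a simplified toy, not the authors' implementation: it applies a soft Dice loss per region on a 2D array with loss-proportional weights, and omits the paper's per-image false positive/false negative penalty adjustment.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy binary segmentation: predicted probabilities and ground truth on a
    # 32x32 "slice" (a hypothetical stand-in for a 3D volume).
    pred = rng.uniform(0, 1, (32, 32))
    truth = (rng.uniform(0, 1, (32, 32)) > 0.7).astype(float)

    def region_specific_loss(pred, truth, grid=4, eps=1e-6):
        """Split the image into grid x grid sub-regions, compute a soft Dice
        loss per region, and weight each region by its own loss so harder
        regions contribute more (the adaptive-weighting idea)."""
        h, w = pred.shape
        sh, sw = h // grid, w // grid
        losses = []
        for i in range(grid):
            for j in range(grid):
                p = pred[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
                t = truth[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
                dice = (2 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
                losses.append(1 - dice)
        losses = np.array(losses)
        weights = losses / (losses.sum() + eps)   # higher loss -> higher weight
        return float((weights * losses).sum())

    # grid=1 recovers a single global Dice loss; grid=4 applies the
    # region-adaptive weighting over 16 sub-regions.
    global_loss = region_specific_loss(pred, truth, grid=1)
    regional_loss = region_specific_loss(pred, truth, grid=4)
    ```

    In an actual training loop, the same per-region weighting would be applied to a differentiable loss tensor rather than a scalar, leaving the network architecture untouched, which matches the abstract's claim that no architectural change is needed.
    
    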

  • Modeling linear accelerator (Linac) beam data by implicit neural representation learning for commissioning and quality assurance applications. Medical Physics. Liu, L., Shen, L., Yang, Y., Schüler, E., Zhao, W., Wetzstein, G., Xing, L. 2023

    Abstract

    Linear accelerator (Linac) beam data commissioning and quality assurance (QA) play a vital role in accurate radiation treatment delivery and entail a large number of measurements using a variety of field sizes. How to optimize the effort in data acquisition while maintaining high quality of medical physics practice has long been sought after.

    We propose to model Linac beam data through implicit neural representation (NeRP) learning. The potential of the beam model in predicting beam data from sparse measurements and detecting data collection errors was evaluated, with the goal of using the beam model to verify beam data collection accuracy and simplify the commissioning and QA process. NeRP models with continuous and differentiable functions parameterized by multilayer perceptrons (MLPs) were used to represent various beam data including percentage depth dose and profiles of 6 MV beams with and without flattening filter. Prior knowledge of the beam data was embedded into the MLP network by learning the NeRP of a vendor-provided "golden" beam dataset. The prior-embedded network was then trained to fit clinical beam data collected at one field size and used to predict beam data at other field sizes. We evaluated the prediction accuracy by comparing network-predicted beam data to water tank measurements collected from 14 clinical Linacs. Beam datasets with intentionally introduced errors were used to investigate the potential use of the NeRP model for beam data verification, by evaluating the model performance when trained with erroneous beam data samples.

    Linac beam data predicted by the model agreed well with water tank measurements, with averaged Gamma passing rates (1%/1 mm passing criteria) higher than 95% and averaged mean absolute errors less than 0.6%. Beam data samples with measurement errors were revealed by inconsistent beam predictions between networks trained with correct versus erroneous data samples, characterized by a Gamma passing rate lower than 90%.

    A NeRP beam data modeling technique has been established for predicting beam characteristics from sparse measurements. The model provides a valuable tool to verify beam data collection accuracy and promises to simplify commissioning/QA processes by reducing the number of measurements without compromising the quality of medical physics service.

    View details for DOI 10.1002/mp.16212

    View details for PubMedID 36621812

  • Real Time Volumetric MRI for 3D Motion Tracking via Geometry-Informed Deep Learning. Medical Physics. Liu, L., Shen, L., Johansson, A., Balter, J. M., Cao, Y., Chang, D., Xing, L. 2022

    Abstract

    This work developed a geometry-informed deep learning framework for volumetric MRI with sub-second acquisition time in support of 3D motion tracking, which is highly desirable for improved radiotherapy precision but hindered by the long image acquisition time.

    A 2D-3D deep learning network with an explicitly defined geometry module that embeds geometric priors of the k-space encoding pattern was investigated, where a 2D generation network first augmented the sparsely sampled image dataset by generating new 2D representations of the underlying 3D subject. A geometry module then unfolded the 2D representations to the volumetric space. Finally, a 3D refinement network took the unfolded 3D data and outputted high-resolution volumetric images. Patient-specific models were trained for 7 abdominal patients to reconstruct volumetric MRI from both orthogonal cine slices and sparse radial samples. To evaluate the robustness of the proposed method to longitudinal patient anatomy and position changes, we tested the trained model on separate datasets acquired more than one month later and evaluated 3D target motion tracking accuracy using the model-reconstructed images by deforming a reference MRI with gross tumor volume (GTV) contours to a 5-min time series of both ground truth and model-reconstructed volumetric images with a temporal resolution of 340 ms.

    Across the 7 patients evaluated, the median distances between model-predicted and ground truth GTV centroids in the superior-inferior direction were 0.4 ± 0.3 mm and 0.5 ± 0.4 mm for cine and radial acquisitions respectively. The 95-percentile Hausdorff distances between model-predicted and ground truth GTV contours were 4.7 ± 1.1 mm and 3.2 ± 1.5 mm for cine and radial acquisitions, which are of the same scale as the cross-plane image resolution.

    Incorporating geometric priors into a deep learning model enables volumetric imaging with high spatial and temporal resolution, which is particularly valuable for 3D motion tracking and has the potential of greatly improving MRI-guided radiotherapy precision.

    View details for DOI 10.1002/mp.15822

    View details for PubMedID 35766221

  • Volumetric prediction of breathing and slow drifting motion in the abdomen using radial MRI and multi-temporal resolution modeling. Physics in Medicine and Biology. Liu, L., Johansson, A., Cao, Y., Lawrence, T. S., Balter, J. M. 2021; 66 (17)

    Abstract

    Abdominal organ motions introduce geometric uncertainties to radiotherapy. This study investigates a multi-temporal resolution 3D motion prediction scheme that accounts for both breathing and slow drifting motion in the abdomen in support of MRI-guided radiotherapy.

    Ten-minute MRI scans were acquired for 8 patients using a volumetric golden-angle stack-of-stars sequence. The first five minutes were used for patient-specific motion modeling. Fast breathing motion was modeled from high temporal resolution radial k-space samples, which served as a navigator signal to sort k-space data into different bins for high spatial resolution reconstruction of breathing motion states. Slow drifting motion was modeled from a lower temporal resolution image time series, which was reconstructed by sequentially combining a large number of breathing-corrected k-space samples. Principal component analysis (PCA) was performed on deformation fields between different motion states. Gaussian kernel regression and linear extrapolation were used to predict PCA coefficients of future motion states for breathing motion (340 ms ahead of acquisition) and slow drifting motion (8.5 s ahead of acquisition) respectively. k-space data from the remaining five minutes was used to compare ground truth motion states obtained from retrospective reconstruction/deformation with predictions.

    Median distances between predicted and ground truth centroid positions of gross tumor volume (GTV) and organs at risk (OARs) were less than 1 mm on average. 95-percentile Hausdorff distances between predicted and ground truth GTV contours of various breathing motion states were 2 mm on average, which was smaller than the imaging resolution; 95-percentile Hausdorff distances between predicted and ground truth OAR contours of different slow drifting motion states were less than 0.2 mm.

    These results suggest that multi-temporal resolution motion models are capable of volumetric predictions of breathing and slow drifting motion with sufficient accuracy and temporal resolution for MRI-based tracking, and thus have potential for supporting MRI-guided abdominal radiotherapy.

    View details for DOI 10.1088/1361-6560/ac1f37

    View details for PubMedID 34412047
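    The modeling pipeline in the abstract above — PCA of deformation fields followed by kernel regression on the PCA coefficients to predict a future motion state — can be sketched on synthetic data. The code below is a hypothetical illustration, not the study's implementation: 1D "deformation fields" driven by a breathing-like signal plus a slow drift stand in for 3D fields, and only the Gaussian kernel regression branch (breathing motion) is shown, not the linear extrapolation used for slow drift.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 1D "deformation fields" over time: a ~4 s breathing cycle plus a
    # slow drift, each exciting its own spatial mode.
    t = np.arange(0, 60, 0.34)                  # ~340 ms sampling
    breathing = np.sin(2 * np.pi * t / 4.0)
    drift = 0.02 * t
    basis = rng.normal(0, 1, (2, 100))          # two spatial motion modes
    fields = np.outer(breathing, basis[0]) + np.outer(drift, basis[1])

    # PCA of the deformation fields: each time point reduces to a few
    # principal-component coefficients describing the motion state.
    mean = fields.mean(axis=0)
    U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
    coeffs = (fields - mean) @ Vt[:2].T          # (T, 2) PCA coefficients

    def kernel_predict(history, horizon=1, bandwidth=2.0):
        """Gaussian kernel regression: predict the coefficient vector
        `horizon` steps ahead of the current state, as a weighted average of
        the successors of similar past states."""
        X = history[:-horizon]                   # past states
        y = history[horizon:]                    # their successors
        q = history[-1]                          # current state
        w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * bandwidth ** 2))
        return (w[:, None] * y).sum(axis=0) / (w.sum() + 1e-12)

    # Hold out the last time point and predict it one step (~340 ms) ahead.
    pred_coeff = kernel_predict(coeffs[:-1])
    true_coeff = coeffs[-1]
    err = float(np.linalg.norm(pred_coeff - true_coeff))
    ```

    The predicted coefficients would then be mapped back through the principal components to a full deformation field, giving a volumetric prediction of the future motion state.
    
    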

  • Modeling intra-fractional abdominal configuration changes using breathing motion-corrected radial MRI. Physics in Medicine and Biology. Liu, L., Johansson, A., Cao, Y., Kashani, R., Lawrence, T. S., Balter, J. M. 2021; 66 (8)

    Abstract

    Abdominal organ motions introduce geometric uncertainties to gastrointestinal radiotherapy. This study investigated slow drifting motion induced by changes of internal anatomic organ arrangements using a 3D radial MRI sequence with a scan length of 20 minutes.

    Breathing motion and cyclic GI motion were first removed through multi-temporal resolution image reconstruction. Slow drifting motion analysis was performed using an image time series consisting of 72 image volumes with a temporal sampling rate of 17 seconds. B-spline deformable registration was performed to align image volumes of the time series to a reference volume. The resulting deformation fields were used for motion velocity evaluation and patient-specific motion model construction through principal component analysis (PCA). Geometric uncertainties introduced by slow drifting motion were assessed by Hausdorff distances between unions of organs at risk (OARs) at different motion states and reference OAR contours, as well as probabilistic distributions of OARs predicted using the PCA model.

    Thirteen examinations from 11 patients were included in this study. The averaged motion velocities ranged from 0.8 to 1.9 mm/min, 0.7 to 1.6 mm/min, 0.6 to 2.0 mm/min and 0.7 to 1.4 mm/min for the small bowel, colon, duodenum and stomach respectively; the averaged Hausdorff distances were 5.6 mm, 5.3 mm, 5.1 mm and 4.6 mm. On average, a margin larger than 4.5 mm was needed to cover a space with OAR occupancy probability higher than 55%. Temporal variations of geometric uncertainties were evaluated by comparing across four 5-min sub-scans extracted from the full scan. Standard deviations of Hausdorff distances across sub-scans were less than 1 mm for most examinations, indicating stability of relative margin estimates from separate time windows.

    These results suggest that slow drifting motion of GI organs is significant and that geometric uncertainties introduced by such motion should be accounted for during radiotherapy planning and delivery.

    View details for DOI 10.1088/1361-6560/abef42

    View details for Web of Science ID 000639521700001

    View details for PubMedID 33725676