Bio


Dr. Liu is a clinical assistant professor and a board-certified medical physicist in the Department of Radiation Oncology. She received her PhD in electrical engineering from the University of Michigan and completed her residency training at Michigan Medicine. At Stanford, she is involved with the radiosurgery program, the MR-guided radiotherapy program, and the general external beam radiotherapy program. Her research focuses on optimizing radiotherapy workflows through AI, including sparse medical imaging, medical image synthesis, and radiotherapy beam data modeling.

Administrative Appointments


  • Associate Editor, Medical Physics (2023 - Present)

Honors & Awards


  • Seed Grant, Stanford Institute for Human-Centered Artificial Intelligence (2021)
  • Summa Cum Laude Merit Award, International Society for Magnetic Resonance in Medicine (ISMRM) (2018)
  • Barbour Scholarship, University of Michigan (2017)

Boards, Advisory Committees, Professional Organizations


  • Member, Women in Radiation Oncology Work Groups, ASTRO (American Society for Radiation Oncology) (2024 - Present)
  • Member, AAPM (American Association of Physicists in Medicine) (2018 - Present)

Professional Education


  • PhD, University of Michigan, Electrical Engineering (2018)
  • Physics Residency, University of Michigan Health System, Therapeutic Medical Physics (2020)

Patents


  • James Balter, Yue Cao, Lianli Liu, Adam Johansson. "Hierarchical Motion Modeling from Dynamic Magnetic Resonance Imaging." United States.

Current Research and Scholarly Interests


Dr. Liu's research focuses on optimizing radiotherapy workflows through AI. Specifically, she is interested in:
1. Optimizing medical imaging for image-guided radiotherapy, including:
Sparse imaging for real-time monitoring of patient dynamics;
Accelerated longitudinal imaging for efficient post-treatment patient follow-up;
High-quality functional imaging for treatment response prediction and treatment plan adaptation;
Medical image synthesis for reduced imaging modality requirements and costs.

2. Optimizing the clinical workflow for radiation beam commissioning and quality assurance, including:
Sparse beam dosimetry through beam data modeling;
Model-based detection of radiation measurement errors;
Longitudinal prediction of radiation beam changes;
Monte Carlo phase space modeling for efficient data representation and fast dose calculation.

All Publications


  • Parametric response mapping of co-registered intravoxel incoherent motion magnetic resonance imaging and positron emission tomography in locally advanced cervical cancer undergoing concurrent chemoradiation therapy PHYSICS & IMAGING IN RADIATION ONCOLOGY Capaldi, D. I., Wang, J., Liu, L., Sheth, V. R., Kidd, E. A., Hristov, D. H. 2024; 31
  • Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel, Switzerland) Kim, M., Wang, J. Y., Lu, W., Jiang, H., Stojadinovic, S., Wardak, Z., Dan, T., Timmerman, R., Wang, L., Chuang, C., Szalkowski, G., Liu, L., Pollom, E., Rahimy, E., Soltys, S., Chen, M., Gu, X. 2024; 11 (5)

    Abstract

    Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multiple onsets, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.

    View details for DOI 10.3390/bioengineering11050454

    View details for PubMedID 38790322

  • Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning. Medical physics Liu, L., Shen, L., Johansson, A., Balter, J. M., Cao, Y., Vitzthum, L., Xing, L. 2023

    Abstract

    Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve magnetic resonance (MR)-guided radiation treatment precision. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which is time-consuming and challenging to acquire in clinical settings. To investigate volumetric reconstruction of MRI from sparse samples of two orthogonal slices aided by sparse priors of two static 3D MRI through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy. A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset, where the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRI with different breathing motion states, prior information of patient breathing motion was embedded into network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired using a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRI with inhale and exhale states, respectively, were selected from the first 30 s of the time series for prior embedding and augmentation. The remaining 4.5-min time series was used for volumetric reconstruction evaluation, where we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability of supporting 3D target motion tracking. Across the seven patients evaluated, the peak signal-to-noise-ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structure similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min time period, gross tumor volume (GTV) motion estimated by deforming a reference state MRI to model-reconstructed and ground truth MRI showed good consistency. The 95-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, which is less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions. A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI were needed to train the model to learn prior information of patient breathing motion for sparse image reconstruction. The proposed model has the potential of supporting 3D motion tracking during MR-guided radiotherapy for improved treatment precision and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.

    View details for DOI 10.1002/mp.16845

    View details for PubMedID 38014764

  • Adaptive Region-Specific Loss for Improved Medical Image Segmentation. IEEE transactions on pattern analysis and machine intelligence Chen, Y., Yu, L., Wang, J., Panjwani, N., Obeid, J., Liu, W., Liu, L., Kovalchuk, N., Gensheimer, M. F., Vitzthum, L. K., Beadle, B. M., Chang, D. T., Le, Q., Han, B., Xing, L. 2023; PP

    Abstract

    Defining the loss function is an important part of neural network design and critically determines the success of deep learning modeling. A significant shortcoming of the conventional loss functions is that they weight all regions in the input image volume equally, despite the fact that the system is known to be heterogeneous (i.e., some regions can achieve high prediction performance more easily than others). Here, we introduce a region-specific loss to lift the implicit assumption of homogeneous weighting for better learning. We divide the entire volume into multiple sub-regions, each with an individualized loss constructed for optimal local performance. Effectively, this scheme imposes higher weightings on the sub-regions that are more difficult to segment, and vice versa. Furthermore, the regional false positive and false negative errors are computed for each input image during a training step and the regional penalty is adjusted accordingly to enhance the overall accuracy of the prediction. Using different public and in-house medical image datasets, we demonstrate that the proposed regionally adaptive loss paradigm outperforms conventional methods in the multi-organ segmentations, without any modification to the neural network architecture or additional data preparation.

    View details for DOI 10.1109/TPAMI.2023.3289667

    View details for PubMedID 37363838

  • Modeling linear accelerator (Linac) beam data by implicit neural representation learning for commissioning and quality assurance applications. Medical physics Liu, L., Shen, L., Yang, Y., Schüler, E., Zhao, W., Wetzstein, G., Xing, L. 2023

    Abstract

    Linear accelerator (Linac) beam data commissioning and quality assurance (QA) play a vital role in accurate radiation treatment delivery and entail a large number of measurements using a variety of field sizes. How to optimize the data acquisition effort while maintaining a high quality of medical physics practice has been a long-sought goal. We propose to model Linac beam data through implicit neural representation (NeRP) learning. The potential of the beam model in predicting beam data from sparse measurements and detecting data collection errors was evaluated, with the goal of using the beam model to verify beam data collection accuracy and simplify the commissioning and QA process. NeRP models with continuous and differentiable functions parameterized by multilayer perceptrons (MLPs) were used to represent various beam data including percentage depth dose and profiles of 6 MV beams with and without flattening filter. Prior knowledge of the beam data was embedded into the MLP network by learning the NeRP of a vendor-provided "golden" beam dataset. The prior-embedded network was then trained to fit clinical beam data collected at one field size and used to predict beam data at other field sizes. We evaluated the prediction accuracy by comparing network-predicted beam data to water tank measurements collected from 14 clinical Linacs. Beam datasets with intentionally introduced errors were used to investigate the potential use of the NeRP model for beam data verification, by evaluating the model performance when trained with erroneous beam data samples. Linac beam data predicted by the model agreed well with water tank measurements, with averaged Gamma passing rates (1%/1 mm passing criteria) higher than 95% and averaged mean absolute errors less than 0.6%. Beam data samples with measurement errors were revealed by inconsistent beam predictions between networks trained with correct versus erroneous data samples, characterized by a Gamma passing rate lower than 90%. A NeRP beam data modeling technique has been established for predicting beam characteristics from sparse measurements. The model provides a valuable tool to verify beam data collection accuracy and promises to simplify commissioning/QA processes by reducing the number of measurements without compromising the quality of medical physics service.

    View details for DOI 10.1002/mp.16212

    View details for PubMedID 36621812

  • Real Time Volumetric MRI for 3D Motion Tracking via Geometry-Informed Deep Learning. Medical physics Liu, L., Shen, L., Johansson, A., Balter, J. M., Cao, Y., Chang, D., Xing, L. 2022

    Abstract

    To develop a geometry-informed deep learning framework for volumetric MRI with sub-second acquisition time in support of 3D motion tracking, which is highly desirable for improved radiotherapy precision but hindered by the long image acquisition time. A 2D-3D deep learning network with an explicitly defined geometry module that embeds geometric priors of the k-space encoding pattern was investigated, where a 2D generation network first augmented the sparsely sampled image dataset by generating new 2D representations of the underlying 3D subject. A geometry module then unfolded the 2D representations to the volumetric space. Finally, a 3D refinement network took the unfolded 3D data and outputted high-resolution volumetric images. Patient-specific models were trained for 7 abdominal patients to reconstruct volumetric MRI from both orthogonal cine slices and sparse radial samples. To evaluate the robustness of the proposed method to longitudinal patient anatomy and position changes, we tested the trained model on separate datasets acquired more than one month later and evaluated 3D target motion tracking accuracy using the model-reconstructed images by deforming a reference MRI with gross tumor volume (GTV) contours to a 5-min time series of both ground truth and model-reconstructed volumetric images with a temporal resolution of 340 ms. Across the 7 patients evaluated, the median distances between model-predicted and ground truth GTV centroids in the superior-inferior direction were 0.4±0.3 mm and 0.5±0.4 mm for cine and radial acquisitions, respectively. The 95-percentile Hausdorff distances between model-predicted and ground truth GTV contours were 4.7±1.1 mm and 3.2±1.5 mm for cine and radial acquisitions, which are of the same scale as the cross-plane image resolution. Incorporating geometric priors into the deep learning model enables volumetric imaging with high spatial and temporal resolution, which is particularly valuable for 3D motion tracking and has the potential of greatly improving MRI-guided radiotherapy precision.

    View details for DOI 10.1002/mp.15822

    View details for PubMedID 35766221

  • Volumetric prediction of breathing and slow drifting motion in the abdomen using radial MRI and multi-temporal resolution modeling. Physics in medicine and biology Liu, L., Johansson, A., Cao, Y., Lawrence, T. S., Balter, J. M. 2021; 66 (17)

    Abstract

    Abdominal organ motions introduce geometric uncertainties to radiotherapy. This study investigates a multi-temporal resolution 3D motion prediction scheme that accounts for both breathing and slow drifting motion in the abdomen in support of MRI-guided radiotherapy. Ten-minute MRI scans were acquired for 8 patients using a volumetric golden-angle stack-of-stars sequence. The first five minutes was used for patient-specific motion modeling. Fast breathing motion was modeled from high temporal resolution radial k-space samples, which served as a navigator signal to sort k-space data into different bins for high spatial resolution reconstruction of breathing motion states. Slow drifting motion was modeled from a lower temporal resolution image time series which was reconstructed by sequentially combining a large number of breathing-corrected k-space samples. Principal components analysis (PCA) was performed on deformation fields between different motion states. Gaussian kernel regression and linear extrapolation were used to predict PCA coefficients of future motion states for breathing motion (340 ms ahead of acquisition) and slow drifting motion (8.5 s ahead of acquisition), respectively. k-space data from the remaining five minutes was used to compare ground truth motion states obtained from retrospective reconstruction/deformation with predictions. Median distances between predicted and ground truth centroid positions of gross tumor volume (GTV) and organs at risk (OARs) were less than 1 mm on average. 95-percentile Hausdorff distances between predicted and ground truth GTV contours of various breathing motion states were 2 mm on average, which was smaller than the imaging resolution, and 95-percentile Hausdorff distances between predicted and ground truth OAR contours of different slow drifting motion states were less than 0.2 mm.
These results suggest that multi-temporal resolution motion models are capable of volumetric predictions of breathing and slow drifting motion with sufficient accuracy and temporal resolution for MRI-based tracking, and thus have potential for supporting MRI-guided abdominal radiotherapy.

    View details for DOI 10.1088/1361-6560/ac1f37

    View details for PubMedID 34412047

  • Modeling intra-fractional abdominal configuration changes using breathing motion-corrected radial MRI PHYSICS IN MEDICINE AND BIOLOGY Liu, L., Johansson, A., Cao, Y., Kashani, R., Lawrence, T. S., Balter, J. M. 2021; 66 (8)

    Abstract

    Abdominal organ motions introduce geometric uncertainties to gastrointestinal radiotherapy. This study investigated slow drifting motion induced by changes of internal anatomic organ arrangements using a 3D radial MRI sequence with a scan length of 20 minutes. Breathing motion and cyclic GI motion were first removed through multi-temporal resolution image reconstruction. Slow drifting motion analysis was performed using an image time series consisting of 72 image volumes with a temporal sampling rate of 17 seconds. B-spline deformable registration was performed to align image volumes of the time series to a reference volume. The resulting deformation fields were used for motion velocity evaluation and patient-specific motion model construction through principal component analysis (PCA). Geometric uncertainties introduced by slow drifting motion were assessed by Hausdorff distances between unions of organs at risk (OARs) at different motion states and reference OAR contours as well as probabilistic distributions of OARs predicted using the PCA model. Thirteen examinations from 11 patients were included in this study. The averaged motion velocities ranged from 0.8 to 1.9 mm/min, 0.7 to 1.6 mm/min, 0.6 to 2.0 mm/min and 0.7 to 1.4 mm/min for the small bowel, colon, duodenum and stomach respectively; the averaged Hausdorff distances were 5.6 mm, 5.3 mm, 5.1 mm and 4.6 mm. On average, a margin larger than 4.5 mm was needed to cover a space with OAR occupancy probability higher than 55%. Temporal variations of geometric uncertainties were evaluated by comparing across four 5-min sub-scans extracted from the full scan. Standard deviations of Hausdorff distances across sub-scans were less than 1mm for most examinations, indicating stability of relative margin estimates from separate time windows. 
    These results suggest that slow drifting motion of GI organs is significant and that geometric uncertainties introduced by such motion should be accounted for during radiotherapy planning and delivery.

    View details for DOI 10.1088/1361-6560/abef42

    View details for Web of Science ID 000639521700001

    View details for PubMedID 33725676

  • Abdominal synthetic CT generation from MR Dixon images using a U-net trained with 'semi-synthetic' CT data PHYSICS IN MEDICINE AND BIOLOGY Liu, L., Johansson, A., Cao, Y., Dow, J., Lawrence, T. S., Balter, J. M. 2020; 65 (12): 125001

    Abstract

    Magnetic resonance imaging (MRI) is gaining popularity in guiding radiation treatment for intrahepatic cancers due to its superior soft tissue contrast and potential of monitoring individual motion and liver function. This study investigates a deep learning-based method that generates synthetic CT volumes from T1-weighted MR Dixon images in support of MRI-based intrahepatic radiotherapy treatment planning. Training deep neural networks for this purpose has been challenged by mismatches between CT and MR images due to motion and different organ filling status. This work proposes to resolve this challenge by generating 'semi-synthetic' CT images from rigidly aligned CT and MR image pairs. Contrasts within skeletal elements of the 'semi-synthetic' CT images were determined from CT images, while contrasts of soft tissue and air volumes were determined from voxel-wise intensity classification results on MR images. The resulting 'semi-synthetic' CT images were paired with their corresponding MR images and used to train a simple U-net model without adversarial components. MR and CT scans of 46 patients were investigated and the proposed method was evaluated for 31 patients with clinical radiotherapy plans, using 3-fold cross validation. The averaged mean absolute errors between synthetic CT and CT images across patients were 24.10 HU for liver, 28.62 HU for spleen, 47.05 HU for kidneys, 29.79 HU for spinal cord, 105.68 HU for lungs and 110.09 HU for vertebral bodies. VMAT and IMRT plans were optimized using CT-derived electron densities, and doses were recalculated using corresponding synthetic CT-derived density grids. Resulting dose differences to planning target volumes and various organs at risk were small, with the average difference less than 0.15 Gy for all dose metrics evaluated.
The similarities in both image intensity and radiation dose distributions between CT and synthetic CT volumes demonstrate the accuracy of the method and its potential in supporting MRI-only radiotherapy treatment planning.

    View details for DOI 10.1088/1361-6560/ab8cd2

    View details for Web of Science ID 000542583000001

    View details for PubMedID 32330923

    View details for PubMedCentralID PMC7991979

  • Accelerated high b-value diffusion-weighted MR imaging via phase-constrained low-rank tensor model. Liu, L., Johansson, A., Balter, J. M., Cao, Y., Fessler, J. A. IEEE, 2018: 344-348
  • Synthetic CT for MRI-based liver stereotactic body radiotherapy treatment planning PHYSICS IN MEDICINE AND BIOLOGY Bredfeldt, J. S., Liu, L., Feng, M., Cao, Y., Balter, J. M. 2017; 62 (8): 2922-2934

    Abstract

    A technique for generating MRI-derived synthetic CT volumes (MRCTs) is demonstrated in support of adaptive liver stereotactic body radiation therapy (SBRT). Under IRB approval, 16 subjects with hepatocellular carcinoma were scanned using a single MR pulse sequence (T1 Dixon). Air-containing voxels were identified by intensity thresholding on T1-weighted, water and fat images. The envelope of the anterior vertebral bodies was segmented from the fat image and fuzzy-C-means (FCM) was used to classify each non-air voxel as mid-density, lower-density, bone, or marrow in the abdomen, with only bone and marrow classified within the vertebral body envelope. MRCT volumes were created by integrating the product of the FCM class probability with its assigned class density for each voxel. MRCTs were deformably aligned with corresponding planning CTs and 2-ARC-SBRT-VMAT plans were optimized on MRCTs. Fluence was copied onto the CT density grids, dose recalculated, and compared. The liver, vertebral bodies, kidneys, spleen and cord had median Hounsfield unit differences of less than 60. Median target dose metrics were all within 0.1 Gy with maximum differences less than 0.5 Gy. OAR dose differences were similarly small (median: 0.03 Gy, std:0.26 Gy). Results demonstrate that MRCTs derived from a single abdominal imaging sequence are promising for use in SBRT dose calculation.

    View details for DOI 10.1088/1361-6560/aa5059

    View details for Web of Science ID 000425860000001

    View details for PubMedID 28306547

    View details for PubMedCentralID PMC5495654

  • Female pelvic synthetic CT generation based on joint intensity and shape analysis. Physics in medicine and biology Liu, L., Jolly, S., Cao, Y., Vineberg, K., Fessler, J. A., Balter, J. M. 2017; 62 (8): 2935-2949

    Abstract

    Using MRI for radiotherapy treatment planning and image guidance is appealing as it provides superior soft tissue information over CT scans and avoids possible systematic errors introduced by aligning MR to CT images. This study presents a method that generates Synthetic CT (MRCT) volumes by performing probabilistic tissue classification of voxels from MRI data using a single imaging sequence (T1 Dixon). The intensity overlap between different tissues on MR images, a major challenge for voxel-based MRCT generation methods, is addressed by adding bone shape information to an intensity-based classification scheme. A simple pelvic bone shape model, built from principal component analysis of pelvis shape from 30 CT image volumes, is fitted to the MR volumes. The shape model generates a rough bone mask that excludes air and covers bone along with some surrounding soft tissues. Air regions are identified and masked out from the tissue classification process by intensity thresholding outside the bone mask. A regularization term is added to the fuzzy c-means classification scheme that constrains voxels outside the bone mask from being assigned memberships in the bone class. MRCT image volumes are generated by multiplying the probability of each voxel being represented in each class with assigned attenuation values of the corresponding class and summing the result across all classes. The MRCT images presented intensity distributions similar to CT images with a mean absolute error of 13.7 HU for muscle, 15.9 HU for fat, 49.1 HU for intra-pelvic soft tissues, 129.1 HU for marrow and 274.4 HU for bony tissues across 9 patients. Volumetric modulated arc therapy (VMAT) plans were optimized using MRCT-derived electron densities, and doses were recalculated using corresponding CT-derived density grids. Dose differences to planning target volumes were small with mean/standard deviation of 0.21/0.42 Gy for D0.5cc and 0.29/0.33 Gy for D99%. 
    The results demonstrate the accuracy of the method and its potential in supporting MRI-only radiotherapy treatment planning.

    View details for DOI 10.1088/1361-6560/62/8/2935

    View details for PubMedID 28306550

    View details for PubMedCentralID PMC5495652

  • A female pelvic bone shape model for air/bone separation in support of synthetic CT generation for radiation therapy PHYSICS IN MEDICINE AND BIOLOGY Liu, L., Cao, Y., Fessler, J. A., Jolly, S., Balter, J. M. 2016; 61 (1): 169-182

    Abstract

    Separating bone from air in MR data is one of the major challenges in using MR images to derive synthetic CT. The problem is further complicated when the anatomic regions filled with air are altered across scans due to air mobility, for instance, in pelvic regions, thereby the air regions estimated using an ultrashort echo time (UTE) sequence are invalid in other image series acquired for multispectral classification. This study aims to develop and investigate a female pelvic bone shape model to identify low intensity regions in MRI where air is unlikely to be present in support of synthetic CT generation without UTE imaging. CT scans of 30 patients were collected for the study, 17 of them also have corresponding MR scans. The shape model was built from the CT dataset, where the reference image was aligned to each of the training images using B-spline deformable registration. Principal component analysis was performed on B-spline coefficients for a compact model where shape variance was described by linear combination of principal modes. The model was applied to identify pelvic bone in MR images by deforming the corresponding MR data of the reference image to target MR images, where the search space of the deformation process was constrained within the subspace spanned by principal modes. The local minima in the search space were removed effectively by the shape model, thus supporting an efficient binary search for the optimal solution. We evaluated the model by its efficacy in identifying bone voxels and excluding air regions. The model was tested across the 17 patients that have corresponding MR scans using a leave-one-out cross validation. A simple model using the first leading principal mode only was found to achieve reasonable accuracy, where an averaged 87% of bone voxels were correctly identified. Finally dilation of the optimally fit bone mask by 5 mm was found to cover 96% of bone voxels while minimally impacting the overlap with air (below 0.4%).

    View details for DOI 10.1088/0031-9155/61/1/169

    View details for Web of Science ID 000369075400014

    View details for PubMedID 26624989

    View details for PubMedCentralID PMC4718197