Bio


Dr. Yongkai Liu is a postdoctoral scholar at Stanford’s Center for Advanced Functional Neuroimaging, led by Drs. Greg Zaharchuk and Michael Moseley. His research focuses on medical image segmentation and classification, PET/MRI, and artificial intelligence. Dr. Liu received his Ph.D. in Physics & Biology in Medicine from the University of California, Los Angeles (UCLA), under the supervision of Prof. Kyung Sung. During his master's studies, he worked on CT virtual colonoscopy under the supervision of Prof. Jerome Liang and Chaijie Duan. He has served as a peer reviewer for several leading journals in medical imaging, including IEEE Transactions on Medical Imaging (TMI), Medical Physics, IEEE Transactions on Radiation and Plasma Medical Sciences, and IEEE Transactions on Biomedical Engineering.

Professional Education


  • M.S., Tsinghua University, Biomedical Engineering (2017)
  • Ph.D., University of California, Los Angeles, Physics & Biology in Medicine (2022)

Stanford Advisors


All Publications


  • Evaluation of Spatial Attentive Deep Learning for Automatic Placental Segmentation on Longitudinal MRI JOURNAL OF MAGNETIC RESONANCE IMAGING Liu, Y., Zabihollahy, F., Yan, R., Lee, B., Janzen, C., Devaskar, S., Sung, K. 2022: 1533-1540

    Abstract

    Automated segmentation of the placenta by MRI in early pregnancy may help predict normal and aberrant placenta function, which could improve the efficiency of placental assessment and the prediction of pregnancy outcomes. An automated segmentation method that works at one gestational age may not transfer effectively to other gestational ages. The aim was to evaluate a spatial attentive deep learning method (SADL) for automated placental segmentation on longitudinal placental MRI scans. The study was prospective and single-center. A total of 154 pregnant women who underwent MRI scans at both 14-18 weeks of gestation and at 19-24 weeks of gestation were divided into training (N = 108), validation (N = 15), and independent testing (N = 31) datasets. Imaging used a 3 T, T2-weighted half Fourier single-shot turbo spin-echo (T2-HASTE) sequence. The reference standard of placental segmentation was manual delineation on T2-HASTE by a third-year neonatology clinical fellow (B.L.) under the supervision of an experienced maternal-fetal medicine specialist (C.J., with 20 years of experience) and an MRI scientist (K.S., with 19 years of experience). The three-dimensional Dice similarity coefficient (DSC) was used to measure the automated segmentation performance compared to the manual placental segmentation. A paired t-test was used to compare the DSCs between SADL and U-Net methods. A Bland-Altman plot was used to analyze the agreement between manual and automated placental volume measurements. A P value < 0.05 was considered statistically significant. In the testing dataset, SADL achieved average DSCs of 0.83 ± 0.06 and 0.84 ± 0.05 in the first and second MRI, which were significantly higher than those achieved by U-Net (0.77 ± 0.08 and 0.76 ± 0.10, respectively). A total of 6 out of 62 MRI scans (9.6%) had volume measurement differences between the SADL-based automated and manual volume measurements that were outside the 95% limits of agreement. SADL can automatically detect and segment the placenta with high performance in MRI at two different gestational ages. Level of Evidence: 4. Technical Efficacy: Stage 2.

    View details for DOI 10.1002/jmri.28403

    View details for PubMedCentralID PMC10080136
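
    The segmentation performance above is summarized by the three-dimensional Dice similarity coefficient (DSC) between the automated and manual placenta masks. Below is a minimal NumPy sketch of how such an overlap metric is typically computed; the function and the toy masks are illustrative and not taken from the paper's code.

      import numpy as np

      def dice_coefficient(pred, ref):
          """3D Dice similarity coefficient between two binary masks."""
          pred = np.asarray(pred, dtype=bool)
          ref = np.asarray(ref, dtype=bool)
          denom = pred.sum() + ref.sum()
          if denom == 0:
              return 1.0  # both masks empty: treat as perfect agreement
          return 2.0 * np.logical_and(pred, ref).sum() / denom

      # Toy example: two overlapping 3D masks
      pred = np.zeros((4, 4, 4), dtype=np.uint8)
      ref = np.zeros((4, 4, 4), dtype=np.uint8)
      pred[1:3, 1:3, 1:3] = 1   # 8 voxels
      ref[1:4, 1:3, 1:3] = 1    # 12 voxels, 8 of them shared
      print(f"DSC = {dice_coefficient(pred, ref):.2f}")  # 2*8 / (8+12) = 0.80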

  • Multiparametric MRI-based radiomics model to predict pelvic lymph node invasion for patients with prostate cancer EUROPEAN RADIOLOGY Zheng, H., Miao, Q., Liu, Y., Mirak, S., Hosseiny, M., Scalzo, F., Raman, S. S., Sung, K. 2022

    Abstract

    The aim was to identify which patients with prostate cancer (PCa) could safely avoid extended pelvic lymph node dissection (ePLND) by predicting lymph node invasion (LNI) via a radiomics-based machine learning approach. An integrative radiomics model (IRM) was proposed to predict LNI, confirmed by histopathologic examination, by integrating radiomics features extracted from prostatic index lesion regions on MRI images with clinical features via an SVM. The study cohort comprised 244 PCa patients with MRI followed by radical prostatectomy (RP) and ePLND within 6 months between 2010 and 2019. The proposed IRM was trained on the training/validation set and evaluated on an internal independent testing set. The model's performance was measured by area under the curve (AUC), sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). AUCs were compared via the DeLong test with 95% confidence intervals (CI), and the remaining measurements were compared via the chi-squared test or Fisher's exact test. Overall, 17 (10.6%) and 14 (16.7%) patients with LNI were included in the training/validation set and testing set, respectively. Shape and first-order radiomics features showed usefulness in building the IRM. The proposed IRM achieved an AUC of 0.915 (95% CI: 0.846-0.984) in the testing set, superior to pre-existing nomograms whose AUCs ranged from 0.698 to 0.724 (p < 0.05). The proposed IRM could potentially be used to predict the risk of LNI for patients with PCa. With the improved predictability, it could be utilized to assess which patients with PCa could safely avoid ePLND, thus reducing the number of unnecessary ePLNDs.

    • The combination of MRI-based radiomics features with clinical information improved the prediction of lymph node invasion, compared with models using only radiomics features or only clinical features.

    • With improved performance in predicting lymph node invasion, the number of extended pelvic lymph node dissections (ePLND) could be reduced by the proposed integrative radiomics model (IRM), compared with the existing nomograms.

    View details for DOI 10.1007/s00330-022-08625-6

    View details for Web of Science ID 000763863700003

    View details for PubMedID 35238971
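
    The integrative radiomics model above fuses lesion radiomics features with clinical variables in an SVM classifier. The sketch below shows one common way to implement that kind of feature-level fusion with scikit-learn; the synthetic features, labels, and split are placeholders, not the study's data or exact pipeline.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n_patients = 244
      radiomics = rng.normal(size=(n_patients, 50))   # e.g., shape and first-order features
      clinical = rng.normal(size=(n_patients, 4))     # e.g., PSA, age (synthetic stand-ins)
      y = rng.integers(0, 2, size=n_patients)         # 1 = lymph node invasion (synthetic labels)

      X = np.hstack([radiomics, clinical])            # simple feature-level fusion
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.3, stratify=y, random_state=0)

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
      model.fit(X_train, y_train)
      probs = model.predict_proba(X_test)[:, 1]
      print(f"AUC = {roc_auc_score(y_test, probs):.3f}")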

  • Deep Learning Enables Prostate MRI Segmentation: A Large Cohort Evaluation With Inter-Rater Variability Analysis FRONTIERS IN ONCOLOGY Liu, Y., Miao, Q., Surawech, C., Zheng, H., Nguyen, D., Yang, G., Raman, S. S., Sung, K. 2021; 11: 801876

    Abstract

    Whole-prostate gland (WPG) segmentation plays a significant role in prostate volume measurement, treatment, and biopsy planning. This study evaluated a previously developed automatic WPG segmentation model, the deep attentive neural network (DANN), on a large, continuous patient cohort to test its feasibility in a clinical setting. With IRB approval and HIPAA compliance, the study cohort included 3,698 3T MRI scans acquired between 2016 and 2020. In total, 335 MRI scans were used to train the model, and 3,210 and 100 were used to conduct the qualitative and quantitative evaluation of the model. In addition, the DANN-enabled prostate volume estimation was evaluated by using 50 MRI scans in comparison with manual prostate volume estimation. For qualitative evaluation, visual grading was used to evaluate the performance of WPG segmentation by two abdominal radiologists, and DANN demonstrated either acceptable or excellent performance in over 96% of the testing cohort on the WPG or each prostate sub-portion (apex, midgland, or base). Two radiologists reached a substantial agreement on WPG and midgland segmentation (κ = 0.75 and 0.63) and moderate agreement on apex and base segmentation (κ = 0.56 and 0.60). For quantitative evaluation, DANN demonstrated a Dice similarity coefficient of 0.93 ± 0.02, significantly higher than other baseline methods, such as DeepLab v3+ and U-Net (both p values < 0.05). For the volume measurement, 96% of the evaluation cohort achieved differences between the DANN-enabled and manual volume measurement within 95% limits of agreement. In conclusion, the study showed that the DANN achieved sufficient and consistent WPG segmentation on a large, continuous study cohort, demonstrating its great potential to serve as a tool to measure prostate volume.

    View details for DOI 10.3389/fonc.2021.801876

    View details for Web of Science ID 000739069500001

    View details for PubMedID 34993152

    View details for PubMedCentralID PMC8724207
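
    The inter-rater analysis above reports agreement between the two radiologists as Cohen's kappa. A short sketch of that computation with scikit-learn follows; the visual grades are made up for illustration.

      from sklearn.metrics import cohen_kappa_score

      # Illustrative visual grades from two readers (0 = unacceptable, 1 = acceptable, 2 = excellent)
      reader_1 = [2, 2, 1, 2, 0, 1, 2, 2, 1, 2]
      reader_2 = [2, 1, 1, 2, 0, 2, 2, 2, 1, 2]

      kappa = cohen_kappa_score(reader_1, reader_2)
      print(f"Cohen's kappa = {kappa:.2f}")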

  • Textured-Based Deep Learning in Prostate Cancer Classification with 3T Multiparametric MRI: Comparison with PI-RADS-Based Classification DIAGNOSTICS Liu, Y., Zheng, H., Liang, Z., Miao, Q., Brisbane, W. G., Marks, L. S., Raman, S. S., Reiter, R. E., Yang, G., Sung, K. 2021; 11 (10)

    Abstract

    The current standardized scheme for interpreting MRI requires a high level of expertise and exhibits a significant degree of inter-reader and intra-reader variability. An automated prostate cancer (PCa) classification can improve the ability of MRI to assess the spectrum of PCa. The purpose of the study was to evaluate the performance of a texture-based deep learning model (Textured-DL) for differentiating between clinically significant PCa (csPCa) and non-csPCa and to compare Textured-DL with Prostate Imaging Reporting and Data System (PI-RADS)-based classification (PI-RADS-CLA), where a threshold of PI-RADS ≥ 4, representing highly suspicious lesions for csPCa, was applied. The study cohort included 402 patients (60% (n = 239) of patients for training, 10% (n = 42) for validation, and 30% (n = 121) for testing) with 3T multiparametric MRI matched with whole-mount histopathology after radical prostatectomy. For a given suspicious prostate lesion, the volumetric patches of T2-weighted MRI and apparent diffusion coefficient images were cropped and used as the input to Textured-DL, consisting of a 3D gray-level co-occurrence matrix extractor and a CNN. PI-RADS-CLA by an expert reader served as a baseline to compare classification performance with Textured-DL in differentiating csPCa from non-csPCa. Sensitivity and specificity comparisons were performed using McNemar's test. Bootstrapping with 1000 samples was performed to estimate the 95% confidence interval (CI) for AUC. CIs of sensitivity and specificity were calculated by the Wald method. The Textured-DL model achieved an AUC of 0.85 (CI [0.79, 0.91]), which was significantly higher than the PI-RADS-CLA (AUC of 0.73 (CI [0.65, 0.80]); p < 0.05) for PCa classification, and the specificity was significantly different between Textured-DL and PI-RADS-CLA (0.70 (CI [0.59, 0.82]) vs. 0.47 (CI [0.35, 0.59]); p < 0.05). In sub-analyses, Textured-DL demonstrated significantly higher specificities in the peripheral zone (PZ) and solitary tumor lesions compared to the PI-RADS-CLA (0.78 (CI [0.66, 0.90]) vs. 0.42 (CI [0.28, 0.57]); 0.75 (CI [0.54, 0.96]) vs. 0.38 (CI [0.14, 0.61]); all p values < 0.05). Moreover, Textured-DL demonstrated a high negative predictive value of 92% while maintaining a high positive predictive value of 58% among the lesions with a PI-RADS score of 3. In conclusion, the Textured-DL model was superior to the PI-RADS-CLA in the classification of PCa. In addition, Textured-DL demonstrated superior specificities for the peripheral zone and solitary tumors compared with PI-RADS-based risk assessment.

    View details for DOI 10.3390/diagnostics11101785

    View details for Web of Science ID 000712235200001

    View details for PubMedID 34679484

    View details for PubMedCentralID PMC8535024
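
    Textured-DL above feeds a 3D gray-level co-occurrence matrix (GLCM) into a CNN. The sketch below computes a single-offset 3D GLCM and two classic texture features with NumPy; it is a generic illustration of the idea, not the paper's learnable extractor.

      import numpy as np

      def glcm_3d(patch, levels=8, offset=(0, 0, 1)):
          """Normalized gray-level co-occurrence matrix of a 3D patch for one non-negative voxel offset."""
          edges = np.linspace(patch.min(), patch.max(), levels + 1)[1:-1]
          q = np.digitize(patch, edges)               # quantize to `levels` gray levels (0..levels-1)
          dz, dy, dx = offset
          src = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
          dst = q[dz:, dy:, dx:]
          mat = np.zeros((levels, levels))
          np.add.at(mat, (src.ravel(), dst.ravel()), 1)  # count co-occurring gray-level pairs
          return mat / mat.sum()

      def glcm_features(mat):
          """Two common GLCM texture descriptors: contrast and homogeneity."""
          i, j = np.indices(mat.shape)
          contrast = np.sum(mat * (i - j) ** 2)
          homogeneity = np.sum(mat / (1.0 + np.abs(i - j)))
          return contrast, homogeneity

      patch = np.random.default_rng(0).normal(size=(16, 16, 16))  # stand-in for a cropped lesion patch
      print(glcm_features(glcm_3d(patch)))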

  • Integrative Machine Learning Prediction of Prostate Biopsy Results From Negative Multiparametric MRI JOURNAL OF MAGNETIC RESONANCE IMAGING Zheng, H., Miao, Q., Liu, Y., Raman, S. S., Scalzo, F., Sung, K. 2022; 55 (1): 100-110

    Abstract

    Multiparametric MRI (mpMRI) is commonly recommended as a triage test prior to any prostate biopsy. However, there is limited consensus on which patients with a negative prostate mpMRI could avoid prostate biopsy. The aim was to identify which patients could safely avoid prostate biopsy when the prostate mpMRI is negative, via a radiomics-based machine learning approach. The study was retrospective. Three hundred thirty patients with negative prostate 3T mpMRI between January 2016 and December 2018 were included. Imaging used a 3.0 T scanner with T2-weighted turbo spin echo (TSE) imaging (T2WI) and diffusion-weighted imaging (DWI). The integrative machine learning (iML) model was trained to predict negative prostate biopsy results, utilizing both radiomics and clinical features. The final study cohort comprised 330 consecutive patients with negative mpMRI (PI-RADS < 3) who underwent systematic transrectal ultrasound-guided (TRUS) or MR-ultrasound fusion (MRUS) biopsy within 6 months. A secondary analysis of the biopsy-naïve subcohort (n = 227) was also conducted. The Mann-Whitney U test and the chi-squared test were utilized to evaluate the significance of differences in clinical features between the biopsy-positive and biopsy-negative groups. The model performance was validated using leave-one-out cross-validation (LOOCV) and measured by AUC, sensitivity, specificity, and negative predictive value (NPV). Overall, 306/330 (NPV 92.7%) of the final study cohort patients had negative biopsies, and 207/227 (NPV 91.2%) of the biopsy-naïve subcohort patients had negative biopsies. Our iML model achieved NPVs of 98.3% and 98.0% for the study cohort and subcohort, respectively, superior to prostate-specific antigen density (PSAD)-based risk assessment with NPVs of 94.9% and 93.9%, respectively. The proposed iML model achieved high performance in predicting negative prostate biopsy results for patients with negative mpMRI. With improved NPVs, the proposed model could be used to stratify patients in whom biopsies might be obviated, thus reducing the number of unnecessary biopsies. Level of Evidence: 3. Technical Efficacy: Stage 2.

    View details for DOI 10.1002/jmri.27793

    View details for Web of Science ID 000664546200001

    View details for PubMedID 34160114

    View details for PubMedCentralID PMC8678175
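
    The evaluation above hinges on leave-one-out cross-validation and negative predictive value. Below is a compact sketch of that evaluation loop with scikit-learn, using synthetic features and a logistic-regression stand-in rather than the paper's integrative model.

      import numpy as np
      from sklearn.model_selection import LeaveOneOut
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 10))       # placeholder radiomics + clinical features
      y = rng.integers(0, 2, size=60)     # 1 = positive biopsy (synthetic labels)

      # Leave-one-out: fit on all patients but one, predict the held-out patient
      preds = np.empty_like(y)
      for train_idx, test_idx in LeaveOneOut().split(X):
          clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
          preds[test_idx] = clf.predict(X[test_idx])

      tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
      print(f"NPV = {tn / (tn + fn):.3f}, "
            f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")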

  • ME-Net: Multi-encoder net framework for brain tumor segmentation INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY Zhang, W., Yang, G., Huang, H., Yang, W., Xu, X., Liu, Y., Lai, X. 2021; 31 (4): 1834-1848

    View details for DOI 10.1002/ima.22571

    View details for Web of Science ID 000625884900001

  • 3D PBV-Net: An automated prostate MRI data segmentation method COMPUTERS IN BIOLOGY AND MEDICINE Jin, Y., Yang, G., Fang, Y., Li, R., Xu, X., Liu, Y., Lai, X. 2021; 128: 104160

    Abstract

    Prostate cancer is one of the most common deadly diseases in men worldwide, seriously affecting health and quality of life. Reliable and automated segmentation of the prostate gland in MRI data is exceptionally critical for diagnosis and treatment planning of prostate cancer. Although many automated segmentation methods have emerged, including deep learning based approaches, segmentation performance is still poor due to the large variability of image appearance, anisotropic spatial resolution, and imaging interference. This study proposes an automated prostate MRI data segmentation approach using bicubic interpolation with an improved 3D V-Net (dubbed 3D PBV-Net). Considering the low-frequency components in the prostate gland, bicubic interpolation is applied to preprocess the MRI data. On this basis, a 3D PBV-Net is developed to perform prostate MRI data segmentation. To illustrate the effectiveness of our approach, we evaluate the proposed 3D PBV-Net on two clinical prostate MRI datasets, i.e., PROMISE 12 and TPHOH, with the manual delineations available as the ground truth. Our approach generates promising segmentation results, achieving average accuracies of 97.65% and 98.29%, Dice metrics of 0.9613 and 0.9765, Hausdorff distances of 3.120 mm and 0.9382 mm, and average boundary distances of 1.708 and 0.7950 on the PROMISE 12 and TPHOH datasets, respectively. Our method has effectively improved the accuracy of automated segmentation of prostate MRI data and is promising to meet the accuracy requirements for telehealth applications.

    View details for DOI 10.1016/j.compbiomed.2020.104160

    View details for Web of Science ID 000604568300002

    View details for PubMedID 33310694
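
    The 3D PBV-Net pipeline above preprocesses the MRI data with bicubic interpolation before segmentation. A minimal SciPy sketch of that kind of in-plane upsampling is shown below; the volume and scale factor are illustrative, and order=3 spline interpolation stands in for bicubic resampling.

      import numpy as np
      from scipy.ndimage import zoom

      # Placeholder low-resolution prostate MRI volume: (slices, height, width)
      volume = np.random.default_rng(2).normal(size=(24, 128, 128)).astype(np.float32)

      # Upsample in-plane by 2x with cubic (order=3) spline interpolation, keeping the slice count
      upsampled = zoom(volume, zoom=(1.0, 2.0, 2.0), order=3)
      print(volume.shape, "->", upsampled.shape)  # (24, 128, 128) -> (24, 256, 256)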

  • Exploring Uncertainty Measures in Bayesian Deep Attentive Neural Networks for Prostate Zonal Segmentation IEEE ACCESS Liu, Y., Yang, G., Hosseiny, M., Azadikhah, A., Mirak, S., Miao, Q., Raman, S. S., Sung, K. 2020; 8: 151817-151828

    Abstract

    Automatic segmentation of prostatic zones on multiparametric MRI (mpMRI) can improve the diagnostic workflow of prostate cancer. We designed a spatial attentive Bayesian deep learning network for the automatic segmentation of the peripheral zone (PZ) and transition zone (TZ) of the prostate with uncertainty estimation. The proposed method was evaluated by using internal and external independent testing datasets, and overall uncertainties of the proposed model were calculated at different prostate locations (apex, middle, and base). The study cohort included 351 MRI scans, of which 304 scans were retrieved from a de-identified publicly available dataset (PROSTATEX) and 47 scans were extracted from a large U.S. tertiary referral center (external testing dataset; ETD). All the PZ and TZ contours were drawn by research fellows under the supervision of expert genitourinary radiologists. Within the PROSTATEX dataset, 259 and 45 patients (internal testing dataset; ITD) were used to develop and validate the model. Then, the model was tested independently using the ETD only. The segmentation performance was evaluated using the Dice Similarity Coefficient (DSC). For PZ and TZ segmentation, the proposed method achieved mean DSCs of 0.80±0.05 and 0.89±0.04 on the ITD, as well as 0.79±0.06 and 0.87±0.07 on the ETD. For both PZ and TZ, there was no significant difference between the ITD and ETD for the proposed method. This DL-based method enabled accurate PZ and TZ segmentation, outperforming state-of-the-art methods (DeepLab V3+, Attention U-Net, R2U-Net, USE-Net, and U-Net). We observed that segmentation uncertainty peaked at the junction between the PZ, TZ, and anterior fibromuscular stroma (AFS). Also, the overall uncertainties were highly consistent with the actual model performance between PZ and TZ at three clinically relevant locations of the prostate.

    View details for DOI 10.1109/ACCESS.2020.3017168

    View details for Web of Science ID 000564244600001

    View details for PubMedID 33564563

    View details for PubMedCentralID PMC7869831
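
    The uncertainty estimates above come from a Bayesian deep learning formulation. One common approximation for this kind of segmentation uncertainty is Monte Carlo dropout; the PyTorch sketch below illustrates the general recipe with a tiny stand-in network, not the paper's spatial attentive architecture.

      import torch
      import torch.nn as nn

      class TinySegNet(nn.Module):
          """Tiny stand-in 2D segmentation network with dropout (illustrative only)."""
          def __init__(self, n_classes=3):   # e.g., background, PZ, TZ
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Dropout2d(p=0.2),
                  nn.Conv2d(16, n_classes, 1),
              )

          def forward(self, x):
              return self.body(x)

      def mc_dropout_predict(model, x, n_samples=20):
          """Keep dropout active at inference and average softmax outputs over repeated passes."""
          model.train()  # leaves Dropout2d stochastic; safe here because the net has no batch norm
          with torch.no_grad():
              probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
          mean_probs = probs.mean(dim=0)                                         # predictive probabilities
          entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)  # voxel-wise uncertainty
          return mean_probs, entropy

      x = torch.randn(1, 1, 64, 64)                    # placeholder T2-weighted slice
      mean_probs, uncertainty = mc_dropout_predict(TinySegNet(), x)
      print(mean_probs.shape, uncertainty.shape)       # (1, 3, 64, 64) and (1, 64, 64)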

  • Automatic Prostate Zonal Segmentation Using Fully Convolutional Network With Feature Pyramid Attention IEEE ACCESS Liu, Y., Yang, G., Afshari Mirak, S., Hosseiny, M., Azadikhah, A., Zhong, X., Reiter, R. E., Lee, Y., Raman, S. S., Sung, K. 2019; 7: 163626-163632

  • Haustral loop extraction for CT colonography using geodesics INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY Liu, Y., Duan, C., Liang, J., Hu, J., Lu, H., Luo, M. 2017; 12 (3): 379-388

    Abstract

    The human colon has complex geometric structures because of its haustral folds, which are thin flat protrusions on the colon wall. The haustral loop is the curve (approximately triangular in shape) that encircles the highly convex region of the haustral fold, and is regarded as the natural landmark of the colon, intersecting the longitude of the colon in the middle. Haustral loop extraction can assist in reducing the structural complexity of the colon, and the loops can also serve as anatomic markers for computed tomographic colonography (CTC). Moreover, haustral loop sectioning of the colon can help with the performance of precise prone-supine registration. We propose an accurate approach for extracting haustral loops for CT virtual colonoscopy based on geodesics. First, the longitudinal geodesic (LG) connecting the start and end points is tracked by the geodesic method and the colon is cut along the LG. Second, key points are extracted from the LG, after which paired points that are used for seeking the potential haustral loops are calculated according to the key points. Next, for each pair of points, the shortest distance (geodesic line) between the paired points is calculated twice, once on the original surface and once on the cut surface. Then, the two geodesics are combined to form a potential haustral loop. Finally, erroneous and nonstandard potential loops are removed. To evaluate the haustral loop extraction algorithm, we first utilized the algorithm to extract the haustral loops. Clinicians then determined whether the extracted haustral loops were correct and identified any missing loops. The extraction algorithm successfully detected 91.87% of all of the haustral loops with a very low false positive rate. We believe that haustral loop extraction may benefit many post-procedures in CTC, such as supine-prone registration, computer-aided diagnosis, and taenia coli extraction.

    View details for DOI 10.1007/s11548-016-1497-x

    View details for Web of Science ID 000394539600003

    View details for PubMedID 27854032

    View details for PubMedCentralID PMC5313587
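
    The haustral loop approach above relies on geodesic (shortest-path) computations over the colon surface mesh. The sketch below approximates a geodesic between two mesh vertices as a Dijkstra shortest path along triangle edges using SciPy; it illustrates the idea only and does not reproduce the paper's exact geodesic solver or loop-pairing steps.

      import numpy as np
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import dijkstra

      def mesh_geodesic_length(vertices, faces, src, dst):
          """Approximate geodesic distance between two mesh vertices as the
          shortest path along triangle edges (Dijkstra on the edge graph)."""
          edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
          lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
          n = len(vertices)
          graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
          dist = dijkstra(graph, directed=False, indices=src)
          return dist[dst]

      # Toy mesh: two triangles sharing an edge of a unit square
      vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
      faces = np.array([[0, 1, 2], [0, 2, 3]])
      print(mesh_geodesic_length(vertices, faces, src=1, dst=3))  # 2.0 (e.g., path 1 -> 2 -> 3)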