Stanford Advisors


  • Lei Xing, Postdoctoral Faculty Sponsor

All Publications


  • Quantifying particle concentration via AI-enhanced optical coherence tomography. Nanoscale. Ye, S., Xing, L., Myung, D., Chen, F. 2024

    Abstract

    Efficient and robust quantification of the number of nanoparticles in solution is not only essential but also challenging in nanotechnology and biomedical research. This paper proposes using optical coherence tomography (OCT) to quantify the number of gold nanorods, a representative nanoparticle with strong light-scattering signals. Additionally, we have developed an AI-enhanced OCT image processing pipeline to improve the accuracy and robustness of the quantification result.

    View details for DOI 10.1039/d4nr00195h

    View details for PubMedID 38511606
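
    The abstract above does not spell out the quantification algorithm. As a hedged illustration only, the sketch below shows a plausible classical baseline for this kind of task: counting bright scatterers in an OCT B-scan by intensity thresholding and connected-component labeling. The threshold, minimum blob size, and synthetic test image are assumptions for demonstration, not the paper's AI-enhanced pipeline.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_scatterers(bscan, threshold=0.5, min_pixels=3):
        """Estimate the number of bright particles in a 2-D OCT intensity image."""
        mask = bscan > threshold                     # keep high-scattering pixels
        labels, n = ndimage.label(mask)              # group them into connected blobs
        if n == 0:
            return 0
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return int(np.sum(sizes >= min_pixels))      # ignore tiny noise blobs

    # Synthetic B-scan: three bright 3x3 spots on a weak noisy background.
    rng = np.random.default_rng(0)
    img = 0.1 * rng.random((128, 128))
    for r, c in [(20, 30), (64, 64), (100, 90)]:
        img[r - 1:r + 2, c - 1:c + 2] = 1.0
    print(count_scatterers(img))                     # expected: 3
    ```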

  • Super-resolution biomedical imaging via reference-free statistical implicit neural representation. Physics in Medicine and Biology. Ye, S., Shen, L., Islam, M. T., Xing, L. 2023

    Abstract

    Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large amounts of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) image(s), to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron (MLP), whose inputs are image spatial coordinates and outputs are corresponding pixel intensities. The trained INR not only constrains functional smoothness but also allows an arbitrary scale in SR imaging. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including CT, MRI, fluorescence microscopy images, and ultrasound images, across different SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each of the SR imaging tasks to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for numerous biomedical SR applications that lack HR reference images.

    View details for DOI 10.1088/1361-6560/acfdf1

    View details for PubMedID 37757838
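
    A minimal sketch, assuming PyTorch, of the coordinate-based INR idea described in the entry above: a plain ReLU MLP maps normalized spatial coordinates to intensities, and an average-pooling forward model with a Gaussian noise assumption (so maximum likelihood reduces to an MSE loss against the single observed LR image) drives the fit. The network size, the pooling operator, and the optimizer settings are illustrative choices rather than the paper's exact formulation, and positional encoding is omitted for brevity.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CoordinateMLP(nn.Module):
        """Maps normalized (y, x) coordinates in [0, 1] to pixel intensities."""
        def __init__(self, hidden=256, depth=4):
            super().__init__()
            dims = [2] + [hidden] * depth + [1]
            layers = []
            for i in range(len(dims) - 2):
                layers += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
            layers.append(nn.Linear(dims[-2], dims[-1]))
            self.net = nn.Sequential(*layers)

        def forward(self, coords):                 # coords: (N, 2)
            return self.net(coords)                # intensities: (N, 1)

    def make_grid(h, w, device):
        ys = torch.linspace(0, 1, h, device=device)
        xs = torch.linspace(0, 1, w, device=device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        return torch.stack([gy, gx], dim=-1).reshape(-1, 2)

    def super_resolve(lr_image, scale=4, iters=2000, lr=1e-3):
        """lr_image: (H, W) tensor; returns an (H*scale, W*scale) SR estimate."""
        device = lr_image.device
        H, W = lr_image.shape
        hr_h, hr_w = H * scale, W * scale
        coords = make_grid(hr_h, hr_w, device)
        model = CoordinateMLP().to(device)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(iters):
            hr = model(coords).reshape(1, 1, hr_h, hr_w)
            # Forward model: average-pool the latent HR image back to the LR grid.
            lr_pred = F.avg_pool2d(hr, kernel_size=scale).squeeze()
            loss = F.mse_loss(lr_pred, lr_image)   # Gaussian MLE <=> MSE
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            return model(coords).reshape(hr_h, hr_w)

    # Example usage: sr = super_resolve(torch.rand(32, 32), scale=4)
    ```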

  • Unified Supervised-Unsupervised (SUPER) Learning for X-Ray CT Image Reconstruction. IEEE Transactions on Medical Imaging. Ye, S., Li, Z., McCann, M. T., Long, Y., Ravishankar, S. 2021; 40 (11): 2986-3001

    Abstract

    Traditional model-based image reconstruction (MBIR) methods combine forward and noise models with simple object priors. Recent machine learning methods for image reconstruction typically involve supervised learning or unsupervised learning, both of which have their advantages and disadvantages. In this work, we propose a unified supervised-unsupervised (SUPER) learning framework for X-ray computed tomography (CT) image reconstruction. The proposed learning formulation combines unsupervised learning-based priors (or even simple analytical priors) with (supervised) deep network-based priors in a unified MBIR framework based on a fixed-point iteration analysis. The proposed training algorithm is also an approximate scheme for a bilevel supervised training optimization problem, wherein the network-based regularizer in the lower-level MBIR problem is optimized using an upper-level reconstruction loss. The training problem is optimized by alternating between updating the network weights and iteratively updating the reconstructions based on those weights. We demonstrate the learned SUPER models' efficacy for low-dose CT image reconstruction, for which we use the NIH AAPM Mayo Clinic Low Dose CT Grand Challenge dataset for training and testing. In our experiments, we studied different combinations of supervised deep network priors and unsupervised learning-based or analytical priors. Both numerical and visual results show the superiority of the proposed unified SUPER methods over standalone supervised learning-based methods, iterative MBIR methods, and variations of SUPER obtained via ablation studies. We also show that the proposed algorithm converges rapidly in practice.

    View details for DOI 10.1109/TMI.2021.3095310

    View details for Web of Science ID 000711848900005

    View details for PubMedID 34232871
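
    A toy, self-contained sketch of the alternating scheme described above: supervised network updates are interleaved with regularized reconstruction updates that balance data fidelity against proximity to the network output. The 1-D linear forward operator, the small MLP, the penalty weight mu, and all iteration counts are made-up assumptions; the paper's CT system model, learned-transform priors, and bilevel analysis are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy problem: 64-pixel 1-D "images" and a random linear forward operator A.
    n_pix, n_meas, n_train = 64, 48, 32
    A = torch.randn(n_meas, n_pix) / n_meas ** 0.5
    x_true = torch.rand(n_train, n_pix)                      # ground-truth training images
    y = x_true @ A.T + 0.01 * torch.randn(n_train, n_meas)   # noisy measurements

    net = nn.Sequential(nn.Linear(n_pix, 128), nn.ReLU(), nn.Linear(128, n_pix))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    mu = 1.0                                                 # weight on the network prior

    x_rec = y @ A                                            # crude initial reconstructions
    for outer in range(5):                                   # alternate the two updates
        # (1) Supervised step: fit the network to map current reconstructions to truth.
        for _ in range(200):
            loss = nn.functional.mse_loss(net(x_rec), x_true)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # (2) Reconstruction step: data fidelity plus proximity to the network output.
        target = net(x_rec).detach()
        x_rec = x_rec.detach().clone().requires_grad_(True)
        inner = torch.optim.Adam([x_rec], lr=1e-2)
        for _ in range(200):
            fit = ((x_rec @ A.T - y) ** 2).sum(dim=1).mean()
            reg = mu * ((x_rec - target) ** 2).sum(dim=1).mean()
            inner.zero_grad()
            (fit + reg).backward()
            inner.step()
        x_rec = x_rec.detach()
    ```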

  • SPULTRA: Low-Dose CT Image Reconstruction With Joint Statistical and Learned Image Models. IEEE Transactions on Medical Imaging. Ye, S., Ravishankar, S., Long, Y., Fessler, J. A. 2020; 39 (3): 729-741

    Abstract

    Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is called penalized weighted-least squares (PWLS). Due to the underlying limitations of the post-log statistical model, the PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a shifted-Poisson (SP) model-based likelihood function that uses the pre-log raw measurements and better represents the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms (SPULTRA). Both the SP-induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The proposed SPULTRA algorithm uses quadratic surrogate functions for the SP-induced data-fidelity term. Each iteration involves a quadratic subproblem for updating the image, and a sparse coding and clustering subproblem that has a closed-form solution. The SPULTRA algorithm has a computational cost per iteration similar to that of its recent counterpart PWLS-ULTRA, which uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.

    View details for DOI 10.1109/TMI.2019.2934933

    View details for Web of Science ID 000525262100017

    View details for PubMedID 31425021

    View details for PubMedCentralID PMC7170173
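
    A minimal sketch, assuming NumPy, of the shifted-Poisson likelihood idea referenced above: the raw pre-log measurement plus the electronic-noise variance is modeled as Poisson with mean I0*exp(-[Ax]_i) + sigma^2, and the code evaluates the corresponding negative log-likelihood and its gradient with respect to the line integrals. The values of I0 and sigma^2 and the simulated example are illustrative assumptions; the paper's quadratic surrogates and union-of-learned-transforms regularizer are not shown.

    ```python
    import numpy as np

    def shifted_poisson_nll(ell, y, I0=1e4, sigma2=25.0):
        """Negative log-likelihood of Y_i + sigma2 ~ Poisson(I0*exp(-ell_i) + sigma2)."""
        mean = I0 * np.exp(-ell) + sigma2
        return np.sum(mean - (y + sigma2) * np.log(mean))

    def shifted_poisson_grad(ell, y, I0=1e4, sigma2=25.0):
        """Gradient of the negative log-likelihood with respect to the line integrals."""
        mean = I0 * np.exp(-ell) + sigma2
        return -I0 * np.exp(-ell) * (1.0 - (y + sigma2) / mean)

    # Example: simulate noisy pre-log measurements for known line integrals A @ x.
    rng = np.random.default_rng(0)
    ell_true = rng.uniform(0.0, 3.0, size=100)
    counts = rng.poisson(1e4 * np.exp(-ell_true))        # photon counts
    y = counts + rng.normal(0.0, 5.0, size=100)          # add electronic noise
    print(shifted_poisson_nll(ell_true, y))
    ```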