Bio


ZHICHENG ZHANG received his Ph.D. from the University of Chinese Academy of Sciences and his B.S. degree from Sun Yat-sen University. He is currently a postdoctoral fellow at Stanford University. From 2017 to 2018, he was a visiting scholar with the Virginia Tech-Wake Forest University School of Biomedical Engineering and Sciences, Virginia Polytechnic Institute and State University, USA. His research interests include medical data analysis, computer vision, and deep learning.

Professional Education


  • Ph.D., University of Chinese Academy of Sciences
  • B.S., Sun Yat-sen University

Stanford Advisors


  • Lei Xing, Postdoctoral Faculty Sponsor

All Publications


  • Metal artifact reduction in 2D CT images with self-supervised cross-domain learning. Physics in medicine and biology Yu, L., Zhang, Z., Li, X., Ren, H., Zhao, W., Xing, L. 2021

    Abstract

    The presence of metallic implants often introduces severe metal artifacts in X-ray CT images, which could adversely influence clinical diagnosis or dose calculation in radiation therapy. In this work, we present a novel deep-learning-based approach for metal artifact reduction (MAR). In order to alleviate the need for anatomically identical CT image pairs (i.e., a metal artifact-corrupted CT image and a metal artifact-free CT image) for network learning, we propose a self-supervised cross-domain learning framework. Specifically, we train a neural network to restore the metal trace region values in the given metal-free sinogram, where the metal trace is identified by the forward projection of metal masks. We then design a novel FBP reconstruction loss to encourage the network to generate better completion results and a residual-learning-based image refinement module to reduce the secondary artifacts in the reconstructed CT images. To preserve the fine structural details and fidelity of the final MAR image, instead of directly adopting the CNN-refined images as output, we incorporate metal trace replacement into our framework and replace the metal-affected projections of the original sinogram with the prior sinogram generated by the forward projection of the CNN output. We then use the filtered backward projection (FBP) algorithm for final MAR image reconstruction. We conduct an extensive evaluation on simulated and real artifact data to show the effectiveness of our design. Our method produces superior MAR results and outperforms other compelling methods. We also demonstrate the potential of our framework for other organ sites.

    View details for DOI 10.1088/1361-6560/ac195c

    View details for PubMedID 34330119
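
    As an illustration of the metal-trace replacement step described above, the following is a minimal Python sketch, assuming hypothetical forward_project and fbp_reconstruct operators and NumPy arrays; it is not the authors' implementation, only the data flow the abstract describes.

        import numpy as np

        def metal_trace_replacement(measured_sino, cnn_image, metal_mask,
                                    forward_project, fbp_reconstruct):
            # measured_sino : metal-corrupted sinogram from the scanner
            # cnn_image     : CT image refined by the network (prior image)
            # metal_mask    : binary image-domain mask of the metallic implant
            # forward_project, fbp_reconstruct : hypothetical projector / FBP operators

            # Metal trace = detector bins whose rays pass through the implant.
            metal_trace = forward_project(metal_mask.astype(np.float32)) > 0

            # Prior sinogram obtained by forward-projecting the CNN output.
            prior_sino = forward_project(cnn_image)

            # Keep measured data outside the trace; use the prior inside it.
            completed = np.where(metal_trace, prior_sino, measured_sino)

            # Final MAR image via filtered back-projection.
            return fbp_reconstruct(completed)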

  • Noise2Context: Context-assisted Learning 3D Thin-layer for Low Dose CT. Medical physics Zhang, Z., Liang, X., Zhao, W., Xing, L. 2021

    Abstract

    PURPOSE: Computed tomography (CT) has played a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased X-ray radiation exposure are attracting more and more attention. To lower the X-ray radiation, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS: In this work, for 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices from a single scan is uncorrelated and zero-mean. We then train the denoising neural network to simultaneously map one noisy LDCT image to its two adjacent LDCT images from a single 3D thin-layer LDCT scan. In essence, under some latent assumptions, we propose an unsupervised loss function to train the denoising neural network in an unsupervised manner, which integrates the similarity between adjacent CT slices in 3D thin-layer LDCT. RESULTS: Experiments were carried out on the Mayo LDCT dataset and a realistic pig head. In the experiments using the Mayo LDCT dataset, our unsupervised method obtains performance comparable to that of the supervised baseline. With the realistic pig head, our method achieves the best performance at different noise levels compared with all the other methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS: In this work, we present a generalizable LDCT image denoising method that requires no clean data. As a result, our method dispenses not only with complex handcrafted image priors but also with large amounts of paired high-quality training data.

    View details for DOI 10.1002/mp.15119

    View details for PubMedID 34287948
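
    The unsupervised objective described above maps the middle slice of each triplet to both of its neighbors. A minimal PyTorch-style sketch follows, assuming a denoising network net and tensors of shape (batch, 1, H, W); the names are illustrative and not taken from the authors' code.

        import torch

        def noise2context_loss(net, prev_slice, mid_slice, next_slice):
            # prev_slice, mid_slice, next_slice: adjacent noisy LDCT slices.
            denoised = net(mid_slice)
            # The clean anatomy is (approximately) shared across adjacent slices,
            # while the noise is assumed zero-mean and uncorrelated between them,
            # so targeting the neighbors behaves like a supervised loss on average.
            return (torch.mean((denoised - prev_slice) ** 2)
                    + torch.mean((denoised - next_slice) ** 2))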

  • Incorporating the hybrid deformable model for improving the performance of abdominal CT segmentation via multi-scale feature fusion network. Medical image analysis Liang, X., Li, N., Zhang, Z., Xiong, J., Zhou, S., Xie, Y. 2021; 73: 102156

    Abstract

    Automated multi-organ abdominal Computed Tomography (CT) image segmentation can assist treatment planning and diagnosis and improve the efficiency of many clinical workflows. 3-D Convolutional Neural Networks (CNNs) have recently attained state-of-the-art accuracy, which typically relies on supervised training with large amounts of manually annotated data. Many methods use a data augmentation strategy with rigid or affine spatial transformations to alleviate the over-fitting problem and improve the network's robustness. However, rigid or affine spatial transformations fail to capture the complex voxel-based deformations in the abdomen, which is filled with many soft organs. To tackle this issue, we developed a novel Hybrid Deformable Model (HDM), which consists of inter- and intra-patient deformations for more effective data augmentation. The inter-patient deformations were extracted from learning-based deformable registration between different patients, while the intra-patient deformations were formed using random 3-D Thin-Plate-Spline (TPS) transformations. Incorporating the HDM enabled the network to capture many of the subtle deformations of abdominal organs. To find a better solution and achieve faster convergence for network training, we fused pre-trained multi-scale features into a 3-D attention U-Net. We directly compared the segmentation accuracy of the proposed method to previous techniques on several centers' datasets via cross-validation. The proposed method achieves an average Dice Similarity Coefficient (DSC) of 0.852, which outperforms other state-of-the-art methods on multi-organ abdominal CT segmentation.

    View details for DOI 10.1016/j.media.2021.102156

    View details for PubMedID 34274689
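
    A rough sketch of the hybrid augmentation idea, assuming a precomputed inter-patient displacement field (e.g., from a learning-based registration) and using a smoothed random field as a stand-in for the paper's 3-D TPS intra-patient deformation; the helper names and parameters are hypothetical.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def hdm_augment(image, label, inter_field, sigma=8.0, alpha=6.0, seed=None):
            # image, label : 3-D CT volume and its organ label map
            # inter_field  : inter-patient displacement field, shape (3, D, H, W)
            rng = np.random.default_rng(seed)

            # Intra-patient deformation: smoothed random displacements per axis
            # (a stand-in for the random 3-D thin-plate-spline warp).
            intra_field = np.stack([
                gaussian_filter(rng.standard_normal(image.shape), sigma) * alpha
                for _ in range(3)
            ])
            total = inter_field + intra_field

            # Warp both the image and the labels with the combined field.
            grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
            coords = [g + d for g, d in zip(grid, total)]
            warped_image = map_coordinates(image, coords, order=1)
            warped_label = map_coordinates(label, coords, order=0)  # nearest for labels
            return warped_image, warped_label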

  • Artificial intelligence in image-guided radiotherapy: a review of treatment target localization QUANTITATIVE IMAGING IN MEDICINE AND SURGERY Zhao, W., Shen, L., Islam, M., Qin, W., Zhang, Z., Liang, X., Zhang, G., Xu, S., Li, X. 2021
  • Iterative stripe artifact correction framework for TOF-MRA. Computers in biology and medicine Li, N., Zhou, S., Zhao, G., Zhang, Z., Xie, Y., Liang, X. 2021; 134: 104456

    Abstract

    The purpose of this study is to develop a practical stripe-artifact correction framework for three-dimensional (3-D) time-of-flight magnetic resonance angiography (TOF-MRA) obtained with the multiple overlapping thin slab acquisition (MOTSA) technique. In this work, the stripe artifacts in TOF-MRA were considered part of the image texture. To separate the image structure and the texture, relative total variation (RTV) was first employed to smooth the TOF-MRA and generate a template image with less image texture. A residual image was then generated as the difference between the template image and the raw TOF-MRA. The residual image served as the image texture, which contained the image details and the stripe artifacts. We then obtained the artifact image from the residual image via a filter in a specific direction, since the artifacts appear as stripes. The image details were then produced from the difference between the artifact image and the image texture. To produce the corrected images, we finally compensated the image details back onto the RTV-smoothed image. The procedure was iterated until the stripe artifacts varied as little as possible between iterations. A digital phantom and real patients' TOF-MRA were used to test the approach. With the proposed algorithm, the spatial uniformity increased from 74% to 82% and the structural similarity improved from 86% to 98% in the digital phantom test. Our approach proved highly successful in eliminating stripe artifacts in real patient data while retaining image details. The proposed iterative framework for TOF-MRA stripe-artifact correction is effective and appealing for enhancing the imaging performance of multi-slab 3-D acquisitions.

    View details for DOI 10.1016/j.compbiomed.2021.104456

    View details for PubMedID 34010790
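
    The correction loop described above alternates between structure/texture separation and directional filtering. A conceptual Python sketch follows, using total-variation smoothing as a stand-in for the paper's relative total variation (RTV) and a long 1-D mean filter along the stripe direction; parameters are illustrative.

        import numpy as np
        from scipy.ndimage import uniform_filter1d
        from skimage.restoration import denoise_tv_chambolle

        def remove_stripes(slab, axis=1, n_iter=5, weight=0.05, size=31):
            # slab : 2-D TOF-MRA slice with stripe artifacts running along `axis`.
            corrected = slab.astype(np.float64)
            for _ in range(n_iter):
                # Template image with little texture (TV stands in for RTV).
                template = denoise_tv_chambolle(corrected, weight=weight)
                residual = corrected - template          # details + stripe artifacts

                # Stripes are coherent along `axis`, so averaging along it keeps
                # them while suppressing the less coherent anatomical details.
                stripes = uniform_filter1d(residual, size=size, axis=axis)
                details = residual - stripes

                # Compensate the recovered details back onto the smoothed image.
                corrected = template + details
            return corrected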

  • Modularized Data-Driven Reconstruction Framework for Non-ideal Focal Spot Effect Elimination in Computed Tomography. Medical physics Zhang, Z., Yu, L., Zhao, W., Xing, L. 2021

    Abstract

    PURPOSE: High-performance computed tomography (CT) plays a vital role in clinical decision making. However, the performance of CT imaging is adversely affected by the non-ideal focal spot size of the X-ray source or degraded by an enlarged focal spot size due to aging. In this work, we aim to develop a deep learning-based strategy to mitigate this problem so that high-spatial-resolution CT images can be obtained even with a non-ideal X-ray source. METHODS: To reconstruct high-quality CT images from blurred sinograms via joint image and sinogram learning, a cross-domain hybrid model is formulated via deep learning into a modularized data-driven reconstruction (MDR) framework. The proposed MDR framework comprises several blocks, and all the blocks share the same network architecture and network parameters. In essence, each block utilizes two sub-models to generate an estimated blur kernel and a high-quality CT image simultaneously. In this way, our framework generates not only a final high-quality CT image but also a series of intermediate images with gradually improved anatomical details, enhancing the visual perception for clinicians through the dynamic process. We used simulated training datasets to train our model in an end-to-end manner and tested our model on both simulated and realistic experimental datasets. RESULTS: On the simulated testing datasets, our approach increases the information fidelity criterion (IFC) by up to 34.2%, the universal quality index (UQI) by up to 20.3%, and the signal-to-noise ratio (SNR) by up to 6.7%, and reduces the root mean square error (RMSE) by up to 10.5% as compared with FBP. Compared with the iterative deconvolution method (NSM), MDR increases IFC by up to 24.7%, UQI by up to 16.7%, and SNR by up to 6.0%, and reduces RMSE by up to 9.4%. In the modulation transfer function (MTF) experiment, our method improves MTF50% by 34.5% and MTF10% by 18.7% as compared with FBP. Similarly, our method improves MTF50% by 14.3% and MTF10% by 0.9% as compared with NSM. Our method also shows better imaging results at the edges of bony structures and other tiny structures in experiments using a phantom consisting of ham and a bottle of peanuts. CONCLUSIONS: A modularized data-driven CT reconstruction framework is established to mitigate the blurring effect caused by a non-ideal X-ray source with a relatively large focal spot. The proposed method enables us to obtain high-resolution images with a less-than-ideal X-ray source.

    View details for DOI 10.1002/mp.14785

    View details for PubMedID 33595900
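
    The abstract describes a stack of identical blocks with shared weights, each jointly refining a blur-kernel estimate and the image. The following PyTorch sketch shows only that unrolled structure; the sub-network architectures are placeholders, and the kernel estimate is represented as an image-sized map purely for illustration.

        import torch
        import torch.nn as nn

        class MDRBlock(nn.Module):
            # One shared-weight block: two small sub-models refine the kernel
            # estimate and the CT image (placeholder architectures).
            def __init__(self, channels=32):
                super().__init__()
                self.kernel_net = nn.Sequential(
                    nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, 3, padding=1))
                self.image_net = nn.Sequential(
                    nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(channels, 1, 3, padding=1))

            def forward(self, image, kernel):
                kernel = kernel + self.kernel_net(torch.cat([image, kernel], dim=1))
                image = image + self.image_net(torch.cat([image, kernel], dim=1))
                return image, kernel

        def run_mdr(block, image, kernel, n_blocks=4):
            # Apply the same block repeatedly, keeping the intermediate images
            # with gradually improved detail.
            intermediates = []
            for _ in range(n_blocks):
                image, kernel = block(image, kernel)
                intermediates.append(image)
            return image, intermediates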

  • Deep Sinogram Completion With Image Prior for Metal Artifact Reduction in CT Images IEEE TRANSACTIONS ON MEDICAL IMAGING Yu, L., Zhang, Z., Li, X., Xing, L. 2021; 40 (1): 228–38

    Abstract

    Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance. In reality, CT images may be adversely affected by the presence of metallic objects, which can lead to severe metal artifacts and influence clinical diagnosis or dose calculation in radiation therapy. In this article, we propose a generalizable framework for metal artifact reduction (MAR) by simultaneously leveraging the advantages of image-domain and sinogram-domain MAR techniques. We formulate our framework as a sinogram completion problem and train a neural network (SinoNet) to restore the metal-affected projections. To improve the continuity of the completed projections at the boundary of the metal trace and thus alleviate new artifacts in the reconstructed CT images, we train another neural network (PriorNet) to generate a good prior image to guide sinogram learning, and further design a novel residual sinogram learning strategy to effectively utilize the prior image information for better sinogram completion. The two networks are jointly trained in an end-to-end fashion with a differentiable forward projection (FP) operation so that the prior image generation and deep sinogram completion procedures can benefit from each other. Finally, the artifact-reduced CT images are reconstructed using filtered backward projection (FBP) from the completed sinogram. Extensive experiments on simulated and real artifact data demonstrate that our method produces superior artifact-reduced results while preserving the anatomical structures and outperforms other MAR methods.

    View details for DOI 10.1109/TMI.2020.3025064

    View details for Web of Science ID 000604883800020

    View details for PubMedID 32956044
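
    A minimal sketch of the residual sinogram learning strategy described above, assuming PyTorch tensors, a differentiable forward_project operator, and networks playing the roles of PriorNet and SinoNet; the input composition to the network is an assumption, not the paper's exact design.

        import torch

        def complete_sinogram(sino_in, prior_image, metal_trace,
                              sino_net, forward_project):
            # sino_in     : metal-corrupted sinogram, shape (B, 1, angles, dets)
            # prior_image : output of the prior-image network (PriorNet role)
            # metal_trace : float mask (1 inside the metal trace, 0 elsewhere)
            prior_sino = forward_project(prior_image)

            # Predict a residual on top of the prior sinogram rather than raw
            # values, improving continuity at the boundary of the metal trace.
            residual = sino_net(torch.cat([sino_in, prior_sino, metal_trace], dim=1))
            completed = prior_sino + residual

            # Replace only the metal-affected bins; keep measured data elsewhere.
            return metal_trace * completed + (1.0 - metal_trace) * sino_in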

  • matFR: a MATLAB toolbox for feature ranking. Bioinformatics (Oxford, England) Zhang, Z., Liang, X., Qin, W., Yu, S., Xie, Y. 2020

    Abstract

    Nowadays, it is feasible to collect massive numbers of features for quantitative representation and precision medicine, and thus automatic ranking to identify the most informative and discriminative ones becomes increasingly important. To address this issue, 42 feature ranking (FR) methods are integrated to form a MATLAB toolbox (matFR). The methods apply mutual information, statistical analysis, structure clustering, and other principles to estimate the relative importance of features in specific measure spaces. Specifically, these methods are summarized, and an example shows how to apply an FR method to sort mammographic breast lesion features. The toolbox is easy to use and flexible to integrate additional methods. Importantly, it provides a tool to compare, investigate, and interpret the features selected for various applications. The toolbox is freely available at http://github.com/NicoYuCN/matFR. A tutorial and an example with a data set are provided.

    View details for DOI 10.1093/bioinformatics/btaa621

    View details for PubMedID 32637981
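
    matFR itself is a MATLAB toolbox; as a language-neutral illustration of what a single feature-ranking principle does, here is a hedged Python sketch that ranks features by mutual information with the class label using scikit-learn (this is not the toolbox's API).

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def rank_features_by_mi(X, y):
            # X : (n_samples, n_features) matrix, e.g. mammographic lesion features
            # y : (n_samples,) binary labels (benign / malignant)
            scores = mutual_info_classif(X, y, random_state=0)
            order = np.argsort(scores)[::-1]      # most informative first
            return order, scores[order]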

  • Scatter correction for a clinical cone-beam CT system using an optimized stationary beam blocker in a single scan MEDICAL PHYSICS Liang, X., Jiang, Y., Zhao, W., Zhang, Z., Luo, C., Xiong, J., Yu, S., Yang, X., Sun, J., Zhou, Q., Niu, T., Xie, Y. 2019; 46 (7): 3165–79

    Abstract

    PURPOSE: Scatter contamination in cone-beam CT (CBCT) leads to CT number inaccuracy, spatial non-uniformity, and loss of image contrast. In our previous work, we proposed a single-scan scatter correction approach using a stationary partial beam blocker. Although the previous method works effectively on a tabletop CBCT system, it fails to achieve high image quality on a clinical CBCT system, mainly due to wobble of the LINAC gantry during scan acquisition. Because of the mechanical deformation of the CBCT gantry, the wobbling effect is observed in the clinical CBCT scan, and more data are missing when using the previous blocker with uniformly distributed lead strips. METHODS: An optimal blocker distribution is proposed to minimize the missing data. In the missing-data objective function, the motion of the beam blocker in each projection is estimated by segmentation, taking advantage of its high contrast in the blocked area. The scatter signals from the blocker are also estimated using an air scan with the inserted blocker. The final image is generated using forward projection to compensate for the missing data. RESULTS: On the Catphan©504 phantom, our approach reduces the average CT number error from 86 Hounsfield units (HU) to 9 HU and improves the image contrast by a factor of 1.45 in the high-contrast rods. On a head patient, the CT number error is reduced from 97 HU to 6 HU in the soft-tissue region and the image spatial non-uniformity is decreased from 27% to 5%. CONCLUSIONS: The results suggest that the proposed method is promising for clinical applications.

    View details for DOI 10.1002/mp.13568

    View details for Web of Science ID 000475671900022

    View details for PubMedID 31055835
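
    The key idea above is that detector pixels behind the lead strips record (almost) pure scatter, which varies smoothly and can therefore be interpolated over the whole detector and subtracted. A simplified Python sketch for a single projection follows; the blocker-wobble estimation and missing-data compensation from the paper are omitted, and all names are illustrative.

        import numpy as np
        from scipy.interpolate import griddata

        def scatter_correct_projection(projection, blocker_mask):
            # projection   : 2-D detector image acquired with the blocker inserted
            # blocker_mask : boolean mask of pixels shadowed by the lead strips
            rows, cols = np.indices(projection.shape)

            # Samples of the scatter signal measured behind the blocker strips.
            pts = np.column_stack([rows[blocker_mask], cols[blocker_mask]])
            vals = projection[blocker_mask]

            # Scatter varies smoothly, so interpolate it across the detector.
            scatter = griddata(pts, vals, (rows, cols),
                               method="linear", fill_value=vals.mean())

            # Scatter-corrected primary signal; blocked pixels carry no primary.
            primary = np.clip(projection - scatter, 0, None)
            primary[blocker_mask] = 0.0
            return primary, scatter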

  • A Technical Review of Convolutional Neural Network-Based Mammographic Breast Cancer Diagnosis COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE Zou, L., Yu, S., Meng, T., Zhang, Z., Liang, X., Xie, Y. 2019: 6509357

    Abstract

    This study reviews convolutional neural network (CNN) techniques applied to the specific field of mammographic breast cancer diagnosis (MBCD). It aims to provide several clues on how to use CNNs for related tasks. MBCD is a long-standing problem, and massive computer-aided diagnosis models have been proposed. The models of CNN-based MBCD can be broadly categorized into three groups. One is to design shallow models or to modify existing ones to decrease the time cost as well as the number of training instances; another is to make the best use of a pretrained CNN by transfer learning and fine-tuning; the third is to take advantage of CNN models for feature extraction, with the differentiation of malignant lesions from benign ones performed by machine learning classifiers. This study surveys peer-reviewed journal publications and presents the technical details, pros, and cons of each model. Furthermore, the findings, challenges, and limitations are summarized, and some directions for future work are given. In conclusion, CNN-based MBCD is at an early stage, and there is still a long way to go in achieving the ultimate goal of using deep learning tools to facilitate clinical practice. This review benefits scientific researchers, industrial engineers, and those who are devoted to intelligent cancer diagnosis.

    View details for DOI 10.1155/2019/6509357

    View details for Web of Science ID 000464727600001

    View details for PubMedID 31019547

    View details for PubMedCentralID PMC6452645