Bio


Xiaohan Xing is a postdoctoral researcher in the Department of Radiation Oncology at Stanford University. Before joining Stanford, she worked as a postdoctoral researcher at the City University of Hong Kong. She received her Ph.D. from The Chinese University of Hong Kong in 2021 and her B.S. from Shandong University in 2017.
Her research interests include medical image analysis, omics data analysis, and multi-modal disease diagnosis.

Stanford Advisors


  • Lei Xing, Postdoctoral Faculty Sponsor

All Publications


  • Comprehensive learning and adaptive teaching: Distilling multi-modal knowledge for pathological glioma grading. Medical Image Analysis Xing, X., Zhu, M., Chen, Z., Yuan, Y. 2023; 91: 102990

    Abstract

    The fusion of multi-modal data, e.g., pathology slides and genomic profiles, can provide complementary information and benefit glioma grading. However, genomic profiles are difficult to obtain due to the high costs and technical challenges, thus limiting the clinical applications of multi-modal diagnosis. In this work, we investigate the realistic problem where paired pathology-genomic data are available during training, while only pathology slides are accessible for inference. To solve this problem, a comprehensive learning and adaptive teaching framework is proposed to improve the performance of pathological grading models by transferring the privileged knowledge from the multi-modal teacher to the pathology student. For comprehensive learning of the multi-modal teacher, we propose a novel Saliency-Aware Masking (SA-Mask) strategy to explore richer disease-related features from both modalities by masking the most salient features. For adaptive teaching of the pathology student, we first devise a Local Topology Preserving and Discrepancy Eliminating Contrastive Distillation (TDC-Distill) module to align the feature distributions of the teacher and student models. Furthermore, considering that the multi-modal teacher may include incorrect information, we propose a Gradient-guided Knowledge Refinement (GK-Refine) module that builds a knowledge bank and adaptively absorbs reliable knowledge according to its agreement in the gradient space. Experiments on the TCGA GBM-LGG dataset show that our proposed distillation framework improves pathological glioma grading and outperforms other KD methods. Notably, with the sole pathology slides, our method achieves comparable performance with existing multi-modal methods. The code is available at https://github.com/CUHK-AIM-Group/MultiModal-learning.

    View details for DOI 10.1016/j.media.2023.102990

    View details for PubMedID 37864912
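    The SA-Mask idea in the abstract above, masking the most salient features so that training is forced to mine richer, complementary ones, can be sketched roughly as follows. This is a minimal illustration only: the function name, the channel-wise saliency proxy, and the mask ratio are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def saliency_aware_mask(features, saliency, mask_ratio=0.3):
        """Zero out the most salient feature channels (rough SA-Mask analogue).

        features:   (N, D) batch of feature vectors
        saliency:   (D,) per-channel saliency scores, e.g. gradient magnitudes
        mask_ratio: fraction of the most salient channels to suppress
        """
        d = features.shape[1]
        k = int(d * mask_ratio)
        top = np.argsort(saliency)[-k:]   # indices of the k most salient channels
        masked = features.copy()
        masked[:, top] = 0.0              # drop dominant channels; the rest must carry signal
        return masked

    feats = np.arange(12, dtype=float).reshape(3, 4)
    sal = np.array([0.1, 0.9, 0.5, 0.2])
    out = saliency_aware_mask(feats, sal, mask_ratio=0.5)  # channels 1 and 2 are zeroed
    ```

    In the paper the saliency signal is derived jointly from both modalities during training; here a fixed saliency vector stands in for that signal.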

  • Medical federated learning with joint graph purification for noisy label learning. Medical Image Analysis Chen, Z., Li, W., Xing, X., Yuan, Y. 2023; 90: 102976

    Abstract

    With increasing privacy concerns, Federated Learning (FL) has received extensive attention in medical imaging. Through collaborative training, FL can produce superior diagnostic models with global knowledge, while preserving private data locally. In practice, medical diagnosis suffers from intra-/inter-observer variability, so label noise is inevitable in dataset preparation. Different from existing studies on centralized datasets, the label noise problem in FL scenarios confronts more challenges, due to data inaccessibility and even noise heterogeneity. In this work, we propose a federated framework with joint Graph Purification (FedGP) to address label noise in FL through server-client collaboration. Specifically, to overcome the impact of label noise on local training, we first devise a noisy graph purification on the client side to generate reliable pseudo labels by progressively expanding the purified graph with topological knowledge. Then, we further propose a graph-guided negative ensemble loss to exploit the topology of the client-side purified graph with robust complementary supervision against label noise. Moreover, to address FL label noise with data silos, we propose a global centroid aggregation on the server side to produce a robust classifier with global knowledge, which can be optimized collaboratively in the FL framework. Extensive experiments are conducted on endoscopic and pathological images under homogeneous, heterogeneous, and real-world label noise for medical FL. Across these diverse noisy FL settings, our FedGP framework outperforms state-of-the-art denoising and noisy-FL methods by a large margin. The source code is available at https://github.com/CUHK-AIM-Group/FedGP.

    View details for DOI 10.1016/j.media.2023.102976

    View details for PubMedID 37806019
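    The server-side global centroid aggregation described above can be illustrated as a count-weighted average of per-class client centroids. This is a simplified sketch under assumed data shapes; the function name is hypothetical and the actual FedGP aggregation may differ.

    ```python
    import numpy as np

    def aggregate_centroids(client_centroids, client_counts):
        """Server-side global centroid aggregation (simplified sketch).

        client_centroids: list of (C, D) arrays, per-class feature centroids per client
        client_counts:    list of (C,) arrays, per-class sample counts per client
        Returns a (C, D) array of global centroids, weighted by sample counts.
        """
        num = sum(cent * cnt[:, None] for cent, cnt in zip(client_centroids, client_counts))
        den = sum(cnt for cnt in client_counts)[:, None]
        return num / np.maximum(den, 1)  # guard against empty classes

    # Two clients, two classes, 2-D features:
    c1, n1 = np.array([[1., 1.], [3., 3.]]), np.array([2., 2.])
    c2, n2 = np.array([[3., 3.], [5., 5.]]), np.array([2., 2.])
    global_centroids = aggregate_centroids([c1, c2], [n1, n2])  # [[2, 2], [4, 4]]
    ```

    Weighting by sample counts keeps clients with few (possibly noisier) examples of a class from dominating that class's global centroid.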

  • Gradient modulated contrastive distillation of low-rank multi-modal knowledge for disease diagnosis. Medical Image Analysis Xing, X., Chen, Z., Hou, Y., Yuan, Y. 2023; 88: 102874

    Abstract

    The fusion of multi-modal data, e.g., medical images and genomic profiles, can provide complementary information and further benefit disease diagnosis. However, multi-modal disease diagnosis confronts two challenges: (1) how to produce discriminative multi-modal representations by exploiting complementary information while avoiding noisy features from different modalities; and (2) how to obtain an accurate diagnosis when only a single modality is available in real clinical scenarios. To tackle these two issues, we present a two-stage disease diagnostic framework. In the first multi-modal learning stage, we propose a novel Momentum-enriched Multi-Modal Low-Rank (M3LR) constraint to explore the high-order correlations and complementary information among different modalities, thus yielding more accurate multi-modal diagnosis. In the second stage, the privileged knowledge of the multi-modal teacher is transferred to the unimodal student via our proposed Discrepancy Supervised Contrastive Distillation (DSCD) and Gradient-guided Knowledge Modulation (GKM) modules, which benefit the unimodal-based diagnosis. We have validated our approach on two tasks: (i) glioma grading based on pathology slides and genomic data, and (ii) skin lesion classification based on dermoscopy and clinical images. Experimental results on both tasks demonstrate that our proposed method consistently outperforms existing approaches in both multi-modal and unimodal diagnoses.

    View details for DOI 10.1016/j.media.2023.102874

    View details for Web of Science ID 001037418500001

    View details for PubMedID 37423056
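    The contrastive distillation step above, aligning unimodal student features with multi-modal teacher features, is commonly built on an InfoNCE-style objective in which each teacher feature is the positive for its paired student feature and the rest of the batch serves as negatives. The following is a generic sketch of that family of losses, not the paper's exact DSCD formulation.

    ```python
    import numpy as np

    def contrastive_distill_loss(student, teacher, temperature=0.5):
        """InfoNCE-style feature distillation (generic sketch, not the exact DSCD loss).

        student, teacher: (N, D) feature batches; the i-th teacher feature is the
        positive for the i-th student feature, all others act as negatives.
        """
        s = student / np.linalg.norm(student, axis=1, keepdims=True)
        t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
        logits = s @ t.T / temperature               # (N, N) cosine-similarity logits
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))           # positives lie on the diagonal
    ```

    Minimizing this loss pulls each student feature toward its own teacher feature while pushing it away from the other teacher features in the batch, so mismatched pairings yield a strictly higher loss than aligned ones.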

  • Gradient and Feature Conformity-Steered Medical Image Classification with Noisy Labels. Xing, X., Chen, Z., Gao, Z., Yuan, Y. In: Greenspan, H., Madabhushi, A., Mousavi, P., Salcudean, S., Duncan, J., Syeda-Mahmood, T., Taylor, R. (eds.). Springer International Publishing. 2023: 75-84