Xiruo Ding
Postdoctoral Scholar, Anesthesiology, Perioperative and Pain Medicine
Bio
I am a postdoctoral scholar at Stanford University, advised by Dr. Nima Aghaeepour, focusing on EHR-related modeling and phenotyping. My research interests lie in applying machine learning and deep learning methods to improve healthcare outcomes.
Professional Education
- Doctor of Philosophy, Tongji University (2024)
- Master of Science, Tongji University (2017)
- Master of Science, Duke University (2017)
- Bachelor of Science, Tongji University (2015)
- Doctor of Philosophy, University of Washington, Biomedical Informatics (2024)
Research Interests
- Data Sciences
- Research Methods
All Publications
- Tailoring task arithmetic to address bias in models trained on multi-institutional datasets.
Journal of Biomedical Informatics
2025; 168: 104858
Abstract
OBJECTIVE: Multi-institutional datasets are widely used for machine learning from clinical data, to increase dataset size and improve generalization. However, deep learning models in particular may learn to recognize the source of a data element, leading to biased predictions. For example, deep learning models for image recognition trained on chest radiographs with COVID-19 positive and negative examples drawn from different data sources can respond to indicators of provenance (e.g., radiological annotations outside the lung area per institution-specific practices) rather than pathology, generalizing poorly beyond their training data. Bias of this sort, called confounding by provenance, is of concern in natural language processing (NLP) because provenance indicators (e.g., institution-specific section headers, or region-specific dialects) are pervasive in language data. Prior work on addressing such bias has focused on statistical methods, without providing a solution for deep learning models for NLP.
METHODS: Recent work in representation learning has shown that representing the weights of a trained deep network as task vectors allows for their arithmetic composition to govern model capabilities towards desired behaviors. In this work, we evaluate the extent to which reducing a model's ability to distinguish between contributing sites with such task arithmetic can mitigate confounding by provenance. To do so, we propose two model-agnostic methods, Task Arithmetic for Provenance Effect Reduction (TAPER) and Dominance-Aligned Polarized Provenance Effect Reduction (DAPPER), extending the task vectors approach to a novel problem domain.
RESULTS: Evaluation on three datasets shows improved robustness to confounding by provenance for both RoBERTa and Llama-2 models with the task vector approach, with improved performance at the extremes of distribution shift.
CONCLUSION: This work emphasizes the importance of adjusting for confounding by provenance, especially under extreme distribution shift. For deep learning models, TAPER and DAPPER are effective in mitigating such bias. They provide a novel mitigation strategy for confounding by provenance, with broad applicability to other sources of bias in composite clinical datasets. Source code is available within the DeconDTN toolkit: https://github.com/LinguisticAnomalies/DeconDTN-toolkit.
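The core operation behind the methods above is task arithmetic: subtracting a model's pretrained weights from its fine-tuned weights yields a "task vector," and adding a negatively scaled copy of that vector back to the base weights suppresses the fine-tuned capability (here, site identification). The following is a minimal sketch of that idea on toy flattened parameters; the function names and scalar weights are illustrative, not the TAPER/DAPPER implementation from the paper.

```python
# Minimal sketch of task-vector negation. All names are illustrative;
# real models would use framework parameter tensors, not Python lists.

def task_vector(base_weights, finetuned_weights):
    """Task vector = fine-tuned weights minus pretrained (base) weights."""
    return [ft - b for b, ft in zip(base_weights, finetuned_weights)]

def apply_task_vector(base_weights, vector, scale):
    """Add a scaled task vector to the base weights.
    A negative scale subtracts the capability the vector encodes."""
    return [b + scale * v for b, v in zip(base_weights, vector)]

# Toy example: pretend these are flattened model parameters.
base = [0.0, 1.0, 2.0]
finetuned_on_provenance = [0.5, 1.0, 1.5]  # model fine-tuned to detect the source site

v = task_vector(base, finetuned_on_provenance)     # [0.5, 0.0, -0.5]
debiased = apply_task_vector(base, v, scale=-1.0)  # negated: [-0.5, 1.0, 2.5]
```

The `scale` parameter controls how strongly the capability is removed; in practice it is tuned, since full negation can also degrade the model's primary task performance.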
DOI: 10.1016/j.jbi.2025.104858
PubMedID: 40494422
https://orcid.org/0000-0002-8703-1744