All Publications


  • A visual-language foundation model for pathology image analysis using medical Twitter. Nature Medicine. Huang, Z., Bianchi, F., Yuksekgonul, M., Montine, T. J., Zou, J. 2023

    Abstract

    The lack of annotated publicly available medical images is a major barrier for computational research and education innovations. At the same time, many de-identified images and much knowledge are shared by clinicians on public forums such as medical Twitter. Here we harness these crowd platforms to curate OpenPath, a large dataset of 208,414 pathology images paired with natural language descriptions. We demonstrate the value of this resource by developing pathology language-image pretraining (PLIP), a multimodal artificial intelligence with both image and text understanding, which is trained on OpenPath. PLIP achieves state-of-the-art performance for classifying new pathology images across four external datasets: for zero-shot classification, PLIP achieves F1 scores of 0.565-0.832, compared to F1 scores of 0.030-0.481 for the previous contrastive language-image pretrained model (CLIP). Training a simple supervised classifier on top of PLIP embeddings also achieves a 2.5% improvement in F1 scores compared to using other supervised model embeddings. Moreover, PLIP enables users to retrieve similar cases by either image or natural language search, greatly facilitating knowledge sharing. Our approach demonstrates that publicly shared medical information is a tremendous resource that can be harnessed to develop medical artificial intelligence for enhancing diagnosis, knowledge sharing and education.

    View details for DOI 10.1038/s41591-023-02504-3

    View details for PubMedID 37592105

    View details for PubMedCentralID 9883475
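
    The zero-shot classification the abstract reports follows the standard contrastive language-image setup: an image embedding is scored against embeddings of candidate label captions in a shared space. A minimal sketch of that workflow is below, using the Hugging Face transformers CLIP classes; the model ID "vinid/plip" (the publicly released PLIP checkpoint), the file path, and the candidate captions are illustrative assumptions, not details taken from this entry.

        # Zero-shot pathology image classification, CLIP-style (a sketch, not
        # the authors' exact pipeline). Assumptions: "vinid/plip" is the
        # released PLIP checkpoint and loads through the standard CLIP
        # classes; "patch.png" is a placeholder path.
        from PIL import Image
        import torch
        from transformers import CLIPModel, CLIPProcessor

        model = CLIPModel.from_pretrained("vinid/plip")
        processor = CLIPProcessor.from_pretrained("vinid/plip")

        image = Image.open("patch.png")  # a pathology image patch
        # Candidate labels phrased as short captions, as in zero-shot CLIP usage.
        labels = ["an H&E image of benign tissue",
                  "an H&E image of malignant tissue"]

        inputs = processor(text=labels, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)

        # logits_per_image holds the image's similarity to each caption;
        # softmax turns the similarities into label probabilities.
        probs = outputs.logits_per_image.softmax(dim=-1)
        print({label: float(p) for label, p in zip(labels, probs[0])})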

  • GPT detectors are biased against non-native English writers. Patterns. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., Zou, J. 2023; 4 (7): 100779

    Abstract

    GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.

    View details for DOI 10.1016/j.patter.2023.100779

    View details for PubMedID 37521038

  • Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations. Abid, A., Yuksekgonul, M., Zou, J. Proceedings of the 39th International Conference on Machine Learning (ICML), PMLR. 2022: 66-88