Clinical Focus


  • Pediatric Radiology

All Publications


  • Adapted large language models can outperform medical experts in clinical text summarization. Nature Medicine Van Veen, D., Van Uden, C., Blankemeier, L., Delbrouck, J. B., Aali, A., Bluethgen, C., Pareek, A., Polacin, M., Reis, E. P., Seehofnerová, A., Rohatgi, N., Hosamani, P., Collins, W., Ahuja, N., Langlotz, C. P., Hom, J., Gatidis, S., Pauly, J., Chaudhari, A. S. 2024

    Abstract

    Analyzing vast textual data and summarizing key information from electronic health records imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown promise in natural language processing (NLP) tasks, their effectiveness on a diverse range of clinical summarization tasks remains unproven. Here we applied adaptation methods to eight LLMs, spanning four distinct clinical summarization tasks: radiology reports, patient questions, progress notes and doctor-patient dialogue. Quantitative assessments with syntactic, semantic and conceptual NLP metrics reveal trade-offs between models and adaptation methods. A clinical reader study with 10 physicians evaluated summary completeness, correctness and conciseness; in most cases, summaries from our best-adapted LLMs were deemed either equivalent (45%) or superior (36%) compared with summaries from medical experts. The ensuing safety analysis highlights challenges faced by both LLMs and medical experts, as we connect errors to potential medical harm and categorize types of fabricated information. Our research provides evidence of LLMs outperforming medical experts in clinical text summarization across multiple tasks. This suggests that integrating LLMs into clinical workflows could alleviate documentation burden, allowing clinicians to focus more on patient care.

    View details for DOI 10.1038/s41591-024-02855-5

    View details for PubMedID 38413730

    View details for PubMedCentralID PMC5593724
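
    The quantitative assessment described above combines syntactic, semantic and conceptual NLP metrics. As a rough illustration of how such automated scoring works, the Python sketch below scores a single candidate summary against an expert reference; it assumes the open-source rouge_score and bert_score packages and made-up example summaries, and is not the authors' actual evaluation pipeline.

      # Illustrative only: score one LLM summary against an expert reference
      # using a syntactic metric (ROUGE-L) and a semantic one (BERTScore).
      from rouge_score import rouge_scorer
      from bert_score import score as bert_score

      reference = "No acute cardiopulmonary abnormality."  # hypothetical expert summary
      candidate = "No acute disease seen in the chest."    # hypothetical LLM summary

      # ROUGE-L: token overlap via longest common subsequence.
      scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
      rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure

      # BERTScore: similarity of contextual token embeddings.
      _, _, f1 = bert_score([candidate], [reference], lang="en")

      print(f"ROUGE-L F1: {rouge_l:.3f}  BERTScore F1: {f1.item():.3f}")

    Conceptual metrics, which match clinical concepts rather than surface words, would additionally require a medical ontology such as UMLS and are omitted here.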

  • Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts. Research Square Van Veen, D., Van Uden, C., Blankemeier, L., Delbrouck, J. B., Aali, A., Bluethgen, C., Pareek, A., Polacin, M., Reis, E. P., Seehofnerová, A., Rohatgi, N., Hosamani, P., Collins, W., Ahuja, N., Langlotz, C., Hom, J., Gatidis, S., Pauly, J., Chaudhari, A. 2023

    Abstract

    Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.

    View details for DOI 10.21203/rs.3.rs-3483777/v1

    View details for PubMedID 37961377

    View details for PubMedCentralID PMC10635391
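
    The preprint also correlates traditional quantitative NLP metrics with the reader-study scores. A minimal sketch of that kind of rank-correlation analysis follows, assuming SciPy and entirely invented scores; the actual data and methodology are in the paper.

      # Illustrative only: rank-correlate an automated metric with physician
      # ratings. All numbers below are made up for demonstration.
      from scipy.stats import spearmanr

      metric_scores = [0.42, 0.55, 0.61, 0.30, 0.77]  # e.g., per-summary ROUGE-L
      reader_scores = [3, 4, 4, 2, 5]                  # physician ratings on a 1-5 scale

      rho, p_value = spearmanr(metric_scores, reader_scores)
      print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")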

  • Vom Röntgen zum PET/MRT, und dann? - Zukunftsweisende Bildgebung in der Kinderradiologie [From X-rays to PET/MRI, and then? Future imaging in pediatric radiology]. RöFo: Fortschritte auf dem Gebiete der Röntgenstrahlen und der Nuklearmedizin Staatz, G., Daldrup-Link, H. E., Herrmann, J., Hirsch, F. W., Schäfer, J. F., Seehofnerová, A., Sorantin, E., Theruvath, A. J., Lollert, A. 2019; 191 (4): 357–66

    Abstract

    Significant changes can be expected in modern pediatric radiology. New imaging techniques are progressively being added to basic modalities such as X-rays and ultrasound. This essay summarizes recent advances and technical innovations in pediatric radiology that are expected to gain further importance in the future, including CT dose-reduction techniques based on artificial intelligence as well as advances in the fields of magnetic resonance and molecular imaging.

    Key points:

      • Technical innovations will lead to significant changes in pediatric radiology.
      • CT dose reduction is crucial for pediatric patient populations.
      • New MR techniques will lower the need for sedation and contrast media application.
      • Functional MR imaging may gain further importance in patients with chronic lung disease.
      • Molecular imaging enables detection, characterization and quantification of molecular processes in tumors.

    Citation format: Staatz G, Daldrup-Link HE, Herrmann J et al. From X-rays to PET/MR, and then? - Future imaging in pediatric radiology. Fortschr Röntgenstr 2019; 191: 357-366.

    View details for PubMedID 30897652
