Clinical Focus

  • Anesthesia

Academic Appointments

Professional Education

  • Board Certification: American Board of Anesthesiology, Anesthesia (2021)
  • Fellowship: UCSF Pulmonary and Critical Care Medicine Fellowship (2020) CA
  • Residency: Massachusetts General Hospital Anesthesiology Residency (2019) MA
  • Internship: Tufts Medical Center Surgery Residency (2016) MA
  • Medical Education: Imperial College School of Medicine (2011) UK

All Publications

  • Negativity and Positivity in the ICU: Exploratory Development of Automated Sentiment Capture in the Electronic Health Record. Critical Care Explorations. Kennedy, C. J., Chiu, C., Chapman, A. C., Gologorskaya, O., Farhan, H., Han, M., Hodgson, M., Lazzareschi, D., Ashana, D., Lee, S., Smith, A. K., Espejo, E., Boscardin, J., Pirracchio, R., Cobert, J. 2023; 5 (10): e0960


    OBJECTIVES: To develop proof-of-concept algorithms using alternative approaches to capture provider sentiment in ICU notes.

    DESIGN: Retrospective observational cohort study.

    SETTING: The Multiparameter Intelligent Monitoring of Intensive Care III (MIMIC-III) and the University of California, San Francisco (UCSF) deidentified notes databases.

    PATIENTS: Adult (≥18 yr old) patients admitted to the ICU.

    MEASUREMENTS AND MAIN RESULTS: We developed two sentiment models: 1) a keywords-based approach using a consensus-based clinical sentiment lexicon comprising 72 positive and 103 negative phrases, including negations, and 2) a Decoding-enhanced Bidirectional Encoder Representations from Transformers with disentangled attention-v3-based deep learning model (keywords-independent) trained on clinical sentiment labels. We applied the models to 198,944 notes across 52,997 ICU admissions in the MIMIC-III database. Analyses were replicated on an external sample of patients admitted to a UCSF ICU from 2018 to 2019. We also labeled sentiment in 1,493 note fragments and compared the predictive accuracy of our tools to three popular sentiment classifiers. Clinical sentiment terms were found in 99% of patient visits across 88% of notes. Our two sentiment tools were substantially more predictive of labeled sentiment (Spearman correlations of 0.62-0.84, p values < 0.00001) than general language algorithms (0.28-0.46).

    CONCLUSION: Our exploratory healthcare-specific sentiment models can more accurately detect positivity and negativity in clinical notes compared with general sentiment tools not designed for clinical usage.
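    The keywords-based approach described above (a clinical sentiment lexicon with negation handling) can be illustrated with a minimal sketch. This is not the paper's implementation: the example terms below are hypothetical stand-ins for the actual 72 positive and 103 negative phrases, and the scoring logic is a simplified assumption (flip polarity when a negation word immediately precedes a lexicon term).

    ```python
    import re

    # Hypothetical mini-lexicon for illustration only; the study's
    # consensus-based lexicon (72 positive / 103 negative phrases) is not reproduced here.
    POSITIVE = {"improving", "stable", "comfortable"}
    NEGATIVE = {"deteriorating", "agonal", "unresponsive"}
    NEGATIONS = {"no", "not", "denies", "without"}

    def clinical_sentiment(note: str) -> int:
        """Crude note-level score: +1 per positive term, -1 per negative term,
        with polarity flipped when a negation word immediately precedes the term."""
        tokens = re.findall(r"[a-z]+", note.lower())
        score = 0
        for i, tok in enumerate(tokens):
            polarity = 1 if tok in POSITIVE else -1 if tok in NEGATIVE else 0
            if polarity and i > 0 and tokens[i - 1] in NEGATIONS:
                polarity = -polarity  # "not deteriorating" counts as positive
            score += polarity
        return score

    print(clinical_sentiment("patient improving, not deteriorating"))  # 2
    print(clinical_sentiment("patient remains unresponsive"))          # -1
    ```

    A production tool would match multi-word phrases and wider negation scopes; this sketch only shows the lexicon-plus-negation idea.
    
    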

    DOI: 10.1097/CCE.0000000000000960

    PubMed ID: 37753238