All Publications

  • Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation. Journal of General Internal Medicine. Schaye, V., Guzman, B., Burk-Rafel, J., Marin, M., Reinstein, I., Kudlowitz, D., Miller, L., Chun, J., Aphinyanaphongs, Y. 2022


    BACKGROUND: Residents receive infrequent feedback on their clinical reasoning (CR) documentation. While machine learning (ML) and natural language processing (NLP) have been used to assess CR documentation in standardized cases, no studies have described similar use in the clinical environment. OBJECTIVE: The authors developed and, using Kane's framework, validated an ML model for automated assessment of CR documentation quality in residents' admission notes. DESIGN, PARTICIPANTS, MAIN MEASURES: Internal medicine residents' and subspecialty fellows' admission notes at one medical center from July 2014 to March 2020 were extracted from the electronic health record. Using a validated CR documentation rubric, the authors rated 414 notes to form the ML development dataset. Notes were truncated to isolate the relevant portion; NLP software (cTAKES) extracted disease/disorder named entities, and human review generated CR terms. The final model had three input variables and classified notes as demonstrating low- or high-quality CR documentation. The ML model was applied to a retrospective dataset (9591 notes) for human validation and data analysis. Reliability between human and ML ratings was assessed on 205 of these notes with Cohen's kappa. CR documentation quality by post-graduate year (PGY) was evaluated with the Mantel-Haenszel test of trend. KEY RESULTS: The top-performing logistic regression model had an area under the receiver operating characteristic curve of 0.88, a positive predictive value of 0.68, and an accuracy of 0.79. Cohen's kappa was 0.67. Of the 9591 notes, 31.1% demonstrated high-quality CR documentation; quality increased from 27.0% (PGY1) to 31.0% (PGY2) to 39.0% (PGY3) (p < .001 for trend). Validity evidence was collected in each domain of Kane's framework (scoring, generalization, extrapolation, and implications). CONCLUSIONS: The authors developed and validated a high-performing ML model that classifies CR documentation quality in resident admission notes in the clinical environment, a novel application of ML and NLP with many potential use cases.
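    The human-versus-model reliability check above uses Cohen's kappa, which corrects raw rater agreement for the agreement expected by chance from each rater's label frequencies. A minimal sketch with hypothetical high/low quality ratings (not the study's data):

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters over the same items."""
        n = len(rater_a)
        # observed proportion of items on which the raters agree
        p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
        # expected agreement from each rater's marginal label frequencies
        labels = set(rater_a) | set(rater_b)
        p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical quality labels for eight admission notes
    human = ["high", "high", "low", "low", "high", "low", "low", "low"]
    model = ["high", "low", "low", "low", "high", "low", "low", "high"]
    print(round(cohens_kappa(human, model), 2))  # prints 0.47
    ```

    Kappa of 0 means chance-level agreement and 1 means perfect agreement; the study's 0.67 is conventionally read as substantial agreement.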

    DOI: 10.1007/s11606-022-07526-0
    PubMedID: 35710676

  • NoteSense: Development of a Machine Learning Algorithm for Feedback on Clinical Reasoning Documentation. Schaye, V., Guzman, B., Rafel, J., Kudlowitz, D., Reinstein, I., Miller, L., Cocks, P., Chun, J., Aphinyanaphongs, Y., Marin, M. Springer. 2021: S110
  • Internal Medicine Residents' Perceptions of Virtual Morning Report: a Multicenter Survey. Journal of General Internal Medicine. Albert, T. J., Bradley, J., Starks, H., Redinger, J., Arundel, C., Beard, A., Caputo, L., Chun, J., Gunderson, C. G., Heppe, D., Jagannath, A., Kent, K., Krug, M., Laudate, J., Palaniappan, V., Pensiero, A., Sargsyan, Z., Sladek, E., Tuck, M., Cornia, P. B. 2021


    IMPORTANCE: The COVID-19 pandemic disrupted graduate medical education, compelling training programs to abruptly transition to virtual educational formats despite minimal experience or proficiency. We surveyed residents from a national sample of internal medicine (IM) residency programs to describe their experiences with the transition to virtual morning report (MR), a highly valued core educational conference. OBJECTIVE: To assess resident views about virtual MR content and teaching strategies during the COVID-19 pandemic. DESIGN: Anonymous, web-based survey. PARTICIPANTS: Residents from 14 academically affiliated IM residency programs. MAIN MEASURES: The 25-item survey on virtual MR included questions on demographics; frequency of and reasons for attending; opinions on who should attend and teach; how the virtual format affects the learning environment; how virtual MR compares to in-person MR with regard to participation, engagement, and overall education; and whether virtual MR should continue after in-person conferences can safely resume. The survey included a combination of Likert-style, multiple-option, and open-ended questions. RESULTS: Six hundred fifteen residents (35%) completed the survey, with a balanced sample of interns (39%), second-year residents (31%), and third-year residents (30%). When comparing their overall assessments of the in-person and virtual MR formats, 42% of residents preferred in-person, 18% preferred virtual, and 40% felt the two were equivalent. Most respondents endorsed better peer engagement, camaraderie, and group participation with in-person MR. Chat boxes, video participation, audience response systems, and smart boards/tablets enhanced respondents' educational experience during virtual MR. Most respondents (72%) felt that the option of virtual MR should continue once it is safe to resume in-person conferences. CONCLUSIONS: Virtual MR was a valued alternative to traditional in-person MR during the COVID-19 pandemic. Residents feel that the virtual platform offers unique educational benefits independent of and in conjunction with in-person conferences, and they support the integration of a virtual platform into the delivery of MR in the future.

    DOI: 10.1007/s11606-021-06963-7
    PubMedID: 34173198

  • Experience and Education in Residency Training: Capturing the Resident Experience by Mapping Clinical Data. Academic Medicine: Journal of the Association of American Medical Colleges. Rhee, D. W., Chun, J. W., Stern, D. T., Sartori, D. J. 2021


    PROBLEM: Internal medicine training programs operate under the assumption that the three-year residency training period is sufficient for trainees to achieve the depth and breadth of clinical experience necessary for independent practice; however, the medical conditions to which residents are exposed in clinical practice are not easily measured. As a result, residents' clinical educational experiences are poorly understood. APPROACH: A crosswalk tool (a repository of International Classification of Diseases, 10th Revision [ICD-10] codes linked to medical content areas) was developed to query routinely collected inpatient principal diagnosis codes and translate them into an educationally meaningful taxonomy. This tool provides a robust characterization of residents' inpatient clinical experiences. OUTCOMES: This pilot study provided proof of principle that the crosswalk tool can effectively map one year of resident-attributed diagnosis codes both to the broad content category level (for example, "Cardiovascular Disease") and to the more specific condition category level (for example, "Myocardial Disease"). The authors uncovered content areas in their training program that are overrepresented, and some that are underrepresented, relative to material on the American Board of Internal Medicine (ABIM) Certification Exam. NEXT STEPS: The crosswalk tool introduced here translated residents' patient care activities into discrete, measurable educational content and enabled one internal medicine residency training program to characterize residents' inpatient educational experience with a high degree of resolution. Leaders of other programs seeking to profile the clinical exposure of their trainees may adopt this strategy. Such clinical content mapping drives innovation in the experiential curriculum, enables comparison across practice sites, and lays the groundwork for testing associations between individual clinical exposure and competency-based outcomes, which, in turn, will allow medical educators to draw conclusions about how clinical experience reflects clinical competency.
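    At its core, the crosswalk described above is a lookup from ICD-10 principal diagnosis codes to an educational taxonomy. A minimal sketch; the specific codes, category names, and prefix-matching rule below are illustrative assumptions, since the abstract does not publish the crosswalk's contents:

    ```python
    from collections import Counter

    # Hypothetical crosswalk entries: ICD-10 prefix -> (broad content category, condition category)
    CROSSWALK = {
        "I21": ("Cardiovascular Disease", "Ischemic Heart Disease"),
        "I42": ("Cardiovascular Disease", "Myocardial Disease"),
        "J18": ("Pulmonary Disease", "Pneumonia"),
    }

    def map_code(icd10):
        """Longest-prefix match so subcodes (e.g. I21.4) inherit their parent's category."""
        for end in range(len(icd10), 0, -1):
            hit = CROSSWALK.get(icd10[:end])
            if hit:
                return hit
        return ("Unmapped", "Unmapped")

    # Tally one resident's principal diagnoses by broad content category
    diagnoses = ["I21.4", "I42.0", "J18.9", "I21.0"]
    profile = Counter(map_code(code)[0] for code in diagnoses)
    print(profile)  # counts per broad category
    ```

    Aggregating such tallies per resident (or per program) is what allows exposure to be compared against a target blueprint such as the ABIM exam content outline.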

    DOI: 10.1097/ACM.0000000000004162
    PubMedID: 33983144

  • Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback. Journal of General Internal Medicine. Schaye, V., Miller, L., Kudlowitz, D., Chun, J., Burk-Rafel, J., Cocks, P., Guzman, B., Aphinyanaphongs, Y., Marin, M. 2021


    BACKGROUND: Residents and fellows receive little feedback on their clinical reasoning documentation. Barriers include the lack of a shared mental model and variability in the reliability and validity of existing assessment tools. Of the existing tools, the IDEA assessment tool includes a robust assessment of clinical reasoning documentation focusing on four elements (interpretive summary, differential diagnosis, explanation of reasoning for the lead diagnosis, and explanation of reasoning for alternative diagnoses) but lacks descriptive anchors, threatening its reliability. OBJECTIVE: Our goal was to develop a valid and reliable assessment tool for clinical reasoning documentation, building on the IDEA assessment tool. DESIGN, PARTICIPANTS, AND MAIN MEASURES: The Revised-IDEA assessment tool was developed by four clinician educators through iterative review of admission notes written by medicine residents and fellows and was subsequently piloted with additional faculty to ensure response process validity. A random sample of 252 notes written by 30 trainees across several chief complaints from July 2014 to June 2017 was rated. Three raters rated 20% of the notes to demonstrate internal structure validity. A quality cut-off score was determined using Hofstee standard setting. KEY RESULTS: The Revised-IDEA assessment tool includes the same four domains as the IDEA assessment tool, with more detailed descriptive prompts, new Likert scale anchors, and a score range of 0-10. Intraclass correlation was high for the notes rated by three raters, 0.84 (95% CI 0.74-0.90). Scores ≥6 were determined to demonstrate high-quality clinical reasoning documentation; only 53% of notes (134/252) were high-quality. CONCLUSIONS: The Revised-IDEA assessment tool is reliable and easy to use for feedback on clinical reasoning documentation in resident and fellow admission notes, with descriptive anchors that facilitate a shared mental model for feedback.
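    The quality cut-off above comes from Hofstee standard setting, a compromise method in which judges bound the acceptable cut score (k_min to k_max) and the acceptable failure rate (f_min to f_max); the cut is taken where the observed failure-rate curve crosses the line between those bounds. A minimal sketch of one common grid-search reading of the method; the score distribution and judge parameters here are illustrative assumptions, not the study's data:

    ```python
    def hofstee_cut(scores, k_min, k_max, f_min, f_max, steps=100):
        """Hofstee compromise cut score via grid search over candidate cuts."""
        def fail_rate(c):
            # fraction of notes that would fall below a candidate cut c
            return sum(s < c for s in scores) / len(scores)

        # Judges' line runs from (k_min, f_max) down to (k_max, f_min);
        # pick the candidate cut where the observed failure curve is closest to it.
        best_cut, best_gap = k_min, float("inf")
        for i in range(steps + 1):
            c = k_min + (k_max - k_min) * i / steps
            line = f_max + (f_min - f_max) * (c - k_min) / (k_max - k_min)
            gap = abs(fail_rate(c) - line)
            if gap < best_gap:
                best_cut, best_gap = c, gap
        return best_cut

    # Illustrative 0-10 totals (one note at each score) and hypothetical judge bounds
    scores = list(range(11))
    cut = hofstee_cut(scores, k_min=4, k_max=8, f_min=0.2, f_max=0.6)
    print(cut)
    ```

    With real rating data, the resulting cut would then be applied as in the study, classifying notes at or above it as high-quality.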

    DOI: 10.1007/s11606-021-06805-6
    PubMedID: 33945113