All Publications


  • Natural language processing system for rapid detection and intervention of mental health crisis chat messages. npj Digital Medicine. Swaminathan, A., Lopez, I., Mar, R. A., Heist, T., McClintock, T., Caoili, K., Grace, M., Rubashkin, M., Boggs, M. N., Chen, J. H., Gevaert, O., Mou, D., Nock, M. K. 2023; 6 (1): 213

    Abstract

    Patients experiencing mental health crises often seek help through messaging-based platforms, but may face long wait times due to limited message triage capacity. Here we build and deploy a machine-learning-enabled system to improve response times to crisis messages in a large, national telehealth provider network. We train a two-stage natural language processing (NLP) system with keyword filtering followed by logistic regression on 721 electronic medical record chat messages, of which 32% are potential crises (suicidal/homicidal ideation, domestic violence, or non-suicidal self-injury). Model performance is evaluated on a retrospective test set (4/1/21-4/1/22, N=481) and a prospective test set (10/1/22-10/31/22, N=102,471). In the retrospective test set, the model has an AUC of 0.82 (95% CI: 0.78-0.86), sensitivity of 0.99 (95% CI: 0.96-1.00), and PPV of 0.35 (95% CI: 0.309-0.4). In the prospective test set, the model has an AUC of 0.98 (95% CI: 0.966-0.984), sensitivity of 0.98 (95% CI: 0.96-0.99), and PPV of 0.66 (95% CI: 0.626-0.692). The daily median time from message receipt to crisis specialist triage ranges from 8 to 13 minutes, compared with 9 hours before the deployment of the system. We demonstrate that an NLP-based machine learning model can reliably identify potential crisis chat messages in a telehealth setting. Our system integrates into existing clinical workflows, suggesting that with appropriate training, humans can successfully leverage ML systems to facilitate triage of crisis messages.

    DOI: 10.1038/s41746-023-00951-3

    PMID: 37990134
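
    Below is a minimal sketch of the two-stage approach described in this entry — a high-recall keyword screen followed by a logistic regression over text features — using scikit-learn. The keyword list, TF-IDF features, and decision threshold are illustrative assumptions, not the authors' actual configuration.

      # Two-stage crisis-message triage sketch (assumptions: scikit-learn,
      # TF-IDF features, hypothetical keyword list and threshold).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression

      CRISIS_KEYWORDS = {"suicide", "hurt", "kill", "abuse", "overdose"}  # hypothetical

      def keyword_screen(message):
          # Stage 1: cheap, high-recall filter; only flagged messages reach stage 2.
          tokens = message.lower().split()
          return any(k in tokens for k in CRISIS_KEYWORDS)

      class TwoStageTriage:
          def __init__(self, threshold=0.5):
              self.vectorizer = TfidfVectorizer(ngram_range=(1, 2))
              self.model = LogisticRegression(max_iter=1000, class_weight="balanced")
              self.threshold = threshold  # tune low to favor sensitivity over PPV

          def fit(self, messages, labels):
              # Stage 2: logistic regression trained on labeled chat messages.
              X = self.vectorizer.fit_transform(messages)
              self.model.fit(X, labels)
              return self

          def is_potential_crisis(self, message):
              if not keyword_screen(message):
                  return False
              prob = self.model.predict_proba(self.vectorizer.transform([message]))[0, 1]
              return prob >= self.threshold

    In a deployment like the one reported, the threshold would be tuned on a validation set to keep sensitivity near 1 while accepting a lower PPV, so that flagged messages are routed to crisis specialists who make the final call.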

  • Selective prediction for extracting unstructured clinical data. Journal of the American Medical Informatics Association: JAMIA. Swaminathan, A., Lopez, I., Wang, W., Srivastava, U., Tran, E., Bhargava-Shah, A., Wu, J. Y., Ren, A. L., Caoili, K., Bui, B., Alkhani, L., Lee, S., Mohit, N., Seo, N., Macedo, N., Cheng, W., Liu, C., Thomas, R., Chen, J. H., Gevaert, O. 2023

    Abstract

    While there are currently approaches to handle unstructured clinical data, such as manual abstraction and structured proxy variables, these methods may be time-consuming, not scalable, and imprecise. This article aims to determine whether selective prediction, which gives a model the option to abstain from generating a prediction, can improve the accuracy and efficiency of unstructured clinical data abstraction.

    We trained selective classifiers (logistic regression, random forest, support vector machine) to extract 5 variables from clinical notes: depression (n = 1563), glioblastoma (GBM, n = 659), rectal adenocarcinoma (DRA, n = 601), and abdominoperineal resection (APR, n = 601) and low anterior resection (LAR, n = 601) of adenocarcinoma. We varied the cost of false positives (FP), false negatives (FN), and abstained notes and measured total misclassification cost.

    The depression selective classifiers abstained on anywhere from 0% to 97% of notes, and the change in total misclassification cost ranged from -58% to 9%. Selective classifiers abstained on 5%-43% of notes across the GBM and colorectal cancer models. The GBM selective classifier abstained on 43% of notes, which led to improvements in sensitivity (0.94 to 0.96), specificity (0.79 to 0.96), PPV (0.89 to 0.98), and NPV (0.88 to 0.91) when compared to a non-selective classifier and when compared to structured proxy variables.

    We showed that selective classifiers outperformed both non-selective classifiers and structured proxy variables for extracting data from unstructured clinical notes. Selective prediction should be considered when abstaining is preferable to making an incorrect prediction.

    DOI: 10.1093/jamia/ocad182

    PMID: 37769323
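
    As a companion to this entry, here is a minimal sketch of selective prediction with a cost comparison: the classifier abstains whenever its predicted probability falls inside an uncertainty band, and a total misclassification cost is tallied from user-chosen penalties for false positives, false negatives, and abstained notes. The band boundaries and cost values are illustrative assumptions, not those used in the paper.

      # Selective prediction sketch (assumptions: NumPy arrays of labels and
      # probabilities, hypothetical band and cost settings).
      import numpy as np

      def selective_predict(probs, low=0.25, high=0.75):
          # Predict 1 above the band, 0 below it, and -1 (abstain) inside it.
          preds = (probs >= high).astype(int)
          preds[(probs > low) & (probs < high)] = -1
          return preds

      def total_cost(y_true, preds, c_fp=1.0, c_fn=5.0, c_abstain=0.5):
          # Total misclassification cost: each FP, FN, and abstention incurs a penalty.
          fp = np.sum((preds == 1) & (y_true == 0))
          fn = np.sum((preds == 0) & (y_true == 1))
          abstained = np.sum(preds == -1)
          return c_fp * fp + c_fn * fn + c_abstain * abstained

      # Usage with any fitted scikit-learn-style classifier:
      #   probs = clf.predict_proba(X_notes)[:, 1]
      #   preds = selective_predict(probs)
      #   print(total_cost(y_notes, preds))

    Sweeping the band width traces the tradeoff the abstract describes: wider bands abstain on more notes, deferring them to manual abstraction, in exchange for fewer costly misclassifications on the notes the model does answer.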

  • Critically reading machine learning literature in neurosurgery: a reader's guide and checklist for appraising prediction models. Neurosurgical Focus. Emani, S., Swaminathan, A., Grobman, B., Duvall, J. B., Lopez, I., Arnaout, O., Huang, K. T. 2023; 54 (6): E3

    Abstract

    OBJECTIVE: Machine learning (ML) has become an increasingly popular tool for use in neurosurgical research. The number of publications and interest in the field have recently seen significant expansion in both quantity and complexity. However, this also places a commensurate burden on the general neurosurgical readership to appraise this literature and decide if these algorithms can be effectively translated into practice. To this end, the authors sought to review the burgeoning neurosurgical ML literature and to develop a checklist to help readers critically review and digest this work.

    METHODS: The authors performed a literature search of recent ML papers in the PubMed database with the terms "neurosurgery" AND "machine learning," with additional modifiers "trauma," "cancer," "pediatric," and "spine" also used to ensure a diverse selection of relevant papers within the field. Papers were reviewed for their ML methodology, including the formulation of the clinical problem, data acquisition, data preprocessing, model development, model validation, model performance, and model deployment.

    RESULTS: The resulting checklist consists of 14 key questions for critically appraising ML models and development techniques; these are organized according to their timing along the standard ML workflow. In addition, the authors provide an overview of the ML development process, as well as a review of key terms, models, and concepts referenced in the literature.

    CONCLUSIONS: ML is poised to become an increasingly important part of neurosurgical research and clinical care. The authors hope that dissemination of education on ML techniques will help neurosurgeons critically review new research and more effectively integrate this technology into their practices.

    DOI: 10.3171/2023.3.FOCUS2352

    PMID: 37283326

  • Post-traumatic growth in PhD students during the COVID-19 pandemic. Psychiatry Research Communications. Tu, A., Restivo, J., O'Neill, K., Swaminathan, A., Choi, K., Lee, H., Smoller, J., Patel, V., Barreira, P., Liu, C., Naslund, J. 2023; 3 (1): 100104

    Abstract

    Throughout the COVID-19 pandemic, graduate students have faced increased risk of mental health challenges. Research suggests that experiencing adversity may induce positive psychological changes, called post-traumatic growth (PTG). These changes can include improved relationships with others, perceptions of oneself, and enjoyment of life. Few existing studies have explored this phenomenon among graduate students. This secondary data analysis of a survey conducted in November 2020 among graduate students at a private R1 university in the northeast United States examined students' levels and correlates of PTG during the COVID-19 pandemic. Students had a low level of PTG, with a mean score of 10.31 out of 50. Linear regression models showed significant positive relationships between anxiety and PTG and between a measure of self-reported impact of the pandemic and PTG. Non-White minorities also had significantly greater PTG than White participants. Experiencing more negative impact due to the pandemic and ruminating about the pandemic were correlated with greater PTG. These findings advance research on the patterns of PTG during the COVID-19 pandemic and can inform future studies of graduate students' coping mechanisms and support efforts to promote pandemic recovery and resilience.

    DOI: 10.1016/j.psycom.2023.100104

    PMID: 36743383

    PMCID: PMC9886426
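
    For readers who want to see the shape of the analysis, here is a minimal sketch of the kind of linear regression described in this entry, using statsmodels with a hypothetical data file and column names; the variables and coding are illustrative assumptions, not the study's actual dataset.

      # Linear regression sketch (assumptions: pandas/statsmodels, a hypothetical
      # CSV with PTG total scores and predictor measures).
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("ptg_survey.csv")  # hypothetical file
      # PTG regressed on anxiety, self-reported pandemic impact, and race,
      # mirroring the relationships reported in the abstract above.
      model = smf.ols("ptg_score ~ anxiety + pandemic_impact + C(race)", data=df).fit()
      print(model.summary())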