Lecturer, Computer Science
MS, Stanford University, Computer Science (2019)
MS, Stanford University, Statistics (2019)
- Gap-filling eddy covariance methane fluxes: Comparison of machine learning model predictions and uncertainties at FLUXNET-CH4 wetlands. Agricultural and Forest Meteorology. 2021; 308
Improving Hospital Readmission Prediction using Individualized Utility Analysis.
Journal of Biomedical Informatics
Machine learning (ML) models for allocating readmission-mitigating interventions are typically selected according to their discriminative ability, which may not necessarily translate into utility in allocation of resources. Our objective was to determine whether ML models for allocating readmission-mitigating interventions have different usefulness based on their overall utility and discriminative ability.

We conducted a retrospective utility analysis of ML models using claims data acquired from the Optum Clinformatics Data Mart, including 513,495 commercially-insured inpatients (mean [SD] age 69 years; 294,895 [57%] female) over the period January 2016 through January 2017 from all 50 states, with a mean 90-day cost of $11,552. Utility analysis estimates the cost, in dollars, of allocating interventions for lowering readmission risk based on the reduction in the 90-day cost.

Allocating readmission-mitigating interventions based on a gradient-boosted decision tree (GBDT) model trained to predict readmissions achieved an estimated utility gain of $104 per patient and an AUC of 0.76 (95% CI 0.76, 0.77); allocating interventions based on a model trained to predict cost as a proxy achieved a higher utility of $175.94 per patient and an AUC of 0.62 (95% CI 0.61, 0.62). A hybrid model combining both intervention strategies is comparable with the best models on either metric. Estimated utility varies by intervention cost and efficacy, with each model performing best under different intervention settings.

We demonstrate that machine learning models may be ranked differently based on overall utility and discriminative ability. Machine learning models for allocation of limited health resources should consider directly optimizing for utility.
View details for DOI 10.1016/j.jbi.2021.103826
View details for PubMedID 34087428
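The utility analysis described in this abstract can be sketched in a few lines. The function below is an illustrative simplification, not the paper's code; the function name, toy cohort, and cost parameters are all hypothetical. Per-patient utility is modeled as the expected 90-day cost averted by intervening on flagged patients, net of the intervention's own cost.

```python
# Hedged sketch of a per-patient utility calculation for an
# intervention-allocation policy. All names and numbers are
# illustrative placeholders, not the study's actual model or data.

def expected_utility(flagged, cost_90d, intervention_cost, efficacy):
    """Mean utility per patient: each flagged patient's intervention is
    assumed to avert `efficacy` fraction of their 90-day cost at a fixed
    per-intervention cost; unflagged patients contribute zero."""
    total = 0.0
    for is_flagged, cost in zip(flagged, cost_90d):
        if is_flagged:
            total += efficacy * cost - intervention_cost
    return total / len(flagged)

# Toy cohort: a model flags the two highest-cost patients.
flags = [True, True, False, False]
costs = [20000.0, 12000.0, 5000.0, 1000.0]
print(expected_utility(flags, costs, intervention_cost=1000.0, efficacy=0.1))
```

This also shows why utility and AUC can rank models differently: a model that flags fewer but costlier patients can yield more averted cost than a more discriminative model that flags cheap readmissions.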
A framework for making predictive models useful in practice.
Journal of the American Medical Informatics Association: JAMIA
OBJECTIVE: To analyze the impact of factors in healthcare delivery on the net benefit of triggering an Advanced Care Planning (ACP) workflow based on predictions of 12-month mortality.

MATERIALS AND METHODS: We built a predictive model of 12-month mortality using electronic health record data and evaluated the impact of healthcare delivery factors on the net benefit of triggering an ACP workflow based on the model's predictions. Factors included nonclinical reasons that make ACP inappropriate: limited capacity for ACP, inability to follow up due to patient discharge, and availability of an outpatient workflow to follow up on missed cases. We also quantified the relative benefits of increasing capacity for inpatient ACP versus outpatient ACP.

RESULTS: Work capacity constraints and discharge timing can significantly reduce the net benefit of triggering the ACP workflow based on a model's predictions. However, the reduction can be mitigated by creating an outpatient ACP workflow. Given limited resources to either add inpatient ACP capacity or develop outpatient ACP capability, the latter is likely to provide more benefit to patient care.

DISCUSSION: The benefit of using a predictive model for identifying patients for interventions is highly dependent on the capacity to execute the workflow triggered by the model. We provide a framework for quantifying the impact of healthcare delivery factors and work capacity constraints on achieved benefit.

CONCLUSION: An analysis of the sensitivity of the net benefit realized by a predictive-model-triggered clinical workflow to various healthcare delivery factors is necessary for making predictive models useful in practice.
View details for DOI 10.1093/jamia/ocaa318
View details for PubMedID 33355350
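The capacity effect described in this abstract can be illustrated with a minimal sketch. The function, parameter names, and toy numbers below are ours, not the study's model: when the workflow can act on only a few of the flagged patients per period, the realized net benefit falls below what the model's predictions alone would suggest.

```python
# Illustrative sketch (assumed names and weights, not the paper's
# framework): net benefit of a model-triggered workflow when only
# `capacity` patients per period can actually receive the workflow.

def capped_net_benefit(risk_scores, outcomes, threshold, capacity,
                       benefit=1.0, harm=0.25):
    """Act on at most `capacity` patients, taking the highest-risk
    patients at or above `threshold` first. Each true positive acted on
    yields `benefit`; each false positive acted on costs `harm`."""
    flagged = sorted(
        (i for i, s in enumerate(risk_scores) if s >= threshold),
        key=lambda i: risk_scores[i], reverse=True,
    )[:capacity]
    tp = sum(outcomes[i] for i in flagged)
    fp = len(flagged) - tp
    return benefit * tp - harm * fp

scores = [0.9, 0.8, 0.7, 0.6]
labels = [1, 0, 1, 1]
# Ample capacity reaches all three true positives.
print(capped_net_benefit(scores, labels, threshold=0.5, capacity=10))
# A capacity of 2 leaves achievable benefit on the table.
print(capped_net_benefit(scores, labels, threshold=0.5, capacity=2))
```

An outpatient follow-up workflow, in this toy framing, amounts to a second pass over the flagged patients the inpatient capacity could not absorb.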
Countdown Regression: Sharp and Calibrated Survival Predictions
Proceedings of Machine Learning Research (PMLR). 2020: 145-155
View details for Web of Science ID 000722423500013
NGBoost: Natural Gradient Boosting for Probabilistic Prediction
Proceedings of Machine Learning Research (PMLR). 2020
View details for Web of Science ID 000683178502076
- Ambulatory Atrial Fibrillation Monitoring Using Wearable Photoplethysmography with Deep Learning. Association for Computing Machinery (ACM). 2019: 1909–1916
Improving palliative care with deep learning.
BMC Medical Informatics and Decision Making
2018; 18 (Suppl 4): 122
BACKGROUND: Access to palliative care is a key quality metric which most healthcare organizations strive to improve. The primary challenges to increasing palliative care access are a combination of physicians over-estimating patient prognoses and a general shortage of palliative care staff. This, in combination with treatment inertia, can result in a mismatch between patient wishes and their actual care toward the end of life.

METHODS: In this work, we address this problem, with Institutional Review Board approval, using machine learning and Electronic Health Record (EHR) data of patients. We train a Deep Neural Network model on the EHR data of patients from previous years to predict mortality of patients within the next 3-12 month period. This prediction is used as a proxy decision for identifying patients who could benefit from palliative care.

RESULTS: The EHR data of all admitted patients are evaluated every night by this algorithm, and the palliative care team is automatically notified of the list of patients with a positive prediction. In addition, we present a novel technique for decision interpretation, with which we provide explanations for the model's predictions.

CONCLUSION: The automatic screening and notification save the palliative care team the burden of time-consuming chart reviews of all patients and allow them to take a proactive approach in reaching out to such patients, rather than relying on referrals from the treating physicians.
View details for PubMedID 30537977
Automated and flexible identification of complex disease: building a model for systemic lupus erythematosus using noisy labeling.
Journal of the American Medical Informatics Association: JAMIA
Accurate and efficient identification of complex chronic conditions in the electronic health record (EHR) is an important but challenging task that has historically relied on tedious clinician review and oversimplification of the disease. Here we adapt methods that allow for automated "noisy labeling" of positive and negative controls to create a "silver standard" for machine learning to automate identification of systemic lupus erythematosus (SLE). Our final model, which includes both structured data as well as text processing of clinical notes, outperformed all existing algorithms for SLE (AUC 0.97). In addition, we demonstrate how the probabilistic outputs of this model can be adapted to various clinical needs, selecting high thresholds when specificity is the priority and lower thresholds when a more inclusive patient population is desired. Deploying a similar methodology to other complex diseases has the potential to dramatically simplify the landscape of population identification in the EHR.

MeSH terms: Electronic Health Records, Machine Learning, Lupus Erythematosus, Phenotype, Algorithms.
View details for PubMedID 30476175
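The threshold-adaptation point in this abstract can be shown with a toy sketch; the probabilities, labels, and function names below are hypothetical, not the study's data. Raising the decision threshold trades sensitivity for specificity, so the same probabilistic model can serve both a strict and an inclusive cohort definition.

```python
# Illustrative only: adapting one probabilistic phenotyping model to
# different clinical needs by moving the decision threshold.
# All probabilities and labels below are made-up toy values.

def classify(probs, threshold):
    """Flag every patient whose predicted probability meets the threshold."""
    return [p >= threshold for p in probs]

def sensitivity_specificity(preds, labels):
    """Compute (sensitivity, specificity) from binary predictions/labels."""
    tp = sum(1 for p, y in zip(preds, labels) if p and y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    tn = sum(1 for p, y in zip(preds, labels) if not p and not y)
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    return tp / (tp + fn), tn / (tn + fp)

probs = [0.95, 0.90, 0.60, 0.40, 0.10, 0.05]
labels = [1, 1, 1, 0, 0, 0]

# High threshold: a specific cohort (e.g. for a clinical trial).
print(sensitivity_specificity(classify(probs, 0.85), labels))
# Low threshold: an inclusive cohort (e.g. for screening outreach).
print(sensitivity_specificity(classify(probs, 0.30), labels))
```

In this toy case the strict cutoff attains perfect specificity at the cost of missing one true case, while the permissive cutoff captures every case but admits a false positive.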
Performance of Machine Learning Methods Using Electronic Medical Records to Predict Varicella Zoster Virus Infection
View details for Web of Science ID 000411824106394