Dr. Dash is an emergency medicine physician who delivers care in the Level 1 trauma center at Stanford Health Care and is an assistant professor in the Department of Emergency Medicine at Stanford University School of Medicine.
He completed fellowship training in clinical informatics at Stanford Health Care and earned a Master of Public Health (MPH) degree from Harvard University.
His research interests include computer vision and natural language processing, as well as quality assurance and quality improvement in digital health initiatives.
Dr. Dash's other research projects include the development of an image classification algorithm that helps predict hypoxic outcomes and a hardware and software system that provides real-time feedback about cardiac function at the patient's bedside.
Dr. Dash served as vice president of the American Medical Informatics Association Clinical Fellows while completing his fellowship. He was also a postdoctoral research fellow at the Stanford University Center for Artificial Intelligence in Medicine & Imaging.
He is a member of the American College of Emergency Physicians and the American Academy of Emergency Medicine.
He speaks English and Oriya fluently. He also speaks, reads, and writes Japanese and Spanish with intermediate competence.
His interests outside of patient care include piano, computer programming, sustainable energy projects, and cooking multi-course East Asian meals.
- Artificial Intelligence
- Clinical Informatics
- Emergency Medicine
Clinical Assistant Professor, Emergency Medicine
Boards, Advisory Committees, Professional Organizations
Member, Sigma Pi Sigma (2008 - Present)
Member, ACEP (2018 - Present)
Fellowship: Stanford University Clinical Informatics Fellowship (2022) CA
Board Certification: American Board of Emergency Medicine, Emergency Medicine (2019)
Residency: University Hospitals Cleveland Medical Center Emergency Medicine Program (2018) OH
Medical Education: Baylor College of Medicine (2013) TX
MPH, Harvard School of Public Health, Epidemiology & Statistics (2018)
BS, University of Texas at Austin, Radiation Physics (2008)
MD, Baylor College of Medicine (2013)
Investigating real-world consequences of biases in commonly used clinical calculators.
The American journal of managed care
2023; 29 (1): e1-e7
OBJECTIVES: To evaluate whether one summary metric of calculator performance sufficiently conveys equity across different demographic subgroups, and to evaluate how calculator predictive performance affects downstream health outcomes.
STUDY DESIGN: We evaluated 3 commonly used clinical calculators (Model for End-Stage Liver Disease [MELD], CHA2DS2-VASc, and simplified Pulmonary Embolism Severity Index [sPESI]) on a cohort extracted from the Stanford Medicine Research Data Repository, following the cohort selection process described in each calculator's derivation paper.
METHODS: We quantified the predictive performance of the 3 clinical calculators across sex and race. Then, using the clinical guidelines that direct care based on these calculators' output, we quantified potential disparities in subsequent health outcomes.
RESULTS: Across the examined subgroups, the MELD calculator exhibited worse performance for the female and White populations, the CHA2DS2-VASc calculator for the male population, and sPESI for the Black population. The extent to which these performance differences translated into differential health outcomes depended on the distribution of the calculators' scores around the thresholds used to trigger a care action via the corresponding guidelines. In particular, under the old CHA2DS2-VASc guideline, among those who would not have been offered anticoagulant therapy, the Hispanic subgroup exhibited the highest rate of stroke.
CONCLUSIONS: Clinical calculators, even when they do not include variables such as sex and race as inputs, can have very different care consequences across those subgroups. These differences in health care outcomes across subgroups can be explained by examining the distribution of scores and their calibration around the thresholds encoded in the accompanying care guidelines.
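The study's own analysis code is not reproduced here, but the core idea of the METHODS section (comparing a calculator's discriminative performance across demographic subgroups) can be sketched as follows. This is a minimal illustration: the `auroc_by_subgroup` helper and all scores, outcomes, and subgroup labels are hypothetical, not data from the paper.

```python
# Illustrative sketch only: compare a risk score's discrimination (AUROC)
# across demographic subgroups. All data below are hypothetical.

def auroc(scores, labels):
    """Probability a random positive case outscores a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return float("nan")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auroc_by_subgroup(records):
    """records: iterable of (score, outcome, subgroup) tuples."""
    groups = {}
    for score, outcome, group in records:
        groups.setdefault(group, ([], []))
        groups[group][0].append(score)
        groups[group][1].append(outcome)
    return {g: auroc(s, y) for g, (s, y) in groups.items()}

# Hypothetical toy records: (calculator score, adverse outcome, sex)
records = [
    (0.9, 1, "F"), (0.2, 0, "F"), (0.7, 1, "F"), (0.4, 0, "F"),
    (0.8, 1, "M"), (0.6, 0, "M"), (0.3, 1, "M"), (0.5, 0, "M"),
]
print(auroc_by_subgroup(records))  # → {'F': 1.0, 'M': 0.5}
```

A single pooled AUROC would mask the gap shown in this toy output, which is the paper's point about one summary metric failing to convey equity across subgroups.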
View details for DOI 10.37765/ajmc.2023.89306
View details for PubMedID 36716157
Paging the Clinical Informatics Community: Respond STAT to Dobbs v Jackson Women's Health Organization.
Applied clinical informatics
View details for DOI 10.1055/a-2000-7590
View details for PubMedID 36535703
Deep Learning System Boosts Radiologist Detection of Intracranial Hemorrhage.
Cureus
2022; 14 (10): e30264
BACKGROUND: Intracranial hemorrhage (ICH) requires emergent medical treatment for positive outcomes. While previous artificial intelligence (AI) solutions achieved rapid diagnostics, none were shown to improve the performance of radiologists in detecting ICHs. Here, we show that the Caire ICH artificial intelligence system enhances radiologists' ICH diagnosis performance.
METHODS: A dataset of non-contrast-enhanced axial cranial computed tomography (CT) scans (n = 532) was labeled by three radiologists for the presence or absence of an ICH; if an ICH was detected, its subtype was identified. After a washout period, the three radiologists reviewed the same dataset with the assistance of the Caire ICH system. Performance was measured with respect to reader agreement, accuracy, sensitivity, and specificity when compared with the ground truth, defined as reader consensus.
RESULTS: Caire ICH improved inter-reader agreement by an average of 5.76% in a dataset with an ICH prevalence of 74.3%. Further, radiologists using Caire ICH detected an average of 18 more ICHs and significantly increased their accuracy by 6.15%, their sensitivity by 4.6%, and their specificity by 10.62%. The Caire ICH system also improved the radiologists' ability to accurately identify the ICH subtypes present.
CONCLUSION: The Caire ICH device significantly improves the performance of a cohort of radiologists. Such a device has the potential to improve patient outcomes and reduce misdiagnosis of ICH.
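The abstract reports inter-reader agreement as an outcome measure without specifying its computation. One common, simple version of that metric (mean pairwise percent agreement across readers) can be sketched as below; the `pairwise_agreement` function and the reader labels are hypothetical illustrations, not the study's actual method or data.

```python
# Illustrative sketch only: mean pairwise percent agreement among readers
# assigning binary ICH labels to the same scans. All labels are hypothetical.
from itertools import combinations

def pairwise_agreement(reads):
    """reads: dict mapping reader name -> list of binary labels over the same scans.
    Returns the mean fraction of scans on which each pair of readers agrees."""
    pairs = list(combinations(reads.values(), 2))
    rates = [
        sum(a == b for a, b in zip(r1, r2)) / len(r1)
        for r1, r2 in pairs
    ]
    return sum(rates) / len(rates)

# Hypothetical labels for 6 scans (1 = ICH present, 0 = absent)
reads = {
    "reader1": [1, 0, 1, 1, 0, 1],
    "reader2": [1, 0, 1, 0, 0, 1],
    "reader3": [1, 1, 1, 1, 0, 1],
}
print(round(pairwise_agreement(reads), 3))  # → 0.778
```

Chance-corrected statistics such as Cohen's or Fleiss' kappa are often preferred over raw percent agreement, especially at the high ICH prevalence (74.3%) reported here, since prevalence inflates agreement expected by chance.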
View details for DOI 10.7759/cureus.30264
View details for PubMedID 36381767
Assessment of Adherence to Reporting Guidelines by Commonly Used Clinical Prediction Models From a Single Vendor: A Systematic Review.
JAMA network open
2022; 5 (8): e2227779
Importance: Various model reporting guidelines have been proposed to ensure that clinical prediction models are reliable and fair. However, no consensus exists about which model details are essential to report, commonalities and differences among reporting guidelines have not been characterized, and how well documentation of deployed models adheres to these guidelines has not been studied.
Objectives: To assess the information requested by model reporting guidelines and whether the documentation for commonly used machine learning models developed by a single vendor provides that information.
Evidence Review: MEDLINE was queried using "machine learning model card" and "reporting machine learning" from November 4 to December 6, 2020. References were reviewed to find additional publications, and publications without specific reporting recommendations were excluded. Similar reporting elements were merged into representative items. Four independent reviewers and 1 adjudicator assessed how often documentation for the most commonly used models developed by a single vendor reported the items.
Findings: From 15 model reporting guidelines, 220 unique items were identified that represented the collective reporting requirements. Although 12 items were commonly requested (by 10 or more guidelines), 77 items were requested by just 1 guideline. Documentation for 12 commonly used models from a single vendor reported a median of 39% (IQR, 37%-43%; range, 31%-47%) of the items from the collective reporting requirements. Many of the commonly requested items had 100% reporting rates, including items concerning outcome definition, area under the receiver operating characteristic curve, internal validation, and intended clinical use. Several items related to reliability, such as external validation, uncertainty measures, and the strategy for handling missing data, were reported half the time or less. Other frequently unreported items related to fairness (summary statistics and subgroup analyses, including for race and ethnicity or sex).
Conclusions and Relevance: These findings suggest that consistent reporting recommendations for clinical predictive models are needed so that model developers can share the information necessary for model deployment. The many published guidelines would, collectively, require reporting more than 200 items. Model documentation from 1 vendor reported the most commonly requested items from model reporting guidelines; however, areas for improvement were identified in items related to model reliability and fairness. This analysis led to feedback to the vendor, which motivated updates to the documentation for future users.
View details for DOI 10.1001/jamanetworkopen.2022.27779
View details for PubMedID 35984654
Building a Learning Health System: Creating an Analytical Workflow for Evidence Generation to Inform Institutional Clinical Care Guidelines.
Applied clinical informatics
2022; 13 (1): 315-321
BACKGROUND: One key aspect of a learning health system (LHS) is utilizing data generated during care delivery to inform clinical care. However, institutional guidelines that utilize observational data are rare and require months to create, making current processes impractical for more urgent scenarios such as those posed by the COVID-19 pandemic. There is a need to rapidly analyze institutional data to drive guideline creation where evidence from randomized controlled trials is unavailable.
OBJECTIVES: This article provides background on the current state of observational data generation in institutional guideline creation and details our institution's experience in creating a novel workflow to (1) demonstrate the value of such a workflow, (2) demonstrate a real-world example, and (3) discuss difficulties encountered and future directions.
METHODS: Utilizing a multidisciplinary team of database specialists, clinicians, and informaticists, we created a workflow for identifying a clinical need, translating it into a queryable format in our clinical data warehouse, creating data summaries, and feeding this information back into clinical guideline creation.
RESULTS: Clinical questions posed by the hospital medicine division were answered in a rapid time frame and informed the creation of institutional guidelines for the care of patients with COVID-19. Setting up the workflow, answering the questions, and producing data summaries required around 300 hours of effort and $300,000 USD.
CONCLUSION: A key component of an LHS is the ability to learn from data generated during care delivery. Examples in the literature are rare; we demonstrate one such example along with thoughts on ideal multidisciplinary team formation and deployment.
View details for DOI 10.1055/s-0042-1743241
View details for PubMedID 35235994