Bio


Shivam Vedak, MD, MBA, is a Clinical Assistant Professor in the Division of Hospital Medicine at Stanford University School of Medicine. He earned his Bachelor of Science in Biology-Neuroscience from the Schreyer Honors College at The Pennsylvania State University, followed by a dual MD/MBA from the University of Illinois at Chicago (UIC). He completed his residency in Internal Medicine at UIC, where he was honored as the institution’s American College of Physicians Outstanding Resident of the Year in 2022, and subsequently completed a fellowship in Clinical Informatics at Stanford.

Clinically, Dr. Vedak practices as a surgical co-management hospitalist at Stanford Health Care (SHC). His academic and operational work centers on the practical integration of generative artificial intelligence (AI) into clinical workflows, ranging from safe and effective deployment and monitoring to the broader education of healthcare workers on these rapidly evolving technologies. He is frequently invited to speak at national conferences, academic institutions, and professional events, offering both engaging interactive workshops and structured didactic sessions on the fundamentals of large language models (LLMs) and evidence-based prompting techniques.

Clinical Focus


  • Internal Medicine

Academic Appointments


  • Clinical Assistant Professor, Medicine

Administrative Appointments


  • Medical Director for AI & Digital Health, Stanford Health Care (2025 - Present)
  • Associate Director for AI in Medical Education, Stanford University School of Medicine (2025 - Present)

Honors & Awards


  • Outstanding Resident of the Year, UIC Internal Medicine Residency Program, American College of Physicians (2022)
  • Interdisciplinary Honors in Biology and Psychology, Schreyer Honors College

Professional Education


  • Board Certification: American Board of Preventive Medicine, Clinical Informatics (2026)
  • Fellowship: Stanford University Clinical Informatics Fellowship, CA (2025)
  • Board Certification: American Board of Internal Medicine, Internal Medicine (2023)
  • Residency: University of Illinois at Chicago Internal Medicine Residency, IL (2023)
  • MBA: University of Illinois at Chicago Liautaud Graduate School of Business, Finance and Business Analytics (2020)
  • Medical Education: University of Illinois at Chicago College of Medicine, IL (2020)
  • BS: The Pennsylvania State University Schreyer Honors College, Biology-Neuroscience

All Publications


  • Artificial intelligence-generated draft replies to patient messages in pediatrics. JAMIA Open Liang, A. S., Vedak, S., Dussaq, A., Yao, D., Villarreal, J. A., Thomas, S., Chen, N., Townsend, T., Pageler, N. M., Morse, K. 2025; 8 (6): ooaf159

    Abstract

    Objectives: This study describes the utilization and experiences of artificial intelligence (AI)-generated draft responses to patient messages in pediatric ambulatory clinicians and contextualizes their experiences in relation to those of adult specialty clinicians.

    Materials and Methods: A prospective pilot was conducted from September 2023 to August 2024 in 2 pediatric clinics (General Pediatric and Adolescent Medicine) and 2 obstetric clinics (Reproductive Endocrinology and Infertility and General Obstetrics) within an academic health system in Northern California. Participants included physician, nurse, and medical assistant volunteers. The intervention involved a feature utilizing large language models embedded in the electronic health record to generate draft responses. Proportion of AI-generated draft used was collected, as were prepilot and follow-up surveys.

    Results: A total of 61 clinicians (26 pediatric, 35 obstetric) enrolled, with 46 (75%) completing both surveys. Pediatric clinicians utilized 13.3% (95% CI, 12.3%-14.4%) of AI-generated drafts, and usage rates when responding to patients vs their proxies were similar (15% vs 12.9%, P=.24). Despite using AI-generated drafts significantly less than obstetric clinicians (18.3% [17.2%-19.5%], P<.0001), pediatric clinicians reported a significant reduction in perceived task load (NASA Task Load Index: 59.9-50.9, P=.04) and were more likely to recommend the tool (LTR: 7.0 vs 5.2, P=.04).

    Discussion and Conclusion: Pediatric clinicians used AI-generated drafts at a rate within previously reported ranges in adult specialties and experienced utility. These findings suggest this tool has potential for enhancing efficiency and reducing task load in pediatric care.

    View details for DOI 10.1093/jamiaopen/ooaf159

    View details for PubMedID 41293120

  • Answering real-world clinical questions using large language model, retrieval-augmented generation, and agentic systems. Digital Health Low, Y. S., Jackson, M. L., Hyde, R. J., Brown, R. E., Sanghavi, N. M., Baldwin, J. D., Pike, C. W., Muralidharan, J., Hui, G., Alexander, N., Hassan, H., Nene, R. V., Pike, M., Pokrzywa, C. J., Vedak, S., Yan, A. P., Yao, D. H., Zipursky, A. R., Dinh, C., Ballentine, P., Derieg, D. C., Polony, V., Chawdry, R. N., Davies, J., Hyde, B. B., Shah, N. H., Gombar, S. 2025; 11: 20552076251348850

    Abstract

    The practice of evidence-based medicine can be challenging when relevant data are lacking or difficult to contextualize for a specific patient. Large language models (LLMs) could potentially address both challenges by summarizing published literature or generating new studies using real-world data.

    We submitted 50 clinical questions to five LLM-based systems: OpenEvidence, which uses an LLM for retrieval-augmented generation (RAG); ChatRWD, which uses an LLM as an interface to a data extraction and analysis pipeline; and three general-purpose LLMs (ChatGPT-4, Claude 3 Opus, Gemini 1.5 Pro). Nine independent physicians evaluated the answers for relevance, quality of supporting evidence, and actionability (i.e., sufficient to justify or change clinical practice).

    General-purpose LLMs rarely produced relevant, evidence-based answers (2-10% of questions). In contrast, RAG-based and agentic LLM systems, respectively, produced relevant, evidence-based answers for 24% (OpenEvidence) to 58% (ChatRWD) of questions. OpenEvidence produced actionable results for 48% of questions with existing evidence, compared to 37% for ChatRWD and <5% for the general-purpose LLMs. ChatRWD provided actionable results for 52% of questions that lacked existing literature compared to <10% for other LLMs.

    Special-purpose LLM systems greatly outperformed general-purpose LLMs in producing answers to clinical questions. The RAG-based LLM (OpenEvidence) performed well when existing data were available, while only the agentic ChatRWD was able to provide actionable answers when preexisting studies were lacking. Synergistic systems combining RAG-based evidence summarization and agentic generation of novel evidence could improve the availability of pertinent evidence for patient care.

    View details for DOI 10.1177/20552076251348850

    View details for PubMedID 40510193

    View details for PubMedCentralID PMC12159471

  • The VITALS Framework: Empowering Programs to Leverage Health Information Technology for Trainee-Led Health Care Decarbonization and Climate Adaptation. Journal of Graduate Medical Education Vedak, S., DeTata, S. R., Sarabu, C., Leitner, S., Outterson, R., Li, R., Fayanju, O. 2024; 16 (6 Suppl): 28-34

    View details for DOI 10.4300/JGME-D-24-00067.1

    View details for PubMedID 39677901

    View details for PubMedCentralID PMC11644571

  • Perspectives on Artificial Intelligence-Generated Responses to Patient Messages. JAMA Network Open Kim, J., Chen, M. L., Rezaei, S. J., Liang, A. S., Seav, S. M., Onyeka, S., Lee, J. J., Vedak, S. C., Mui, D., Lal, R. A., Pfeffer, M. A., Sharp, C., Pageler, N. M., Asch, S. M., Linos, E. 2024; 7 (10): e2438535

    View details for DOI 10.1001/jamanetworkopen.2024.38535

    View details for PubMedID 39412810

  • Using a Large Language Model to Identify Adolescent Patient Portal Account Access by Guardians. JAMA Network Open Liang, A. S., Vedak, S., Dussaq, A., Yao, D. H., Morse, K., Ip, W., Pageler, N. M. 2024; 7 (6): e2418454

    View details for DOI 10.1001/jamanetworkopen.2024.18454

    View details for PubMedID 38916895

  • Perceptual Similarities among Wallpaper Group Exemplars. Symmetry Kohler, P. J., Vedak, S., Gilmore, R. O. 2022; 14 (5)