Honors & Awards


  • Alumni Impact Award, University of Washington Computer Science (2020)
  • Young Investigator Award, Office of Naval Research (2015)
  • CAREER Award, NSF (2014)
  • Faculty Fellowship, Microsoft (2012)

Program Affiliations


  • Symbolic Systems Program

Professional Education


  • PhD, Massachusetts Institute of Technology, Computer Science (2009)

All Publications


  • Reinforcement learning tutor better supported lower performers in a math task MACHINE LEARNING Ruan, S., Nie, A., Steenbergen, W., He, J., Zhang, J. Q., Guo, M., Liu, Y., Nguyen, K., Wang, C. Y., Ying, R., Landay, J. A., Brunskill, E. 2024
  • Texting and tutoring: Short-term K-3 reading interventions during the pandemic JOURNAL OF EDUCATIONAL RESEARCH Silverman, R. D., Keane, K., Hsieh, H., Southerton, E., Scott, R. C., Brunskill, E. 2023
  • Constraint Sampling Reinforcement Learning: Incorporating Expertise For Faster Learning Mu, T., Theocharous, G., Arbour, D., Brunskill, E. ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2022: 7841-7849
  • Power Constrained Bandits. Proceedings of machine learning research Yao, J., Brunskill, E., Pan, W., Murphy, S., Doshi-Velez, F. 2021; 149: 209-259

    Abstract

    Contextual bandits often provide simple and effective personalization in decision making problems, making them popular tools to deliver personalized interventions in mobile health as well as other health applications. However, when bandits are deployed in the context of a scientific study, e.g. a clinical trial to test whether a mobile health intervention is effective, the aim is not only to personalize for an individual, but also to determine, with sufficient statistical power, whether or not the system's intervention is effective. It is essential to assess the effectiveness of the intervention before broader deployment for better resource allocation. The two objectives are often pursued under different model assumptions, making it hard to determine how achieving personalization and achieving statistical power affect each other. In this work, we develop general meta-algorithms to modify existing algorithms such that sufficient power is guaranteed while still improving each user's well-being. We also demonstrate that our meta-algorithms are robust to various model mis-specifications possibly appearing in statistical studies, thus providing a valuable tool to study designers.

    View details for PubMedID 34927078
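
    Illustrative sketch (not from the paper): the meta-algorithms above wrap an existing contextual bandit algorithm so that the deployed study retains enough statistical power for the after-study treatment-effect analysis. One common mechanism in this spirit is probability clipping, i.e., flooring each action's selection probability away from 0 and 1 so every intervention keeps being assigned often enough for its effect to be estimated. The Python sketch below assumes that mechanism with a two-action linear Thompson-sampling bandit; the names clip_probability and ClippedLinearTS are hypothetical, not the authors' implementation.

      # Illustrative only: probability clipping in a two-action linear
      # Thompson-sampling contextual bandit, so that an after-study
      # treatment-effect test retains statistical power.
      import numpy as np

      def clip_probability(p1, floor=0.1):
          # Keep P(action 1) inside [floor, 1 - floor] so both actions keep
          # being assigned and their effects remain estimable.
          return min(max(p1, floor), 1.0 - floor)

      class ClippedLinearTS:
          def __init__(self, dim, floor=0.1, n_draws=200, seed=0):
              self.A = [np.eye(dim), np.eye(dim)]        # per-action precision matrices
              self.b = [np.zeros(dim), np.zeros(dim)]    # per-action response sums
              self.floor, self.n_draws = floor, n_draws
              self.rng = np.random.default_rng(seed)

          def _p_action1(self, x):
              # Monte-Carlo estimate of P(action 1 looks better) under the posterior.
              wins = 0
              for _ in range(self.n_draws):
                  values = []
                  for a in (0, 1):
                      cov = np.linalg.inv(self.A[a])
                      theta = self.rng.multivariate_normal(cov @ self.b[a], cov)
                      values.append(theta @ x)
                  wins += values[1] > values[0]
              return wins / self.n_draws

          def act(self, x):
              p1 = clip_probability(self._p_action1(x), self.floor)
              a = int(self.rng.random() < p1)
              return a, (p1 if a == 1 else 1.0 - p1)     # chosen action and its propensity

          def update(self, x, a, reward):
              self.A[a] += np.outer(x, x)
              self.b[a] += reward * x

    Because every logged action carries a propensity bounded away from 0 and 1, a standard inverse-propensity-weighted treatment-effect test can still be run at the end of the study; the floor trades some per-user reward for that guaranteed power.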

  • EnglishBot: An AI-Powered Conversational System for Second Language Learning Ruan, S., Jiang, L., Xu, Q., Davis, G. M., Liu, Z., Brunskill, E., Landay, J. A. ASSOC COMPUTING MACHINERY. 2021: 434-444
  • Automatic Adaptive Sequencing in a Webgame Mu, T., Wang, S., Andersen, E., Brunskill, E., Cristea, A. I., Troussas, C. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 430-438
  • Learning When-to-Treat Policies JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION Nie, X., Brunskill, E., Wager, S. 2020
  • Scaling up behavioral science interventions in online education. Proceedings of the National Academy of Sciences of the United States of America Kizilcec, R. F., Reich, J., Yeomans, M., Dann, C., Brunskill, E., Lopez, G., Turkay, S., Williams, J. J., Tingley, D. 2020

    Abstract

    Online education is rapidly expanding in response to rising demand for higher and continuing education, but many online students struggle to achieve their educational goals. Several behavioral science interventions have shown promise in raising student persistence and completion rates in a handful of courses, but evidence of their effectiveness across diverse educational contexts is limited. In this study, we test a set of established interventions over 2.5 years, with one-quarter million students, from nearly every country, across 247 online courses offered by Harvard, the Massachusetts Institute of Technology, and Stanford. We hypothesized that the interventions would produce medium-to-large effects as in prior studies, but this is not supported by our results. Instead, using an iterative scientific process of cyclically preregistering new hypotheses in between waves of data collection, we identified individual, contextual, and temporal conditions under which the interventions benefit students. Self-regulation interventions raised student engagement in the first few weeks but not final completion rates. Value-relevance interventions raised completion rates in developing countries to close the global achievement gap, but only in courses with a global gap. We found minimal evidence that state-of-the-art machine learning methods can forecast the occurrence of a global gap or learn effective individualized intervention policies. Scaling behavioral science interventions across various online learning contexts can reduce their average effectiveness by an order of magnitude. However, iterative scientific investigations can uncover what works where for whom.

    View details for DOI 10.1073/pnas.1921417117

    View details for PubMedID 32541050

  • Sublinear Optimal Policy Value Estimation in Contextual Bandits Kong, W., Valiant, G., Brunskill, E., Chiappa, S., Calandra, R. ADDISON-WESLEY PUBL CO. 2020: 4377–86
  • Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy Keramati, R., Tamkin, A., Dann, C., Brunskill, E. ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2020: 4436-4443
  • Fake It Till You Make It: Learning-Compatible Performance Support Bragg, J., Brunskill, E., Adams, R. P., Gogate JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2020: 915-924
  • Off-Policy Policy Gradient with State Distribution Correction Liu, Y., Swaminathan, A., Agarwal, A., Brunskill, E., Adams, R. P., Gogate JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2020: 1180-1190
  • Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions Gottesman, O., Futoma, J., Liu, Y., Parbhoo, S., Celi, L., Brunskill, E., Doshi-Velez, F., Daume, H., Singh, A. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2020
  • Supporting Children's Math Learning with Feedback-Augmented Narrative Technology Ruan, S., He, J., Ying, R., Burkle, J., Hakim, D., Wang, A., Yin, Y., Zhou, L., Xu, Q., AbuHashem, A., Dietz, G., Murnane, E. L., Brunskill, E., Landay, J. A. ASSOC COMPUTING MACHINERY. 2020: 567-580
  • Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning JOURNAL OF MACHINE LEARNING RESEARCH Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., Pineau, J. 2020; 21
  • Frequentist Regret Bounds for Randomized Least-Squares Value Iteration Zanette, A., Brandfonbrener, D., Brunskill, E., Pirotta, M., Lazaric, A., Chiappa, S., Calandra, R. ADDISON-WESLEY PUBL CO. 2020: 1954–63
  • Where's the Reward?: A Review of Reinforcement Learning for Instructional Sequencing INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION Doroudi, S., Aleven, V., Brunskill, E. 2019; 29 (4): 568–620
  • Preventing undesirable behavior of intelligent machines. Science (New York, N.Y.) Thomas, P. S., Castro da Silva, B., Barto, A. G., Giguere, S., Brun, Y., Brunskill, E. 2019; 366 (6468): 999–1004

    Abstract

    Intelligent machines using machine learning algorithms are ubiquitous, ranging from simple data analysis and pattern recognition tools to complex systems that achieve superhuman performance on various tasks. Ensuring that they do not exhibit undesirable behavior (that they do not, for example, cause harm to humans) is therefore a pressing problem. We propose a general and flexible framework for designing machine learning algorithms. This framework simplifies the problem of specifying and regulating undesirable behavior. To show the viability of this framework, we used it to create machine learning algorithms that precluded the dangerous behavior caused by standard machine learning algorithms in our experiments. Our framework for designing machine learning algorithms simplifies the safe and responsible application of machine learning.

    View details for DOI 10.1126/science.aag3311

    View details for PubMedID 31754000
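
    Illustrative sketch (not the paper's code): the framework summarized above only returns a trained model when a high-confidence bound certifies that the user-specified undesirable behavior stays below a tolerated level, and otherwise reports that no solution was found. A minimal candidate-selection / safety-test loop in that spirit is sketched below, with a Hoeffding bound standing in for the confidence intervals one would use in practice; seldonian_train, train_candidate, and g_samples are hypothetical names.

      # Illustrative only: candidate selection on one data split, then a
      # high-confidence safety test on a held-out split before returning
      # the model. The callables are placeholders, not the paper's API.
      import numpy as np

      NO_SOLUTION_FOUND = None

      def hoeffding_upper_bound(samples, delta, value_range=2.0):
          # (1 - delta)-confidence upper bound on the mean of samples that lie
          # in an interval of width value_range (e.g. [-1, 1]).
          n = len(samples)
          return float(np.mean(samples)) + value_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

      def seldonian_train(data, train_candidate, g_samples, delta=0.05, split=0.6, seed=0):
          # data: NumPy array of examples.
          # train_candidate: data -> model parameters theta.
          # g_samples: (theta, data) -> per-example constraint values in [-1, 1]
          #   whose true mean must be <= 0 for theta to count as safe.
          rng = np.random.default_rng(seed)
          idx = rng.permutation(len(data))
          n_cand = int(split * len(data))
          cand_data, safety_data = data[idx[:n_cand]], data[idx[n_cand:]]

          theta = train_candidate(cand_data)                        # candidate selection
          ub = hoeffding_upper_bound(g_samples(theta, safety_data), delta)
          return theta if ub <= 0.0 else NO_SOLUTION_FOUND          # safety test

    Allowing the procedure to return NO_SOLUTION_FOUND rather than an unsafe model is the key design choice: abstention is what makes a high-probability guarantee on the behavioral constraint possible.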

  • Fairer but Not Fair Enough: On the Equitability of Knowledge Tracing Doroudi, S., Brunskill, E., Azcona, D., Chung, R. ASSOC COMPUTING MACHINERY. 2019: 335–39
  • PLOTS: Procedure Learning from Observations using Subtask Structure Mu, T., Goel, K., Brunskill, E. ASSOC COMPUTING MACHINERY. 2019: 1007–15
  • Almost Horizon-Free Structure-Aware Best Policy Identification with a Generative Model Zanette, A., Kochenderfer, M. J., Brunskill, E., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Limiting Extrapolation in Linear Approximate Value Iteration Zanette, A., Lazaric, A., Kochenderfer, M. J., Brunskill, E., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Offline Contextual Bandits with High Probability Fairness Guarantees Metevier, B., Giguere, S., Brockman, S., Kobren, A., Brun, Y., Brunskill, E., Thomas, P. S., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Value Driven Representation for Human-in-the-Loop Reinforcement Learning Keramati, R., Brunskill, E. ASSOC COMPUTING MACHINERY. 2019: 176–80
  • QuizBot: A Dialogue-based Adaptive Learning System for Factual Knowledge Ruan, S., Jiang, L., Xu, J., Tham, B., Qiu, Z., Zhu, Y., Murnane, E. L., Brunskill, E., Landay, J. A. ASSOC COMPUTING MACHINERY. 2019
  • Key Phrase Extraction for Generating Educational Question-Answer Pairs Willis, A., Davis, G., Ruan, S., Manoharan, L., Landay, J., Brunskill, E. ASSOC COMPUTING MACHINERY. 2019
  • BookBuddy: Turning Digital Materials Into Interactive Foreign Language Lessons Through a Voice Chatbot Ruan, S., Willis, A., Xu, Q., Davis, G. M., Jiang, L., Brunskill, E., Landay, J. A. ASSOC COMPUTING MACHINERY. 2019
  • Shared Autonomy for an Interactive AI System Zhou, S., Mu, T., Goel, K., Bernstein, M., Brunskill, E. ASSOC COMPUTING MACHINERY. 2018: 20–22
  • Representation Balancing MDPs for Off-Policy Policy Evaluation Liu, Y., Gottesman, O., Raghu, A., Komorowski, M., Faisal, A., Doshi-Velez, F., Brunskill, E., Bengio, S., Wallach, H., Larochelle, H., Grauman, K., CesaBianchi, N., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2018
  • Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning Dann, C., Lattimore, T., Brunskill, E., Guyon, Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2017
  • Regret Minimization in MDPs with Options without Prior Knowledge Fruit, R., Pirotta, M., Lazaric, A., Brunskill, E., Guyon, Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2017
  • Using Options and Covariance Testing for Long Horizon Off-Policy Policy Evaluation Guo, Z., Thomas, P. S., Brunskill, E., Guyon, Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2017