Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University. Her research focuses on enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work lies at the intersection of machine learning and robotic control, including end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that enable fast learning of new concepts and behaviors.
Professor Finn received her Bachelor's degree in Electrical Engineering and Computer Science from MIT and her PhD in Computer Science from UC Berkeley. Her research has been recognized with the ACM Doctoral Dissertation Award, an NSF Graduate Fellowship, a Facebook Fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by media outlets including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities in CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students, building a mentoring program for underrepresented undergraduates across three universities, and leading efforts within the WiML and Berkeley WiCSE communities of women researchers.

Honors & Awards

  • ACM Doctoral Dissertation Award, ACM (2019)
  • TR 35 Under 35 Innovator, MIT Technology Review (2018)
  • C.V. Ramamoorthy Distinguished Research Award, UC Berkeley (2017)

Program Affiliations

  • Symbolic Systems Program

All Publications

  • Chen, A. S., Nam, H., Nair, S., Finn, C. Batch Exploration with Examples for Scalable Robotic Reinforcement Learning. IEEE Robotics and Automation Letters, 2021; 6(3): 4401–4408
  • Thananjeyan, B., Balakrishna, A., Nair, S., Luo, M., Srinivasan, K., Hwang, M., Gonzalez, J. E., Ibarz, J., Finn, C., Goldberg, K. Recovery RL: Safe Reinforcement Learning with Learned Recovery Zones. IEEE Robotics and Automation Letters, 2021; 6(3): 4915–4922
  • Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P., Levine, S. How to train your robot with deep reinforcement learning: lessons we have learned. International Journal of Robotics Research, 2021; 40(4–5): 698–721
  • Chen, A. S., Nair, S., Finn, C. Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos. In: Shell, D. A., Toussaint, M., Hsieh, M. A. (eds.), Robotics: Science and Systems (RSS) Foundation, 2021
  • Yu, L., Yu, T., Finn, C., Ermon, S. Meta-Inverse Reinforcement Learning with Probabilistic Context Variables. In: Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., Garnett, R. (eds.), Neural Information Processing Systems (NeurIPS), 2019
  • Jabri, A., Hsu, K., Eysenbach, B., Gupta, A., Levine, S., Finn, C. Unsupervised Curricula for Visual Meta-Reinforcement Learning. In: Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., Garnett, R. (eds.), Neural Information Processing Systems (NeurIPS), 2019
  • Yu, T., Shevchuk, G., Sadigh, D., Finn, C. Unsupervised Visuomotor Control through Distributional Planning Networks. In: Bicchi, A., Kress-Gazit, H., Hutchinson, S. (eds.), MIT Press, 2019
  • Yu, T., Abbeel, P., Levine, S., Finn, C. One-Shot Composition of Vision-Based Skills from Demonstration. IEEE, 2019: 2643–2650