Bio


Chelsea Finn is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, and the William George and Ida Mary Hoover Faculty Fellow. Professor Finn's research interests lie in enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work sits at the intersection of machine learning and robotic control, including topics such as end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that enable fast learning of new concepts and behaviors. Professor Finn received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley. Her research has been recognized with the ACM Doctoral Dissertation Award, an NSF graduate fellowship, a Facebook fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley for underprivileged high school students and a mentoring program for underrepresented undergraduates across three universities, and by leading efforts within the WiML and Berkeley WiCSE communities of women researchers.

Website: https://ai.stanford.edu/~cbfinn

Academic Appointments


  • Assistant Professor, Computer Science and Electrical Engineering, Stanford University

Honors & Awards


  • Research Fellowship, Alfred P. Sloan Foundation (2023)
  • Early Academic Career Award in Robotics and Automation, IEEE RAS (2022)
  • Young Investigator Award, Office of Naval Research (2021)
  • Microsoft Faculty Fellowship, Microsoft (2020)
  • ACM Doctoral Dissertation Award, ACM (2019)
  • 35 Under 35 Innovator, MIT Technology Review (2018)
  • C.V. Ramamoorthy Distinguished Research Award, UC Berkeley (2017)

Program Affiliations


  • Symbolic Systems Program

All Publications


  • Bayesian Embeddings for Few-Shot Open World Recognition IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE Willes, J., Harrison, J., Harakeh, A., Finn, C., Pavone, M., Waslander, S. L. 2024; 46 (3): 1513-1529

    Abstract

    As autonomous decision-making agents move from narrow operating environments to unstructured worlds, learning systems must move from a closed-world formulation to an open-world and few-shot setting in which agents continuously learn new classes from small amounts of information. This stands in stark contrast to modern machine learning systems that are typically designed with a known set of classes and a large number of examples for each class. In this work we extend embedding-based few-shot learning algorithms to the open-world recognition setting. We combine Bayesian non-parametric class priors with an embedding-based pre-training scheme to yield a highly flexible framework which we refer to as few-shot learning for open world recognition (FLOWR). We benchmark our framework on open-world extensions of the common MiniImageNet and TieredImageNet few-shot learning datasets. Our results show, compared to prior methods, strong classification accuracy performance and up to a 12% improvement in H-measure (a measure of novel class detection) from our non-parametric open-world few-shot learning scheme.

    View details for DOI 10.1109/TPAMI.2022.3201541

    View details for PubMedID 36063507
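
    The FLOWR framework summarized in the abstract pairs an embedding network with a Bayesian non-parametric prior over classes, so that queries which resemble no known class can spawn a new class rather than being forced into an existing one. The sketch below is only a rough, hypothetical illustration of that open-world behavior, not the paper's implementation: nearest-prototype classification with a fixed cosine-similarity threshold stands in for the non-parametric prior, and the function name, threshold value, and random "embeddings" are assumptions made here for the example.

    import numpy as np

    def classify_or_create(query_emb, prototypes, counts, threshold=0.5):
        """Assign a query embedding to its nearest class prototype, or
        register a new class when no prototype is similar enough.

        prototypes: dict mapping class id -> unit-norm mean embedding
        counts: dict mapping class id -> number of examples seen so far
        """
        q = query_emb / np.linalg.norm(query_emb)
        if prototypes:
            sims = {c: float(q @ p) for c, p in prototypes.items()}
            best = max(sims, key=sims.get)
            if sims[best] >= threshold:
                # Fold the query into the matched prototype (running mean).
                n = counts[best]
                updated = (prototypes[best] * n + q) / (n + 1)
                prototypes[best] = updated / np.linalg.norm(updated)
                counts[best] = n + 1
                return best
        # Below threshold (or no classes yet): treat the query as a novel class.
        new_id = len(prototypes)
        prototypes[new_id] = q
        counts[new_id] = 1
        return new_id

    # Toy usage with random vectors standing in for learned embeddings.
    rng = np.random.default_rng(0)
    protos, counts = {}, {}
    for emb in rng.normal(size=(5, 16)):
        print(classify_or_create(emb, protos, counts))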

  • A Fast and Accurate Machine Learning Autograder for the Breakout Assignment Liu, E., Yuan, D., Ahmed, A., Cornwall, E., Woodrow, J., Burns, K., Nie, A., Brunskill, E., Piech, C. ASSOC COMPUTING MACHINERY. 2024: 736-742
  • NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via Novel-View Synthesis Zhou, A., Kim, M., Wang, L., Florence, P., Finn, C. IEEE COMPUTER SOC. 2023: 17907-17917
  • Disentanglement via Latent Quantization Hsu, K., Dorrell, W., Whittington, J. R., Wu, J., Finn, C., Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
  • Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning Nakamoto, M., Zhai, Y., Singh, A., Mark, M., Ma, Y., Finn, C., Kumar, A., Levine, S., Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
  • Neural Functional Transformers Zhou, A., Yang, K., Jiang, Y., Burns, K., Xu, W., Sokota, S., Kolter, J., Finn, C., Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
  • Permutation Equivariant Neural Functionals Zhou, A., Yang, K., Burns, K., Cardace, A., Jiang, Y., Sokota, S., Kolter, J., Finn, C., Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
  • Direct Preference Optimization: Your Language Model is Secretly a Reward Model Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C. D., Finn, C., Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
  • Train Offline, Test Online: A Real Robot Learning Benchmark Zhou, G., Dean, V., Srirama, M., Rajeswaran, A., Pari, J., Hatch, K., Jain, A., Yu, T., Abbeel, P., Pinto, L., Finn, C., Gupta, A. IEEE. 2023: 9197-9203
  • Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models Henderson, P., Mitchell, E., Manning, C. D., Jurafsky, D., Finn, C. ASSOC COMPUTING MACHINERY. 2023: 287-296
  • Play it by Ear: Learning Skills amidst Occlusion through Audio-Visual Imitation Learning Du, M., Lee, O. Y., Nair, S., Finn, C., Hauser, K., Shell, D., Huang, S. RSS FOUNDATION-ROBOTICS SCIENCE & SYSTEMS FOUNDATION. 2022
  • Memory-Based Model Editing at Scale Mitchell, E., Lin, C., Bosselut, A., Manning, C. D., Finn, C., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
  • A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning Sharma, A., Ahmad, R., Finn, C., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022: 19645-19657
  • Robust Policy Learning over Multiple Uncertainty Sets Xie, A., Sodhani, S., Finn, C., Pineau, J., Zhang, A., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
  • How to Leverage Unlabeled Data in Offline Reinforcement Learning Yu, T., Kumar, A., Chebotar, Y., Hausman, K., Finn, C., Levine, S., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
  • Improving Out-of-Distribution Robustness via Selective Augmentation Yao, H., Wang, Y., Li, S., Zhang, L., Liang, W., Zou, J., Finn, C., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
  • Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations Zhang, M., Sohoni, N. S., Zhang, H. R., Finn, C., Re, C., Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., Sabato, S. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2022
  • Training and Evaluation of Deep Policies Using Reinforcement Learning and Generative Models JOURNAL OF MACHINE LEARNING RESEARCH Ghadirzadeh, A., Poklukar, P., Arndt, K., Finn, C., Kyrki, V., Kragic, D., Bjorkman, M. 2022; 23
  • Bridge Data: Boosting Generalization of Robotic Skills with Cross-Domain Datasets Ebert, F., Yang, Y., Schmeckpeper, K., Bucher, B., Georgakis, G., Daniilidis, K., Finn, C., Levine, S., Hauser, K., Shell, D., Huang, S. RSS FOUNDATION-ROBOTICS SCIENCE & SYSTEMS FOUNDATION. 2022
  • Batch Exploration With Examples for Scalable Robotic Reinforcement Learning IEEE ROBOTICS AND AUTOMATION LETTERS Chen, A. S., Nam, H., Nair, S., Finn, C. 2021; 6 (3): 4401-4408
  • Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones IEEE ROBOTICS AND AUTOMATION LETTERS Thananjeyan, B., Balakrishna, A., Nair, S., Luo, M., Srinivasan, K., Hwang, M., Gonzalez, J. E., Ibarz, J., Finn, C., Goldberg, K. 2021; 6 (3): 4915-4922
  • How to train your robot with deep reinforcement learning: lessons we have learned INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P., Levine, S. 2021; 40 (4-5): 698-721
  • WILDS: A Benchmark of in-the-Wild Distribution Shifts Koh, P., Sagawa, S., Marklund, H., Xie, S., Zhang, M., Balsubramani, A., Hu, W., Yasunaga, M., Phillips, R., Gao, I., Lee, T., David, E., Stavness, I., Guo, W., Earnshaw, B. A., Haque, I. S., Beery, S., Leskovec, J., Kundaje, A., Pierson, E., Levine, S., Finn, C., Liang, P., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Bayesian Meta-Learning for Few-Shot Policy Adaptation Across Robotic Platforms Ghadirzadeh, A., Chen, X., Poklukar, P., Finn, C., Bjorkman, M., Kragic, D. IEEE. 2021: 1274-1280
  • Offline Meta-Reinforcement Learning with Advantage Weighting Mitchell, E., Rafailov, R., Peng, X., Levine, S., Finn, C., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Deep Reinforcement Learning amidst Continual Structured Non-Stationarity Xie, A., Harrison, J., Finn, C., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Learning Generalizable Robotic Reward Functions from "In-The-Wild" Human Videos Chen, A. S., Nair, S., Finn, C., Shell, D. A., Toussaint, M., Hsieh, M. A. RSS FOUNDATION-ROBOTICS SCIENCE & SYSTEMS FOUNDATION. 2021
  • Decoupling Exploration and Exploitation for Meta-Reinforcement Learning without Sacrifices Liu, E., Raghunathan, A., Liang, P., Finn, C., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Just Train Twice: Improving Group Robustness without Training Group Information Liu, E., Haghgoo, B., Chen, A. S., Raghunathan, A., Koh, P., Sagawa, S., Liang, P., Finn, C., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Catformer: Designing Stable Transformers via Sensitivity Analysis Davis, J., Gu, A., Choromanski, K., Dao, T., Re, C., Finn, C., Liang, P., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills Chebotar, Y., Hausman, K., Lu, Y., Xiao, T., Kalashnikov, D., Varley, J., Irpan, A., Eysenbach, B., Julian, R., Finn, C., Levine, S., Meila, M., Zhang, T. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021
  • Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction Wu, B., Nair, S., Martin-Martin, R., Fei-Fei, L., Finn, C. IEEE COMPUTER SOC. 2021: 2318-2328
  • Scalable Multi-Task Imitation Learning with Autonomous Improvement Singh, A., Jang, E., Irpan, A., Kappler, D., Dalal, M., Levine, S., Khansari, M., Finn, C. IEEE. 2020: 2167-2173
  • OmniTact: A Multi-Directional High-Resolution Touch Sensor Padmanabha, A., Ebert, F., Tian, S., Calandra, R., Finn, C., Levine, S. IEEE. 2020: 618-624
  • Meta-Inverse Reinforcement Learning with Probabilistic Context Variables Yu, L., Yu, T., Finn, C., Ermon, S., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Unsupervised Curricula for Visual Meta-Reinforcement Learning Jabri, A., Hsu, K., Eysenbach, B., Gupta, A., Levine, S., Finn, C., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • Unsupervised Visuomotor Control through Distributional Planning Networks Yu, T., Shevchuk, G., Sadigh, D., Finn, C., Bicchi, A., Kress-Gazit, H., Hutchinson, S. MIT PRESS. 2019
  • One-Shot Composition of Vision-Based Skills from Demonstration Yu, T., Abbeel, P., Levine, S., Finn, C. IEEE. 2019: 2643-2650