Bio


Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. Until September 2017 she was a group leader at the Autonomous Motion Department (AMD) of the Max Planck Institute (MPI) for Intelligent Systems. Before joining AMD in January 2012, she was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm, where her thesis proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, so that they can provide meaningful feedback for execution and learning. She has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award, and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.

Honors & Awards


  • Research Award, Okawa Foundation (2019)
  • Research Award, Amazon (2019)
  • Early Academic Career Award in Robotics and Automation, Robotics and Automation Society (2019)
  • Early Career Award, Robotics: Science and Systems (2020)
  • Research Fellowship, Alfred P. Sloan Foundation (2023)

All Publications


  • ShaSTA: Modeling Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking IEEE ROBOTICS AND AUTOMATION LETTERS Sadjadpour, T., Li, J., Ambrus, R., Bohg, J. 2024; 9 (5): 4273-4280
  • Design and Control of Roller Grasper V3 for In-Hand Manipulation IEEE TRANSACTIONS ON ROBOTICS Yuan, S., Shao, L., Feng, Y., Sun, J., Xue, T., Yako, C. L., Bohg, J., Salisbury, J. 2024; 40: 4222-4234
  • Text2Motion: from natural language instructions to feasible plans AUTONOMOUS ROBOTS Lin, K., Agia, C., Migimatsu, T., Pavone, M., Bohg, J. 2023; 47 (8): 1345-1365
  • TidyBot: personalized robot assistance with large language models AUTONOMOUS ROBOTS Wu, J., Antonova, R., Kan, A., Lepert, M., Zeng, A., Song, S., Bohg, J., Rusinkiewicz, S., Funkhouser, T. 2023
  • Deep Learning Approaches to Grasp Synthesis: A Review IEEE TRANSACTIONS ON ROBOTICS Newbury, R., Gu, M., Chumbley, L., Mousavian, A., Eppner, C., Leitner, J., Bohg, J., Morales, A., Asfour, T., Kragic, D., Fox, D., Cosgun, A. 2023
  • The OBJECTFOLDER BENCHMARK: Multisensory Learning with Neural and Real Objects Gao, R., Dou, Y., Li, H., Agarwal, T., Bohg, J., Li, Y., Fei-Fei, L., Wu, J., IEEE IEEE COMPUTER SOC. 2023: 17276-17286
  • CARTO: Category and Joint Agnostic Reconstruction of ARTiculated Objects Heppert, N., Irshad, M., Zakharov, S., Liu, K., Ambrus, R., Bohg, J., Valada, A., Kollar, T., IEEE IEEE COMPUTER SOC. 2023: 21201-21210
  • STAP: Sequencing Task-Agnostic Policies Agia, C., Migimatsu, T., Wu, J., Bohg, J., IEEE IEEE. 2023: 7951-7958
  • Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations Heravi, N., Wahid, A., Lynch, C., Florence, P., Armstrong, T., Tompson, J., Sermanet, P., Bohg, J., Dwibedi, D., IEEE IEEE. 2023: 9515-9522
  • KITE: Keypoint-Conditioned Policies for Semantic Manipulation Sundaresan, P., Belkhale, S., Sadigh, D., Bohg, J. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2023
  • In-Hand Manipulation of Unknown Objects with Tactile Sensing for Insertion Pan, C., Lepert, M., Yuan, S., Antonova, R., Bohg, J., IEEE IEEE. 2023: 8765-8771
  • TidyBot: Personalized Robot Assistance with Large Language Models Wu, J., Antonova, R., Kan, A., Lepert, M., Zeng, A., Song, S., Bohg, J., Rusinkiewicz, S., Funkhouser, T., IEEE IEEE. 2023: 3546-3553
  • Active Task Randomization: Learning Robust Skills via Unsupervised Generation of Diverse and Feasible Tasks Fang, K., Migimatsu, T., Mandlekar, A., Fei-Fei, L., Bohg, J., IEEE IEEE. 2023: 1924-1931
  • Learning Tool Morphology for Contact-Rich Manipulation Tasks with Differentiable Simulation Li, M., Antonova, R., Sadigh, D., Bohg, J., IEEE IEEE. 2023: 1859-1865
  • A Bayesian Treatment of Real-to-Sim for Deformable Object Manipulation IEEE ROBOTICS AND AUTOMATION LETTERS Antonova, R., Yang, J., Sundaresan, P., Fox, D., Ramos, F., Bohg, J. 2022; 7 (3): 5819-5826
  • Predicting Hand-Object Interaction for Improved Haptic Feedback in Mixed Reality IEEE ROBOTICS AND AUTOMATION LETTERS Salvato, M., Heravi, N., Okamura, A. M., Bohg, J. 2022; 7 (2): 3851-3857
  • Vision-Only Robot Navigation in a Neural Radiance World IEEE ROBOTICS AND AUTOMATION LETTERS Adamkiewicz, M., Chen, T., Caccavale, A., Gardner, R., Culbertson, P., Bohg, J., Schwager, M. 2022; 7 (2): 4606-4613
  • Whisker-Inspired Tactile Sensing for Contact Localization on Robot Manipulators Lin, M. A., Reyes, E., Bohg, J., Cutkosky, M. R., IEEE IEEE. 2022: 7817-7824
  • DiffCloud: Real-to-Sim from Point Clouds with Differentiable Simulation and Rendering of Deformable Objects Sundaresan, P., Antonova, R., Bohg, J., IEEE IEEE. 2022: 10828-10835
  • Category-Independent Articulated Object Tracking with Factor Graphs Heppert, N., Migimatsu, T., Yi, B., Chen, C., Bohg, J., IEEE IEEE. 2022: 3800-3807
  • Grounding Predicates through Actions Migimatsu, T., Bohg, J., IEEE IEEE. 2022: 3498-3504
  • Symbolic State Estimation with Predicates for Contact-Rich Manipulation Tasks Migimatsu, T., Lian, W., Bohg, J., Schaal, S., IEEE IEEE. 2022: 1702-1709
  • OBJECTFOLDER 2.0: A Multisensory Object Dataset for Sim2Real Transfer Gao, R., Si, Z., Chang, Y., Clarke, S., Bohg, J., Fei-Fei, L., Yuan, W., Wu, J., IEEE IEEE COMPUTER SOC. 2022: 10588-10598
  • Dynamic multi-robot task allocation under uncertainty and temporal constraints AUTONOMOUS ROBOTS Choudhury, S., Gupta, J. K., Kochenderfer, M. J., Sadigh, D., Bohg, J. 2021
  • Concept2Robot: Learning manipulation concepts from instructions and human demonstrations INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH Shao, L., Migimatsu, T., Zhang, Q., Yang, K., Bohg, J. 2021
  • Learning latent actions to control assistive robots AUTONOMOUS ROBOTS Losey, D. P., Jeon, H., Li, M., Srinivasan, K., Mandlekar, A., Garg, A., Bohg, J., Sadigh, D. 2021: 1-33

    Abstract

    Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today's robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot's motion in the x-y plane, in another mode the joystick controls the robot's z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot's high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.

    DOI: 10.1007/s10514-021-10005-w
    Web of Science ID: 000681168800001
    PubMedID: 34366568
    PubMedCentralID: PMC8335729
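
    A minimal, hypothetical sketch of the latent-action idea described in the abstract above (not the authors' implementation): a conditional autoencoder, trained on offline demonstrations, compresses 7-DoF arm actions into a 2-D latent space so that a 2-DoF joystick input can later be decoded into a full arm command. The class name, dimensions, network sizes, and random stand-in data below are all illustrative assumptions.

    # Hypothetical sketch (PyTorch); dimensions, architecture, and data are
    # illustrative assumptions, not the implementation from the paper.
    import torch
    import torch.nn as nn

    ACTION_DIM, STATE_DIM, LATENT_DIM = 7, 10, 2   # 7-DoF arm, 2-DoF joystick

    class LatentActionModel(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder compresses a demonstrated action (conditioned on state) to 2-D.
            self.encoder = nn.Sequential(
                nn.Linear(ACTION_DIM + STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, LATENT_DIM))
            # Decoder maps a 2-D latent action plus state back to a 7-DoF command.
            self.decoder = nn.Sequential(
                nn.Linear(LATENT_DIM + STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, ACTION_DIM))

        def forward(self, action, state):
            z = self.encoder(torch.cat([action, state], dim=-1))
            return self.decoder(torch.cat([z, state], dim=-1))

    model = LatentActionModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Train to reconstruct actions from offline task demonstrations
    # (random tensors stand in for real demonstration data here).
    for _ in range(1000):
        state = torch.randn(32, STATE_DIM)
        action = torch.randn(32, ACTION_DIM)
        loss = nn.functional.mse_loss(model(action, state), action)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At runtime, the 2-DoF joystick input is treated as the latent action
    # and decoded into a 7-DoF arm command for the current state.
    joystick = torch.tensor([[0.3, -0.8]])
    state = torch.randn(1, STATE_DIM)
    arm_command = model.decoder(torch.cat([joystick, state], dim=-1))
    print(arm_command.shape)  # torch.Size([1, 7])

    Conditioning the decoder on the robot state lets the same joystick axis produce different motions in different task contexts, matching the abstract's intuition that, for example, one axis should control stabbing and the other different cutting motions.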

  • How to train your differentiable filter AUTONOMOUS ROBOTS Kloss, A., Martius, G., Bohg, J. 2021
  • OmniHang: Learning to Hang Arbitrary Objects using Contact Point Correspondences and Neural Collision Estimation You, Y., Shao, L., Migimatsu, T., Bohg, J., IEEE IEEE. 2021: 5921-5927
  • Differentiable Factor Graph Optimization for Learning Smoothers Yi, B., Lee, M. A., Kloss, A., Martin-Martin, R., Bohg, J., IEEE IEEE. 2021: 1339-1345
  • Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving Chiu, H., Li, J., Ambrus, R., Bohg, J., IEEE IEEE. 2021: 14227-14233
  • Interpreting Contact Interactions to Overcome Failure in Robot Assembly Tasks Zachares, P. A., Lee, M. A., Lian, W., Bohg, J., IEEE IEEE. 2021: 3410-3417
  • Detect, Reject, Correct: Crossmodal Compensation of Corrupted Sensors Lee, M. A., Tan, M., Zhu, Y., Bohg, J., IEEE IEEE. 2021: 909-916
  • TrajectoTree: Trajectory Optimization Meets Tree Search for Planning Multi-contact Dexterous Manipulation 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Chen, C., Culbertson, P., Lepert, M., Schwager, M., Bohg, J. IEEE. 2021: 8262-8268
  • Combining learned and analytical models for predicting action effects from sensory data INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH Kloss, A., Schaal, S., Bohg, J. 2020
  • Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks IEEE TRANSACTIONS ON ROBOTICS Lee, M. A., Zhu, Y., Zachares, P., Tan, M., Srinivasan, K., Savarese, S., Fei-Fei, L., Garg, A., Bohg, J. 2020; 36 (3): 582–96
  • Object-Centric Task and Motion Planning in Dynamic Environments IEEE ROBOTICS AND AUTOMATION LETTERS Migimatsu, T., Bohg, J. 2020; 5 (2): 844–51
  • Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects IEEE ROBOTICS AND AUTOMATION LETTERS Yan, M., Zhu, Y., Jin, N., Bohg, J. 2020; 5 (2): 2372–79
  • UniGrasp: Learning a Unified Model to Grasp With Multifingered Robotic Hands IEEE ROBOTICS AND AUTOMATION LETTERS Shao, L., Ferreira, F., Jorda, M., Nambiar, V., Luo, J., Solowjow, E., Ojea, J., Khatib, O., Bohg, J. 2020; 5 (2): 2286–93
  • Learning Task-Oriented Grasping From Human Activity Datasets IEEE ROBOTICS AND AUTOMATION LETTERS Kokic, M., Kragic, D., Bohg, J. 2020; 5 (2): 3352–59
  • Concept2Robot: Learning Manipulation Concepts from Instructions and Human Demonstrations Shao, L., Migimatsu, T., Zhang, Q., Yang, K., Bohg, J. MIT PRESS. 2020
  • Accurate Vision-based Manipulation through Contact Reasoning Kloss, A., Bauza, M., Wu, J., Tenenbaum, J. B., Rodriguez, A., Bohg, J., IEEE IEEE. 2020: 6738-6744
  • Learning Hierarchical Control for Robust In-Hand Manipulation Li, T., Srinivasan, K., Meng, M., Yuan, W., Bohg, J., IEEE IEEE. 2020: 8855-8862
  • Learning to Scaffold the Development of Robotic Manipulation Skills Shao, L., Migimatsu, T., Bohg, J., IEEE IEEE. 2020: 5671-5677
  • Learning User-Preferred Mappings for Intuitive Robot Control Li, M., Losey, D. P., Bohg, J., Sadigh, D., IEEE IEEE. 2020: 10960-10967
  • Multimodal Sensor Fusion with Differentiable Filters Lee, M. A., Yi, B., Martin-Martin, R., Savarese, S., Bohg, J., IEEE IEEE. 2020: 10444-10451
  • Learning Topological Motion Primitives for Knot Planning Yan, M., Li, G., Zhu, Y., Bohg, J., IEEE IEEE. 2020: 9457-9464
  • Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal Constraints Choudhury, S., Gupta, J. K., Kochenderfer, M. J., Sadigh, D., Bohg, J. MIT PRESS. 2020
  • Predicting grasp success in the real world - A study of quality metrics and human assessment ROBOTICS AND AUTONOMOUS SYSTEMS Rubert, C., Kappler, D., Bohg, J., Morales, A. 2019; 121
  • Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks Lee, M. A., Zhu, Y., Srinivasan, K., Shah, P., Savarese, S., Fei-Fei, L., Garg, A., Bohg, J., IEEE IEEE. 2019: 8943–50
  • Leveraging Contact Forces for Learning to Grasp Merzic, H., Bogdanovic, M., Kappler, D., Righetti, L., Bohg, J., IEEE IEEE. 2019: 3615–21
  • Learning to Estimate Pose and Shape of Hand-Held Objects from RGB Images Kokic, M., Kragic, D., Bohg, J., IEEE IEEE. 2019: 3980–87
  • Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks Martin-Martin, R., Lee, M. A., Gardner, R., Savarese, S., Bohg, J., Garg, A., IEEE IEEE. 2019: 1010–17
  • MeteorNet: Deep Learning on Dynamic 3D Point Cloud Sequences Liu, X., Yan, M., Bohg, J., IEEE IEEE. 2019: 9245–54
  • Motion-Based Object Segmentation Based on Dense RGB-D Scene Flow IEEE ROBOTICS AND AUTOMATION LETTERS Shao, L., Shah, P., Dwaracherla, V., Bohg, J. 2018; 3 (4): 3797–3804
  • Interactive Perception: Leveraging Action in Perception and Perception in Action IEEE TRANSACTIONS ON ROBOTICS Bohg, J., Hausman, K., Sankaran, B., Brock, O., Kragic, D., Schaal, S., Sukhatme, G. S. 2017; 33 (6): 1273–91
  • Reports on the 2017 AAAI Spring Symposium Series AI MAGAZINE Bohg, J., Boix, X., Chang, N., Chu, V., Churchill, E. F., Fang, F., Feldman, J., Gonzalez, A. J., Kido, T., Lawless, W. F., Montana, J. L., Ontanon, S., Sinapov, J., Sofge, D., Steels, L., Steenson, M., Takadama, K., Yadav, A. 2017; 38 (4): 99–106
  • Probabilistic Articulated Real-Time Tracking for Robot Manipulation IEEE ROBOTICS AND AUTOMATION LETTERS Cifuentes, C., Issac, J., Wuethrich, M., Schaal, S., Bohg, J. 2017; 2 (2): 577–84
  • On the relevance of grasp metrics for predicting grasp success Rubert, C., Kappler, D., Morales, A., Schaal, S., Bohg, J., IEEE. 2017: 265–72
  • Big Data on Robotics BIG DATA Bohg, J., Ciocarlie, M., Civera, J., Kavraki, L. E. 2016; 4 (4): 195-196

    DOI: 10.1089/big.2016.29013.rob
    PubMedID: 27992266

  • Learning Where to Search Using Visual Attention Kloss, A., Kappler, D., Lensch, H. A., Butz, M. V., Schaal, S., Bohg, J., IEEE IEEE. 2016: 5238–45
  • Optimizing for what matters: The Top Grasp Hypothesis Kappler, D., Schaal, S., Bohg, J., IEEE. 2016: 2167–74
  • Robust Gaussian Filtering using a Pseudo Measurement Wuethrich, M., Cifuentes, C., Trimpe, S., Meier, F., Bohg, J., Issac, J., Schaal, S., IEEE IEEE. 2016: 3606–13
  • Robot Arm Pose Estimation by Pixel-wise Regression of Joint Angles Widmaier, F., Kappler, D., Schaal, S., Bohg, J., IEEE. 2016: 616–23
  • Automatic LQR Tuning Based on Gaussian Process Global Optimization Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S., IEEE. 2016: 270–77
  • Depth-Based Object Tracking Using a Robust Gaussian Filter Issac, J., Wuethrich, M., Cifuentes, C., Bohg, J., Trimpe, S., Schaal, S., IEEE. 2016: 608–15
  • Exemplar-based Prediction of Global Object Shape from Local Shape Similarity Bohg, J., Kappler, D., Schaal, S., IEEE. 2016: 3398–3405
  • Leveraging Big Data for Grasp Planning Kappler, D., Bohg, J., Schaal, S., IEEE IEEE COMPUTER SOC. 2015: 4304–11
  • The Coordinate Particle Filter - A novel Particle Filter for High Dimensional Systems Wuethrich, M., Bohg, J., Kappler, D., Pfreundt, C., Schaal, S., IEEE IEEE COMPUTER SOC. 2015: 2454–61
  • Data-Driven Grasp Synthesis-A Survey IEEE TRANSACTIONS ON ROBOTICS Bohg, J., Morales, A., Asfour, T., Kragic, D. 2014; 30 (2): 289–309
  • Three-dimensional object reconstruction of symmetric objects by fusing visual and tactile sensing INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH Ilonen, J., Bohg, J., Kyrki, V. 2014; 33 (2): 321–41
  • Robot Arm Pose Estimation through Pixel-Wise Part Classification Bohg, J., Romero, J., Herzog, A., Schaal, S., IEEE IEEE. 2014: 3143–50
  • Dual Execution of Optimized Contact Interaction Trajectories Toussaint, M., Ratliff, N., Bohg, J., Righetti, L., Englert, P., Schaal, S., IEEE IEEE. 2014: 47–54
  • Fusing Visual and Tactile Sensing for 3-D Object Reconstruction While Grasping Ilonen, J., Bohg, J., Kyrki, V., IEEE IEEE. 2013: 3547–54
  • Probabilistic Object Tracking using a Range Camera Wuethrich, M., Pastor, P., Kalakrishnan, M., Bohg, J., Schaal, S., IEEE. 2013: 3195–3202
  • Visual servoing on unknown objects MECHATRONICS Gratal, X., Romero, J., Bohg, J., Kragic, D. 2012; 22 (4): 423–35
  • Mind the Gap - Robotic Grasping under Incomplete Observation IEEE International Conference on Robotics and Automation Bohg, J., Johnson-Roberson, M., Leon, B., Felip, J., Gratal, X., Bergstrom, N., Kragic, D., Morales, A. 2011
  • Enhanced Visual Scene Understanding through Human-Robot Dialog Johnson-Roberson, M., Bohg, J., Skantze, G., Gustafson, J., Carlson, R., Rasolzadeh, B., Kragic, D., IEEE IEEE. 2011: 3342–48
  • Learning grasping points with shape context ROBOTICS AND AUTONOMOUS SYSTEMS Bohg, J., Kragic, D. 2010; 58 (4): 362–77
  • OpenGRASP: A Toolkit for Robot Grasping Simulation Leon, B., Ulbrich, S., Diankov, R., Puche, G., Przybylski, M., Morales, A., Asfour, T., Moisio, S., Bohg, J., Kuffner, J., Dillmann, R. SPRINGER-VERLAG BERLIN. 2010: 109–20
  • Strategies for Multi-Modal Scene Exploration Bohg, J., Johnson-Roberson, M., Bjorkman, M., Kragic, D., IEEE IEEE. 2010: 4509–15
  • Attention-based Active 3D Point Cloud Segmentation Johnson-Roberson, M., Bohg, J., Bjorkman, M., Kragic, D., IEEE IEEE. 2010: 1165–70
  • Towards Grasp-Oriented Visual Perception for Humanoid Robots Bohg, J., Barck-Holst, C., Huebner, K., Ralph, M., Rasolzadeh, B., Song, D., Kragic, D. WORLD SCIENTIFIC PUBL CO PTE LTD. 2009: 387–434
  • Integration of Visual Cues for Robotic Grasping Bergstrom, N., Bohg, J., Kragic, D. SPRINGER-VERLAG BERLIN. 2009: 245–54