Krishnan Vijay Srinivasan
Ph.D. Student in Computer Science, admitted Autumn 2018
All Publications
-
Learning latent actions to control assistive robots
AUTONOMOUS ROBOTS
2021: 1–33
Abstract
Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today's robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot's motion in the x-y plane, in another mode the joystick controls the robot's z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot's high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
DOI: 10.1007/s10514-021-10005-w
Web of Science ID: 000681168800001
PubMed ID: 34366568
PubMed Central ID: PMC8335729
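To make the latent-action idea in the abstract above concrete, here is a minimal sketch of one way to learn such an embedding: a conditional autoencoder that compresses demonstrated 7-DoF robot actions into a 2-DoF latent space and decodes joystick-sized latent inputs back into full robot actions. The dimensions, network sizes, and training loop are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed architecture, not the paper's code) of learning
# latent actions with a conditional autoencoder: encode (state, action) pairs
# from offline demonstrations into a 2-DoF latent space, then decode
# (state, latent) back into a 7-DoF robot action at teleoperation time.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 7, 7, 2  # joint state, joint action, joystick axes

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: (state, action) -> low-dimensional latent action z
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: (state, z) -> reconstructed high-dimensional action
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1)), z

# Train by reconstructing demonstrated actions (placeholder random data here).
model = LatentActionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.randn(256, STATE_DIM)
actions = torch.randn(256, ACTION_DIM)
for _ in range(200):
    reconstructed, _ = model(states, actions)
    loss = ((reconstructed - actions) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, the decoder maps a 2-DoF joystick input to a 7-DoF robot
# action, conditioned on the robot's current state.
with torch.no_grad():
    joystick = torch.tensor([[0.3, -0.8]])  # hypothetical joystick reading
    robot_action = model.decoder(torch.cat([states[:1], joystick], dim=-1))
```

Because the embedding is trained on demonstrations of a particular task, the latent axes tend to correspond to task-relevant motions (in the tofu example, stabbing versus cutting), which is what makes the resulting joystick mapping intuitive; the paper additionally layers autonomous assistance and a personalized joystick-to-latent alignment on top of this kind of model.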
-
Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones
IEEE ROBOTICS AND AUTOMATION LETTERS
2021; 6 (3): 4915–4922
DOI: 10.1109/LRA.2021.3070252
Web of Science ID: 000642765100002
-
Using a Variable-Friction Robot Hand to Determine Proprioceptive Features for Object Classification During Within-Hand-Manipulation
IEEE TRANSACTIONS ON HAPTICS
2020; 13 (3): 600–610
Abstract
Interactions with an object during within-hand manipulation (WIHM) constitute an assortment of gripping, sliding, and pivoting actions. In addition to their manipulation benefits, the re-orientation and motion of objects within the hand also provide a rich array of additional haptic information to the sensory organs of the hand. In this article, we utilize variable friction (VF) robotic fingers to execute a rolling WIHM on a variety of objects while recording 'proprioceptive' actuator data, which is then used for object classification (i.e., without tactile sensors). Rather than hand-picking a select group of features for this task, our approach begins with 66 general features, which are computed from actuator position and load profiles for each object-rolling manipulation based on gradient changes. An Extra Trees classifier performs object classification while also ranking each feature's importance. Using only the six most important 'Key Features' from the general set, a classification accuracy of 86% was achieved for distinguishing the six geometric objects in our data set. Comparatively, when all 66 features are used, the accuracy is 89.8%.
DOI: 10.1109/TOH.2019.2958669
Web of Science ID: 000564299200016
PubMed ID: 31831440
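The classification pipeline in the abstract above can be sketched roughly as follows with scikit-learn: fit an Extra Trees classifier on per-manipulation feature vectors, rank features by importance, and re-evaluate with only the top six. The synthetic data, feature values, and classifier settings are placeholders, not the authors' actual features or results.

```python
# Rough sketch (placeholder data, not the authors' pipeline): Extra Trees
# object classification from proprioceptive features, plus importance-based
# selection of a small "key feature" subset.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_rolls, n_features, n_objects = 300, 66, 6
X = rng.normal(size=(n_rolls, n_features))    # 66 features per rolling manipulation
y = rng.integers(0, n_objects, size=n_rolls)  # label: one of six geometric objects

# Fit the full-feature classifier and rank features by impurity-based importance.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
key_idx = np.argsort(clf.feature_importances_)[::-1][:6]
print("Top-6 feature indices:", key_idx)

# Compare cross-validated accuracy with only the six key features vs. all 66.
acc_key = cross_val_score(
    ExtraTreesClassifier(n_estimators=200, random_state=0), X[:, key_idx], y, cv=5
).mean()
acc_all = cross_val_score(
    ExtraTreesClassifier(n_estimators=200, random_state=0), X, y, cv=5
).mean()
print(f"accuracy (6 key features): {acc_key:.3f}, accuracy (all 66): {acc_all:.3f}")
```

On the real manipulation data, the paper reports 86% accuracy with the six key features versus 89.8% with all 66, suggesting most of the discriminative information is concentrated in a handful of proprioceptive features.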
-
Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks
IEEE TRANSACTIONS ON ROBOTICS
2020; 36 (3): 582–596
DOI: 10.1109/TRO.2019.2959445
Web of Science ID: 000543027200001
-
Controlling Assistive Robots with Learned Latent Actions
IEEE. 2020: 378–384
Web of Science ID: 000712319500042
-
Learning Hierarchical Control for Robust In-Hand Manipulation
IEEE. 2020: 8855–8862
Web of Science ID: 000712319505116
-
Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks
IEEE. 2019: 8943–8950
Web of Science ID: 000494942306083