Controlling Assistive Robots with Learned Latent Actions

20 Sep 2019 · Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh

Assistive robotic arms enable users with physical disabilities to perform everyday tasks without relying on a caregiver. Unfortunately, the very dexterity that makes these arms useful also makes them challenging to teleoperate: the robot has more degrees-of-freedom than the human can directly coordinate with a handheld joystick. Our insight is that we can make assistive robots easier for humans to control by leveraging latent actions. Latent actions provide a low-dimensional embedding of high-dimensional robot behavior: for example, one latent dimension might guide the assistive arm along a pouring motion. In this paper, we design a teleoperation algorithm for assistive robots that learns latent actions from task demonstrations. We formulate the controllability, consistency, and scaling properties that user-friendly latent actions should have, and evaluate how different low-dimensional embeddings capture these properties. Finally, we conduct two user studies on a robotic arm to compare our latent action approach to both state-of-the-art shared autonomy baselines and a teleoperation strategy currently used by assistive arms. Participants completed assistive eating and cooking tasks more efficiently when leveraging our latent actions, and also subjectively reported that latent actions made the task easier to perform. The video accompanying this paper can be found at: https://youtu.be/wjnhrzugBj4.
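To make the core idea concrete, below is a minimal sketch of how latent actions might be learned from demonstrations and then used for teleoperation, assuming a conditional autoencoder trained to reconstruct high-DoF actions from a low-dimensional latent. The dimensions, network sizes, and names (LatentActionModel, joystick_to_action) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Assumed dimensions: a 7-DoF arm and a 2-axis joystick latent input.
STATE_DIM, ACTION_DIM, LATENT_DIM = 7, 7, 2

class LatentActionModel(nn.Module):
    """Conditional autoencoder sketch: the encoder compresses a high-DoF
    action into a low-dimensional latent z, and the decoder reconstructs
    the action from (state, z). At test time the joystick supplies z."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1))

def train_step(model, optimizer, states, actions):
    """One reconstruction step over a batch of demonstrated
    (state, action) pairs collected from the task."""
    optimizer.zero_grad()
    recon = model(states, actions)
    loss = nn.functional.mse_loss(recon, actions)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def joystick_to_action(model, state, joystick_z):
    """Teleoperation: decode the user's 2-DoF joystick input
    into a 7-DoF robot command, conditioned on the current state."""
    return model.decoder(torch.cat([state, joystick_z], dim=-1))
```

Because the decoder is conditioned on the robot's current state, the same joystick deflection can map to different but contextually appropriate high-DoF motions (e.g., tilting along a pouring arc near a cup), which is one way to realize the controllability and consistency properties the abstract describes.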


Categories: Robotics
