Multi-Objective Policy Generation for Multi-Robot Systems Using Riemannian Motion Policies

14 Feb 2019 · Anqi Li, Mustafa Mukadam, Magnus Egerstedt, Byron Boots

In the multi-robot systems literature, control policies are typically obtained through descent rules for a potential function that encodes a single team-level objective. However, for multi-objective tasks, it can be difficult to design a single control policy that fulfills all the objectives. In this paper, we exploit the idea of decomposing a multi-objective task into a set of simple subtasks. We associate each subtask with a potentially lower-dimensional manifold and design Riemannian Motion Policies (RMPs) on these manifolds. Centralized and decentralized algorithms are proposed to combine these policies into a final control policy on the configuration space that the robots can execute. We propose a collection of RMPs for simple multi-robot tasks that can serve as building blocks for controllers for more complicated tasks. In particular, we prove that many existing multi-robot controllers can be closely approximated by combining the proposed RMPs. Theoretical analysis shows that the multi-robot system under the generated control policy is stable. The proposed framework is validated through both simulated tasks and robotic implementations.
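As a rough illustration of the combination step, the sketch below shows the standard RMP pullback-and-resolve operation on which such a centralized policy-generation scheme is built: each subtask RMP, defined on its own (possibly lower-dimensional) manifold, is pulled back through the task-map Jacobian and the results are resolved into a single configuration-space acceleration. The names `psi`, `J`, `Jdot`, and `policy` are hypothetical placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def combine_rmps(q, qdot, rmps):
    """Pull subtask RMPs back to the configuration space and resolve.

    `rmps` is a list of (psi, J, Jdot, policy) tuples, where for each subtask:
      psi(q)        -> task-space coordinates x on the subtask manifold,
      J(q)          -> task-map Jacobian dx/dq,
      Jdot(q, qdot) -> time derivative of that Jacobian,
      policy(x, xd) -> (a, M): desired task-space acceleration and Riemannian metric.
    (These names are placeholders for illustration, not the paper's API.)
    """
    n = len(q)
    A = np.zeros((n, n))   # combined metric on the configuration space
    f = np.zeros(n)        # combined metric-weighted force
    for psi, J, Jdot, policy in rmps:
        x = psi(q)
        Jq = J(q)
        xd = Jq @ qdot
        a, M = policy(x, xd)                         # subtask RMP in canonical form
        f += Jq.T @ M @ (a - Jdot(q, qdot) @ qdot)   # pullback of the force
        A += Jq.T @ M @ Jq                           # pullback of the metric
    # Resolve: configuration-space acceleration (pseudoinverse handles rank deficiency)
    return np.linalg.pinv(A) @ f
```

Roughly speaking, in the decentralized setting each robot would perform an analogous combination using only the RMPs defined on manifolds involving its own state and information available from its neighbors.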
