A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress

18 Jun 2018 · Saurabh Arora, Prashant Doshi

Inverse reinforcement learning (IRL) is the problem of inferring the reward function of an agent, given its policy or observed behavior. Analogous to RL, IRL is perceived both as a problem and as a class of methods. By categorically surveying the current literature in IRL, this article serves as a reference for researchers and practitioners of machine learning and beyond, helping them understand the challenges of IRL and select the approaches best suited for the problem at hand. The survey formally introduces the IRL problem along with its central challenges: the difficulty of performing accurate and generalizable inference, sensitivity to prior knowledge, and the disproportionate growth in solution complexity with problem size. The article elaborates on how current methods mitigate these challenges. We further discuss extensions of traditional IRL methods for handling inaccurate and incomplete perception, incomplete models, multiple reward functions, and nonlinear reward functions. The survey concludes with a discussion of broad advances in the research area and currently open research questions.
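To make the problem statement concrete, here is a minimal sketch of one representative method from the family the survey covers: maximum-entropy IRL (Ziebart et al., 2008), which recovers a per-state reward from expert trajectories. The toy 5-state chain MDP, the variable names (P, demos, theta, etc.), and the hyperparameters are illustrative choices for this sketch, not constructs from the paper.

```python
# Minimal maximum-entropy IRL sketch on a toy 5-state chain MDP.
# Expert demonstrations walk right toward state 4; IRL should recover
# a reward function that peaks there.
import numpy as np

n_states, n_actions, horizon = 5, 2, 10

# Deterministic transitions: action 0 moves left, action 1 moves right.
P = np.zeros((n_states, n_actions, n_states))
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0
    P[s, 1, min(s + 1, n_states - 1)] = 1.0

# Expert demonstrations: trajectories that walk right and stay at state 4.
demos = [[min(t, n_states - 1) for t in range(horizon)] for _ in range(20)]

# One-hot state features; expert feature expectations = mean state visits.
features = np.eye(n_states)
f_expert = np.mean([features[traj].sum(axis=0) for traj in demos], axis=0)

theta = np.zeros(n_states)
for it in range(200):
    r = features @ theta                     # current reward estimate
    # Backward pass: finite-horizon soft (max-ent) value iteration.
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states, n_actions))
    for t in reversed(range(horizon)):
        Q = r[:, None] + P @ V               # Q[s, a] = r(s) + E[V(s')]
        V = np.logaddexp.reduce(Q, axis=1)   # soft maximum over actions
        policy[t] = np.exp(Q - V[:, None])   # stochastic max-ent policy
    # Forward pass: expected state-visitation frequencies under the policy.
    d = np.zeros(n_states); d[0] = 1.0       # all demos start at state 0
    visits = np.zeros(n_states)
    for t in range(horizon):
        visits += d
        d = np.einsum('s,sa,san->n', d, policy[t], P)
    # Gradient ascent on the max-ent log-likelihood:
    # expert feature expectations minus the model's expected features.
    theta += 0.1 * (f_expert - visits @ features)

print(np.round(theta - theta.min(), 2))      # reward should peak at state 4
```

Running this prints a reward vector whose largest entry is at state 4, matching the expert's goal, and illustrates the identifiability issue the survey highlights: any reward in a broad equivalence class (e.g., shifted by a constant) explains the demonstrations equally well.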
