Review of Metrics to Measure the Stability, Robustness and Resilience of Reinforcement Learning

22 Mar 2022 · Laura L. Pullum

Reinforcement learning has received significant interest in recent years, due primarily to the successes of deep reinforcement learning at solving many challenging tasks such as playing chess, Go, and online computer games. However, as the focus on reinforcement learning grows, applications outside of gaming and simulated environments require an understanding of the robustness, stability, and resilience of reinforcement learning methods. To this end, we conducted a comprehensive literature review to characterize the available literature on these three behaviors as they pertain to reinforcement learning. We classify the quantitative and theoretical approaches used to indicate or measure robustness, stability, and resilience. In addition, we identify the action or event with respect to which each quantitative approach aims to be stable, robust, or resilient. Finally, we provide a decision tree useful for selecting metrics to quantify these behaviors. We believe this is the first comprehensive review of stability, robustness, and resilience specifically geared toward reinforcement learning.
