1 code implementation • 18 Jul 2023 • Pedro Sequeira, Melinda Gervasio
However, existing systems lack the necessary mechanisms to provide humans with a holistic view of their competence, presenting an impediment to their adoption, particularly in critical applications where the decisions an agent makes can have significant consequences.
1 code implementation • 20 Feb 2023 • Haochen Wu, Pedro Sequeira, David V. Pynadath
We evaluate our approach in a simulated 2-player search-and-rescue operation where the goal of the agents, playing different roles, is to search for and evacuate victims in the environment.
1 code implementation • 11 Nov 2022 • Pedro Sequeira, Jesse Hostetler, Melinda Gervasio
In this paper, we extend a recently proposed framework for explainable RL that is based on analyses of "interestingness."
no code implementations • 3 Nov 2022 • J. Brian Burns, Aravind Sundaresan, Pedro Sequeira, Vidyasagar Sadhu
We present an approach for autonomous sensor control for information gathering in partially observable, dynamic, and sparsely sampled environments that maximizes information about the entities present in the environment.
1 code implementation • 17 Aug 2022 • Pedro Sequeira, Daniel Elenius, Jesse Hostetler, Melinda Gervasio
We present a framework for learning comprehensible models of sequential decision tasks in which agent strategies are characterized using temporal logic formulas.
no code implementations • 15 Jul 2022 • Eric Yeh, Pedro Sequeira, Jesse Hostetler, Melinda Gervasio
We present a novel generative method for producing unseen yet plausible counterfactual examples for reinforcement learning (RL) agents, based on outcome variables that characterize agent behavior.
2 code implementations • 19 Dec 2019 • Pedro Sequeira, Melinda Gervasio
We propose an explainable reinforcement learning (XRL) framework that analyzes an agent's history of interaction with the environment to extract interestingness elements that help explain its behavior.