no code implementations • 12 Apr 2024 • Sarath Sreedharan, Malek Mechergui
Detecting and handling misspecified objectives, such as reward functions, has been widely recognized as one of the central challenges within the domain of Artificial Intelligence (AI) safety research.
no code implementations • 22 Nov 2023 • Turgay Caglar, Sirine Belhaj, Tathagata Chakraborti, Michael Katz, Sarath Sreedharan
This is the first work to examine the application of large language models (LLMs) to model-space edits in automated planning tasks.
1 code implementation • 13 Jun 2023 • Tathagata Chakraborti, Jungkoo Kang, Christian Muise, Sarath Sreedharan, Michael Walker, Daniel Szafir, Tom Williams
This paper describes TOBY, a visualization tool that helps a user explore the contents of an academic survey paper.
2 code implementations • 25 May 2023 • Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, Subbarao Kambhampati
We aim to evaluate (1) the effectiveness of LLMs in generating plans autonomously in commonsense planning tasks and (2) the potential of LLMs in LLM-Modulo settings where they act as a source of heuristic guidance for external planners and verifiers.
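The LLM-Modulo setting described above pairs an LLM's proposals with external verification. A minimal sketch of such a generate-and-verify loop, with illustrative stub functions (`propose_plan`, `verify`) standing in for a real LLM call and a sound plan validator — none of these names are from the paper:

```python
# Minimal sketch of an LLM-Modulo loop: an LLM proposes candidate plans,
# an external verifier checks them, and critiques are fed back as hints.
# All functions below are illustrative stubs, not the authors' implementation.

def propose_plan(task, feedback):
    # Stand-in for an LLM call; here we just step through canned candidates,
    # moving to the next one each time we receive a critique.
    candidates = [["pick B", "stack A B"], ["pick A", "stack A B"]]
    return candidates[min(len(feedback), len(candidates) - 1)]

def verify(plan):
    # Stand-in for a sound external verifier (e.g., a plan validator).
    return (plan[0] == "pick A", "first action must pick A")

def llm_modulo(task, max_rounds=3):
    feedback = []
    for _ in range(max_rounds):
        plan = propose_plan(task, feedback)
        ok, critique = verify(plan)
        if ok:
            return plan
        feedback.append(critique)  # the critique guides the next proposal
    return None

print(llm_modulo("stack A on B"))  # -> ['pick A', 'stack A B']
```

The key property is that correctness rests entirely on the verifier; the LLM only supplies candidates and is steered by the accumulated critiques.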
1 code implementation • 1 Mar 2023 • Brittany Cates, Anagha Kulkarni, Sarath Sreedharan
In this paper, we propose a planning framework to generate a defense strategy against an attacker who is working in an environment where a defender can operate without the attacker's knowledge.
no code implementations • 13 Feb 2023 • Karthik Valmeekam, Sarath Sreedharan, Matthew Marquez, Alberto Olmo, Subbarao Kambhampati
On this benchmark, we evaluate LLMs in three modes: autonomous, heuristic, and human-in-the-loop.
no code implementations • 2 Feb 2023 • Malek Mechergui, Sarath Sreedharan
To address this lacuna, we propose a novel formulation of the value alignment problem, named goal alignment, that focuses on a few central challenges related to value alignment.
no code implementations • 29 Jan 2023 • Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati
Handling trust is one of the core requirements for facilitating effective interaction between the human and the AI agent.
no code implementations • 27 Oct 2022 • Utkarsh Soni, Nupur Thakur, Sarath Sreedharan, Lin Guan, Mudit Verma, Matthew Marquez, Subbarao Kambhampati
If the relevant concept is not in the shared vocabulary, then it is learned.
2 code implementations • NeurIPS 2023 • Karthik Valmeekam, Matthew Marquez, Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati
PlanBench provides sufficient diversity in both the task domains and the specific planning capabilities it evaluates.
no code implementations • 18 Feb 2022 • Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati
In this paper, we show how this new framework captures the various works done in the space of human-AI interaction and identifies the fundamental behavioral patterns these works support.
1 code implementation • 6 Feb 2022 • Lin Guan, Sarath Sreedharan, Subbarao Kambhampati
At the low level, we learn a set of diverse policies for each possible task subgoal identified by the landmarks, which are then stitched together.
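The stitching idea above can be sketched in miniature: given landmarks (subgoals any successful plan must pass through) and one goal-conditioned policy per landmark, execution runs each policy in turn until its subgoal is reached. The 1-D navigation task and the lookup-table "policies" below are toy stand-ins, not the paper's learned policies:

```python
# Hypothetical 1-D navigation task: the state is an integer position.
landmarks = [3, 5, 8]  # subgoals every successful plan must visit, in order

def make_policy(target):
    # Stand-in for a learned goal-conditioned policy: step toward the target.
    def policy(state):
        return +1 if state < target else -1
    return policy

policies = {g: make_policy(g) for g in landmarks}

def run_stitched(state):
    trajectory = [state]
    for goal in landmarks:            # stitch: run each subgoal policy in turn
        while state != goal:
            state += policies[goal](state)
            trajectory.append(state)
    return trajectory

print(run_stitched(0))  # visits 3, then 5, then 8
```

Decomposing by landmarks keeps each policy's task short-horizon, which is the usual motivation for this kind of hierarchy.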
no code implementations • 21 Sep 2021 • Subbarao Kambhampati, Sarath Sreedharan, Mudit Verma, Yantian Zha, Lin Guan
The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities.
no code implementations • 23 Jun 2021 • Utkarsh Soni, Sarath Sreedharan, Subbarao Kambhampati
The former is achieved by a data-driven clustering approach, while for the latter we compile our explanation-generation problem into a POMDP.
1 code implementation • 14 Jun 2021 • Alberto Olmo, Sarath Sreedharan, Subbarao Kambhampati
Operations in many essential industries including finance and banking are often characterized by the need to perform repetitive sequential tasks.
no code implementations • 3 May 2021 • Zahra Zahedi, Mudit Verma, Sarath Sreedharan, Subbarao Kambhampati
The problem of trust management is particularly challenging in mixed human-robot teams, where the human and the robot may have different models of the task at hand, and thus different expectations about the current course of action, forcing the robot to engage in costly explicable behavior.
no code implementations • 21 Apr 2021 • Sarath Sreedharan, Anagha Kulkarni, David E. Smith, Subbarao Kambhampati
Existing approaches for generating human-aware agent behaviors have considered different measures of interpretability in isolation.
no code implementations • 22 Nov 2020 • Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, David E. Smith, Subbarao Kambhampati
Existing approaches for the design of interpretable agent behavior consider different measures of interpretability in isolation.
no code implementations • 21 Nov 2020 • Sarath Sreedharan, Tathagata Chakraborti, Yara Rizk, Yasaman Khazaeni
A new design of an AI assistant that has become increasingly popular is that of an "aggregated assistant" -- realized as an orchestrated composition of several individual skills or agents that can each perform atomic tasks.
no code implementations • 19 Nov 2020 • Karthik Valmeekam, Sarath Sreedharan, Sailik Sengupta, Subbarao Kambhampati
Decision support systems seek to enable informed decision-making.
no code implementations • 2 Jul 2020 • Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati
Given structured environments (like warehouses and restaurants), it may be possible to design the environment so as to boost the interpretability of the robot's behavior or to shape the human's expectations of the robot's behavior.
no code implementations • 26 Feb 2020 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati
In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP), which has emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms.
no code implementations • ICLR 2022 • Sarath Sreedharan, Utkarsh Soni, Mudit Verma, Siddharth Srivastava, Subbarao Kambhampati
As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions.
no code implementations • 19 Mar 2019 • Sarath Sreedharan, Siddharth Srivastava, David Smith, Subbarao Kambhampati
Explainable planning is widely accepted as a prerequisite for autonomous agents to successfully work with humans.
no code implementations • 18 Mar 2019 • Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati
In this work, we present a new planning formalism called Expectation-Aware planning for decision making with humans in the loop where the human's expectations about an agent may differ from the agent's own model.
no code implementations • 17 Mar 2019 • Sarath Sreedharan, Alberto Olmo, Aditya Prasad Mishra, Subbarao Kambhampati
One such approach has been the idea of "explanation as model reconciliation".
no code implementations • 23 Nov 2018 • Tathagata Chakraborti, Anagha Kulkarni, Sarath Sreedharan, David E. Smith, Subbarao Kambhampati
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop.
no code implementations • 19 Feb 2018 • Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati
There is a growing interest within the AI research community to develop autonomous systems capable of explaining their behavior to users.
no code implementations • 3 Feb 2018 • Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao Kambhampati
Recent work in explanation generation for decision-making agents has examined how unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the explanation process arising from this mismatch can then be seen as a reconciliation of these models.
no code implementations • 1 Aug 2017 • Tathagata Chakraborti, Sarath Sreedharan, Subbarao Kambhampati
In this paper, we bring these two concepts together and show how a planner can account for both needs, achieving a trade-off during the plan generation process itself by means of a model-space search method, MEGA.
no code implementations • 28 Jan 2017 • Tathagata Chakraborti, Sarath Sreedharan, Yu Zhang, Subbarao Kambhampati
When AI systems interact with humans in the loop, they are often called on to provide explanations for their plans and behavior.
no code implementations • 25 May 2016 • Tathagata Chakraborti, Sarath Sreedharan, Sailik Sengupta, T. K. Satish Kumar, Subbarao Kambhampati
In this paper, we develop a computationally simpler version of the operator count heuristic for a particular class of domains.
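Operator-counting heuristics lower-bound plan cost by reasoning about how many times each operator must be applied. The toy sketch below illustrates only the general flavor — a delete-relaxed greedy cover of unachieved goal facts — and is not the computationally simpler heuristic developed in the paper; the domain, operators, and function name are invented for illustration:

```python
# Toy illustration of the operator-counting idea: lower-bound plan cost by
# counting operator applications needed to cover the unachieved goal facts
# (delete-relaxed, greedy set cover). NOT the paper's actual heuristic.

def operator_count(state, goal, operators):
    """operators: dict mapping operator name -> set of add effects."""
    unmet = set(goal) - set(state)
    count = 0
    while unmet:
        # Greedily pick the operator covering the most unmet goal facts.
        best = max(operators.values(), key=lambda adds: len(adds & unmet))
        if not (best & unmet):
            return float("inf")  # no operator helps: unreachable under relaxation
        unmet -= best
        count += 1
    return count

# Hypothetical logistics-style facts and operators.
ops = {"load": {"in_truck"}, "drive": {"at_dest"}, "unload": {"delivered"}}
print(operator_count({"at_depot"}, {"in_truck", "at_dest", "delivered"}, ops))  # -> 3
```

Because deletes are ignored and each operator is counted at most greedily, the value is an optimistic estimate, which is what makes such counts usable as admissible-style guidance in the full LP formulation.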
no code implementations • 25 Nov 2015 • Yu Zhang, Sarath Sreedharan, Anagha Kulkarni, Tathagata Chakraborti, Hankz Hankui Zhuo, Subbarao Kambhampati
Hence, for such agents to be helpful, one important requirement is for them to synthesize plans that can be easily understood by humans.