no code implementations • 29 May 2024 • Stylianos Loukas Vasileiou, William Yeoh
In contrast, the explanatory hypothesis indicates that people are inherently driven to seek explanations for inconsistencies, thereby striving for explanatory coherence rather than minimal changes when revising beliefs.
no code implementations • 29 May 2024 • Stylianos Loukas Vasileiou, William Yeoh, Alessandro Previti, Tran Cao Son
In this paper, we propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations.
no code implementations • 28 May 2024 • Yinxu Tang, Stylianos Loukas Vasileiou, William Yeoh
Explainable AI Planning (XAIP) aims to develop AI agents that can effectively explain their decisions and actions to human users, fostering trust and facilitating human-AI collaboration.
no code implementations • 13 May 2024 • Silvia Tulli, Stylianos Loukas Vasileiou, Sarath Sreedharan
In this work, we retrospectively provide an account of what constitutes a human-aware AI system.
no code implementations • 26 Jun 2023 • Stylianos Loukas Vasileiou, Ashwin Kumar, William Yeoh, Tran Cao Son, Francesca Toni
We present DR-HAI -- a novel argumentation-based framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction.
1 code implementation • 16 Dec 2020 • Stylianos Loukas Vasileiou, Alessandro Previti, William Yeoh
A popular approach is model reconciliation, in which the agent reconciles the differences between its model and the human's model so that its plan is also optimal in the human's model.
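The model reconciliation idea can be sketched as a search for a minimal set of model updates. The following is a toy illustration with invented action costs, not the paper's actual formulation: models are maps from actions to costs, and an explanation is a smallest set of cost corrections to the human's model under which the agent's plan becomes optimal.

```python
from itertools import combinations

# Toy sketch of model reconciliation (hypothetical example, not the
# paper's formalism). A "model" is a dict mapping action names to costs;
# a plan is a list of actions; a plan is optimal in a model if no
# candidate plan is cheaper under that model.

def plan_cost(plan, model):
    return sum(model[a] for a in plan)

def is_optimal(plan, model, candidates):
    cost = plan_cost(plan, model)
    return all(cost <= plan_cost(p, model) for p in candidates)

def model_reconciliation(agent_model, human_model, plan, candidates):
    """Return a minimal set of cost corrections to the human's model
    that makes `plan` optimal in the updated model."""
    # Actions on which the two models disagree.
    diffs = [a for a in agent_model if human_model.get(a) != agent_model[a]]
    # Search subsets of the differences in order of increasing size,
    # so the first success is a cardinality-minimal explanation.
    for k in range(len(diffs) + 1):
        for subset in combinations(diffs, k):
            updated = dict(human_model)
            for a in subset:
                updated[a] = agent_model[a]
            if is_optimal(plan, updated, candidates):
                return {a: agent_model[a] for a in subset}
    return None

# The agent knows action "b" is cheap; the human believes it is costly.
agent = {"a": 3, "b": 1}
human = {"a": 3, "b": 5}
explanation = model_reconciliation(agent, human, plan=["b"],
                                   candidates=[["a"], ["b"]])
# Correcting the cost of "b" alone suffices: explanation == {"b": 1}
```

The brute-force subset search is exponential in the number of model differences; the papers listed here use logic-based encodings instead, but the minimality criterion it illustrates is the same.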
no code implementations • 17 Nov 2020 • Stylianos Loukas Vasileiou, William Yeoh, Tran Cao Son
In this paper, we build upon notions from knowledge representation and reasoning (KR) to expand a preliminary logic-based framework that characterizes the model reconciliation problem for explainable planning.