Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach

13 Dec 2018 · Beishui Liao, Michael Anderson, Susan Leigh Anderson

Ethical and explainable artificial intelligence is an interdisciplinary research area spanning computer science, philosophy, logic, and the social sciences. For an ethical autonomous system, the ability to justify and explain its decision making is a crucial aspect of transparency and trustworthiness. This paper takes a Value Driven Agent (VDA) as an example, explicitly representing the implicit knowledge of a machine learning-based autonomous agent and using this formalism to justify and explain the agent's decisions. For this purpose, we introduce a novel formalism to describe the intrinsic knowledge and solutions of a VDA in each situation. Based on this formalism, we formulate an approach to justify and explain the decision-making process of a VDA in terms of a typical argumentation formalism, Assumption-based Argumentation (ABA). As a result, a VDA in a given situation is mapped onto an argumentation framework in which arguments are defined by the notion of deduction. Actions justified under argumentation semantics correspond to the solutions of the VDA. The acceptance (rejection) of arguments and their premises in the framework explains why an action was selected (or not). Furthermore, we go beyond the existing version of the VDA, considering not only practical reasoning but also epistemic reasoning, so that inconsistencies in the VDA's knowledge can be identified, handled, and explained.
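To make the abstract's mapping concrete, below is a minimal, self-contained sketch of flat ABA in Python. The eldercare-style scenario, the sentence names (`do(stay)`, `low_battery`, etc.), and the contrary assignments are invented for illustration and are not the paper's actual formalism; the sketch only shows the general ABA machinery the paper builds on: arguments as deductions from assumptions, attacks via contraries, and a grounded-semantics fixpoint that yields justified actions together with the accepted or rejected premises behind them.

```python
from itertools import product

# Hypothetical toy framework (not from the paper): rules map a head sentence
# to alternative bodies; assumptions are defeasible premises; each assumption
# has a contrary whose derivation defeats it.
rules = {
    "do(stay)":        [{"patient_at_risk", "normal_duty"}],
    "do(charge)":      [{"low_battery", "normal_duty"}],
    "patient_at_risk": [{"fall_detected"}],
}
assumptions = {"normal_duty", "low_battery", "fall_detected"}
contrary = {
    "low_battery":   "do(stay)",      # a reason to stay overrides recharging
    "normal_duty":   "emergency",
    "fall_detected": "sensor_error",
}

def arguments():
    """Arguments are deductions: (frozenset of assumptions used, conclusion)."""
    args = {(frozenset({a}), a) for a in assumptions}
    changed = True
    while changed:
        changed = False
        for head, bodies in rules.items():
            for body in bodies:
                # collect already-derived supports for each body sentence
                options = [[sup for (sup, c) in args if c == s] for s in body]
                if not all(options):
                    continue
                for combo in product(*options):
                    arg = (frozenset().union(*combo), head)
                    if arg not in args:
                        args.add(arg)
                        changed = True
    return args

def attacks(args):
    """X attacks Y if X concludes the contrary of an assumption Y relies on."""
    return {(x, y) for x in args for y in args
            if any(contrary.get(a) == x[1] for a in y[0])}

def grounded(args, atts):
    """Least fixpoint of the characteristic function (grounded semantics):
    accept exactly the arguments whose every attacker is counter-attacked."""
    accepted = set()
    while True:
        defended = {y for y in args
                    if all(any((z, x) in atts for z in accepted)
                           for (x, t) in atts if t == y)}
        if defended == accepted:
            return accepted
        accepted = defended

args = arguments()
justified = grounded(args, attacks(args))
for sup, concl in sorted(justified, key=lambda a: a[1]):
    if concl.startswith("do("):
        print(f"justified: {concl}   premises: {sorted(sup)}")
```

Running this prints `do(stay)` with its premises as the only justified action: `do(charge)` is rejected because an accepted argument concludes the contrary of its assumption `low_battery`. That premise-level acceptance and rejection is, in miniature, the kind of explanation the paper extracts from the argumentation framework.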
