1 code implementation • ECCV 2020 • Steven Spratley, Krista Ehinger, Tim Miller
Humans have a remarkable capacity to draw parallels between concepts, generalising their experience to new domains.
no code implementations • 2 Feb 2024 • Thao Le, Tim Miller, Liz Sonenberg, Ronal Singh
Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.
no code implementations • 25 Aug 2023 • Lyndon Benke, Tim Miller, Michael Papasimeon, Nir Lipovetzky
Diverse, top-k, and top-quality planning are concerned with the generation of sets of solutions to sequential decision problems.
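As a rough, hypothetical sketch of the top-k setting (not the authors' method): the k cheapest plans can be enumerated by a best-first search that does not prune revisited states, so distinct paths to the goal surface as distinct plans. All names below are illustrative.

```python
import heapq

def top_k_plans(start, is_goal, successors, k):
    """Enumerate the k cheapest plans in cost order.

    successors(state) yields (action, next_state, step_cost) triples.
    Unlike ordinary uniform-cost search, states are not pruned on
    revisit: each queue entry is a distinct partial plan. Assumes
    positive step costs; cyclic graphs can enqueue unboundedly many
    partial plans, so this is a sketch, not a production enumerator.
    """
    frontier = [(0.0, 0, start, [])]  # (cost, tie-breaker, state, actions)
    counter = 1                       # tie-breaker keeps heap comparisons well-defined
    plans = []
    while frontier and len(plans) < k:
        cost, _, state, actions = heapq.heappop(frontier)
        if is_goal(state):
            plans.append((cost, actions))
            continue
        for action, nxt, step_cost in successors(state):
            heapq.heappush(frontier, (cost + step_cost, counter, nxt, actions + [action]))
            counter += 1
    return plans
```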
no code implementations • 20 Mar 2023 • Alan Lewis, Tim Miller
We propose the deceptive exploration ambiguity model (DEAM), which learns using the deceptive policy during training, leading to targeted exploration of the state space.
no code implementations • 10 Mar 2023 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction.
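The core idea can be illustrated with a toy search for a counterfactual on the confidence score itself: a nearby input whose confidence differs enough to be informative. This is a hedged sketch, not the study's actual explanation generator; model.predict_proba follows the scikit-learn convention, and feature_grid is an assumed structure listing alternative values per feature.

```python
def confidence_counterfactual(model, x, feature_grid, target_delta=0.2):
    """Return the first single-feature change that shifts the model's
    top-class confidence by at least target_delta, as
    (feature index, new value, new confidence), or None.
    x is a 1-D NumPy feature vector."""
    base_conf = model.predict_proba(x.reshape(1, -1)).max()
    for i, candidates in enumerate(feature_grid):
        for v in candidates:
            x_cf = x.copy()
            x_cf[i] = v
            conf = model.predict_proba(x_cf.reshape(1, -1)).max()
            if abs(conf - base_conf) >= target_delta:
                return i, v, conf
    return None  # no informative single-feature counterfactual found
```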
no code implementations • 9 Mar 2023 • Abeer Alshehri, Tim Miller, Mor Vered
We introduce and evaluate an eXplainable Goal Recognition (XGR) model that uses the Weight of Evidence (WoE) framework to explain goal recognition problems.
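The Weight of Evidence at the heart of the model has a standard closed form: woe(h : e) = log[ P(e | h) / P(e | not h) ], the log-likelihood ratio of the observations under a goal hypothesis versus its complement. A minimal sketch with hypothetical numbers (not the paper's evaluation):

```python
import math

def weight_of_evidence(p_obs_given_goal, p_obs_given_not_goal):
    """woe(h : e) = log[ P(e | h) / P(e | not h) ].
    Positive values mean the observations support the goal hypothesis."""
    return math.log(p_obs_given_goal / p_obs_given_not_goal)

# Hypothetical numbers: the observed actions are likely under goal A
# but unlikely under the alternatives, and vice versa for goal B.
print(weight_of_evidence(0.6, 0.1))  # ~ +1.79: evidence for goal A
print(weight_of_evidence(0.1, 0.6))  # ~ -1.79: evidence against goal B
```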
no code implementations • 24 Feb 2023 • Tim Miller
In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making.
1 code implementation • CVPR 2023 • Steven Spratley, Krista A. Ehinger, Tim Miller
While progressive-matrix problems (PMPs) are becoming popular for the development and evaluation of analogical reasoning in computer vision, we argue that the dominant methodology in this area struggles to expose the lack of meaningful generalisation in solvers, and reinforces an objectivist stance on perception -- that objects can only be seen one way -- which we believe to be counter-productive.
no code implementations • 19 Nov 2022 • Gayda Mutahar, Tim Miller
This work highlights the importance of having more understandable explanations when interpretability is crucial.
no code implementations • 31 Aug 2022 • Tim Miller
Trust should surely decrease if a model is of poor quality.
no code implementations • 6 Jun 2022 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help users better understand and better trust an AI model's prediction in human-subject studies.
no code implementations • 6 Oct 2021 • Christian Muise, Vaishak Belle, Paolo Felli, Sheila Mcilraith, Tim Miller, Adrian R. Pearce, Liz Sonenberg
Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents.
no code implementations • 29 Apr 2021 • Ronal Singh, Tim Miller, Darryn Reid
Results show that participants' constraints improved the expected return of the plans by 10% ($p < 0.05$) relative to baseline plans, demonstrating that human insight can be used in collaborative planning for resilience.
no code implementations • 15 Apr 2021 • Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, Tim Miller
Several social factors impact how people respond to AI explanations used to justify AI decisions affecting them personally.
no code implementations • 5 Feb 2021 • Zhengshang Liu, Yue Yang, Tim Miller, Peta Masters
However, in some situations, we may want to keep a reward function private; that is, to make it difficult for an observer to determine the reward function used.
no code implementations • 3 Feb 2021 • Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso, Frank Vetere
This paper investigates the prospects of using directive explanations to assist people in achieving recourse of machine learning decisions.
no code implementations • 15 Nov 2020 • Simon Coghlan, Tim Miller, Jeannie Paterson
This article philosophically analyzes online exam supervision technologies, which have been thrust into the public spotlight due to campus lockdowns during the COVID-19 pandemic and the growing demand for online courses.
no code implementations • 15 Oct 2020 • Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
1 code implementation • 27 Jun 2020 • Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein
Based on the requirements of fidelity (how well the approximate model matches the target model) and interpretability (being meaningful to people), we design measurements and evaluate a range of matrix factorization methods with our framework.
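To make this concrete, here is one instance of the kind of method evaluated: non-negative matrix factorization of a hidden layer's activation matrix yields candidate concepts, and relative reconstruction error gives a crude fidelity proxy. A hedged sketch with random stand-in activations; the paper's measurements are more involved.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in activations: 1000 image patches x 512 channels from some
# hidden layer of the target model (random here, purely illustrative).
activations = np.abs(np.random.randn(1000, 512))

nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(activations)  # patches x concepts
concepts = nmf.components_                # concepts x channels

# A simple fidelity proxy: how well the low-rank concept model
# approximates the original activations.
reconstruction = weights @ concepts
rel_error = np.linalg.norm(activations - reconstruction) / np.linalg.norm(activations)
print(f"relative reconstruction error: {rel_error:.3f}")
```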
no code implementations • 20 Feb 2020 • Dianbo Liu, Tim Miller
Large-scale contextual representation models, such as BERT, have significantly advanced natural language processing (NLP) in recent years.
no code implementations • 28 Jan 2020 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for 'why' and 'why not' questions.
no code implementations • ICLR 2020 • Dianbo Liu, Kathe Fox, Griffin Weber, Tim Miller
We propose and evaluate a confederated learning approach to train machine learning models that stratify the risk of several diseases when data are horizontally separated by individual, vertically separated by data type, and separated by identity without patient ID matching.
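To make the horizontally-separated case concrete, here is a minimal FedAvg-style sketch: each site fits a model on its own patients, and only the weights are shared and averaged. This covers just one of the three separations the abstract names, and is not the paper's confederated architecture.

```python
import numpy as np

def local_fit(X, y, epochs=50, lr=0.1):
    """Logistic regression by gradient descent on one site's data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # gradient of the log-loss
    return w

def federated_average(sites):
    """Average locally trained weights, weighted by site size.
    sites is a list of (X, y) pairs, one per institution."""
    total = sum(len(y) for _, y in sites)
    return sum(local_fit(X, y) * (len(y) / total) for X, y in sites)
```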
no code implementations • 29 Jul 2019 • Mor Vered, Frank Dignum, Tim Miller
Current AI approaches have frequently been used to help personalize many aspects of medical experiences and tailor them to a specific individual's needs.
2 code implementations • 27 May 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
In this paper, we use causal models to derive causal explanations of behaviour of reinforcement learning agents.
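The flavor of these explanations can be shown with a toy action-influence graph: answering "why this action?" amounts to tracing the causal chain from the action to a reward-bearing outcome. The graph below is a hypothetical stand-in, loosely in the spirit of the paper's strategy-game examples.

```python
# Hypothetical action-influence graph: each node lists the state
# variables it causally affects, ending at a reward-bearing outcome.
CAUSAL_GRAPH = {
    "collect_wood": ["wood"],
    "wood": ["barracks"],
    "barracks": ["soldiers"],
    "soldiers": ["enemy_defeated"],  # reward-bearing outcome
}

def why(action, goal="enemy_defeated"):
    """Return the causal chain from an action to the goal variable,
    i.e. a minimal 'why?' explanation under this toy graph."""
    chain, node = [action], action
    while node != goal:
        effects = CAUSAL_GRAPH.get(node)
        if not effects:
            return []  # no causal path: the action goes unexplained
        node = effects[0]
        chain.append(node)
    return chain

print(" -> ".join(why("collect_wood")))
# collect_wood -> wood -> barracks -> soldiers -> enemy_defeated
```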
no code implementations • 28 Mar 2019 • Guang Hu, Tim Miller, Nir Lipovetzky
Epistemic planning -- planning with knowledge and belief -- is essential in many multi-agent and human-agent interaction domains.
no code implementations • 5 Mar 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans.
no code implementations • 7 Nov 2018 • Tim Miller
In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning.
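For the classification case, the contrastive question "why class P rather than class Q?" can be illustrated (well short of the paper's structural-causal-model machinery) by searching for a single-feature change under which the model outputs the foil class instead. The names and the search strategy are assumptions for illustration.

```python
def contrastive_explanation(model, x, foil, feature_grid):
    """Find a single-feature change that flips the model from the fact
    class to the foil class. x is a 1-D NumPy feature vector;
    model.predict follows the scikit-learn convention."""
    fact = model.predict(x.reshape(1, -1))[0]
    for i, candidates in enumerate(feature_grid):
        for v in candidates:
            x_foil = x.copy()
            x_foil[i] = v
            if model.predict(x_foil.reshape(1, -1))[0] == foil:
                return (f"class {fact} rather than {foil}: feature {i} "
                        f"is {x[i]}; had it been {v}, the model would "
                        f"have predicted {foil}")
    return f"no single-feature change yields class {foil}"
```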
no code implementations • 21 Jun 2018 • Prashan Madumal, Tim Miller, Frank Vetere, Liz Sonenberg
We carry out further analysis to identify the relationships between components, and the sequences and cycles that occur in a dialog.
no code implementations • 2 Dec 2017 • Tim Miller, Piers Howe, Liz Sonenberg
As a result, programmers design software for themselves, rather than for their target audience, a phenomenon Alan Cooper refers to as the 'inmates running the asylum'.
no code implementations • 22 Jun 2017 • Tim Miller
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable.
no code implementations • 21 Feb 2016 • Liz Sonenberg, Tim Miller, Adrian Pearce, Paolo Felli, Christian Muise, Frank Dignum
Making a computational agent 'social' has implications for how it perceives itself and the environment in which it is situated, including the ability to recognise the behaviours of others.