no code implementations • 9 Feb 2021 • Shane T. Mueller, Elizabeth S. Veinott, Robert R. Hoffman, Gary Klein, Lamia Alam, Tauseef Mamun, William J. Clancey
XAI systems are frequently algorithm-focused: they start and end with an algorithm that implements a basic, untested idea about explainability.
Explainable Artificial Intelligence (XAI)
no code implementations • 30 Sep 2020 • Robert R. Hoffman, William J. Clancey, Shane T. Mueller
It might be worthwhile to pursue this goal by developing intelligent systems that allow for the observation and analysis of abductive reasoning, and for the assessment of abductive reasoning as a learnable skill.
no code implementations • 7 Feb 2020 • Shane T. Mueller
Modern AI image classifiers have made impressive advances in recent years, but their behavior often appears strange to users or violates their expectations.
no code implementations • 5 Feb 2019 • Shane T. Mueller, Robert R. Hoffman, William Clancey, Abigail Emrey, Gary Klein
That said, most of the key concepts and issues are expressed in this Report.
no code implementations • 11 Dec 2018 • Robert R. Hoffman, Shane T. Mueller, Gary Klein, Jordan Litman
The question addressed in this paper is: If we present to a user an AI system that explains how it works, how do we know whether the explanation works and whether the user has achieved a pragmatic understanding of the AI?