no code implementations • 17 May 2022 • Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations.
Explainable Artificial Intelligence (XAI)
no code implementations • 16 Jun 2021 • Tomas Folke, ZhaoBin Li, Ravi B. Sojitra, Scott Cheng-Hsin Yang, Patrick Shafto
Adversarial images highlight how vulnerable modern image classifiers are to perturbations outside of their training set.
no code implementations • 8 Jun 2021 • Tomas Folke, Scott Cheng-Hsin Yang, Sean Anderson, Patrick Shafto
Limited expert time is a key bottleneck in medical imaging.
no code implementations • 16 May 2021 • Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
Bayesian Teaching formalizes explanation as a communicative act by an explainer intended to shift the beliefs of an explainee.
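The core idea of Bayesian Teaching can be sketched in a toy discrete setting: the teacher selects examples in proportion to the learner's posterior belief in the target hypothesis after seeing each example. The hypothesis space, likelihood values, and variable names below are illustrative assumptions, not taken from the papers listed here.

```python
import numpy as np

# Hypothetical discrete setup: 3 hypotheses, 4 candidate examples.
# likelihood[h, d] = P(d | h), the learner's likelihood model (assumed values).
likelihood = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.1, 0.5, 0.3, 0.1],
    [0.1, 0.1, 0.2, 0.6],
])
prior = np.array([1 / 3, 1 / 3, 1 / 3])  # learner's prior over hypotheses

# Learner's posterior after observing a single example d: P_L(h | d)
joint = likelihood * prior[:, None]
posterior = joint / joint.sum(axis=0, keepdims=True)

# Bayesian Teaching: the teacher samples examples in proportion to how
# strongly each one shifts the learner toward the target hypothesis,
# i.e. P_T(d | h) is proportional to P_L(h | d).
teaching = posterior / posterior.sum(axis=1, keepdims=True)

target = 0  # hypothesis the teacher wants to convey
best_example = int(np.argmax(teaching[target]))
```

Here `teaching[h]` is a distribution over candidate examples for teaching hypothesis `h`, and `best_example` is the single most persuasive example under this model; in the XAI setting the "examples" would be the data points shown to a human as an explanation.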
1 code implementation • 7 Feb 2021 • Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B. Sojitra, Tomas Folke, Patrick Shafto
State-of-the-art deep-learning systems use decision rules that are challenging for humans to model.
Explainable Artificial Intelligence (XAI) +1