no code implementations • 3 Jun 2024 • Sofie Goethals, Eoin Delaney, Brent Mittelstadt, Chris Russell
Access to resources strongly constrains the decisions we make.
no code implementations • 6 Nov 2023 • Eoin Kenny, Eoin Delaney, Mark Keane
Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human and AI collaboration.
no code implementations • 16 Mar 2023 • Greta Warren, Mark T. Keane, Christophe Gueret, Eoin Delaney
Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation.
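The idea behind counterfactual explanations can be sketched with a toy example: given a black-box classifier and a rejected input, search for a minimally changed input that flips the prediction. The loan model, feature names, and greedy one-feature-at-a-time search below are purely illustrative assumptions, not the method of any of the papers listed here.

```python
def predict(x):
    """Hypothetical black-box loan model: approve if income + 2*credit >= 10."""
    income, credit = x
    return 1 if income + 2 * credit >= 10 else 0

def counterfactual(x, step=0.5, max_iter=100):
    """Greedy sketch: increase one feature at a time until the prediction
    flips, and return the candidate with the smallest L1 change."""
    target = 1 - predict(x)
    best = None
    for i in range(len(x)):
        cand = list(x)
        for _ in range(max_iter):
            cand[i] += step
            if predict(cand) == target:
                cost = sum(abs(a - b) for a, b in zip(cand, x))
                if best is None or cost < best[1]:
                    best = (tuple(cand), cost)
                break
    return best

x = (4.0, 2.0)            # rejected applicant: income=4, credit=2
print(predict(x))         # 0 (rejected)
print(counterfactual(x))  # ((4.0, 3.0), 1.0): raising credit by 1 flips the decision
```

The contrastive reading is "you were rejected, but had your credit score been 3 rather than 2, you would have been approved" — real systems add constraints such as plausibility and actionability that this sketch omits.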
1 code implementation • 16 Dec 2022 • Eoin Delaney, Arjun Pakrashi, Derek Greene, Mark T. Keane
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems, due to their psychological validity, flexibility across problem domains, and proposed legal compliance.
no code implementations • 20 Jul 2021 • Eoin Delaney, Derek Greene, Mark T. Keane
Whilst an abundance of techniques has recently been proposed to generate counterfactual explanations for the predictions of opaque black-box systems, markedly less attention has been paid to exploring the uncertainty of these generated explanations.
no code implementations • 26 Feb 2021 • Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth
In recent years, there has been an explosion of AI research on counterfactual explanations as a solution to the problem of eXplainable AI (XAI).
1 code implementation • 28 Sep 2020 • Eoin Delaney, Derek Greene, Mark T. Keane
In recent years, there has been a rapidly expanding focus on explaining the predictions made by black-box AI systems that handle image and tabular data.