no code implementations • 2 Aug 2023 • Susan Leavy, Emilie Pine, Mark T Keane
We present a text mining system to support the exploration of large volumes of text detailing the findings of government inquiries.
1 code implementation • 27 Jan 2023 • Saugat Aryal, Mark T Keane
Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told: if you had asked for a loan with a shorter term, it would have been approved).
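To make the loan example concrete, here is a minimal sketch of a counterfactual explanation, assuming an invented toy decision rule (the feature names, threshold, and search strategy are illustrative, not taken from the paper):

```python
# Illustrative sketch: a toy loan-approval rule and a brute-force search
# for the minimal change to the loan term that flips a refusal.
# The rule (approve only short terms up to a limit) is a hypothetical
# stand-in for a real model.

def approve(amount, term_months):
    """Toy rule: approve only modest loans on short terms."""
    return term_months <= 36 and amount <= 20_000

def counterfactual_term(amount, term_months):
    """Smallest reduction in term that would flip a refusal, or None."""
    if approve(amount, term_months):
        return None  # already approved; no counterfactual needed
    for term in range(term_months - 1, 0, -1):  # try shorter terms
        if approve(amount, term):
            return term
    return None

# A customer refused a 12,000 loan over 60 months could be told:
# "if you had asked for a 36-month term, it would have been approved".
print(counterfactual_term(12_000, 60))  # → 36
```

The point of the sketch is that a counterfactual explanation names a feasible change to the input, not an internal property of the model.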
no code implementations • 19 Dec 2022 • Courtney Ford, Mark T Keane
Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise).
no code implementations • 21 Apr 2022 • Greta Warren, Mark T Keane, Ruth M J Byrne
It is also unknown whether counterfactual explanations are as effective for categorical features as for continuous ones, although current methods assume they are.
no code implementations • 29 Apr 2021 • Mark T Keane, Eoin M Kenny, Mohammed Temraz, Derek Greene, Barry Smyth
Recently, it has been proposed that fruitful synergies may exist between Deep Learning (DL) and Case-Based Reasoning (CBR); that is, there are insights to be gained by applying CBR ideas to problems in DL (what could be called DeepCBR).
no code implementations • 8 Apr 2021 • Mohammed Temraz, Eoin Kenny, Elodie Ruelle, Laurence Shalloo, Barry Smyth, Mark T Keane
Climate change poses a major challenge to humanity, especially in its impact on agriculture, a challenge that a responsible AI should meet.
no code implementations • 26 Feb 2021 • Mark T Keane, Eoin M Kenny, Eoin Delaney, Barry Smyth
In recent years, there has been an explosion of AI research on counterfactual explanations as a solution to the problem of eXplainable AI (XAI).
no code implementations • 22 Jan 2021 • Barry Smyth, Mark T Keane
Counterfactual explanations provide a potentially significant solution to the Explainable AI (XAI) problem, but good, native counterfactuals have been shown to rarely occur in most datasets.
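The notion of a "native" counterfactual (a pair of actual dataset instances with different outcomes that differ in only a few features) can be sketched as follows; the dataset, feature names, and difference threshold are invented for illustration and are not the paper's method:

```python
# Illustrative sketch: scan a tiny tabular dataset for native
# counterfactual pairs -- pairs of real instances whose outcomes differ
# while their feature vectors differ in at most `max_diff` features.

def feature_diffs(a, b):
    """Indices of the features on which two instances differ."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

def native_counterfactuals(X, y, max_diff=2):
    """Return index pairs (i, j) that form native counterfactual pairs."""
    pairs = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if y[i] != y[j] and len(feature_diffs(X[i], X[j])) <= max_diff:
                pairs.append((i, j))
    return pairs

# Hypothetical dataset: each row is (term_months, amount, employed).
X = [(12, 5000, 1), (12, 5000, 0), (36, 9000, 1), (60, 9000, 1)]
y = ["approved", "refused", "approved", "refused"]
print(native_counterfactuals(X, y, max_diff=1))  # → [(0, 1), (2, 3)]
```

In most real datasets such sparse unlike-neighbour pairs are scarce, which is why good native counterfactuals rarely occur and synthetic ones must be generated instead.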