Search Results for author: Adrita Barua

Found 5 papers, 1 paper with code

On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis

no code implementations · 21 Apr 2024 · Abhilekha Dalal, Rushrukh Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler

A major challenge in Explainable AI is in correctly interpreting activations of hidden neurons: accurate interpretations would help answer the question of what a deep learning system internally detects as relevant in the input, demystifying the otherwise black-box nature of deep learning systems.

Explanation Generation

Concept Induction using LLMs: a user experiment for assessment

no code implementations · 18 Apr 2024 · Adrita Barua, Cara Widmer, Pascal Hitzler

To evaluate the output, we compare the concepts generated by the LLM with two other methods: concepts generated by humans and the ECII heuristic concept induction system.

Common Sense Reasoning · Explainable Artificial Intelligence · +4

Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning

1 code implementation · 8 Aug 2023 · Abhilekha Dalal, Md Kamruzzaman Sarker, Adrita Barua, Eugene Vasserman, Pascal Hitzler

A major challenge in Explainable AI is in correctly interpreting activations of hidden neurons: accurate interpretations would provide insights into the question of what a deep learning system has internally detected as relevant in the input, demystifying the otherwise black-box character of deep learning systems.

Explaining Deep Learning Hidden Neuron Activations using Concept Induction

no code implementations · 23 Jan 2023 · Abhilekha Dalal, Md Kamruzzaman Sarker, Adrita Barua, Pascal Hitzler

One of the current key challenges in Explainable AI is in correctly interpreting activations of hidden neurons.
