1 code implementation • NLPerspectives (LREC) 2022 • Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro, Salvatore Ruggieri
Understanding and quantifying the bias introduced by human annotation of data is a crucial problem for trustworthy supervised learning.
no code implementations • 24 Jan 2024 • Jose M. Alvarez, Salvatore Ruggieri
In this work, we formalize perception under causal reasoning to capture the act of interpretation by an individual.
1 code implementation • 23 Jan 2024 • Andrea Pugnana, Lorenzo Perini, Jesse Davis, Salvatore Ruggieri
The selective classification framework aims to design a mechanism that balances the fraction of rejected predictions (i.e., the proportion of examples for which the model abstains from predicting) against the improvement in predictive performance on the selected predictions.
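As a rough illustration of this trade-off (not the paper's algorithm), the sketch below rejects the least-confident predictions of a stand-in classifier and reports accuracy on the accepted examples at several coverage levels; the dataset and rejection rule are placeholder choices.

```python
# Sketch of the coverage/accuracy trade-off in selective classification.
# Not the paper's method: we simply reject the least-confident predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
conf = proba.max(axis=1)          # confidence of the predicted class
pred = proba.argmax(axis=1)

for coverage in (1.0, 0.9, 0.8, 0.7):
    # keep only the `coverage` fraction of most confident predictions
    thr = np.quantile(conf, 1.0 - coverage)
    keep = conf >= thr
    acc = (pred[keep] == y_te[keep]).mean()
    print(f"coverage={keep.mean():.2f}  accuracy on accepted={acc:.3f}")
```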
1 code implementation • 4 Dec 2023 • Mattia Setzu, Salvatore Ruggieri
Decision Trees are accessible, interpretable, and well-performing classification models.
no code implementations • 17 Nov 2023 • Xuan Zhao, Klaus Broelemann, Salvatore Ruggieri, Gjergji Kasneci
The two neural networks approximate the causal model of the data and the causal model of interventions.
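One loose reading of this setup, sketched under heavy assumptions (a toy structural causal model and generic regressors instead of the paper's networks): fit one model on observational data and a second on interventional data, so the two learned functions can be compared at the same input.

```python
# Toy illustration (not the paper's architecture): one model fit on
# observational data, a second on interventional data from the same SCM.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
# SCM: Z -> X, (Z, X) -> Y
Z = rng.normal(size=n)
X_obs = Z + 0.5 * rng.normal(size=n)
Y_obs = 2 * X_obs - Z + 0.1 * rng.normal(size=n)

# Interventional data: do(X = x) breaks the Z -> X edge
X_int = rng.normal(size=n)
Y_int = 2 * X_int - Z + 0.1 * rng.normal(size=n)

obs_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_obs.reshape(-1, 1), Y_obs)
int_net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_int.reshape(-1, 1), Y_int)

x = np.array([[1.0]])
print("observational E[Y|X=1]:", obs_net.predict(x)[0])       # approx 1.2
print("interventional E[Y|do(X=1)]:", int_net.predict(x)[0])  # approx 2.0
```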
1 code implementation • 1 Sep 2023 • Laura State, Salvatore Ruggieri, Franco Turini
Explaining opaque Machine Learning (ML) models is an increasingly relevant problem.
1 code implementation • 29 Aug 2023 • Riccardo Guidotti, Salvatore Ruggieri
In eXplainable Artificial Intelligence (XAI), several counterfactual explainers have been proposed, each focusing on some desirable properties of counterfactual instances: minimality, actionability, stability, diversity, plausibility, and discriminative power.
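For intuition about one of these properties, minimality, here is a generic greedy counterfactual search (not any of the surveyed explainers): perturb one feature at a time until the predicted class flips. The classifier, dataset, and step sizes are placeholders.

```python
# Sketch of a minimality-oriented counterfactual search: greedily move one
# feature per iteration toward the target class until the label flips.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

x = X[0].copy()
target = 1 - clf.predict([x])[0]       # the class we want to reach
steps = X.std(axis=0)                  # per-feature step sizes

for _ in range(50):                    # crude greedy search
    if clf.predict([x])[0] == target:
        break
    best, best_p = None, -1.0
    for j in range(len(x)):            # try each feature, up and down
        for d in (steps[j], -steps[j]):
            cand = x.copy()
            cand[j] += d
            p = clf.predict_proba([cand])[0][target]
            if p > best_p:
                best, best_p = cand, p
    x = best

print("label flipped:", clf.predict([x])[0] == target)
print("features changed:", np.flatnonzero(x != X[0]).tolist())
```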
1 code implementation • 28 Jul 2023 • Jose M. Alvarez, Antonio Mastropietro, Salvatore Ruggieri
To study the impact of the initial screening order (ISO), we introduce a human-like screener and compare it to its algorithmic counterpart.
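A toy, purely hypothetical simulation of an order-dependent screening process (the numbers and acceptance rule are invented, not the paper's model): a screener fills k slots while scanning candidates in a fixed order, so qualified candidates placed late may never be evaluated at all.

```python
# Toy simulation of order-dependent screening (all parameters hypothetical).
import random

random.seed(0)
quality = [random.random() for _ in range(100)]  # candidate "quality" scores
k, threshold = 5, 0.7

def screen(order):
    """Accept qualified candidates in the given order until k slots fill."""
    accepted = []
    for i in order:
        if quality[i] >= threshold:
            accepted.append(i)
            if len(accepted) == k:
                break
    return accepted

qualified = [i for i in range(100) if quality[i] >= threshold]
accepted = screen(range(100))          # screen in the initial order 0..99
print("qualified:", len(qualified), "accepted:", accepted)
print("qualified but never reached:",
      [i for i in qualified if i not in accepted])
```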
1 code implementation • 29 May 2023 • Laura State, Salvatore Ruggieri, Franco Turini
REASONX provides interactive contrastive explanations that can be augmented with background knowledge, and it operates under a setting of under-specified information, which increases the flexibility of the provided explanations.
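REASONX itself is built on constraint logic programming; as a hedged stand-in, the sketch below encodes a contrastive explanation as a linear program: find the closest point (in L1 distance) that reaches the desired side of a linear boundary, subject to a background-knowledge constraint. All numbers and constraints are illustrative.

```python
# Sketch: a contrastive explanation as a linear program (illustrative only).
# Find the L1-closest point to x0 in the desired class, under background
# knowledge that the first feature cannot decrease.
import numpy as np
from scipy.optimize import linprog

x0 = np.array([1.0, 2.0])           # instance to explain (2 features)
w, b = np.array([1.0, 1.0]), 4.0    # desired class: w @ x >= b

# variables v = [x1, x2, t1, t2]; minimize t1 + t2 with t_j >= |x_j - x0_j|
c = [0, 0, 1, 1]
A_ub = [
    [ 1,  0, -1,  0], [ 0,  1,  0, -1],   #  x - t <= x0
    [-1,  0, -1,  0], [ 0, -1,  0, -1],   # -x - t <= -x0
    [-w[0], -w[1], 0, 0],                 # w @ x >= b (desired class)
    [-1, 0, 0, 0],                        # background knowledge: x1 >= x0[0]
]
b_ub = [x0[0], x0[1], -x0[0], -x0[1], -b, -x0[0]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * 2)
print("contrastive example:", res.x[:2])  # closest point in the desired class
```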
no code implementations • 14 Mar 2023 • Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri, Steffen Staab
Liberalism-oriented political philosophy reasons that all individuals should be treated equally, independently of their protected characteristics.
1 code implementation • 27 Feb 2023 • Jose M. Alvarez, Kristen M. Scott, Salvatore Ruggieri, Bettina Berendt
When pre-trained machine learning models are used, it is a known issue that the target population on which the model is deployed may not be reflected in the source population on which the model was trained.
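A common way to probe such source/target mismatch (a standard technique, not necessarily the paper's) is a domain classifier: if a model can reliably tell source samples from target samples, the two populations differ.

```python
# Domain-classifier check for distribution shift: AUC near 0.5 suggests the
# source and target populations are similar, AUC near 1.0 a strong shift.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(1000, 5))   # training population
target = rng.normal(0.5, 1.0, size=(1000, 5))   # deployment population

X = np.vstack([source, target])
d = np.r_[np.zeros(len(source)), np.ones(len(target))]  # domain labels
auc = cross_val_score(GradientBoostingClassifier(random_state=0),
                      X, d, cv=5, scoring="roc_auc").mean()
print(f"domain-classifier AUC: {auc:.2f}  (0.5 = no detectable shift)")
```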
1 code implementation • 23 Feb 2023 • Jose M. Alvarez, Salvatore Ruggieri
For any complainant, we find similar protected and non-protected instances in the dataset used by the classifier to construct a control group and a test group; a difference between the decision outcomes of the two groups signals potential individual discrimination.
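A minimal sketch of this k-nearest-neighbour comparison, with synthetic data and invented parameters: for a complainant, take the k most similar protected and non-protected individuals and compare their positive-decision rates.

```python
# Situation-testing sketch: compare decision rates of the k nearest
# protected vs. non-protected neighbours of a complainant.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # non-sensitive features
prot = rng.integers(0, 2, 500)           # 1 = protected group
# synthetic decisions that disadvantage the protected group
dec = ((X[:, 0] + 0.5 * (1 - prot) + rng.normal(0, 0.3, 500)) > 0.5).astype(int)

def situation_test(x, k=10):
    rates = {}
    for g in (0, 1):                     # 0: control group, 1: test group
        idx = np.flatnonzero(prot == g)
        nn = NearestNeighbors(n_neighbors=k).fit(X[idx])
        _, nbrs = nn.kneighbors([x])
        rates[g] = dec[idx[nbrs[0]]].mean()
    # positive gap => similar non-protected individuals fare better
    return rates[0] - rates[1]

complainant = X[np.flatnonzero((prot == 1) & (dec == 0))[0]]
print("decision-rate gap around complainant:", situation_test(complainant))
```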
1 code implementation • 19 Oct 2022 • Andrea Pugnana, Salvatore Ruggieri
We propose a model-agnostic approach to associate a selection function to a given probabilistic binary classifier.
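One simple instantiation of such a selection function (my assumptions, not necessarily the paper's construction): select an example only when the predicted probability is far enough from 1/2, with the threshold tuned on calibration data to hit a target coverage.

```python
# Attaching a selection function g to a fitted probabilistic classifier f.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, flip_y=0.1, random_state=1)
X_tr, X_cal, y_tr, _ = train_test_split(X, y, random_state=1)
f = LogisticRegression().fit(X_tr, y_tr)

def make_selector(f, X_cal, coverage=0.8):
    """Return g(X) -> bool mask, tuned to select ~coverage on calibration data."""
    conf = np.abs(f.predict_proba(X_cal)[:, 1] - 0.5)
    tau = np.quantile(conf, 1.0 - coverage)
    return lambda X: np.abs(f.predict_proba(X)[:, 1] - 0.5) >= tau

g = make_selector(f, X_cal)
print("selected fraction:", g(X).mean())
```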
2 code implementations • 27 Jan 2022 • Carlos Mougan, Jose M. Alvarez, Salvatore Ruggieri, Steffen Staab
We investigate the interaction between categorical encodings and target encoding regularization methods that reduce unfairness.
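For reference, smoothing is one common target encoding regularizer: category means are shrunk toward the global mean in proportion to a smoothing weight m. The estimator below is the textbook form, not necessarily the exact variant studied in the paper.

```python
# Smoothing-regularized target encoding of a categorical attribute.
import pandas as pd

df = pd.DataFrame({
    "city": ["A", "A", "A", "B", "B", "C"],   # categorical attribute
    "y":    [1, 1, 0, 0, 0, 1],               # binary target
})

def target_encode(df, col, target, m=10.0):
    """Encode col by (n * mean_cat + m * mean_global) / (n + m)."""
    global_mean = df[target].mean()
    stats = df.groupby(col)[target].agg(["mean", "count"])
    smooth = (stats["count"] * stats["mean"] + m * global_mean) \
             / (stats["count"] + m)
    return df[col].map(smooth)

df["city_enc"] = target_encode(df, "city", "y")
print(df)   # rare categories (like C) stay close to the global mean
```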
1 code implementation • 24 Jan 2021 • Fabrizio Lillo, Salvatore Ruggieri
The observed volumes of sample queries are collected from Google Trends (continuous data) and SearchVolume (binned data).
no code implementations • 22 Oct 2018 • Riccardo Guidotti, Salvatore Ruggieri
Interpretable classification models are built with the purpose of providing a comprehensible description of the decision logic to an external oversight agent.
no code implementations • 26 Jun 2018 • Dino Pedreschi, Fosca Giannotti, Riccardo Guidotti, Anna Monreale, Luca Pappalardo, Salvatore Ruggieri, Franco Turini
We introduce the local-to-global framework for black box explanation, a novel approach with promising early results. It paves the road for a wide spectrum of future developments along three dimensions: (i) the language for expressing explanations, in terms of highly expressive logic-based rules with a statistical and causal interpretation; (ii) the inference of local explanations, aimed at revealing the logic of the decision adopted for a specific instance by querying and auditing the black box in the vicinity of the target instance; and (iii) the bottom-up generalization of the many local explanations into simple global ones, with algorithms that optimize the quality and comprehensibility of explanations.
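A minimal sketch of the local step (ii), with every concrete detail assumed: sample a neighborhood around the target instance, label it by querying the black box, and fit a small interpretable surrogate to that neighborhood.

```python
# Local explanation step: audit the black box around one instance and fit
# an interpretable surrogate on the labelled neighborhood.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier   # stand-in black box
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
rng = np.random.default_rng(0)
neighborhood = x + rng.normal(0, 0.3, size=(500, X.shape[1]))  # local samples
labels = black_box.predict(neighborhood)                       # query the box

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, labels)
print("local fidelity:", (surrogate.predict(neighborhood) == labels).mean())
```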
1 code implementation • 28 May 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti
It then derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes to the instance's features that would lead to a different outcome.
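To make the two ingredients concrete, here is a LORE-style sketch (implementation details are mine): read the decision rule off the root-to-leaf path of a fitted surrogate tree, and obtain a counterfactual rule by flipping the last split condition on that path.

```python
# Extracting a decision rule and a counterfactual rule from a surrogate tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
x = X[0]

# decision rule: the conjunction of split conditions on x's root-to-leaf path
path = tree.decision_path([x]).indices
feat, thr = tree.tree_.feature, tree.tree_.threshold
rule = [f"x[{feat[n]}] {'<=' if x[feat[n]] <= thr[n] else '>'} {thr[n]:.2f}"
        for n in path if feat[n] >= 0]       # skip the leaf node
print("decision rule:", " AND ".join(rule), "->", tree.predict([x])[0])

# counterfactual rule: flip the last split condition on the path
n = [n for n in path if feat[n] >= 0][-1]
flip = f"x[{feat[n]}] {'>' if x[feat[n]] <= thr[n] else '<='} {thr[n]:.2f}"
print("counterfactual rule: if", flip, "then the outcome may change")
```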
no code implementations • 6 Feb 2018 • Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Dino Pedreschi, Fosca Giannotti
Black box decision systems can be used in a variety of applications, and each approach is typically developed to solve a specific problem; as a consequence, it delineates, explicitly or implicitly, its own definition of interpretability and explanation.
no code implementations • ICML 2017 • Salvatore Ruggieri
The search space for the feature selection problem in decision tree learning is the lattice of subsets of the available features.
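To visualize this search space, the sketch below walks greedily upward through the lattice, adding at each step the single feature that most improves cross-validated accuracy; this is plain forward selection, used here only as an illustration of moving through the lattice.

```python
# Greedy forward selection over the lattice of feature subsets.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
n_features = X.shape[1]

def score(subset):
    """Cross-validated accuracy of a decision tree on a feature subset."""
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, list(subset)], y, cv=5).mean()

# walk upward through the lattice: add the single best feature per step
current, best = frozenset(), 0.0
for _ in range(n_features):
    candidates = [current | {j} for j in range(n_features) if j not in current]
    cand_best = max(candidates, key=score)
    if score(cand_best) <= best:
        break                                # no improving upward move
    current, best = cand_best, score(cand_best)
print("selected features:", sorted(current), "CV accuracy:", round(best, 3))
```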