no code implementations • 13 Mar 2024 • Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso
In this paper, we introduce the problem of feature "reselection": efficiently selecting features with respect to secondary model performance characteristics even after a feature selection process has already been carried out with respect to a primary objective.
1 code implementation • 13 Feb 2024 • André Artelt, Shubham Sharma, Freddy Lecué, Barbara Hammer
Counterfactual explanations are a popular method for analyzing the predictions of black-box systems, and they can enable computational recourse by suggesting actionable changes to the input that yield a different (i.e., more favorable) system output.
no code implementations • 23 Nov 2023 • Sikha Pentyala, Shubham Sharma, Sanjay Kariyappa, Freddy Lecue, Daniele Magazzeni
We observe that PrivRecourse can provide paths that are private and realistic.
no code implementations • 9 Nov 2023 • Zikai Xiong, Niccolò Dalmasso, Shubham Sharma, Freddy Lecue, Daniele Magazzeni, Vamsi K. Potluru, Tucker Balch, Manuela Veloso
In this work, we present fair Wasserstein coresets (FWC), a novel coreset approach which generates fair synthetic representative samples along with sample-level weights to be used in downstream learning tasks.
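The idea of representative samples with sample-level weights can be illustrated with a much simpler construction than FWC itself: below is a minimal sketch in which a plain k-means-style clustering produces the representatives (cluster centers) and their weights (cluster sizes). This is only a generic weighted-coreset illustration; it omits FWC's fairness constraints and Wasserstein objective entirely, and all names and parameters are illustrative.

```python
import random

random.seed(1)

# Toy 1-D dataset drawn from two modes.
data = [random.gauss(0.0, 1.0) for _ in range(150)] + \
       [random.gauss(5.0, 1.0) for _ in range(50)]

def kmeans_coreset(points, k, iters=20):
    # Representatives = cluster centers; weight = number of points assigned.
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    weights = [len(c) for c in clusters]
    return centers, weights

centers, weights = kmeans_coreset(data, k=4)

# The weighted mean of the coreset matches the mean of the full dataset,
# since each center is the mean of the points it represents.
approx = sum(c * w for c, w in zip(centers, weights)) / sum(weights)
print(approx, sum(data) / len(data))
```

Downstream learners can then consume the (center, weight) pairs in place of the full dataset, which is the role the FWC samples and weights play in the paper.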
no code implementations • 23 Aug 2023 • Haochen Wu, Shubham Sharma, Sunandita Patra, Sriram Gopalakrishnan
However, the uncertainty of feature changes and the risk of higher-than-average recourse costs have not been considered.
no code implementations • 13 Jul 2023 • Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni
Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations.
no code implementations • 12 Apr 2023 • Anjana Arunkumar, Shubham Sharma, Rakhi Agrawal, Sriram Chandrasekaran, Chris Bryan
Cross-task generalization is a significant outcome that defines mastery in natural language understanding.
1 code implementation • 14 Nov 2022 • Pranjal Aggarwal, Pasupuleti Chandana, Jagrut Nemade, Shubham Sharma, Sunil Saumya, Shankar Biradar
Since personal computers became widely available in the consumer market, the amount of harmful content on the internet has expanded significantly.
no code implementations • 12 Oct 2022 • Shubham Sharma, Alan H. Gee, Jette Henderson, Joydeep Ghosh
The ability to quickly examine combinations of the most promising gradient directions as well as to incorporate additional user-defined constraints allows us to generate multiple counterfactual explanations that are sparse, realistic, and robust to input manipulations.
no code implementations • 10 Oct 2022 • Shubham Sharma, Jette Henderson, Joydeep Ghosh
In this paper, we propose FEAMOE, a novel "mixture-of-experts" inspired framework aimed at learning fairer, more explainable/interpretable models that can also rapidly adjust to drifts in both the accuracy and the fairness of a classifier.
1 code implementation • 27 Jun 2022 • Haoyi Niu, Shubham Sharma, Yiwen Qiu, Ming Li, Guyue Zhou, Jianming Hu, Xianyuan Zhan
This brings up a new question: is it possible to combine learning from limited real data in offline RL and unrestricted exploration through imperfect simulators in online RL to address the drawbacks of both approaches?
no code implementations • 13 Oct 2020 • Shubham Sharma, Alan H. Gee, David Paydarfar, Joydeep Ghosh
Fairness in machine learning is crucial when individuals are subject to automated decisions made by models in high-stakes domains.
no code implementations • 13 Sep 2019 • Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley
Yet there is little understanding of how organizations use these methods in practice.
no code implementations • 20 May 2019 • Shubham Sharma, Jette Henderson, Joydeep Ghosh
Given a model and an input instance, CERTIFAI uses a custom genetic algorithm to generate counterfactuals: instances close to the input that change the prediction of the model.
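A genetic search of this kind can be sketched in a few lines: candidates near the input are evolved toward flipping the model's prediction while staying close to the original point. This is only a generic illustration of the idea, not CERTIFAI's actual algorithm; the model, fitness function, and hyperparameters below are all illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical black-box classifier: "approve" (1) when a weighted score
# exceeds a threshold; stands in for any opaque model.
def model(x):
    return 1 if 0.6 * x[0] + 0.4 * x[1] > 0.5 else 0

def fitness(cand, x, target):
    # Reward reaching the target class; penalize distance from the
    # original input so counterfactuals stay close to it.
    dist = sum(abs(a - b) for a, b in zip(cand, x))
    return (1.0 if model(cand) == target else -1.0) - dist

def counterfactual(x, target, pop_size=50, generations=40):
    # Initial population: random perturbations of the input.
    pop = [[xi + random.uniform(-1.0, 1.0) for xi in x]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, x, target), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            # Uniform crossover, then occasional small mutation.
            child = [a if random.random() < 0.5 else b
                     for a, b in zip(p1, p2)]
            if random.random() < 0.3:
                i = random.randrange(len(child))
                child[i] += random.uniform(-0.1, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, x, target))

x = [0.2, 0.3]                    # original input; the model predicts 0
cf = counterfactual(x, target=1)  # search for a nearby input labeled 1
print(model(x), model(cf))
```

Because the fitness trades off class change against distance, the surviving candidate is an input close to `x` whose prediction differs, which is exactly what a counterfactual explanation reports.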
no code implementations • WS 2016 • Amrith Krishna, Pavankumar Satuluri, Shubham Sharma, Apurv Kumar, Pawan Goyal
We construct an elaborate feature space for our system by combining conditional rules from the grammar Aṣṭādhyāyī, semantic relations between the compound components from the lexical database Amarakoṣa, and linguistic structures learned from the data using Adaptor Grammars.
no code implementations • 16 Jun 2015 • T. V. Ananthapadmanabha, A. G. Ramakrishnan, Shubham Sharma
An objective critical distance (OCD) has been defined as that spacing between adjacent formants, when the level of the valley between them reaches the mean spectral level.
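The definition can be illustrated numerically: model two adjacent formants as peaks in a spectral envelope, and search for the spacing at which the valley between them sinks to the mean spectral level. This toy sketch assumes equal-amplitude Gaussian peaks on a linear amplitude scale, whereas the paper works with real speech spectra; the bandwidth, frequency range, and search bracket are illustrative assumptions.

```python
import math

BW = 80.0  # assumed formant "bandwidth" parameter, in Hz

# Toy spectral envelope: two equal-amplitude Gaussian formant peaks.
def spectrum(f, f1, f2):
    return math.exp(-((f - f1) / BW) ** 2) + math.exp(-((f - f2) / BW) ** 2)

def valley_minus_mean(spacing, f1=500.0):
    f2 = f1 + spacing
    # Valley level: minimum of the envelope between the two formants.
    valley = min(spectrum(f1 + i * spacing / 200, f1, f2)
                 for i in range(201))
    # Mean spectral level over a band extending 300 Hz past each formant.
    lo_f, hi_f = f1 - 300.0, f2 + 300.0
    mean = sum(spectrum(lo_f + i * (hi_f - lo_f) / 600, f1, f2)
               for i in range(601)) / 601
    return valley - mean

# Bisect for the spacing at which the valley just reaches the mean level.
lo, hi = 50.0, 600.0
for _ in range(50):
    mid = (lo + hi) / 2
    if valley_minus_mean(mid) > 0:
        lo = mid   # valley still above the mean: formants too close
    else:
        hi = mid
print(f"toy critical distance ≈ {mid:.0f} Hz")
```

Closer than this spacing, the formants merge into one broad peak whose valley never drops to the mean level; beyond it, the valley falls below the mean, which is the qualitative behavior the OCD criterion captures.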