no code implementations • 18 Jan 2023 • Penghang Liu, Rupam Acharyya, Robert E. Tillman, Shunya Kimura, Naoki Masuda, Ahmet Erdem Sarıyüce
For the Venmo network, we investigate the interplay between financial and social relations on three tasks: friendship prediction, vendor identification, and analysis of temporal cycles.
no code implementations • ICLR Workshop Neural_Compression 2021 • Rupam Acharyya, Boyu Zhang, Ankani Chattoraj, Shouman Das, Daniel Stefankovic
We then empirically show that DPP edge pruning for neural networks outperforms other competing methods (both edge and node) on real data.
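The abstract only names the technique; one common way to realize DPP-based edge pruning is a greedy MAP approximation that keeps a diverse, high-quality subset of edges. The kernel construction below (quality from weight magnitudes, similarity from random features) is purely illustrative, not the paper's implementation:

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Greedily pick k items approximately maximizing det(L[S, S])."""
    n = L.shape[0]
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for j in remaining:
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy kernel over 8 candidate edges: L = diag(q) S diag(q), q = edge quality.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4))          # illustrative edge features
S = feats @ feats.T                           # similarity (PSD)
q = np.abs(rng.standard_normal(8))            # quality, e.g. |weight|
L = np.diag(q) @ S @ np.diag(q) + 1e-6 * np.eye(8)
kept = greedy_dpp_map(L, k=3)                 # indices of edges to keep
```

The log-determinant objective trades off edge quality against redundancy, so two highly similar edges are unlikely to both survive pruning.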
no code implementations • ICLR Workshop Neural_Compression 2021 • Rupam Acharyya, Ankani Chattoraj, Boyu Zhang, Shouman Das, Daniel Stefankovic
Despite a multitude of empirical advances, there is a lack of theoretical understanding of the effectiveness of different pruning methods.
1 code implementation • 11 Dec 2020 • Ankani Chattoraj, Rupam Acharyya, Shouman Das, Md. Iftekhar Tanveer, Ehsan Hoque
Our work ties together a novel metric for public speeches, spanning both the verbal and non-verbal domains, with the computational power of a neural network to design a fair prediction system for speakers.
no code implementations • 28 Oct 2020 • Boyu Zhang, Anis Zaman, Rupam Acharyya, Ehsan Hoque, Vincent Silenzio, Henry Kautz
Depressive disorder is one of the most prevalent mental illnesses among the global population.
1 code implementation • 30 Jun 2020 • Rupam Acharyya, Ankani Chattoraj, Boyu Zhang, Shouman Das, Daniel Stefankovic
We inspect different pruning techniques under the statistical mechanics formulation of a teacher-student framework and derive their generalization error (GE) bounds.
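The teacher-student framework the abstract refers to can be illustrated in its simplest linear form (a linear teacher and least-squares student, not the networks analyzed in the paper): a fixed teacher labels random inputs, a student fits them, and the generalization error is the expected squared disagreement on fresh inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
d, P = 20, 100                        # input dimension, number of training examples

w_teacher = rng.standard_normal(d)    # fixed teacher weights
X = rng.standard_normal((P, d))       # random training inputs
y = X @ w_teacher                     # noiseless teacher labels

# Student: least-squares fit to the teacher's labels.
w_student, *_ = np.linalg.lstsq(X, y, rcond=None)

# Generalization error: mean squared teacher-student disagreement on fresh data.
X_fresh = rng.standard_normal((1000, d))
ge = np.mean((X_fresh @ (w_student - w_teacher)) ** 2)
```

With P > d and no noise, the student recovers the teacher exactly and the generalization error is numerically zero; pruning the student (zeroing some weights) would reintroduce a nonzero GE, which is the quantity the paper's bounds control.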
no code implementations • 2 Mar 2020 • Rupam Acharyya, Shouman Das, Ankani Chattoraj, Oishani Sengupta, Md. Iftekhar Tanveer
Unbiased data collection is essential to guaranteeing fairness in artificial intelligence models.
no code implementations • 25 Nov 2019 • Rupam Acharyya, Shouman Das, Ankani Chattoraj, Md. Iftekhar Tanveer
This causal model contributes to generating counterfactual data used to train a fair predictive model.
no code implementations • 23 Aug 2016 • Yang Zhang, Rupam Acharyya, Ji Liu, Boqing Gong
We develop a new statistical machine learning paradigm, named infinite-label learning, to annotate a data point with more than one relevant label from a candidate set that pools both the finite labels observed during training and a potentially infinite number of previously unseen labels.
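A minimal sketch of how scoring previously unseen labels is possible at all: if labels are represented by embeddings, a bilinear compatibility model can score any candidate label, seen or not. The bilinear map and the random embeddings here are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y = 5, 4
W = rng.standard_normal((d_x, d_y))   # bilinear map; learned in practice, random here

def score_labels(x, label_embs):
    """Compatibility score x^T W y for each candidate label embedding y."""
    return label_embs @ (W.T @ x)

x = rng.standard_normal(d_x)                    # one data point
seen = rng.standard_normal((3, d_y))            # labels seen at training
unseen = rng.standard_normal((2, d_y))          # labels never seen at training
candidates = np.vstack([seen, unseen])
scores = score_labels(x, candidates)            # one score per candidate label
relevant = np.where(scores > 0)[0]              # threshold -> multiple relevant labels
```

Because the model scores label *embeddings* rather than a fixed output index, the candidate set can grow without retraining, which is the core idea behind annotating with an open-ended label set.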