6 code implementations • 14 Dec 2020 • Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang
Distribution shifts -- where the training distribution differs from the test distribution -- can substantially degrade the accuracy of machine learning (ML) systems deployed in the wild.
no code implementations • 2 Nov 2020 • Akshay Balsubramani
A pervasive issue in statistical hypothesis testing is that the reported $p$-values are biased downward by data "peeking" -- the practice of reporting only progressively extreme values of the test statistic as more data samples are collected.
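A toy illustration of the mechanism (not code from the paper): the running maximum of the absolute z-statistic always dominates its final value, so reporting the most extreme statistic seen while "peeking" can only shrink the nominal $p$-value.

```python
import math
import random

random.seed(0)

# Simulate n i.i.d. null samples and track the running z-statistic
# z_t = S_t / sqrt(t), where S_t is the cumulative sum after t samples.
n = 200
s, max_abs_z = 0.0, 0.0
for t in range(1, n + 1):
    s += random.gauss(0.0, 1.0)
    max_abs_z = max(max_abs_z, abs(s) / math.sqrt(t))

final_abs_z = abs(s) / math.sqrt(n)

# Peeking reports the most extreme statistic observed over time, which is
# at least as large as the final one -- hence a p-value biased downward.
print(max_abs_z >= final_abs_z)
```

The comparison always prints `True`: the peeked statistic dominates by construction, which is exactly the downward bias the abstract describes.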
no code implementations • 30 Aug 2020 • Akshay Balsubramani
We show an extension of Sanov's theorem on large deviations, controlling the tail probabilities of i.i.d. samples.
no code implementations • ICLR 2020 • Ruishan Liu, Akshay Balsubramani, James Zou
Optimal transport (OT) is a principled approach to align datasets, but a key challenge in applying OT is that we need to specify a transport cost function that accurately captures how the two datasets are related.
1 code implementation • NeurIPS 2019 • Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran
We introduce a variant of the $k$-nearest neighbor classifier in which $k$ is chosen adaptively for each query, rather than supplied as a parameter.
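A minimal sketch of the idea, not the paper's exact rule: for each query, keep growing $k$ until the majority label among the $k$ nearest neighbors leads by a Hoeffding-style margin, so easy queries stop at small $k$ and ambiguous ones consult more neighbors. The stopping rule below is a hypothetical illustration.

```python
import math

def adaptive_knn_predict(neighbor_labels, delta=0.05):
    """Predict from +/-1 labels ordered nearest-to-farthest, growing k
    until the majority's lead exceeds a Hoeffding-style noise margin.
    (Illustrative stopping rule, not the exact algorithm in the paper.)"""
    s = 0
    for k, y in enumerate(neighbor_labels, start=1):
        s += y
        # Stop once the label imbalance |s| clearly exceeds chance level.
        if abs(s) > math.sqrt(2 * k * math.log(2 * k / delta)):
            return (1 if s > 0 else -1), k
    # Neighborhood exhausted without a confident margin: fall back to sign.
    return (1 if s >= 0 else -1), len(neighbor_labels)

# Unanimous neighbors stop early; perfectly mixed ones never get confident.
print(adaptive_knn_predict([1] * 50))
print(adaptive_knn_predict([1, -1] * 25))
```

With unanimous labels the margin is crossed after a handful of neighbors; with balanced labels the rule scans all 50 without committing confidently.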
no code implementations • 5 Sep 2017 • Akshay Balsubramani
In this note, we point out a basic link between generative adversarial (GA) training and binary classification -- any powerful discriminator essentially computes an (f-)divergence between real and generated samples.
1 code implementation • ICLR 2018 • Chris Donahue, Zachary C. Lipton, Akshay Balsubramani, Julian McAuley
Corresponding samples from the real dataset consist of two distinct photographs of the same subject.
no code implementations • 7 Nov 2016 • Akshay Balsubramani
We formulate learning of a binary autoencoder as a biconvex optimization problem which learns from the pairwise correlations between encoded and decoded bits.
1 code implementation • 28 May 2016 • Akshay Balsubramani, Yoav Freund
We explore a novel approach to semi-supervised learning.
no code implementations • 25 Feb 2016 • Akshay Balsubramani
A binary classifier capable of abstaining from making a label prediction has two goals in tension: minimizing errors, and avoiding abstaining unnecessarily often.
no code implementations • 26 Dec 2015 • Akshay Balsubramani
We explore the problem of binary classification in machine learning, with a twist: the classifier is allowed to abstain on any datum, professing ignorance about the true class label without committing to any prediction.
1 code implementation • NeurIPS 2016 • Akshay Balsubramani, Yoav Freund
We address the problem of aggregating an ensemble of predictors with known loss bounds in a semi-supervised binary classification setting, to minimize prediction loss incurred on the unlabeled data.
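The paper derives a minimax-optimal aggregation rule; as a rough point of comparison only, here is the classical weighted-majority vote in which each predictor is weighted by the log-odds of its known error bound. This is not the paper's algorithm, just a simple baseline using the same inputs.

```python
import math

def weighted_majority(predictions, error_bounds):
    """Aggregate +/-1 predictions, weighting each predictor by the
    log-odds of its known error bound. (Classical weighted majority;
    the paper's minimax aggregation rule is more refined.)"""
    total = 0.0
    for p, b in zip(predictions, error_bounds):
        total += p * math.log((1 - b) / b)
    return 1 if total >= 0 else -1

# One reliable predictor (5% error bound) outvotes two weak ones (40%).
print(weighted_majority([+1, -1, -1], [0.05, 0.4, 0.4]))
```

The reliable predictor's weight, $\log(0.95/0.05) \approx 2.94$, exceeds the two weak weights combined, so the vote follows it.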
no code implementations • 22 Jun 2015 • Akshay Balsubramani
We give tight concentration bounds for mixtures of martingales that are simultaneously uniform over (a) mixture distributions, in a PAC-Bayes sense; and (b) all finite times.
1 code implementation • NeurIPS 2015 • Akshay Balsubramani, Yoav Freund
We present and empirically evaluate an efficient algorithm that learns to aggregate the predictions of an ensemble of binary classifiers.
1 code implementation • 10 Jun 2015 • Akshay Balsubramani, Aaditya Ramdas
The test is novel in several ways: (a) it takes linear time and constant space to compute on the fly, (b) it has the same power guarantee as a non-sequential version of the test with the same computational constraints up to a small factor, and (c) it accesses only as many samples as are required: its stopping time adapts to the unknown difficulty of the problem.
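A minimal sketch of property (c), under stated assumptions: process samples one at a time and stop as soon as the cumulative sum crosses an iterated-logarithm-style boundary. The boundary shape below is illustrative, not the paper's exact constants.

```python
import math

def sequential_mean_test(stream, alpha=0.05):
    """Stop as soon as |S_t| crosses a LIL-style boundary (illustrative
    boundary, not the paper's exact one). Returns the stopping time,
    or None if the stream ends without a rejection."""
    s = 0.0
    for t, x in enumerate(stream, start=1):
        s += x
        boundary = math.sqrt(
            2 * t * (math.log(math.log(t + 2)) + math.log(1 / alpha))
        )
        if abs(s) > boundary:
            return t  # stopping time adapts to the signal strength
    return None

# A strong signal (all +1 increments) triggers an early stop;
# a zero signal never crosses the boundary.
print(sequential_mean_test([1.0] * 100))
print(sequential_mean_test([0.0] * 50))
```

Stronger signals cross the $\sqrt{t \log\log t}$-shaped boundary sooner, which is the sense in which the stopping time adapts to the problem's difficulty.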
1 code implementation • 5 Mar 2015 • Akshay Balsubramani, Yoav Freund
We develop a worst-case analysis of aggregation of classifier ensembles for binary classification.
no code implementations • NeurIPS 2013 • Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund
We consider a situation in which we see samples in $\mathbb{R}^d$ drawn i.i.d.
no code implementations • 15 Jan 2015 • Akshay Balsubramani, Yoav Freund
We consider using an ensemble of binary classifiers for transductive prediction, when unlabeled test data are known in advance.
no code implementations • 12 May 2014 • Akshay Balsubramani
We give concentration bounds for martingales that are uniform over finite times and extend classical Hoeffding and Bernstein inequalities.
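For intuition, a standard comparison (not the paper's exact statement): the classical fixed-time bound versus the rough shape a time-uniform bound must take, paying an iterated-logarithm factor.

```latex
% Fixed-time Azuma-Hoeffding, for a martingale $S_n$ with increments in $[-c, c]$:
\[
  \Pr\left( S_n \ge \epsilon \right) \le \exp\!\left( -\frac{\epsilon^2}{2 n c^2} \right).
\]
% Time-uniform bounds in this line of work take the rough shape
\[
  \Pr\left( \exists\, t :\ |S_t| \ge C\, c \sqrt{t \left( \log\log t + \log\tfrac{1}{\delta} \right)} \right) \le \delta,
\]
% matching the law of the iterated logarithm up to constants.
```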