Search Results

Predicting the Generalization Gap in Deep Networks with Margin Distributions

google-research/google-research ICLR 2019

In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap.
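Since this entry describes fitting a predictor of the generalization gap from margin statistics, here is a minimal numpy sketch of that pipeline. It uses raw output-layer margins, quantile summaries, and a plain least-squares fit on stand-in data; the paper itself uses normalized margins taken at several layers, so everything below is an illustrative simplification.

```python
import numpy as np

def output_margins(logits, labels):
    """Per-example margin: score of the true class minus the best other class."""
    idx = np.arange(len(labels))
    true_scores = logits[idx, labels]
    rest = logits.copy()
    rest[idx, labels] = -np.inf
    return true_scores - rest.max(axis=1)

def margin_signature(logits, labels, qs=(0.25, 0.5, 0.75)):
    """Summarize the margin distribution by a few quantiles."""
    return np.quantile(output_margins(logits, labels), qs)

# Stand-in data: one signature per trained model plus its measured gap,
# then a linear fit from signatures to gaps (the paper's predictor is
# likewise a linear model over margin statistics).
rng = np.random.default_rng(0)
signatures = np.stack([
    margin_signature(rng.normal(size=(256, 10)), rng.integers(0, 10, 256))
    for _ in range(50)
])
gaps = rng.random(50)  # stand-in generalization gaps
coef, *_ = np.linalg.lstsq(signatures, gaps, rcond=None)
print("fitted predictor coefficients:", coef)
```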

Can weight sharing outperform random architecture search? An investigation with TuNAS

google-research/google-research CVPR 2020

Efficient Neural Architecture Search methods based on weight sharing have shown good promise in democratizing Neural Architecture Search for computer vision models.

Image Classification · Neural Architecture Search

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift

google-research/google-research NeurIPS 2019

Modern machine learning methods, including deep learning, have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty.

Probabilistic Deep Learning
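For context on how "useful estimates of predictive uncertainty" are measured in evaluations like this one, here is a minimal numpy sketch of expected calibration error (ECE), one standard metric for the task; the equal-width binning used here is a common default, not necessarily the paper's exact protocol.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average the absolute gap
    between accuracy and mean confidence within each bin."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(accuracies[in_bin].mean()
                                       - confidences[in_bin].mean())
    return ece
```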

Momentum-Based Variance Reduction in Non-Convex SGD

google-research/google-research NeurIPS 2019

Variance reduction has emerged in recent years as a strong competitor to stochastic gradient descent in non-convex problems, providing the first algorithms to improve upon the convergence rate of stochastic gradient descent for finding first-order critical points.
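This paper's estimator (STORM) corrects the momentum term with a gradient taken at the previous iterate on the current sample. A minimal numpy sketch on a toy quadratic; the fixed step size and momentum parameter below are illustrative, whereas the paper adapts both.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, noise):
    """Stochastic gradient of f(x) = 0.5 * ||x||^2, with additive noise."""
    return x + noise

x = rng.normal(size=5)
d = stoch_grad(x, rng.normal(size=5))  # initial estimate: plain stochastic gradient
lr, a = 0.1, 0.5                       # fixed here for illustration; STORM adapts both
for _ in range(200):
    x_next = x - lr * d
    noise = rng.normal(size=5)         # the SAME sample is evaluated at both iterates
    d = stoch_grad(x_next, noise) + (1 - a) * (d - stoch_grad(x, noise))
    x = x_next
print("final iterate norm:", np.linalg.norm(x))  # fluctuates near the minimum at 0
```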

Saccader: Improving Accuracy of Hard Attention Models for Vision

google-research/google-research NeurIPS 2019

Although deep convolutional neural networks achieve state-of-the-art performance across nearly all image classification tasks, their decisions are difficult to interpret.

Hard Attention · Image Classification

Milking CowMask for Semi-Supervised Image Classification

google-research/google-research 26 Mar 2020

Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%.

Classification · General Classification +1
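CowMask itself is a mask generator: low-pass-filtered noise is thresholded to a target proportion, giving irregular cow-patch-like masks used to mix or erase unlabeled inputs for consistency regularization. A minimal numpy/scipy sketch; thresholding at an empirical quantile is a simplification of the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cow_mask(shape, sigma=8.0, proportion=0.5, rng=None):
    """Irregular binary mask: smooth Gaussian noise, then threshold so that
    `proportion` of the pixels are kept."""
    rng = rng or np.random.default_rng()
    noise = gaussian_filter(rng.normal(size=shape), sigma=sigma)
    threshold = np.quantile(noise, 1.0 - proportion)
    return (noise > threshold).astype(np.float32)

# Hypothetical use for consistency regularization: mix two unlabeled images.
rng = np.random.default_rng(0)
a, b = rng.random((2, 32, 32))
m = cow_mask((32, 32), rng=rng)
mixed = m * a + (1.0 - m) * b
```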

Meta-Learning Requires Meta-Augmentation

google-research/google-research NeurIPS 2020

Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that quickly updates that model when given examples from a new task.

Meta-Learning
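One concrete form of the meta-augmentation argued for in this paper is re-randomizing the assignment of labels per episode, so that targets cannot be memorized independently of the support set. A minimal sketch, assuming episodes are represented as integer label arrays.

```python
import numpy as np

def meta_augment_episode(support_labels, query_labels, n_classes, rng):
    """Per-episode label permutation: the mapping from class identity to
    label index is re-randomized for every task, so the base learner must
    use the support examples rather than memorize fixed labels."""
    perm = rng.permutation(n_classes)
    return perm[support_labels], perm[query_labels]

rng = np.random.default_rng(0)
support = np.array([0, 0, 1, 1, 2, 2])  # toy 3-way episode
query = np.array([0, 1, 2])
print(meta_augment_episode(support, query, n_classes=3, rng=rng))
```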

A Spectral Energy Distance for Parallel Speech Synthesis

google-research/google-research NeurIPS 2020

Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems.

Speech Synthesis

Meta Back-translation

google-research/google-research ICLR 2021

Back-translation is an effective strategy to improve the performance of Neural Machine Translation (NMT) by generating pseudo-parallel data.

Machine Translation · Meta-Learning +2
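Back-translation itself is simple to state: run a reverse (target-to-source) model over monolingual target-side text and pair each output with its original sentence as pseudo-parallel data. A minimal sketch; `reverse_translate` is a hypothetical stand-in for any trained target-to-source model.

```python
def back_translate(monolingual_targets, reverse_translate):
    """Build pseudo-parallel (source, target) pairs from monolingual
    target-language sentences using a reverse-direction model."""
    return [(reverse_translate(t), t) for t in monolingual_targets]

# Hypothetical stand-in for a trained target->source translation model.
fake_reverse = lambda sentence: "<src> " + sentence
pairs = back_translate(["guten Morgen", "danke schoen"], fake_reverse)
# `pairs` then augments the genuine parallel corpus when training source->target.
```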