1 code implementation • 4 Dec 2023 • Michael Tschannen, Cian Eastwood, Fabian Mentzer
We introduce generative infinite-vocabulary transformers (GIVT) which generate vector sequences with real-valued entries, instead of discrete tokens from a finite vocabulary.
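The core contrast can be sketched in a few lines. This is a toy illustration (not GIVT's actual architecture): a standard transformer looks up discrete token ids in a finite embedding table, whereas a GIVT-style model consumes and produces real-valued vectors directly. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d = 1000, 16  # illustrative vocabulary size and model width

# Discrete tokens: ids from a finite vocabulary, mapped through an embedding table.
embedding_table = rng.normal(size=(vocab_size, d))
discrete_tokens = np.array([3, 42, 7])            # shape (3,), integer ids
discrete_inputs = embedding_table[discrete_tokens]  # lookup -> shape (3, d)

# "Infinite-vocabulary" tokens: real-valued vectors fed to the model as-is,
# with no lookup table and no finite vocabulary.
continuous_tokens = rng.normal(size=(3, d))
continuous_inputs = continuous_tokens             # shape (3, d)
```

The difference matters at the output end too: instead of a softmax over a finite vocabulary, a continuous-token model must parameterize a distribution over real-valued vectors.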
Ranked #13 on Image Generation on ImageNet 256x256
no code implementations • 15 Nov 2023 • Cian Eastwood, Julius von Kügelgen, Linus Ericsson, Diane Bouchacourt, Pascal Vincent, Bernhard Schölkopf, Mark Ibrahim
Self-supervised representation learning often uses data augmentations to induce some invariance to "style" attributes of the data.
no code implementations • 19 Jul 2023 • Cian Eastwood, Shashank Singh, Andrei Liviu Nicolicioiu, Marin Vlastelica, Julius von Kügelgen, Bernhard Schölkopf
To avoid failures on out-of-distribution data, recent works have sought to extract features that have an invariant or stable relationship with the label across domains, discarding "spurious" or unstable features whose relationship with the label changes across domains.
1 code implementation • 1 Oct 2022 • Cian Eastwood, Andrei Liviu Nicolicioiu, Julius von Kügelgen, Armin Kekić, Frederik Träuble, Andrea Dittadi, Bernhard Schölkopf
In representation learning, a common approach is to seek representations which disentangle the underlying factors of variation.
2 code implementations • 20 Jul 2022 • Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed Hassani, George J. Pappas, Bernhard Schölkopf
By minimizing the $\alpha$-quantile of a predictor's risk distribution over domains, Quantile Risk Minimization (QRM) seeks predictors that perform well with probability $\alpha$.
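The objective can be made concrete with a small numerical sketch, assuming hypothetical per-domain risks for a single fixed predictor (in practice these would come from evaluating the model on each training domain):

```python
import numpy as np

# Hypothetical held-out risks of one predictor on 5 training domains.
domain_risks = np.array([0.12, 0.30, 0.18, 0.25, 0.22])

alpha = 0.8  # target probability level

# The QRM objective for this predictor: the alpha-quantile of its risk
# distribution over domains. Minimizing this over predictors favours models
# whose risk stays below the quantile with probability roughly alpha.
qrm_objective = np.quantile(domain_risks, alpha)
print(qrm_objective)  # 0.26: 80% of domains have risk at or below ~0.26
```

Setting $\alpha = 1$ recovers a worst-case (max over domains) objective, while smaller $\alpha$ trades worst-case guarantees for average performance.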
no code implementations • 9 Mar 2022 • Cian Eastwood, Li Nanbo, Christopher K. I. Williams
Given two object images, how can we explain their differences in terms of the underlying object properties?
1 code implementation • NeurIPS 2020 • Li Nanbo, Cian Eastwood, Robert B. Fisher
In order to sidestep the main technical difficulty of the multi-object-multi-view scenario -- maintaining object correspondences across views -- MulMON iteratively updates the latent object representations for a scene over multiple views.
no code implementations • NeurIPS Workshop ICBINB 2021 • Cian Eastwood, Ian Mason, Chris Williams
To adapt to changes in real-world data distributions, neural networks must update their parameters.
1 code implementation • ICLR 2022 • Cian Eastwood, Ian Mason, Christopher K. I. Williams, Bernhard Schölkopf
Existing methods for source-free domain adaptation (SFDA) leverage entropy-minimization techniques that: (i) apply only to classification; (ii) destroy model calibration; and (iii) rely on the source model achieving a good level of feature-space class separation in the target domain.
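The entropy-minimization baseline being critiqued can be sketched as follows; logits and class count here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical unlabelled target-domain logits (2 examples, 3 classes).
logits = np.array([[2.0, 0.5, 0.1],
                   [0.3, 0.2, 0.1]])

p = softmax(logits)
# Per-example prediction entropy; minimizing it pushes predictions toward
# confident one-hot assignments, which is also why it erodes calibration.
entropy = -(p * np.log(p)).sum(axis=1)
loss = entropy.mean()
```

Note that this loss is only defined for classification outputs, which is limitation (i) above.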
2 code implementations • ICLR 2018 • Cian Eastwood, Christopher K. I. Williams
Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data.