1 code implementation • ICML 2020 • Alexia Jolicoeur-Martineau
We take a more rigorous look at Relativistic Generative Adversarial Networks (RGANs) and prove that the objective function of the discriminator is a statistical divergence for any concave function $f$ with minimal properties ($f(0)=0$, $f'(0) \neq 0$, $\sup_x f(x)>0$).
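As a hedged sketch of the objective in question (notation assumed, not taken verbatim from the paper): with a critic $C$, real distribution $\mathbb{P}$, and fake distribution $\mathbb{Q}$, the discriminator objective whose supremum is shown to be a statistical divergence takes the form

```latex
\mathcal{D}_f(\mathbb{P}, \mathbb{Q})
  = \sup_{C}\; \mathbb{E}_{x_r \sim \mathbb{P},\, x_f \sim \mathbb{Q}}
    \big[\, f\big(C(x_r) - C(x_f)\big) \,\big],
\qquad f \text{ concave},\quad f(0) = 0,\quad f'(0) \neq 0,\quad \sup_x f(x) > 0 .
```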
1 code implementation • 25 May 2024 • Xinyu Zhou, Boris Knyazev, Alexia Jolicoeur-Martineau, Jie Fu
Unfortunately, predicting parameters of very wide networks relies on copying small chunks of parameters multiple times and requires an extremely large number of parameters to support full prediction, which greatly hinders its adoption in practice.
2 code implementations • 18 Sep 2023 • Alexia Jolicoeur-Martineau, Kilian Fatras, Tal Kachman
Through empirical evaluation across the benchmark, we demonstrate that our approach outperforms deep-learning generation methods in data generation tasks and remains competitive in data imputation.
no code implementations • 12 Apr 2023 • Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman
Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it.
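A minimal sketch of that idea in numpy (DDPM-style closed-form noising; function and variable names are mine, not the paper's):

```python
import numpy as np

def forward_noise(x0, t, alphas_cumprod, rng):
    """Sample x_t from the Gaussian forward process in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    abar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    return xt, eps

# A denoiser is then trained to predict eps from (xt, t), e.g. by minimizing
# mean((eps_pred - eps)**2) averaged over random timesteps t.
```

At `abar = 1` no noise has been added yet, so the clean sample comes back unchanged; as `abar` shrinks toward 0, `xt` approaches pure Gaussian noise.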
1 code implementation • 6 Apr 2023 • Alexia Jolicoeur-Martineau, Emy Gervais, Kilian Fatras, Yan Zhang, Simon Lacoste-Julien
Based on this idea, we propose PopulAtion Parameter Averaging (PAPA): a method that combines the generality of ensembling with the efficiency of weight averaging.
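A minimal sketch of the averaging idea (numpy; helper names and the push rate are assumptions, and PAPA's actual schedule differs):

```python
import numpy as np

def population_average(params_list):
    """Element-wise average of each parameter across a population of networks."""
    keys = params_list[0].keys()
    return {k: sum(p[k] for p in params_list) / len(params_list) for k in keys}

def push_toward_average(params, avg, rate=0.1):
    """Nudge one network's parameters slightly toward the population average,
    so members stay diverse while sharing information (hypothetical rate)."""
    return {k: (1.0 - rate) * params[k] + rate * avg[k] for k in params}
```

Averaging whole networks outright would collapse the population into one model; the small push rate is what preserves the ensemble-like diversity.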
no code implementations • 18 Oct 2022 • Alexia Jolicoeur-Martineau, Alex Lamb, Vikas Verma, Aniket Didolkar
We propose a novel regularizer for supervised learning called Conditioning on Noisy Targets (CNT).
1 code implementation • 19 May 2022 • Vikram Voleti, Alexia Jolicoeur-Martineau, Christopher Pal
We train the model in a manner where we randomly and independently mask all the past frames or all the future frames.
Ranked #4 on Video Generation on BAIR Robot Pushing
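The masking scheme described above can be sketched as follows (numpy; the masking probability and names are assumptions):

```python
import numpy as np

def sample_frame_mask(n_past, n_future, p_mask=0.5, rng=None):
    """Independently decide whether to mask (hide) all past frames and/or all
    future frames. The four outcomes correspond to unconditional generation,
    future prediction, past reconstruction, and interpolation."""
    rng = rng or np.random.default_rng()
    past_visible = np.float64(rng.random() >= p_mask)
    future_visible = np.float64(rng.random() >= p_mask)
    past_mask = np.full(n_past, past_visible)
    future_mask = np.full(n_future, future_visible)
    return past_mask, future_mask
```

Because each side is masked as a whole, a single trained model can be queried at test time for any of the four conditioning patterns.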
no code implementations • NeurIPS Workshop DLDE 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
Score-based (denoising diffusion) generative models have recently gained a lot of success in generating realistic and diverse data.
1 code implementation • 28 May 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
For high-resolution images, our method leads to significantly higher quality samples than all other methods tested.
Ranked #10 on Image Generation on CIFAR-10 (Inception score metric)
1 code implementation • ICLR 2021 • Alexia Jolicoeur-Martineau, Rémi Piché-Taillefer, Rémi Tachet des Combes, Ioannis Mitliagkas
Denoising Score Matching with Annealed Langevin Sampling (DSM-ALS) has recently found success in generative modeling.
Ranked #57 on Image Generation on CIFAR-10
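As a hedged illustration of the Langevin-sampling side (plain unadjusted Langevin dynamics in numpy, with a known analytic score in place of a learned one; no annealing schedule shown):

```python
import numpy as np

def langevin_step(x, score, step, rng):
    """One unadjusted Langevin update: move along the score, then add noise."""
    return x + 0.5 * step * score(x) + np.sqrt(step) * rng.standard_normal()

def langevin_sample(score, x0=0.0, step=0.1, n_steps=500, seed=0):
    """Run a Langevin chain; samples approach the target distribution."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        x = langevin_step(x, score, step, rng)
    return x
```

For a standard Gaussian target centered at 3, the score is `lambda x: -(x - 3.0)`, and chains started at 0 end up concentrated around 3.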
no code implementations • ICML 2020 • Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, Ioannis Mitliagkas
The success of adversarial formulations in machine learning has brought renewed motivation for smooth games.
2 code implementations • 15 Oct 2019 • Alexia Jolicoeur-Martineau, Ioannis Mitliagkas
We present a unifying framework of expected margin maximization and show that a wide range of gradient-penalized GANs (e.g., Wasserstein, Standard, Least-Squares, and Hinge GANs) can be derived from this framework.
Ranked #134 on Image Generation on CIFAR-10
1 code implementation • 8 Jan 2019 • Alexia Jolicoeur-Martineau
Given the good performance of RGANs, this suggests that WGAN does not perform well primarily because of the weak metric, but rather because of the regularization and the use of a relativistic discriminator.
1 code implementation • 6 Sep 2018 • Alexia Jolicoeur-Martineau
We observe that most loss functions converge well and provide comparable data generation quality to non-saturating GAN, LSGAN, and WGAN-GP generator loss functions, whether we use divergences or non-divergences.
10 code implementations • ICLR 2019 • Alexia Jolicoeur-Martineau
We show that this property can be induced by using a relativistic discriminator, which estimates the probability that given real data is more realistic than randomly sampled fake data.
Ranked #2 on Image Generation on CAT 256x256
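A minimal sketch of the relativistic discriminator's output (numpy; the critic scores stand in for a network's unbounded outputs):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relativistic_prob(c_real, c_fake):
    """Probability that real data is more realistic than fake data, estimated
    from critic scores C(x_r) and C(x_f) on a real/fake pair."""
    return sigmoid(c_real - c_fake)
```

Equal critic scores give 0.5; the discriminator pushes this probability up while the generator pushes it down, so the generator also decreases the probability that real data looks real rather than only increasing the fake side.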
1 code implementation • 23 Mar 2017 • Alexia Jolicoeur-Martineau, Ashley Wazana, Eszter Szekely, Meir Steiner, Alison S. Fleming, James L. Kennedy, Michael J. Meaney, Celia M. T. Greenwood
The approach uses alternating optimization to estimate the parameters of the GxE model.
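As a generic, hedged illustration of alternating optimization (not the GxE model itself): rank-1 alternating least squares, where each parameter block has a closed-form update while the other is held fixed:

```python
import numpy as np

def als_rank1(M, iters=50, seed=0):
    """Fit M ~= outer(u, v) by alternately solving the least-squares problem
    for u with v fixed, then for v with u fixed."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(M.shape[0])
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        u = M @ v / (v @ v)      # closed-form update for u, v fixed
        v = M.T @ u / (u @ u)    # closed-form update for v, u fixed
    return u, v
```

The GxE setting alternates analogously, estimating one block of parameters (e.g., genetic or environmental weights) with the others fixed until convergence.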