no code implementations • ICML 2020 • Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jérémie Mary
Typical architectures of Generative Adversarial Networks make use of a unimodal latent/input distribution transformed by a continuous generator.
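As an illustration only (not code from the paper), here is a minimal PyTorch sketch of this standard setup: a unimodal Gaussian latent distribution pushed through a continuous generator. The layer sizes and variable names are assumptions made for the example.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784

# Continuous generator: a composition of linear maps and continuous activations.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256),
    nn.ReLU(),
    nn.Linear(256, data_dim),
    nn.Tanh(),
)

z = torch.randn(128, latent_dim)   # unimodal (standard Gaussian) latent samples
fake_batch = generator(z)          # generated distribution = pushforward of z
```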
no code implementations • 13 Dec 2023 • Antoine Schnepf, Flavian Vasile, Ugo Tanielian
Recent advances in text and image synthesis show great promise for the future of generative models in creative fields.
no code implementations • 8 Sep 2023 • Veronika Shilova, Ludovic Dos Santos, Flavian Vasile, Gaëtan Racic, Ugo Tanielian
In digital advertising, the selection of the optimal item (recommendation) and its best creative presentation (creative optimization) have traditionally been considered separate disciplines.
no code implementations • 21 Jul 2022 • Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard
We investigate the relationship between the performance of these models and the geometry of their latent space.
no code implementations • 31 Jan 2022 • Eustache Diemert, Romain Fabre, Alexandre Gilotte, Fei Jia, Basile Leparmentier, Jérémie Mary, Zhonghua Qu, Ugo Tanielian, Hui Yang
Designing data sharing mechanisms providing performance and strong privacy guarantees is a hot topic for the Online Advertising industry.
no code implementations • 8 Jan 2022 • Arthur Stéphanovitch, Ugo Tanielian, Benoît Cadre, Nicolas Klutchnikoff, Gérard Biau
The mathematical forces at work behind Generative Adversarial Networks raise challenging theoretical issues.
1 code implementation • 30 Nov 2021 • Thibaut Issenhuth, Ugo Tanielian, Jérémie Mary, David Picard
Advances in computer vision are pushing the limits of image manipulation, with generative models sampling detailed images on various tasks.
no code implementations • 19 Oct 2021 • Thibaut Issenhuth, Ugo Tanielian, David Picard, Jérémie Mary
Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting different classes of images.
no code implementations • 2 Sep 2021 • Jules Samaran, Ugo Tanielian, Romain Beaumont, Flavian Vasile
Current recommendation approaches help online merchants predict, for each visiting user, which subset of their existing products is the most relevant.
no code implementations • 1 Jan 2021 • Thibaut Issenhuth, Ugo Tanielian, David Picard, Jérémie Mary
Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting disconnected manifolds.
no code implementations • NeurIPS Workshop LMCA 2020 • Lucas Anquetil, Mike Gartrell, Alain Rakotomamonjy, Ugo Tanielian, Clément Calauzènes
Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.
no code implementations • 9 Jun 2020 • Ugo Tanielian, Maxime Sangnier, Gérard Biau
Recent advances in adversarial attacks and Wasserstein GANs have advocated for the use of neural networks with restricted Lipschitz constants.
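As a hedged illustration (not the authors' construction), one common way to restrict the Lipschitz constant of a network, e.g. a Wasserstein GAN critic, is spectral normalization applied to each linear layer; the architecture below is invented for the example.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each spectrally normalized linear layer is 1-Lipschitz, and ReLU is 1-Lipschitz,
# so the whole composition has Lipschitz constant at most 1.
critic = nn.Sequential(
    spectral_norm(nn.Linear(784, 256)),
    nn.ReLU(),
    spectral_norm(nn.Linear(256, 1)),
)

scores = critic(torch.randn(32, 784))  # real-valued critic scores for a batch
```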
no code implementations • 8 Jun 2020 • Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jérémie Mary
Typical architectures of Generative Adversarial Networks make use of a unimodal latent distribution transformed by a continuous generator.
no code implementations • 4 Jun 2020 • Gérard Biau, Maxime Sangnier, Ugo Tanielian
Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation.
no code implementations • 17 Sep 2019 • Ugo Tanielian, Flavian Vasile
In recent years, the softmax model and its fast approximations have become the de facto loss functions for deep neural networks when dealing with multi-class prediction.
no code implementations • 14 Jun 2019 • Louis Faury, Ugo Tanielian, Flavian Vasile, Elena Smirnova, Elvis Dohmatob
This manuscript introduces the idea of using Distributionally Robust Optimization (DRO) for the Counterfactual Risk Minimization (CRM) problem.
no code implementations • ICLR 2019 • Ugo Tanielian, Flavian Vasile, Mike Gartrell
This is often the case for applications such as language modeling, next event prediction and matrix factorization, where many of the potential outcomes are not mutually exclusive, but are more likely to be conditionally independent given the state.
no code implementations • 22 May 2018 • Ugo Tanielian, Mike Gartrell, Flavian Vasile
In recent years, the Word2Vec model trained with the Negative Sampling loss function has shown state-of-the-art results in a number of machine learning tasks, including language modeling tasks such as word analogy and word similarity, and in recommendation tasks through Prod2Vec, an extension that applies to modeling user shopping activity and user preferences.
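For illustration, a rough PyTorch sketch of the Negative Sampling objective (vocabulary size, embedding dimension, and indices are placeholders, not values from the paper):

```python
import torch
import torch.nn.functional as F

vocab_size, dim = 10_000, 100
in_emb = torch.nn.Embedding(vocab_size, dim)    # target (input) embeddings
out_emb = torch.nn.Embedding(vocab_size, dim)   # context (output) embeddings

target = torch.tensor([42])                     # one target word/product
positive = torch.tensor([7])                    # its observed context item
negatives = torch.randint(0, vocab_size, (5,))  # a few sampled negative items

t = in_emb(target)                              # shape (1, dim)
pos_score = (t * out_emb(positive)).sum(-1)     # dot product with the positive
neg_scores = (t * out_emb(negatives)).sum(-1)   # dot products with the negatives

# Negative Sampling loss: pull the positive pair together, push negatives apart.
loss = -F.logsigmoid(pos_score).mean() - F.logsigmoid(-neg_scores).sum()
```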