no code implementations • 26 May 2022 • Dávid Terjék, Diego González-Sánchez
A candidate explanation of the good empirical performance of deep neural networks is the implicit regularization effect of first-order optimization methods.
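A classic toy instance of this implicit regularization effect (an illustrative sketch, not taken from the paper): plain gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-ℓ2-norm interpolating solution, even though nothing in the objective penalizes the norm.

```python
import numpy as np

# 5 equations, 20 unknowns: infinitely many interpolating solutions exist.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))
b = rng.standard_normal(5)

# Plain gradient descent on (1/2) * ||A w - b||^2, starting from zero.
w = np.zeros(20)
lr = 0.01
for _ in range(20000):
    w -= lr * A.T @ (A @ w - b)

# The iterates never leave the row space of A, so GD implicitly selects
# the minimum-norm solution, i.e. the pseudoinverse solution.
w_min_norm = np.linalg.pinv(A) @ b
print(np.allclose(w, w_min_norm, atol=1e-6))  # → True
```

The bias comes from the initialization and the geometry of the iterates, not from any explicit regularizer, which is the phenomenon the abstract alludes to.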
no code implementations • 29 May 2021 • Dávid Terjék, Diego González-Sánchez
We propose a practical algorithm for computing an approximate solution of the optimal transport problem with $f$-divergence regularization via the generalized Sinkhorn algorithm.
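For orientation, here is the classical Sinkhorn iteration for the KL-(entropically) regularized case; the paper's generalized Sinkhorn algorithm covers arbitrary $f$-divergence regularizers, of which the KL regularizer sketched below is the best-known special case.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic OT via Sinkhorn: a, b are marginals, C the cost matrix,
    eps the regularization strength. Returns the transport plan."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                # match column marginals
        u = a / (K @ v)                  # match row marginals
    return u[:, None] * K * v[None, :]   # diag(u) K diag(v)

# Uniform marginals on 4 points with |i - j| ground cost.
a = np.full(4, 0.25)
b = np.full(4, 0.25)
C = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
P = sinkhorn(a, b, C)
print(np.allclose(P.sum(axis=1), a))  # rows match the source marginal
```

The alternating scaling updates are exactly coordinate ascent on the dual of the regularized problem; the generalized variant replaces these closed-form scalings with updates derived from the conjugate of the chosen $f$.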
1 code implementation • 26 Feb 2021 • Dávid Terjék
Variational representations of $f$-divergences are central to many machine learning algorithms, with Lipschitz constrained variants recently gaining attention.
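The variational representation in question is, in its standard (Fenchel-dual) form — the textbook statement, not the paper's refinement:

```latex
D_f(P \,\|\, Q) \;=\; \sup_{g} \; \mathbb{E}_{x \sim P}\!\left[ g(x) \right] \;-\; \mathbb{E}_{x \sim Q}\!\left[ f^*\!\big( g(x) \big) \right],
```

where $f^*$ denotes the convex conjugate of $f$; the Lipschitz-constrained variants mentioned above restrict the supremum to functions $g$ with a bounded Lipschitz constant.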
no code implementations • 24 Feb 2021 • Dávid Terjék
In this note, following \cite{Chitescuetal2014}, we show that the Monge-Kantorovich norm on the vector space of countably additive measures on a compact metric space has a primal representation analogous to that of the Hanin norm. In other words, like the Hanin norm, the Monge-Kantorovich norm can be seen as an extension of the Kantorovich-Rubinstein norm from the vector subspace of zero-charge measures. This implies a number of novel results, such as the equivalence of the Monge-Kantorovich and Hanin norms.
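For reference, the Kantorovich-Rubinstein norm being extended is, on the subspace of measures $\mu$ with $\mu(X) = 0$ over a compact metric space $(X, d)$ (the standard definition, not the note's new material):

```latex
\|\mu\|_{KR} \;=\; \sup\left\{ \int_X f \, d\mu \;:\; f \colon X \to \mathbb{R}, \; \operatorname{Lip}(f) \le 1 \right\},
```

which on differences of probability measures coincides with the Wasserstein-1 distance via Kantorovich-Rubinstein duality.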
Subjects: Functional Analysis; MSC classes: 46B10 (Primary), 46E27, 46E15 (Secondary)
2 code implementations • ICLR 2020 • Dávid Terjék
Generative adversarial networks (GANs) are among the most popular approaches to training generative models; within this family, variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability and sample quality.
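For context, the Wasserstein GAN objective replaces the standard GAN's Jensen-Shannon-based loss with the Kantorovich-Rubinstein dual of the Wasserstein-1 distance (the standard formulation, not this paper's contribution):

```latex
\min_G \; \max_{\operatorname{Lip}(D) \le 1} \;\; \mathbb{E}_{x \sim P_{\mathrm{data}}}\!\left[ D(x) \right] \;-\; \mathbb{E}_{z \sim p_z}\!\left[ D(G(z)) \right],
```

where the critic $D$ ranges over 1-Lipschitz functions; how that Lipschitz constraint is enforced is precisely where the variants differ.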