no code implementations • ICML 2020 • Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov
Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.
no code implementations • 20 Apr 2024 • Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling
The COVID-19 pandemic had enormous economic and societal consequences.
no code implementations • 8 Feb 2024 • Sindy Löwe, Francesco Locatello, Max Welling
In human cognition, the binding problem describes the open question of how the brain flexibly integrates diverse information into cohesive object representations.
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, Jose Miguel Hernandez Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
no code implementations • 18 Dec 2023 • Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling
The pandemic in 2020 and 2021 had enormous economic and societal consequences, and studies show that contact tracing algorithms can be key in the early containment of the virus.
no code implementations • 7 Dec 2023 • Micah Goldblum, Anima Anandkumar, Richard Baraniuk, Tom Goldstein, Kyunghyun Cho, Zachary C Lipton, Melanie Mitchell, Preetum Nakkiran, Max Welling, Andrew Gordon Wilson
The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time.
no code implementations • 28 Nov 2023 • Luisa H. B. Liboni, Roberto C. Budzinski, Alexandra N. Busch, Sindy Löwe, Thomas A. Keller, Max Welling, Lyle E. Muller
We study image segmentation using spatiotemporal dynamics in a recurrent neural network where the state of each unit is given by a complex number.
no code implementations • 7 Nov 2023 • Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh
Symmetries have been leveraged to improve the generalization of neural networks through different mechanisms from data augmentation to equivariant architectures.
1 code implementation • 16 Oct 2023 • Takeru Miyato, Bernhard Jaeger, Max Welling, Andreas Geiger
As transformers are equivariant to the permutation of input tokens, encoding the positional information of tokens is necessary for many tasks.
1 code implementation • NeurIPS 2023 • Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling
A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation.
1 code implementation • 11 Sep 2023 • Tim Bakker, Herke van Hoof, Max Welling
In this work, we propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem with an Attentive Conditional Neural Process model.
1 code implementation • 3 Sep 2023 • T. Anderson Keller, Lyle Muller, Terrence Sejnowski, Max Welling
Traveling waves of neural activity have been observed throughout the brain at a diversity of regions and scales; however, their precise computational role is still debated.
no code implementations • 14 Aug 2023 • Winfried van den Dool, Tijmen Blankevoort, Max Welling, Yuki M. Asano
In recent years, the application of neural networks as an alternative to classical numerical methods for solving Partial Differential Equations has emerged as a potential paradigm shift in this century-old mathematical field.
1 code implementation • 25 Apr 2023 • Yue Song, T. Anderson Keller, Nicu Sebe, Max Welling
In this work, we instead propose to model latent structures with a learned dynamic potential landscape, thereby performing latent traversals as the flow of samples down the landscape's gradient.
no code implementations • 14 Apr 2023 • Evgenii Egorov, Roberto Bondesan, Max Welling
Quantum error correction is a critical component for scaling up quantum computing.
1 code implementation • 13 Feb 2023 • David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, Johannes Brandstetter
GCANs are based on symmetry group transformations using geometric (Clifford) algebras.
no code implementations • 10 Jan 2023 • Alexandre Adam, Laurence Perreault-Levasseur, Yashar Hezaveh, Max Welling
In this work, we use a neural network based on the Recurrent Inference Machine (RIM) to simultaneously reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
2 code implementations • 24 Oct 2022 • Arne Schneuing, Yuanqi Du, Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Lió, Carla Gomes, Max Welling, Michael Bronstein, Bruno Correia
Structure-based drug design (SBDD) aims to design small-molecule ligands that bind with high affinity and specificity to pre-determined protein targets.
1 code implementation • 11 Oct 2022 • Ilia Igashov, Hannes Stärk, Clément Vignac, Victor Garcia Satorras, Pascal Frossard, Max Welling, Michael Bronstein, Bruno Correia
Additionally, the model automatically determines the number of atoms in the linker and its attachment points to the input fragments.
1 code implementation • 13 Sep 2022 • Zhuo Su, Max Welling, Matti Pietikäinen, Li Liu
More precisely, the presence of scalar features makes the bulk of the network binarizable, while vector features serve to retain rich structural information and ensure SO(3) equivariance.
1 code implementation • 8 Sep 2022 • Johannes Brandstetter, Rianne van den Berg, Max Welling, Jayesh K. Gupta
We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates with their Clifford counterparts on 2D Navier-Stokes and weather-modeling tasks, as well as the 3D Maxwell equations.
no code implementations • 18 Jul 2022 • Changyong Oh, Roberto Bondesan, Dana Kianfar, Rehan Ahmed, Rishubh Khurana, Payal Agarwal, Romain Lepert, Mysore Sriram, Max Welling
Macro placement is the problem of placing memory blocks on a chip canvas.
1 code implementation • NeurIPS 2023 • Lars Holdijk, Yuanqi Du, Ferry Hooft, Priyank Jaini, Bernd Ensing, Max Welling
We consider the problem of sampling transition paths between two given metastable states of a molecular system, e.g., a folded and unfolded protein or products and reactants of a chemical reaction.
1 code implementation • 5 Apr 2022 • Sindy Löwe, Phillip Lippe, Maja Rudolph, Max Welling
Object-centric representations form the basis of human perception, and enable us to reason about the world and to systematically generalize to new settings.
4 code implementations • 31 Mar 2022 • Emiel Hoogeboom, Victor Garcia Satorras, Clément Vignac, Max Welling
This work introduces a diffusion model for molecule generation in 3D that is equivariant to Euclidean transformations.
no code implementations • 19 Mar 2022 • Shi Hu, Eric Nalisnick, Max Welling
In the literature on adversarial examples, white box and black box attacks have received the most attention.
1 code implementation • 18 Mar 2022 • Anna Kuzina, Max Welling, Jakub M. Tomczak
Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations.
no code implementations • 15 Mar 2022 • Shreya Kadambi, Arash Behboodi, Joseph B. Soriaga, Max Welling, Roohollah Amiri, Srinivas Yerramalli, Taesang Yoo
The model is based on an encoder-decoder architecture.
1 code implementation • 15 Feb 2022 • Johannes Brandstetter, Max Welling, Daniel E. Worrall
In this paper, we present a method that can partially alleviate this problem by improving neural PDE solver sample complexity: Lie point symmetry data augmentation (LPSDA).
1 code implementation • ICLR 2022 • Johannes Brandstetter, Daniel Worrall, Max Welling
The numerical solution of partial differential equations (PDEs) is difficult, having led to a century of research so far.
no code implementations • NeurIPS 2021 • Farhad Ghazvinian Zanjani, Ilia Karmanov, Hanno Ackermann, Daniel Dijkman, Simone Merlin, Max Welling, Fatih Porikli
This work presents a data-driven approach for the indoor localization of an observer on a 2D topological map of the environment.
1 code implementation • 26 Nov 2021 • Kirill Neklyudov, Priyank Jaini, Max Welling
We accomplish this by viewing the evolution of the modeling distribution as (i) the evolution of the energy function, and (ii) the evolution of the samples from this distribution along some vector field.
no code implementations • 19 Nov 2021 • Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
1 code implementation • NeurIPS Workshop SVRHM 2021 • T. Anderson Keller, Qinghe Gao, Max Welling
Category-selectivity in the brain describes the observation that certain spatially localized areas of the cerebral cortex tend to respond robustly and selectively to stimuli from specific limited categories.
1 code implementation • ICLR 2022 • Elise van der Pol, Herke van Hoof, Frans A. Oliehoek, Max Welling
This paper introduces Multi-Agent MDP Homomorphic Networks, a class of networks that allows distributed execution using only local information, yet is able to share experience between global symmetries in the joint state-action space of cooperative multi-agent systems.
2 code implementations • ICLR 2022 • Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, Max Welling
Including covariant information, such as position, force, velocity or spin is important in many tasks in computational physics and chemistry.
no code implementations • 26 Sep 2021 • Kumar Pratik, Rana Ali Amjad, Arash Behboodi, Joseph B. Soriaga, Max Welling
Through extensive experiments on the CDL-B channel model, we show that the HKF can be used for tracking the channel over a wide range of Doppler values, matching the performance of a Kalman filter with genie Doppler information.
1 code implementation • NeurIPS 2021 • T. Anderson Keller, Max Welling
Finally, we demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
no code implementations • 14 Jul 2021 • Matthias Reisser, Christos Louizos, Efstratios Gavves, Max Welling
Federated learning (FL) has emerged as the predominant approach for collaborative training of neural network models across multiple users, without the need to gather the data at a central location.
1 code implementation • 18 Jun 2021 • Kirill Neklyudov, Roberto Bondesan, Max Welling
Deterministic dynamics is an essential part of many MCMC algorithms, e.g.
no code implementations • NeurIPS 2021 • Priyank Jaini, Lars Holdijk, Max Welling
We focus on the problem of efficient sampling and learning of probability densities by incorporating symmetries in probabilistic models.
1 code implementation • 10 Jun 2021 • Maurice Weiler, Patrick Forré, Erik Verlinde, Max Welling
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
1 code implementation • NeurIPS 2021 • Victor Garcia Satorras, Emiel Hoogeboom, Fabian B. Fuchs, Ingmar Posner, Max Welling
This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs).
4 code implementations • 19 Apr 2021 • Marc Finzi, Max Welling, Andrew Gordon Wilson
Symmetries and equivariance are fundamental to the generalization of neural networks on domains such as images, graphs, and point clouds.
no code implementations • 18 Apr 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setting, where each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
1 code implementation • 10 Mar 2021 • Anna Kuzina, Max Welling, Jakub M. Tomczak
In this work, we explore adversarial attacks on the Variational Autoencoders (VAE).
1 code implementation • 8 Mar 2021 • Maximilian Ilse, Patrick Forré, Max Welling, Joris M. Mooij
Second, for continuous variables and assuming a linear-Gaussian model, we derive equality constraints for the parameters of the observational and interventional distributions.
no code implementations • 8 Mar 2021 • Roberto Bondesan, Max Welling
In this work we develop a quantum field theory formalism for deep learning, where input signals are encoded in Gaussian states, a generalization of Gaussian processes which encode the agent's uncertainty about the input signal.
1 code implementation • 26 Feb 2021 • Changyong Oh, Roberto Bondesan, Efstratios Gavves, Max Welling
In this work we propose a batch Bayesian optimization method for combinatorial problems on permutations, which is well suited for expensive-to-evaluate objectives.
no code implementations • 25 Feb 2021 • Changyong Oh, Efstratios Gavves, Max Welling
In experiments, we demonstrate the improved sample efficiency of GP BO using FM kernels (BO-FM). On synthetic problems and hyperparameter optimization problems, BO-FM outperforms competitors consistently.
2 code implementations • NeurIPS 2021 • Wouter Kool, Herke van Hoof, Joaquim Gromicho, Max Welling
Routing problems are a class of combinatorial problems with many practical applications.
5 code implementations • 19 Feb 2021 • Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations called E(n)-Equivariant Graph Neural Networks (EGNNs).
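The EGNN update can be illustrated with a minimal sketch: messages are computed from invariant quantities such as squared pairwise distances, and coordinates are updated along relative difference vectors, which keeps the layer equivariant to rotations, translations and reflections. The functions phi_e, phi_x, phi_h below are random stand-ins for the learned MLPs, so this is a structural sketch rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def egnn_layer(h, x, phi_e, phi_x, phi_h):
    """One E(n)-equivariant message-passing step: messages depend on squared
    distances (invariant), coordinates move along relative difference vectors
    (equivariant). phi_e, phi_x, phi_h stand in for small learned MLPs."""
    n = h.shape[0]
    m_agg = np.zeros_like(h)
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)
            m_ij = phi_e(np.concatenate([h[i], h[j], [d2]]))
            x_new[i] += (x[i] - x[j]) * phi_x(m_ij) / (n - 1)
            m_agg[i] += m_ij
    h_new = np.array([phi_h(np.concatenate([h[i], m_agg[i]])) for i in range(n)])
    return h_new, x_new

d_h = 4
W_e = rng.normal(size=(d_h, 2 * d_h + 1)) * 0.1
W_x = rng.normal(size=(1, d_h)) * 0.1
W_h = rng.normal(size=(d_h, 2 * d_h)) * 0.1
phi_e = lambda v: np.tanh(W_e @ v)
phi_x = lambda m: float(W_x @ m)
phi_h = lambda v: np.tanh(W_h @ v)

h = rng.normal(size=(5, d_h))   # node features
x = rng.normal(size=(5, 3))     # 3D coordinates
h_new, x_new = egnn_layer(h, x, phi_e, phi_x, phi_h)
print(h_new.shape, x_new.shape)
```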
2 code implementations • NeurIPS 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
Argmax Flows are defined by a composition of a continuous distribution (such as a normalizing flow), and an argmax function.
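A minimal sketch of this composition, with a diagonal Gaussian standing in for the learned normalizing flow (names and parameters are illustrative): a continuous vector is sampled and a deterministic argmax maps it to a category.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_categorical_via_argmax(logits_like, n_samples):
    """Sample categories by drawing a continuous vector and taking its argmax.

    Here the continuous distribution is a diagonal Gaussian whose means are the
    illustrative 'logits_like' scores; Argmax Flows would use a learned flow."""
    K = logits_like.shape[0]
    z = rng.normal(loc=logits_like, scale=1.0, size=(n_samples, K))  # continuous samples
    return z.argmax(axis=1)  # deterministic argmax maps continuous z to a category

scores = np.array([0.5, 2.0, -1.0, 0.0])
print(np.bincount(sample_categorical_via_argmax(scores, 10_000), minlength=4) / 10_000)
```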
1 code implementation • 4 Feb 2021 • Priyank Jaini, Didrik Nielsen, Max Welling
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
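For reference, a textbook Hybrid (Hamiltonian) Monte Carlo step with a leapfrog integrator is sketched below; this illustrates the standard algorithm the sentence refers to, not the extension proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_step(theta, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20):
    """One textbook HMC step: sample a momentum, simulate Hamiltonian dynamics
    with the leapfrog integrator, and accept or reject with a Metropolis test."""
    p = rng.normal(size=theta.shape)
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(theta_new)   # half step for momentum
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new                     # full step for position
        p_new += step_size * grad_log_prob(theta_new)      # full step for momentum
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(theta_new)    # final half step
    log_accept = (log_prob(theta_new) - 0.5 * p_new @ p_new) - (log_prob(theta) - 0.5 * p @ p)
    return theta_new if np.log(rng.uniform()) < log_accept else theta

# Target: standard 2D Gaussian.
log_prob = lambda th: -0.5 * th @ th
grad_log_prob = lambda th: -th
theta, samples = np.array([3.0, -3.0]), []
for _ in range(2_000):
    theta = hmc_step(theta, log_prob, grad_log_prob)
    samples.append(theta)
print(np.mean(samples, axis=0), np.std(samples, axis=0))
```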
no code implementations • 1 Jan 2021 • Hossein Hosseini, Hyunsin Park, Sungrack Yun, Christos Louizos, Joseph Soriaga, Max Welling
We consider the problem of training User Verification (UV) models in a federated setup, where the conventional loss functions are not applicable due to the constraints that each user has access to the data of only one class and user embeddings cannot be shared with the server or other users.
no code implementations • 1 Jan 2021 • Christos Louizos, Matthias Reisser, Joseph Soriaga, Max Welling
Federated averaging (FedAvg), despite its simplicity, has been the main approach in training neural networks in the federated learning setting.
no code implementations • AABI Symposium 2021 • Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, Max Welling
This paper introduces a new method to define and train continuous distributions such as normalizing flows directly on categorical data, for example text and image segmentation.
1 code implementation • 14 Nov 2020 • T. Anderson Keller, Jorn W. T. Peters, Priyank Jaini, Emiel Hoogeboom, Patrick Forré, Max Welling
Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework.
2 code implementations • NeurIPS 2020 • Tim Bakker, Herke van Hoof, Max Welling
In today's clinical practice, magnetic resonance imaging (MRI) is routinely accelerated through subsampling of the associated Fourier domain.
1 code implementation • ICLR 2021 • Marc Finzi, Roberto Bondesan, Max Welling
Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods.
no code implementations • 21 Oct 2020 • Roberto Bondesan, Max Welling
We develop a new quantum neural network layer designed to run efficiently on a quantum computer but that can be simulated on a classical computer when restricted in the way it entangles input states.
1 code implementation • 15 Oct 2020 • Kirill Neklyudov, Max Welling
Markov Chain Monte Carlo (MCMC) algorithms ubiquitously employ complex deterministic transformations to generate proposal points that are then filtered by the Metropolis-Hastings-Green (MHG) test.
no code implementations • NeurIPS 2020 • Pim de Haan, Taco Cohen, Max Welling
A key requirement for graph neural networks is that they must process a graph in a way that does not depend on how the graph is described.
no code implementations • 9 Jul 2020 • Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga, Max Welling
In this paper, we propose Federated User Authentication (FedUA), a framework for privacy-preserving training of UA models.
3 code implementations • NeurIPS 2020 • Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling
Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions.
2 code implementations • NeurIPS 2020 • Elise van der Pol, Daniel E. Worrall, Herke van Hoof, Frans A. Oliehoek, Max Welling
MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP.
1 code implementation • 30 Jun 2020 • Kumar Pratik, Bhaskar D. Rao, Max Welling
Each iterative unit is a neural computation module comprising three sub-modules: the likelihood module, the encoder module, and the predictor module.
no code implementations • 30 Jun 2020 • Kirill Neklyudov, Max Welling, Evgenii Egorov, Dmitry Vetrov
Markov Chain Monte Carlo (MCMC) is a computational approach to fundamental problems such as inference, integration, optimization, and simulation.
1 code implementation • 18 Jun 2020 • Sindy Löwe, David Madras, Richard Zemel, Max Welling
This enables us to train a single, amortized model that infers causal relations across samples with different underlying causal graphs, and thus leverages the shared dynamics information.
5 code implementations • NeurIPS 2020 • Fabian B. Fuchs, Daniel E. Worrall, Volker Fischer, Max Welling
We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations.
1 code implementation • NeurIPS 2020 • Emiel Hoogeboom, Victor Garcia Satorras, Jakub M. Tomczak, Max Welling
Empirically, we show that the convolution exponential outperforms other linear transformations in generative flows on CIFAR10 and the graph convolution exponential improves the performance of graph normalizing flows.
1 code implementation • NeurIPS 2020 • Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling
We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.
no code implementations • 21 Apr 2020 • Mirgahney Mohamed, Gabriele Cesa, Taco S. Cohen, Max Welling
Thanks to their improved data efficiency, equivariant neural networks have gained increased interest in the deep learning community.
no code implementations • CVPR 2020 • Zheng Ding, Yifan Xu, Weijian Xu, Gaurav Parmar, Yang Yang, Max Welling, Zhuowen Tu
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
1 code implementation • ICLR 2021 • Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling
A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
1 code implementation • 4 Mar 2020 • Victor Garcia Satorras, Max Welling
In this work we first extend graph neural networks to factor graphs (FG-GNN).
1 code implementation • 27 Feb 2020 • Elise van der Pol, Thomas Kipf, Frans A. Oliehoek, Max Welling
We introduce a contrastive loss function that enforces action equivariance on the learned representations.
no code implementations • ICLR 2020 • Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization.
1 code implementation • ICLR 2020 • Wouter Kool, Herke van Hoof, Max Welling
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples.
no code implementations • 13 Feb 2020 • Shi Hu, Nicola Pezzotti, Max Welling
In this paper, we demonstrate that the predictive uncertainty estimated by current methods does not correlate well with prediction error, by decomposing the latter into random and systematic errors and showing that the former is equivalent to the variance of the random error.
no code implementations • 20 Dec 2019 • Andrey Kuzmin, Markus Nagel, Saurabh Pitre, Sandeep Pendyam, Tijmen Blankevoort, Max Welling
The success of deep neural networks in many real-world applications is leading to new challenges in building more efficient architectures.
1 code implementation • 29 Nov 2019 • Christina Winkler, Daniel Worrall, Emiel Hoogeboom, Max Welling
Normalizing Flows (NFs) are able to model complicated distributions p(y) with strong inter-dimensional correlations and high multimodality by transforming a simple base density p(z) through an invertible neural network under the change of variables formula.
3 code implementations • ICLR 2020 • Thomas Kipf, Elise van der Pol, Max Welling
Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
1 code implementation • NeurIPS 2019 • Patrick Putzky, Max Welling
Iterative learning to infer approaches have become popular solvers for inverse problems.
1 code implementation • 20 Oct 2019 • Patrick Putzky, Dimitrios Karkalousos, Jonas Teuwen, Nikita Miriakov, Bart Bakker, Matthan Caan, Max Welling
We, team AImsterdam, summarize our submission to the fastMRI challenge (Zbontar et al., 2018).
1 code implementation • 15 Oct 2019 • Frederik Harder, Jonas Köhler, Max Welling, Mijung Park
Developing a differentially private deep learning algorithm is challenging, due to the difficulty in analyzing the sensitivity of objective functions that are typically used to train deep neural networks.
no code implementations • 22 Jul 2019 • Xiahan Shi, Leonard Salewski, Martin Schiegg, Zeynep Akata, Max Welling
Instead, we consider the extended setup of generalized few-shot learning (GFSL), where the model is required to perform classification on the joint label space consisting of both previously seen and novel classes.
1 code implementation • ICLR 2020 • Babak Ehteshami Bejnordi, Tijmen Blankevoort, Max Welling
To achieve this, we introduce a new residual block architecture that gates convolutional channels in a fine-grained manner.
1 code implementation • 3 Jul 2019 • Shi Hu, Daniel Worrall, Stefan Knegt, Bas Veeling, Henkjan Huisman, Max Welling
The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung node segmentation.
1 code implementation • NeurIPS 2019 • Christos Louizos, Xiahan Shi, Klamer Schutte, Max Welling
We present a new family of exchangeable stochastic processes, the Functional Neural Processes (FNPs).
1 code implementation • 18 Jun 2019 • Karen Ullrich, Rianne van den Berg, Marcus Brubaker, David Fleet, Max Welling
Finally, we demonstrate how the reconstruction algorithm can be extended with an amortized inference scheme on unknown attributes such as object pose.
5 code implementations • ICCV 2019 • Markus Nagel, Mart van Baalen, Tijmen Blankevoort, Max Welling
This improves quantized-model accuracy and can be applied to many common computer vision architectures with a straightforward API call.
6 code implementations • 6 Jun 2019 • Diederik P. Kingma, Max Welling
Variational autoencoders provide a principled framework for learning deep latent-variable models and corresponding inference models.
1 code implementation • NeurIPS 2019 • Victor Garcia Satorras, Zeynep Akata, Max Welling
A graphical model is a structured representation of the data generating process.
no code implementations • 6 Jun 2019 • Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling
In these proceedings we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).
1 code implementation • NeurIPS 2019 • Daniel E. Worrall, Max Welling
We introduce deep scale-spaces (DSS), a generalization of convolutional neural networks, exploiting the scale symmetry structure of conventional image recognition tasks.
3 code implementations • 24 May 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
1 code implementation • NeurIPS 2019 • Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, Max Welling
For that reason, we introduce a flow-based generative model for ordinal discrete data called Integer Discrete Flow (IDF): a bijective integer map that can learn rich transformations on high-dimensional data.
no code implementations • ICLR 2019 • Peter O'Connor, Efstratios Gavves, Max Welling
In response to this, Scellier & Bengio (2017) proposed Equilibrium Propagation, a method for gradient-based training of neural networks which uses only local learning rules and, crucially, does not rely on neurons having a mechanism for back-propagating an error gradient.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.
no code implementations • ICLR Workshop drlStructPred 2019 • Wouter Kool, Herke van Hoof, Max Welling
REINFORCE can be used to train models in structured prediction settings to directly optimize the test-time objective.
4 code implementations • 14 Mar 2019 • Wouter Kool, Herke van Hoof, Max Welling
We show how to implicitly apply this 'Gumbel-Top-$k$' trick on a factorized distribution over sequences, allowing us to draw exact samples without replacement using a Stochastic Beam Search.
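The Gumbel-Top-k trick itself is easy to state: perturb the log-probabilities with i.i.d. Gumbel noise and keep the indices of the k largest perturbed values, which yields a sample of size k without replacement. A minimal sketch for a flat categorical distribution (rather than the factorized sequence distribution used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_top_k(log_probs, k):
    """Draw k items without replacement by perturbing log-probabilities with
    i.i.d. Gumbel noise and keeping the k largest perturbed values."""
    gumbel = -np.log(-np.log(rng.uniform(size=log_probs.shape)))
    perturbed = log_probs + gumbel
    return np.argsort(-perturbed)[:k]  # indices of the top-k perturbed scores

p = np.array([0.5, 0.2, 0.2, 0.1])
print(gumbel_top_k(np.log(p), k=2))
```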
2 code implementations • 11 Feb 2019 • Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design.
Ranked #23 on Semantic Segmentation on Stanford2D3D Panoramic
1 code implementation • NeurIPS 2019 • Changyong Oh, Jakub M. Tomczak, Efstratios Gavves, Max Welling
On this combinatorial graph, we propose an ARD diffusion kernel with which the GP is able to model high-order interactions between variables leading to better performance.
1 code implementation • 30 Jan 2019 • Emiel Hoogeboom, Rianne van den Berg, Max Welling
We generalize the 1 x 1 convolutions proposed in Glow to invertible d x d convolutions, which are more flexible since they operate on both channel and spatial axes.
no code implementations • ICLR 2020 • Chongxuan Li, Chao Du, Kun Xu, Max Welling, Jun Zhu, Bo Zhang
We propose a black-box algorithm called Adversarial Variational Inference and Learning (AdVIL) to perform inference and learning on a general Markov random field (MRF).
no code implementations • 5 Jan 2019 • Warren R. Morningstar, Laurence Perreault Levasseur, Yashar D. Hezaveh, Roger Blandford, Phil Marshall, Patrick Putzky, Thomas D. Rueter, Risa Wechsler, Max Welling
We present a machine learning method for the reconstruction of the undistorted images of background sources in strongly lensed systems.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies
1 code implementation • 21 Nov 2018 • Raghavendra Selvan, Thomas Kipf, Max Welling, Antonio Garcia-Uceda Juarez, Jesper H. Pedersen, Jens Petersen, Marleen de Bruijne
Graph refinement, or the task of obtaining subgraphs of interest from over-complete graphs, can have many varied applications.
2 code implementations • ICLR 2019 • Andrei Atanov, Arsenii Ashukha, Kirill Struminsky, Dmitry Vetrov, Max Welling
Bayesian inference is known to provide a general framework for incorporating prior knowledge or specific properties into machine learning models via carefully choosing a prior distribution.
no code implementations • 12 Oct 2018 • Bastiaan S. Veeling, Rianne van den Berg, Max Welling
High-risk domains require reliable confidence estimates from predictive models.
1 code implementation • ICLR 2019 • Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, Max Welling
Neural network quantization has become an important research area due to its great impact on deployment of large models on resource constrained devices.
2 code implementations • ICLR 2019 • Giorgio Patrini, Rianne van den Berg, Patrick Forré, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, Frank Nielsen
We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error.
1 code implementation • ICLR 2019 • Jorn W. T. Peters, Max Welling
Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks.
1 code implementation • NeurIPS 2018 • Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, Taco Cohen
We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3.
no code implementations • 2 Jul 2018 • Jasper Linmans, Jim Winkens, Bastiaan S. Veeling, Taco S. Cohen, Max Welling
The group equivariant CNN framework is extended for segmentation by introducing a new equivariant (G->Z2)-convolution that transforms feature maps on a group to planar feature maps.
4 code implementations • 8 Jun 2018 • Bastiaan S. Veeling, Jasper Linmans, Jim Winkens, Taco Cohen, Max Welling
We propose a new model for digital pathology segmentation, based on the observation that histopathology images are inherently symmetric under rotation and reflection.
Ranked #7 on Breast Tumour Classification on PCam
1 code implementation • ICML 2018 • Changyong Oh, Efstratios Gavves, Max Welling
A major challenge in Bayesian Optimization is the boundary issue (Swersky, 2017) where an algorithm spends too many evaluations near the boundary of its search space.
no code implementations • 24 May 2018 • Mevlana Gemici, Zeynep Akata, Max Welling
We introduce Primal-Dual Wasserstein GAN, a new learning algorithm for building latent variable models of the data distribution based on the primal and the dual formulations of the optimal transport (OT) problem.
no code implementations • 12 Apr 2018 • Raghavendra Selvan, Thomas Kipf, Max Welling, Jesper H. Pedersen, Jens Petersen, Marleen de Bruijne
We present extraction of tree structures, such as airways, from image data as a graph refinement task.
no code implementations • 10 Apr 2018 • Raghavendra Selvan, Max Welling, Jesper H. Pedersen, Jens Petersen, Marleen de Bruijne
Performance of the method is compared with two baselines: the first uses probability images from a trained voxel classifier with region growing, similar to one of the best-performing methods in the EXACT'09 airway challenge, and the second applies Bayesian smoothing to these probability images.
1 code implementation • NeurIPS 2018 • Chongxuan Li, Max Welling, Jun Zhu, Bo Zhang
We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data.
14 code implementations • ICLR 2019 • Wouter Kool, Herke van Hoof, Max Welling
The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development.
2 code implementations • 15 Mar 2018 • Rianne van den Berg, Leonard Hasenclever, Jakub M. Tomczak, Max Welling
Variational inference relies on flexible approximate posterior distributions.
1 code implementation • ICLR 2018 • Emiel Hoogeboom, Jorn W. T. Peters, Taco S. Cohen, Max Welling
We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget.
9 code implementations • ICML 2018 • Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, Richard Zemel
Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics.
17 code implementations • ICML 2018 • Maximilian Ilse, Jakub M. Tomczak, Max Welling
Multiple instance learning (MIL) is a variation of supervised learning where a single class label is assigned to a bag of instances.
Ranked #7 on Aerial Scene Classification on UCM (50% as trainset)
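The attention-based MIL pooling proposed in this paper can be sketched as follows, assuming the non-gated variant: each instance embedding receives a learned attention weight and the bag representation is their weighted sum. The parameters below are random stand-ins for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling: weight each instance embedding h_k by a
    learned attention score and sum, giving a single bag representation.

    H: (num_instances, d) instance embeddings; V: (hidden, d); w: (hidden,)."""
    scores = w @ np.tanh(V @ H.T)              # (num_instances,)
    a = np.exp(scores) / np.exp(scores).sum()  # softmax over instances in the bag
    return a @ H                               # (d,) bag embedding

H = rng.normal(size=(5, 8))   # a bag of 5 instances with 8-dim embeddings
V = rng.normal(size=(16, 8))
w = rng.normal(size=16)
print(attention_mil_pool(H, V, w).shape)
```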
3 code implementations • ICLR 2018 • Taco S. Cohen, Mario Geiger, Jonas Koehler, Max Welling
Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images.
no code implementations • ICLR 2018 • Christos Louizos, Max Welling, Diederik P. Kingma
We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
no code implementations • ICLR 2018 • Mary Phuong, Max Welling, Nate Kushman, Ryota Tomioka, Sebastian Nowozin
Thus, we decouple the choice of decoder capacity and the latent code dimensionality from the amount of information stored in the code.
4 code implementations • 4 Dec 2017 • Christos Louizos, Max Welling, Diederik P. Kingma
We further propose the hard concrete distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid.
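A minimal sketch of sampling such a hard concrete gate, following the description above; the stretch interval (gamma, zeta) and temperature beta are illustrative values, and log_alpha is the learnable location parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1, size=1):
    """Sample a gate z in [0, 1]: draw from a binary concrete distribution,
    stretch it beyond the unit interval, then apply a hard-sigmoid so that
    the gate places point mass exactly on 0 and 1."""
    u = rng.uniform(size=size)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))  # binary concrete
    s_stretched = s * (zeta - gamma) + gamma                               # stretch to (gamma, zeta)
    return np.clip(s_stretched, 0.0, 1.0)                                  # hard-sigmoid

print(hard_concrete_gate(log_alpha=0.0, size=5))
```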
no code implementations • 1 Dec 2017 • Jakub M. Tomczak, Maximilian Ilse, Max Welling
The computer-aided analysis of medical scans is a longstanding goal in the medical imaging field.
no code implementations • 17 Nov 2017 • Marco Federici, Karen Ullrich, Max Welling
Compression of Neural Networks (NN) has become a highly studied topic in recent years.
2 code implementations • 14 Sep 2017 • Taco Cohen, Mario Geiger, Jonas Köhler, Max Welling
Many areas of science and engineering deal with signals with other symmetries, such as rotation invariant data on the sphere.
4 code implementations • 13 Jun 2017 • Patrick Putzky, Max Welling
Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference.
1 code implementation • ICLR 2018 • Peter O'Connor, Efstratios Gavves, Max Welling
We present a variant on backpropagation for neural networks in which computation scales with the rate of change of the data - not the rate at which we process the data.
1 code implementation • 7 Jun 2017 • Jakub M. Tomczak, Max Welling
In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow.
15 code implementations • 7 Jun 2017 • Rianne van den Berg, Thomas N. Kipf, Max Welling
We consider matrix completion for recommender systems from the point of view of link prediction on graphs.
Ranked #4 on Recommendation Systems on YahooMusic Monti (using extra training data)
3 code implementations • NeurIPS 2017 • Christos Louizos, Karen Ullrich, Max Welling
Compression and computational efficiency in deep learning have become a problem of great significance.
6 code implementations • NeurIPS 2017 • Christos Louizos, Uri Shalit, Joris Mooij, David Sontag, Richard Zemel, Max Welling
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers.
Ranked #9 on Causal Inference on IHDP
6 code implementations • 19 May 2017 • Jakub M. Tomczak, Max Welling
In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call "Variational Mixture of Posteriors" prior, or VampPrior for short.
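The VampPrior replaces the standard-normal prior with a mixture of variational posteriors evaluated at learned pseudo-inputs, p(z) = (1/K) sum_k q(z | u_k). A hedged sketch with a toy stand-in encoder (a trained network would supply the posterior parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_vamp_prior(z, pseudo_inputs, encoder):
    """log p(z) under a mixture-of-variational-posteriors prior.

    'encoder' maps an input to (mu, log_var) of a diagonal Gaussian posterior;
    here it is a random stand-in for a trained network."""
    log_components = []
    for u in pseudo_inputs:
        mu, log_var = encoder(u)
        log_q = -0.5 * np.sum(log_var + (z - mu) ** 2 / np.exp(log_var) + np.log(2 * np.pi))
        log_components.append(log_q)
    log_components = np.array(log_components)
    m = log_components.max()                              # log-mean-exp over the K components
    return m + np.log(np.exp(log_components - m).mean())

toy_encoder = lambda u: (0.1 * u[:2], np.zeros(2))   # stand-in encoder, 2-dim latent space
pseudo_inputs = rng.normal(size=(10, 4))             # K = 10 pseudo-inputs (random here, learned in the paper)
print(log_vamp_prior(np.zeros(2), pseudo_inputs, toy_encoder))
```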
27 code implementations • 17 Mar 2017 • Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, Max Welling
We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification.
Ranked #1 on Node Classification on AIFB
7 code implementations • ICML 2017 • Christos Louizos, Max Welling
We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks.
1 code implementation • 15 Feb 2017 • Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, Max Welling
This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input.
3 code implementations • 13 Feb 2017 • Karen Ullrich, Edward Meeds, Max Welling
The success of deep learning in numerous application domains created the desire to run and train them on mobile devices.
3 code implementations • 27 Dec 2016 • Taco S. Cohen, Max Welling
It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks.
2 code implementations • NeurIPS 2016 • Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.
Ranked #40 on Image Generation on CIFAR-10 (bits/dimension metric)
2 code implementations • 29 Nov 2016 • Jakub M. Tomczak, Max Welling
One way of enriching the variational posterior distribution is the application of normalizing flows, i.e., a series of invertible transformations applied to latent variables with a simple posterior.
21 code implementations • 21 Nov 2016 • Thomas N. Kipf, Max Welling
We introduce the variational graph auto-encoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE).
Ranked #1 on Link Prediction on Pubmed (ACC metric)
no code implementations • 8 Nov 2016 • Gianfranco Bertone, Marc Peter Deisenroth, Jong Soo Kim, Sebastian Liem, Roberto Ruiz de Austri, Max Welling
The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators.
1 code implementation • 7 Nov 2016 • Peter O'Connor, Max Welling
Thus the amount of computation that the network does scales with the amount of change in the input and layer activations, rather than the size of the network.
1 code implementation • 1 Nov 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling
Many applications of Bayesian data analysis involve sensitive information, motivating methods which ensure that privacy is protected.
no code implementations • 14 Sep 2016 • Mijung Park, James Foulds, Kamalika Chaudhuri, Max Welling
We develop a privatised stochastic variational inference method for Latent Dirichlet Allocation (LDA).
51 code implementations • 9 Sep 2016 • Thomas N. Kipf, Max Welling
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs.
Ranked #1 on Graph Property Prediction on ogbg-ppa
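The layer-wise propagation rule of this graph convolutional network multiplies the symmetrically normalized adjacency matrix (with self-loops) into the node features before a linear map and nonlinearity. A minimal numpy sketch of one such layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """One graph-convolution layer: symmetrically normalize the adjacency
    (with self-loops) and propagate features, H' = relu(D^-1/2 Â D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU nonlinearity

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # a 3-node path graph
H = rng.normal(size=(3, 4))                                   # node features
W = rng.normal(size=(4, 2))                                   # layer weights
print(gcn_layer(A, H, W).shape)                               # (3, 2)
```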
no code implementations • 28 Jun 2016 • Alexander Moreno, Tameem Adel, Edward Meeds, James M. Rehg, Max Welling
Approximate Bayesian Computation (ABC) is a framework for performing likelihood-free posterior inference for simulation models.
8 code implementations • 15 Jun 2016 • Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
The framework of normalizing flows provides a general strategy for flexible variational inference of posteriors over latent variables.
no code implementations • 24 May 2016 • Mijung Park, Max Welling
In particular, IRLS for L1 minimisation under the linear model provides a closed-form solution in each step, which is a simple multiplication between the inverse of the weighted second moment matrix and the weighted first moment vector.
1 code implementation • 23 May 2016 • Mijung Park, Jimmy Foulds, Kamalika Chaudhuri, Max Welling
The iterative nature of the expectation maximization (EM) algorithm presents a challenge for privacy-preserving estimation, as each iteration increases the amount of noise needed.
no code implementations • 23 Mar 2016 • James Foulds, Joseph Geumlek, Max Welling, Kamalika Chaudhuri
Bayesian inference has great promise for the privacy-preserving analysis of sensitive data, as posterior sampling automatically preserves differential privacy, an algorithmic notion of data privacy, under certain conditions (Dimitrakakis et al., 2014; Wang et al., 2015).
2 code implementations • 15 Mar 2016 • Christos Louizos, Max Welling
We introduce a variational Bayesian neural network where the parameters are governed via a probability distribution on random matrices.
no code implementations • 8 Mar 2016 • Luisa M. Zintgraf, Taco S. Cohen, Max Welling
We present a method for visualising the response of a deep neural network to a specific input.
1 code implementation • 26 Feb 2016 • Peter O'Connor, Max Welling
Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset.
1 code implementation • 24 Feb 2016 • Taco S. Cohen, Max Welling
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries.
Ranked #6 on Breast Tumour Classification on PCam
no code implementations • 9 Feb 2016 • Yutian Chen, Max Welling
Herding defines a deterministic dynamical system at the edge of chaos.
2 code implementations • 3 Nov 2015 • Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard Zemel
We investigate the problem of learning representations that are invariant to certain nuisance or sensitive factors of variation in the data while retaining as much of the remaining information as possible.
Ranked #4 on Sentiment Analysis on Multi-Domain Sentiment Dataset
no code implementations • 16 Oct 2015 • Wenzhe Li, Sungjin Ahn, Max Welling
We propose a stochastic gradient Markov chain Monte Carlo (SG-MCMC) algorithm for scalable inference in mixed-membership stochastic blockmodels (MMSB).
1 code implementation • NeurIPS 2015 • Anoop Korattikara, Vivek Rathod, Kevin Murphy, Max Welling
We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities, e.g., for applications involving bandits or active learning.
no code implementations • NeurIPS 2015 • Edward Meeds, Max Welling
We describe an embarrassingly parallel, anytime Monte Carlo method for likelihood-free models.
12 code implementations • NeurIPS 2015 • Diederik P. Kingma, Tim Salimans, Max Welling
Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models.
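Gaussian dropout multiplies activations by noise xi ~ N(1, alpha); in variational dropout the rate alpha becomes a learnable parameter optimized through the variational objective. The sketch below shows only the noise injection, with the objective and the learning of alpha omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_dropout(activations, log_alpha):
    """Gaussian dropout: multiply activations by noise xi ~ N(1, alpha).
    In variational dropout alpha (a free parameter here) would be learned
    per weight or unit through the variational objective."""
    alpha = np.exp(log_alpha)
    xi = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=activations.shape)
    return activations * xi

h = rng.normal(size=(2, 5))
print(gaussian_dropout(h, log_alpha=np.log(0.25)))
```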
no code implementations • 17 May 2015 • Taco S. Cohen, Max Welling
In a range of fields including the geosciences, molecular biology, robotics and computer vision, one encounters problems that involve random variables on manifolds.
no code implementations • 6 Mar 2015 • Edward Meeds, Robert Leenders, Max Welling
Approximate Bayesian computation (ABC) is a powerful and elegant framework for performing inference in simulation-based models.
no code implementations • 5 Mar 2015 • Sungjin Ahn, Anoop Korattikara, Nathan Liu, Suju Rajan, Max Welling
Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference.
no code implementations • 24 Dec 2014 • Taco S. Cohen, Max Welling
Starting with the idea that a good visual representation is one that transforms linearly under scene motions, we show, using the theory of group representations, that any such representation is equivalent to a combination of the elementary irreducible representations.
no code implementations • 9 Dec 2014 • Edward Meeds, Michael Chiang, Mary Lee, Olivier Cinquin, John Lowengrub, Max Welling
We propose a post optimization posterior analysis that computes and visualizes all the models that can generate equally good or better simulation results, subject to constraints.
1 code implementation • 8 Dec 2014 • Edward Meeds, Remco Hendriks, Said Al Faraby, Magiel Bruntink, Max Welling
Beyond an educational resource for ML, the browser has vast potential to not only improve the state-of-the-art in ML research, but also, inexpensively and on a massive scale, to bring sophisticated ML learning and prediction to the public at large.
no code implementations • 23 Oct 2014 • Tim Salimans, Diederik P. Kingma, Max Welling
Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables.
no code implementations • 9 Aug 2014 • Yutian Chen, Max Welling
In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields.
18 code implementations • NeurIPS 2014 • Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling
The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis.
Ranked #53 on Image Classification on SVHN
no code implementations • 26 Feb 2014 • Max Welling
When dealing with datasets containing a billion instances or with simulations that require a supercomputer to execute, computational resources become part of the equation.
no code implementations • 18 Feb 2014 • Taco Cohen, Max Welling
We present a new probabilistic model of compact commutative Lie groups that produces invariant-equivariant and disentangled representations of data.
no code implementations • 3 Feb 2014 • Diederik P. Kingma, Max Welling
Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models.
no code implementations • 13 Jan 2014 • Edward Meeds, Max Welling
Scientists often express their understanding of the world through a computationally demanding simulation program.
135 code implementations • 20 Dec 2013 • Diederik P. Kingma, Max Welling
First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods.
Ranked #11 on Image Clustering on Tiny-ImageNet
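The reparameterization mentioned above writes a Gaussian latent sample as a deterministic function of the variational parameters and independent noise, so a Monte Carlo estimate of the lower bound becomes differentiable with respect to those parameters. A minimal sketch with a diagonal Gaussian posterior and its analytic KL term:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_sample(mu, log_var, n_samples=1):
    """Reparameterization trick: z ~ N(mu, sigma^2) expressed as
    z = mu + sigma * eps with eps ~ N(0, I), so gradients flow through
    mu and log_var in a Monte Carlo estimate of the lower bound."""
    eps = rng.normal(size=(n_samples,) + mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) term of the lower bound for a diagonal Gaussian."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

mu, log_var = np.array([0.5, -1.0]), np.array([0.0, -0.5])
print(reparameterized_sample(mu, log_var, n_samples=3))
print(kl_to_standard_normal(mu, log_var))
```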
no code implementations • CVPR 2013 • Peter Welinder, Max Welling, Pietro Perona
How many labeled examples are needed to estimate a classifier's performance on a new dataset?
no code implementations • 10 May 2013 • James Foulds, Levi Boyles, Christopher DuBois, Padhraic Smyth, Max Welling
We propose a stochastic algorithm for collapsed variational Bayesian inference for LDA, which is simpler and more efficient than the state of the art method.
no code implementations • 19 Apr 2013 • Anoop Korattikara, Yutian Chen, Max Welling
Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets?
no code implementations • NeurIPS 2012 • Levi Boyles, Max Welling
We introduce a new prior for use in Nonparametric Bayesian Hierarchical Clustering.
1 code implementation • 9 May 2012 • Arthur Asuncion, Max Welling, Padhraic Smyth, Yee Whye Teh
Latent Dirichlet analysis, or topic modeling, is a flexible latent variable framework for modeling high-dimensional sparse count data.
1 code implementation • 15 Mar 2012 • Yutian Chen, Max Welling, Alex Smola
We extend the herding algorithm to continuous spaces by using the kernel trick.
no code implementations • NeurIPS 2011 • Levi Boyles, Anoop Korattikara, Deva Ramanan, Max Welling
Learning problems such as logistic regression are typically formulated as pure optimization problems defined on some loss function.
1 code implementation • ICML 2011 • Max Welling, Yee Whye Teh
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches.
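This is the setting of stochastic-gradient Langevin-style updates: an SGD step on a rescaled mini-batch estimate of the log-posterior gradient plus injected Gaussian noise whose variance matches the step size. The sketch below, on a toy Gaussian-mean model with illustrative step sizes, is an assumption-laden illustration rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x ~ N(theta, 1) with prior theta ~ N(0, 10); we sample the
# posterior over theta from mini-batches using a Langevin-style update.
data = rng.normal(loc=1.5, scale=1.0, size=10_000)
N, batch_size, step = len(data), 100, 1e-4
theta, samples = 0.0, []

for t in range(5_000):
    batch = rng.choice(data, size=batch_size, replace=False)
    grad_log_prior = -theta / 10.0
    grad_log_lik = (N / batch_size) * np.sum(batch - theta)   # rescaled mini-batch gradient
    noise = rng.normal(scale=np.sqrt(step))                   # injected Gaussian noise
    theta += 0.5 * step * (grad_log_prior + grad_log_lik) + noise
    samples.append(theta)

print(np.mean(samples[1_000:]))  # should be close to the data mean, ~1.5
```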
no code implementations • NeurIPS 2010 • Andrew Gelfand, Yutian Chen, Laurens Maaten, Max Welling
The paper develops a connection between traditional perceptron algorithms and recently introduced herding algorithms.
no code implementations • NeurIPS 2008 • Padhraic Smyth, Max Welling, Arthur U. Asuncion
Distributed learning is a problem of fundamental interest in machine learning and cognitive science.