1 code implementation • 9 Feb 2024 • Tara Akhound-Sadegh, Jarrid Rector-Brooks, Avishek Joey Bose, Sarthak Mittal, Pablo Lemos, Cheng-Hao Liu, Marcin Sendera, Siamak Ravanbakhsh, Gauthier Gidel, Yoshua Bengio, Nikolay Malkin, Alexander Tong
Efficiently generating statistically independent samples from an unnormalized probability distribution, such as equilibrium samples of many-body systems, is a foundational problem in science.
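The paper itself proposes a learned neural sampler for this problem; as a point of reference, a classical baseline that also handles unknown normalizing constants is self-normalized importance sampling. The sketch below is a minimal numpy illustration of that baseline (not the paper's method), with a toy unnormalized Gaussian target and all function names chosen for illustration:

```python
import numpy as np

def snis_expectation(log_unnorm_density, proposal_sampler, proposal_logpdf, f, n=100_000, seed=0):
    """Estimate E_p[f(x)] under an unnormalized target via self-normalized importance sampling."""
    rng = np.random.default_rng(seed)
    x = proposal_sampler(rng, n)
    log_w = log_unnorm_density(x) - proposal_logpdf(x)
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                      # self-normalization cancels the unknown constant
    return np.sum(w * f(x))

# Toy target: unnormalized N(2, 1), i.e. exp(-(x-2)^2 / 2) with no 1/sqrt(2*pi) factor.
log_target = lambda x: -0.5 * (x - 2.0) ** 2
# Proposal: N(0, 3), wide enough to cover the target.
sampler = lambda rng, n: rng.normal(0.0, 3.0, n)
logq = lambda x: -0.5 * (x / 3.0) ** 2 - np.log(3.0) - 0.5 * np.log(2 * np.pi)

est = snis_expectation(log_target, sampler, logq, f=lambda x: x)  # should be close to 2
```

The weights only ever use the *ratio* of target to proposal densities, which is why the missing normalizing constant of the target never needs to be computed.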
1 code implementation • 3 Oct 2023 • Avishek Joey Bose, Tara Akhound-Sadegh, Guillaume Huguet, Kilian Fatras, Jarrid Rector-Brooks, Cheng-Hao Liu, Andrei Cristian Nica, Maksym Korablyov, Michael Bronstein, Alexander Tong
The computational design of novel protein structures has the potential to greatly impact numerous scientific disciplines.
1 code implementation • 30 Sep 2023 • Quentin Bertrand, Avishek Joey Bose, Alexandre Duplessis, Marco Jiralerspong, Gauthier Gidel
In this paper, we develop a framework to rigorously study the impact of training generative models on mixed datasets -- from classical training on real data to self-consuming generative models trained on purely synthetic data.
1 code implementation • NeurIPS 2023 • Marco Jiralerspong, Avishek Joey Bose, Ian Gemp, Chongli Qin, Yoram Bachrach, Gauthier Gidel
The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data.
no code implementations • 16 Aug 2022 • Chin-wei Huang, Milad Aghajohari, Avishek Joey Bose, Prakash Panangaden, Aaron Courville
In this work, we generalize continuous-time diffusion models to arbitrary Riemannian manifolds and derive a variational framework for likelihood estimation.
no code implementations • 16 Oct 2021 • Avishek Joey Bose, Marcus Brubaker, Ivan Kobyzev
Generative modeling seeks to uncover the underlying factors that give rise to observed data; these factors can often be modeled as natural symmetries that manifest through invariances and equivariances under certain transformation laws.
1 code implementation • EMNLP 2021 • Nouha Dziri, Andrea Madotto, Osmar Zaiane, Avishek Joey Bose
Dialogue systems powered by large pre-trained language models (LMs) exhibit an innate ability to deliver fluent and natural-looking responses.
1 code implementation • ICLR 2022 • Andjela Mladenovic, Avishek Joey Bose, Hugo Berard, William L. Hamilton, Simon Lacoste-Julien, Pascal Vincent, Gauthier Gidel
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream.
no code implementations • EMNLP 2020 • Kian Ahrabian, Aarash Feizi, Yasmin Salehi, William L. Hamilton, Avishek Joey Bose
Learning low-dimensional representations for entities and relations in knowledge graphs using contrastive estimation represents a scalable and effective method for inferring connectivity patterns.
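The core of contrastive estimation here is scoring a true triple against a corrupted one. As a minimal sketch (not the paper's specific model or sampler), the following uses a TransE-style score with a logistic contrastive loss; the dimensions and triples are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 8, 5, 2
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def score(h, r, t):
    """TransE-style score: higher when head + relation lands near tail."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def contrastive_loss(h, r, t, t_neg):
    """Push the true triple's score above that of a corrupted (negative) triple."""
    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    return -np.log(sigmoid(score(h, r, t))) - np.log(sigmoid(-score(h, r, t_neg)))

loss = contrastive_loss(h=0, r=0, t=1, t_neg=2)
```

Training would backpropagate this loss into `E` and `R`; the quality of the corrupted tail `t_neg` (how it is sampled) is precisely what such negative-sampling work studies.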
1 code implementation • NeurIPS 2020 • Avishek Joey Bose, Gauthier Gidel, Hugo Berard, Andre Cianflone, Pascal Vincent, Simon Lacoste-Julien, William L. Hamilton
We introduce Adversarial Example Games (AEG), a framework that models the crafting of adversarial examples as a min-max game between a generator of attacks and a classifier.
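AEG's actual formulation pits a neural attack generator against a classifier; the following is only a toy numpy sketch of the underlying min-max dynamic, with a shared norm-bounded perturbation standing in for the generator and a logistic classifier standing in for the model (all data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])          # toy labels: sign of the first coordinate

w = np.zeros(2)               # classifier weights
delta = np.zeros(2)           # attacker's perturbation, kept inside an eps-ball
eps, lr = 0.5, 0.1

for _ in range(300):
    Xp = X + delta
    m = y * (Xp @ w)                           # margins on perturbed data
    p = 1.0 / (1.0 + np.exp(m))                # sigmoid(-m); logistic-loss gradient factor
    # Attacker: gradient *ascent* on the loss, projected back into the eps-ball.
    g_delta = (-(y * p)[:, None] * w).mean(axis=0)
    delta = delta + lr * g_delta
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta *= eps / norm
    # Classifier: gradient *descent* on the same loss.
    g_w = (-(y * p)[:, None] * Xp).mean(axis=0)
    w = w - lr * g_w

final_loss = np.mean(np.logaddexp(0.0, -y * ((X + delta) @ w)))
```

The two simultaneous gradient steps on one shared objective are what make this a min-max game rather than two independent optimization problems.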
1 code implementation • ICML 2020 • Avishek Joey Bose, Ariella Smofsky, Renjie Liao, Prakash Panangaden, William L. Hamilton
One effective solution is the use of normalizing flows to construct flexible posterior distributions.
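The paper works with flows on hyperbolic space; the mechanics are easiest to see in the simplest Euclidean case. Below is a minimal one-dimensional affine flow (an illustrative sketch, not the paper's construction) showing the change-of-variables formula that all normalizing flows rely on:

```python
import numpy as np

def affine_flow_logpdf(x, a, b):
    """Density of x = a*z + b with z ~ N(0, 1), via change of variables:
    log p(x) = log N(z; 0, 1) - log|a|, where z = (x - b) / a."""
    z = (x - b) / a
    base = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    return base - np.log(abs(a))              # subtract the log |Jacobian|

# Pushing N(0, 1) through x = 2z + 1 gives N(1, 4); check against the analytic density.
x = 0.3
flow = affine_flow_logpdf(x, a=2.0, b=1.0)
analytic = -0.5 * ((x - 1.0) / 2.0) ** 2 - 0.5 * np.log(2 * np.pi) - np.log(2.0)
```

Expressive flows stack many such invertible maps with learnable parameters, accumulating one log-Jacobian term per layer; replacing the base space or the maps with hyperbolic counterparts is the step the paper takes.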
1 code implementation • 20 Dec 2019 • Avishek Joey Bose, Ankit Jain, Piero Molino, William L. Hamilton
We consider the task of few-shot link prediction on graphs.
1 code implementation • 6 Jun 2019 • Patrick Nadeem Ward, Ariella Smofsky, Avishek Joey Bose
Deep Reinforcement Learning (DRL) algorithms for continuous action spaces are known to be brittle with respect to hyperparameters as well as sample-inefficient.
1 code implementation • ACL 2019 • Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung
Coherence is an important aspect of text quality and is crucial for ensuring its readability.
no code implementations • 26 May 2019 • Avishek Joey Bose, Andre Cianflone, William L. Hamilton
Adversarial attacks on deep neural networks traditionally rely on a constrained optimization paradigm, where an optimization procedure is used to obtain a single adversarial perturbation for a given input example.
1 code implementation • 25 May 2019 • Avishek Joey Bose, William L. Hamilton
Learning high-quality node embeddings is a key building block for machine learning models that operate on graph data, such as social networks and recommender systems.
no code implementations • 31 May 2018 • Avishek Joey Bose, Parham Aarabi
Adversarial attacks involve adding small, often imperceptible, perturbations to inputs with the goal of getting a machine learning model to misclassify them.
no code implementations • ACL 2018 • Avishek Joey Bose, Huan Ling, Yanshuai Cao
Learning by contrasting positive and negative samples is a general strategy adopted by many methods.