Search Results for author: Luca Scimeca

Found 6 papers, 2 papers with code

Amortizing intractable inference in diffusion models for vision, language, and control

1 code implementation • 31 May 2024 • Siddarth Venkatraman, Moksh Jain, Luca Scimeca, Minsu Kim, Marcin Sendera, Mohsin Hasan, Luke Rowe, Sarthak Mittal, Pablo Lemos, Emmanuel Bengio, Alexandre Adam, Jarrid Rector-Brooks, Yoshua Bengio, Glen Berseth, Nikolay Malkin

Diffusion models have emerged as effective distribution estimators in vision, language, and reinforcement learning, but their use as priors in downstream tasks poses an intractable posterior inference problem.

Continuous Control • Reinforcement Learning +1
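
For context, the posterior inference problem referred to in this abstract can be written generically as follows, where $p_\theta$ denotes the distribution learned by the diffusion prior and $y$ the downstream observation or constraint (our notation, not taken from the paper):

$$p_\theta(x \mid y) \;=\; \frac{p_\theta(x)\, p(y \mid x)}{\int p_\theta(x')\, p(y \mid x')\, dx'}$$

The integral in the denominator is generally intractable, so the posterior cannot be normalized or sampled exactly; amortized inference instead trains a sampler to approximate it.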

Improved off-policy training of diffusion samplers

1 code implementation • 7 Feb 2024 • Marcin Sendera, Minsu Kim, Sarthak Mittal, Pablo Lemos, Luca Scimeca, Jarrid Rector-Brooks, Alexandre Adam, Yoshua Bengio, Nikolay Malkin

We study the problem of training diffusion models to sample from a distribution with a given unnormalized density or energy function.

Benchmarking
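
As a rough sketch of this setting (generic notation, not the paper's): given an energy function $E$, the target distribution is

$$\pi(x) \;=\; \frac{e^{-E(x)}}{Z}, \qquad Z = \int e^{-E(x')}\, dx',$$

where only $E$ (equivalently, the unnormalized density $e^{-E}$) can be evaluated and the partition function $Z$ is unknown, so the diffusion sampler must be trained without access to samples from $\pi$.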

Mitigating Biases with Diverse Ensembles and Diffusion Models

no code implementations • 23 Nov 2023 • Luca Scimeca, Alexander Rubinstein, Damien Teney, Seong Joon Oh, Armand Mihai Nicolicioiu, Yoshua Bengio

Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut learning, where a model relies on erroneous, easy-to-learn cues while ignoring reliable ones.
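A minimal, self-contained sketch of this kind of spurious-correlation setup (all names and numbers are illustrative, not taken from the paper or its datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_data(n, cue_correlation):
    """Toy binary task with two cues: a reliable but noisy 'shape' feature
    and a nearly noise-free 'color' feature that matches the label with
    probability `cue_correlation` (purely illustrative names)."""
    y = rng.integers(0, 2, size=n)
    shape = y + 0.3 * rng.standard_normal(n)                     # reliable cue
    color = np.where(rng.random(n) < cue_correlation, y, 1 - y)  # spurious cue
    return np.stack([shape, color.astype(float)], axis=1), y

# Training data: the spurious cue is almost perfectly predictive of the label.
X_tr, y_tr = make_toy_data(2000, cue_correlation=0.99)
# Test data: the spurious correlation is broken.
X_te, y_te = make_toy_data(2000, cue_correlation=0.5)

# Even a least-squares linear probe puts most of its weight on the cleaner
# spurious cue, so accuracy collapses toward chance once the shortcut breaks.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
acc = np.mean((X_te @ w > 0.5) == y_te)
print(f"test accuracy once the shortcut breaks: {acc:.2f}")
```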

Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks

no code implementations • 3 Oct 2023 • Luca Scimeca, Alexander Rubinstein, Armand Mihai Nicolicioiu, Damien Teney, Yoshua Bengio

Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to shortcut learning phenomena, where a model may rely on erroneous, easy-to-learn cues while ignoring reliable ones.

Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective

no code implementations • ICLR 2022 • Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun

This phenomenon, also known as shortcut learning, is emerging as a key limitation of the current generation of machine learning models.

Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions

no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg

Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.
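A concrete textbook instance of such a system is the bouncing ball: continuous free-fall dynamics interrupted by a discrete, event-triggered velocity reset. The sketch below is a generic illustration of the hybrid-system setting, not the paper's model or code:

```python
g, restitution, dt = 9.81, 0.8, 1e-3   # gravity, bounce damping, Euler step
h, v, t = 1.0, 0.0, 0.0                # height (m), velocity (m/s), time (s)

while t < 3.0:
    # Continuous mode: explicit Euler step of free fall.
    v -= g * dt
    h += v * dt
    # Discrete event: ground contact triggers an instantaneous mode transition.
    if h <= 0.0 and v < 0.0:
        h, v = 0.0, -restitution * v
    t += dt

print(f"height after 3 s: {h:.3f} m")
```

Learning such dynamics from data requires representing both the smooth flow within each mode and the (possibly stochastic) transitions between modes.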
