1 code implementation • 17 Dec 2023 • Juan A. Rodriguez, Shubham Agarwal, Issam H. Laradji, Pau Rodriguez, David Vazquez, Christopher Pal, Marco Pedersoli
These visual tokens are pre-pended to the SVG token embeddings, and the sequence is modeled by the StarCoder model using next-token prediction, effectively learning to align the visual and code tokens.
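The prepending step described above can be sketched in a few lines. Everything here is an illustrative assumption (toy dimensions, a random projection standing in for the image encoder, a random table standing in for StarCoder's token embeddings); it only shows the shape bookkeeping of concatenating visual tokens ahead of code-token embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_visual, n_code = 64, 8, 20  # toy sizes, not the paper's

# Stand-ins: a linear projection of image features and an embedding
# lookup table for tokenized SVG code (both randomly initialized here).
W_proj = rng.standard_normal((128, d_model))
embed_table = rng.standard_normal((1000, d_model))

image_feats = rng.standard_normal((2, n_visual, 128))     # batch of image features
svg_tokens = rng.integers(0, 1000, size=(2, n_code))      # tokenized SVG code

visual_tokens = image_feats @ W_proj       # (2, 8, 64): projected visual tokens
code_embeds = embed_table[svg_tokens]      # (2, 20, 64): SVG token embeddings

# Pre-pend the visual tokens; the joint sequence would then be fed to the
# decoder (StarCoder in the paper) for next-token prediction.
sequence = np.concatenate([visual_tokens, code_embeds], axis=1)
print(sequence.shape)  # (2, 28, 64)
```

In a real model the projection and embeddings are learned jointly, so the next-token loss on the code tokens is what aligns the two modalities.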
2 code implementations • 23 Nov 2023 • Sergi Masip, Pau Rodriguez, Tinne Tuytelaars, Gido M. van de Ven
Diffusion models are powerful generative models that achieve state-of-the-art performance in image synthesis.
no code implementations • 28 Oct 2023 • Rim Assouel, Pau Rodriguez, Perouz Taslakian, David Vazquez, Yoshua Bengio
A key aspect of human intelligence is the ability to imagine -- composing learned concepts in novel ways -- to make sense of new scenarios.
1 code implementation • NeurIPS 2023 • Alexandre Lacoste, Nils Lehmann, Pau Rodriguez, Evan David Sherwin, Hannah Kerner, Björn Lütjens, Jeremy Andrew Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vazquez, Dava Newman, Yoshua Bengio, Stefano Ermon, Xiao Xiang Zhu
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks.
1 code implementation • 1 Jun 2023 • Juan A Rodriguez, David Vazquez, Issam Laradji, Marco Pedersoli, Pau Rodriguez
The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art.
no code implementations • 10 Feb 2023 • Nicolas Gontier, Pau Rodriguez, Issam Laradji, David Vazquez, Christopher Pal
Text-based game environments are challenging because agents must deal with long sequences of text, execute compositional actions using text and learn from sparse rewards.
1 code implementation • 13 Dec 2022 • Lorenzo Pellegrini, Chenchen Zhu, Fanyi Xiao, Zhicheng Yan, Antonio Carta, Matthias De Lange, Vincenzo Lomonaco, Roshan Sumbaly, Pau Rodriguez, David Vazquez
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community.
3 code implementations • 19 Oct 2022 • Juan A. Rodriguez, David Vazquez, Issam Laradji, Marco Pedersoli, Pau Rodriguez
To alleviate this problem, we present OCR-VQGAN, an image encoder and decoder that leverages OCR pre-trained features to optimize a text perceptual loss, encouraging the architecture to preserve high-fidelity text and diagram structure.
1 code implementation • 13 Oct 2022 • Oscar Mañas, Pau Rodriguez, Saba Ahmadi, Aida Nematzadeh, Yash Goyal, Aishwarya Agrawal
Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks.
no code implementations • 30 Aug 2022 • Joao Monteiro, Pau Rodriguez, Pierre-Andre Noel, Issam Laradji, David Vazquez
In the add-on case, the original neural network's inference head is completely unaffected (so its accuracy remains the same), but we now have the option to use TAC's own confidence and prediction when determining which course of action to take in a hypothetical production workflow.
1 code implementation • 24 May 2022 • Amine El Hattami, Stefania Raimondo, Issam Laradji, David Vazquez, Pau Rodriguez, Chris Pal
We propose and evaluate an approach that conditions models on the set of possible actions, and we show that using this strategy, we can improve WD performance.
Ranked #1 on Workflow Discovery on ABCD
1 code implementation • NLP4ConvAI (ACL) 2022 • Gaurav Sahu, Pau Rodriguez, Issam H. Laradji, Parmida Atighehchian, David Vazquez, Dzmitry Bahdanau
Data augmentation is a widely employed technique to alleviate the problem of data scarcity.
1 code implementation • 30 Mar 2022 • Christopher Beckham, Issam Laradji, Pau Rodriguez, David Vazquez, Derek Nowrouzezahrai, Christopher Pal
In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few-shot classification performance.
no code implementations • 1 Dec 2021 • Alexandre Lacoste, Evan David Sherwin, Hannah Kerner, Hamed Alemohammad, Björn Lütjens, Jeremy Irvin, David Dao, Alex Chang, Mehmet Gunturkun, Alexandre Drouin, Pau Rodriguez, David Vazquez
Recent progress in self-supervision shows that pre-training large neural networks on vast amounts of unsupervised data can lead to impressive increases in generalisation for downstream tasks.
no code implementations • CVPR 2022 • Sai Rajeswar, Pau Rodriguez, Soumye Singhal, David Vazquez, Aaron Courville
We also show that MILe is effective at reducing label noise, achieving state-of-the-art performance on real-world, large-scale noisy data such as WebVision.
Ranked #6 on Image Classification on WebVision-1000
1 code implementation • NeurIPS 2021 • Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, Laurent Charlin
We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates a module's relevance to the input.
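The idea of a per-module local relevance scorer can be illustrated with a toy layer. This is a minimal sketch under made-up assumptions (linear modules, dot-product scorers, softmax mixing); the paper's actual architecture and routing are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy modular layer: each module is a linear map, and each carries its own
# small "structural" scorer that locally estimates its relevance to the input.
n_modules, d_in, d_out = 3, 4, 4
modules = [rng.standard_normal((d_in, d_out)) for _ in range(n_modules)]
scorers = [rng.standard_normal(d_in) for _ in range(n_modules)]

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def lmc_layer(x):
    # Each module judges the input independently (local, not global, routing).
    scores = np.array([s @ x for s in scorers])
    weights = softmax(scores)
    # Output is the relevance-weighted mixture of module outputs.
    return sum(w * (x @ m) for w, m in zip(weights, modules))

x = rng.standard_normal(d_in)
y = lmc_layer(x)
print(y.shape)  # (4,)
```

The key design point, reflected even in this sketch, is that no centralized controller sees all modules: each scorer conditions only on the input it receives.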
1 code implementation • 27 Oct 2021 • Gabriel Huang, Issam Laradji, David Vazquez, Simon Lacoste-Julien, Pau Rodriguez
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image.
no code implementations • 29 Sep 2021 • Sai Rajeswar Mudumba, Pau Rodriguez, Soumye Singhal, David Vazquez, Aaron Courville
This ambiguity biases models towards a single prediction, which could result in the suppression of classes that tend to co-occur in the data.
3 code implementations • 2 Aug 2021 • Fabrice Normandin, Florian Golemo, Oleksiy Ostapenko, Pau Rodriguez, Matthew D Riemer, Julio Hurtado, Khimya Khetarpal, Ryan Lindeborg, Lucas Cecchi, Timothée Lesort, Laurent Charlin, Irina Rish, Massimo Caccia
We propose a taxonomy of settings, where each setting is described as a set of assumptions.
3 code implementations • ICCV 2021 • Oscar Mañas, Alexandre Lacoste, Xavier Giro-i-Nieto, David Vazquez, Pau Rodriguez
Transfer learning approaches can reduce the data requirements of deep learning algorithms.
Ranked #4 on Change Detection on OSCD - 13ch (using extra training data)
2 code implementations • ICCV 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam Laradji, Laurent Charlin, David Vazquez
Explainability for machine learning models has gained considerable attention within the research community, given the importance of deploying more reliable machine learning systems.
no code implementations • 1 Jan 2021 • Pau Rodriguez, Massimo Caccia, Alexandre Lacoste, Lee Zamparo, Issam H. Laradji, Laurent Charlin, David Vazquez
In computer vision applications, most methods explain models by displaying the regions of the input image that they focus on for their prediction, but it is difficult to improve models based on these explanations, since they do not indicate why the model fails.
1 code implementation • 8 Dec 2020 • Parichehr Behjati, Pau Rodriguez, Armin Mehri, Isabelle Hupont, Carles Fernández Tena, Jordi Gonzalez
To make efficient use of the residual features, these are hierarchically aggregated into feature banks for later use at the network output.
no code implementations • NeurIPS 2020 • Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Page-Caccia, Issam Hadj Laradji, Irina Rish, Alexandre Lacoste, David Vázquez, Laurent Charlin
The main challenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream.
1 code implementation • 14 Nov 2020 • Issam Laradji, Pau Rodriguez, Freddie Kalaitzis, David Vazquez, Ross Young, Ed Davey, Alexandre Lacoste
Cattle farming is responsible for 8.8% of greenhouse gas emissions worldwide.
1 code implementation • 6 Nov 2020 • Issam Laradji, Alzayat Saleh, Pau Rodriguez, Derek Nowrouzezahrai, Mostafa Rahimi Azghadi, David Vazquez
Leading automatic approaches rely on fully-supervised segmentation models to acquire these measurements, but these require collecting per-pixel labels, which is also time-consuming and laborious: i.e., it can take up to two minutes per fish to generate accurate segmentation labels, almost always requiring at least some manual intervention.
1 code implementation • 14 Sep 2020 • Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodriguez, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vazquez, German I. Parisi, Nikhil Churamani, Marc Pickett, Issam Laradji, Davide Maltoni
In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks with the shared objective of making current AI systems more adaptive, efficient and autonomous.
1 code implementation • 5 Aug 2020 • Parichehr Behjati, Pau Rodriguez, Armin Mehri, Isabelle Hupont, Jordi Gonzalez, Carles Fernandez Tena
Super-resolution (SR) has achieved great success due to the development of deep convolutional neural networks (CNNs).
no code implementations • 7 Jul 2020 • Issam Laradji, Pau Rodriguez, Frederic Branchaud-Charron, Keegan Lensink, Parmida Atighehchian, William Parker, David Vazquez, Derek Nowrouzezahrai
We address this challenge by introducing a scalable, fast, and accurate active learning system that accelerates the labeling of CT scan images.
3 code implementations • 4 Jul 2020 • Issam Laradji, Pau Rodriguez, Oscar Mañas, Keegan Lensink, Marco Law, Lironne Kurzman, William Parker, David Vazquez, Derek Nowrouzezahrai
Thus, we propose a consistency-based (CB) loss function that encourages the output predictions to be consistent with spatial transformations of the input images.
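A consistency loss of this kind can be sketched concisely: predict on a transformed input, undo the transformation on the output, and penalize disagreement with the prediction on the original. The model below is a stand-in (an elementwise scorer with a position-dependent bias) and the squared-error form is an illustrative choice; the paper applies the idea to per-pixel segmentation outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in learned, position-dependent parameters (so the toy model is
# NOT trivially equivariant to flips and the loss is informative).
bias = rng.standard_normal((8, 8))

def model(img):
    # Toy per-pixel predictor: sigmoid of input plus learned bias.
    return 1.0 / (1.0 + np.exp(-(img + bias)))

def consistency_loss(img):
    pred = model(img)
    flipped_pred = model(img[:, ::-1])  # predict on the horizontally flipped input
    aligned = flipped_pred[:, ::-1]     # undo the flip on the output
    return np.mean((pred - aligned) ** 2)

img = rng.standard_normal((8, 8))
loss = consistency_loss(img)
print(loss >= 0.0)  # True
```

Minimizing such a term pushes the predictor toward equivariance under the chosen spatial transformations, which acts as supervision even where per-pixel labels are absent.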
1 code implementation • 3 Jul 2020 • Issam H. Laradji, Rafael Pardinas, Pau Rodriguez, David Vazquez
For localization, LOOC achieves a strong new baseline in the novel problem setup where only count supervision is available.
1 code implementation • NeurIPS 2020 • Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin
We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario.
3 code implementations • NeurIPS 2018 • Boris N. Oreshkin, Pau Rodriguez, Alexandre Lacoste
We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space.
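Task conditioning of a metric space can be sketched as follows. This is a toy sketch under stated assumptions: a FiLM-style scale/shift computed from the mean of the support features (the paper's task-embedding network is learned), prototypes as class means, and Euclidean distance for classification.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8
support = rng.standard_normal((2, 5, d))   # 2 classes x 5 shots of features
query = rng.standard_normal(d)             # one query example's features

# Task representation: mean over the whole support set (illustrative choice).
task_repr = support.reshape(-1, d).mean(axis=0)
gamma = 1.0 + 0.1 * task_repr   # toy task-dependent scale
beta = 0.1 * task_repr          # toy task-dependent shift

def condition(x):
    # FiLM-style modulation: the metric space now depends on the task.
    return gamma * x + beta

prototypes = condition(support).mean(axis=1)             # one prototype per class
dists = np.linalg.norm(prototypes - condition(query), axis=1)
pred_class = int(np.argmin(dists))                       # nearest-prototype rule
print(pred_class in (0, 1))  # True
```

The point of conditioning is that the same backbone yields a different embedding geometry per task, so distances reflect the classes actually present in the support set.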