1 code implementation • 29 May 2024 • Dipam Goswami, Albin Soutif--Cormerais, Yuyang Liu, Sandesh Kamath, Bartłomiej Twardowski, Joost Van de Weijer
We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly.
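The drift-compensation idea above can be sketched as follows; the helper names, the Gaussian perturbation, and the averaging scheme are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def compensate_prototypes(prototypes, old_embed, new_embed, samples,
                          sigma=0.1, n_perturb=10, rng=None):
    """Shift class prototypes by the mean embedding drift measured on
    perturbed inputs (a minimal sketch, assuming Gaussian perturbations
    and a single global drift vector)."""
    rng = np.random.default_rng(rng)
    drifts = []
    for x in samples:
        # perturb each sample several times and embed with both models
        noisy = x + sigma * rng.standard_normal((n_perturb,) + x.shape)
        drifts.append((new_embed(noisy) - old_embed(noisy)).mean(axis=0))
    mean_drift = np.mean(drifts, axis=0)
    # move the old-model prototypes into the new model's embedding space
    return prototypes + mean_drift
```

A per-class (rather than global) drift estimate is a natural refinement when enough samples per class are available.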
1 code implementation • 9 Apr 2024 • Dipam Goswami, Bartłomiej Twardowski, Joost Van de Weijer
FSCIL methods start with a many-shot first task to learn a strong feature extractor and then move to the few-shot setting from the second task onwards.
no code implementations • 12 Mar 2024 • Filip Szatkowski, Fei Yang, Bartłomiej Twardowski, Tomasz Trzciński, Joost Van de Weijer
We assess the accuracy and computational cost of various continual learning techniques enhanced with early exits and TLC across standard class-incremental learning benchmarks such as 10-split CIFAR-100 and ImageNetSubset, and show that TLC can match the accuracy of the standard methods using less than 70% of their computations.
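Generic early-exit inference, which the method above builds on, can be sketched as follows; the stage/head decomposition and the confidence threshold are illustrative assumptions, not the paper's TLC mechanism:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_predict(x, stages, heads, threshold=0.9):
    """Run an intermediate classifier head after each backbone stage and
    stop as soon as softmax confidence clears the threshold (a minimal
    sketch of early-exit inference; always exits at the final head)."""
    h = x
    for i, (stage, head) in enumerate(zip(stages, heads)):
        h = stage(h)
        probs = softmax(head(h))
        if probs.max() >= threshold or i == len(stages) - 1:
            return int(probs.argmax()), i  # prediction and exit index
```

Easy inputs thus leave early and cost only a fraction of the full forward pass, which is where the computational savings come from.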
1 code implementation • 6 Mar 2024 • Bartosz Cywiński, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński
We introduce GUIDE, a novel continual learning approach that directs diffusion models to rehearse samples at risk of being forgotten.
1 code implementation • 18 Jan 2024 • Grzegorz Rypeść, Sebastian Cygert, Valeriya Khan, Tomasz Trzciński, Bartosz Zieliński, Bartłomiej Twardowski
Class-incremental learning is becoming more popular as it helps models widen their applicability while not forgetting what they already know.
no code implementations • 22 Nov 2023 • Daniel Marczak, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
In the field of continual learning, models are designed to learn tasks one after the other.
no code implementations • 20 Oct 2023 • Damian Sójka, Yuyang Liu, Dipam Goswami, Sebastian Cygert, Bartłomiej Twardowski, Joost Van de Weijer
Each sequence is composed of 401 images and starts with the source domain, then gradually drifts to a different one (changing weather or time of day) until the middle of the sequence.
no code implementations • 18 Oct 2023 • Mateusz Pyla, Kamil Deja, Bartłomiej Twardowski, Tomasz Trzciński
Bayesian Flow Networks (BFNs) have recently been proposed as one of the most promising directions towards universal generative modelling, with the ability to learn any data type.
1 code implementation • NeurIPS 2023 • Dipam Goswami, Yuyang Liu, Bartłomiej Twardowski, Joost Van de Weijer
However, when learning from non-stationary data, we observe that the Euclidean metric is suboptimal and that feature distributions are heterogeneous.
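One way to account for heterogeneous feature distributions is to replace the Euclidean distance with a per-class Mahalanobis distance; the sketch below is a minimal illustration of that idea (the shrinkage constant and per-class covariances are assumptions, not the paper's exact formulation):

```python
import numpy as np

def mahalanobis_classify(feat, prototypes, covariances, eps=1e-3):
    """Assign a feature to the class with the smallest Mahalanobis
    distance under a per-class covariance, shrunk towards the identity
    so the inverse stays well-conditioned (a minimal sketch)."""
    dists = []
    for mu, cov in zip(prototypes, covariances):
        cov = cov + eps * np.eye(len(mu))  # shrinkage for stability
        diff = feat - mu
        dists.append(diff @ np.linalg.inv(cov) @ diff)
    return int(np.argmin(dists))
```

With anisotropic class distributions, this can flip decisions relative to the Euclidean nearest-prototype rule: a point may be Euclidean-closer to one prototype yet far more probable under another class's covariance.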
no code implementations • 18 Sep 2023 • Damian Sójka, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
Test-time adaptation is a promising research direction that allows the source model to adapt itself to changes in data distribution without any supervision.
1 code implementation • 18 Sep 2023 • Valeriya Khan, Sebastian Cygert, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski
We notice that in VAE-based generative replay, this could be attributed to the fact that the generated features are far from the original ones when mapped to the latent space.
no code implementations • 23 Aug 2023 • Daniel Marczak, Grzegorz Rypeść, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
However, these settings are not well aligned with real-life scenarios, where a learning agent has access to a vast amount of unlabeled data encompassing both novel (entirely unlabeled) classes and examples from known classes.
1 code implementation • 18 Aug 2023 • Filip Szatkowski, Mateusz Pyla, Marcin Przewięźlikowski, Sebastian Cygert, Bartłomiej Twardowski, Tomasz Trzciński
In this work, we investigate exemplar-free class incremental learning (CIL) with knowledge distillation (KD) as a regularization strategy, aiming to prevent forgetting.
no code implementations • 14 Aug 2023 • Hao Wu, Alejandro Ariza-Casabona, Bartłomiej Twardowski, Tri Kurniawan Wijaya
In modern e-commerce, item content features in various modalities offer accurate and comprehensive information to recommender systems.
1 code implementation • 31 May 2023 • Marcin Przewięźlikowski, Mateusz Pyla, Bartosz Zieliński, Bartłomiej Twardowski, Jacek Tabor, Marek Śmieja
By learning to remain invariant to applied data augmentations, methods such as SimCLR and MoCo are able to reach quality on par with supervised approaches.
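The augmentation-invariance objective underlying such methods can be sketched as a cosine-similarity term between the embeddings of two augmented views; this is a deliberately simplified illustration (no temperature, no negative pairs, unlike full SimCLR/MoCo losses):

```python
import numpy as np

def invariance_loss(z1, z2):
    """Negative mean cosine similarity between embeddings of two
    augmented views of the same images -- the invariance term that
    contrastive self-supervised methods optimise (simplified sketch)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -float(np.mean(np.sum(z1 * z2, axis=1)))
```

Minimising this pulls the two views of each image together on the unit sphere; full contrastive losses add negatives (or architectural asymmetries) to prevent the trivial collapsed solution.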
1 code implementation • ICCV 2023 • Dawid Rymarczyk, Joost Van de Weijer, Bartosz Zieliński, Bartłomiej Twardowski
Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks.
no code implementations • 30 Aug 2022 • Xianghang Liu, Bartłomiej Twardowski, Tri Kurniawan Wijaya
In Federated Learning (FL) of click-through rate (CTR) prediction, users' data is not shared for privacy protection.
1 code implementation • 16 Feb 2022 • Simone Zini, Alex Gomez-Villa, Marco Buzzelli, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
The data augmentations used are of crucial importance to the quality of learned feature representations.
1 code implementation • 22 Jun 2021 • Albin Soutif--Cormerais, Marc Masana, Joost Van de Weijer, Bartłomiej Twardowski
We also define a new forgetting measure for class-incremental learning, and see that forgetting is not the principal cause of low performance.
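For context, the classic forgetting measure commonly used in class-incremental evaluation is the drop from a task's best past accuracy to its final accuracy, averaged over tasks; the sketch below implements that standard definition, not necessarily the new measure the paper proposes:

```python
import numpy as np

def forgetting(acc_matrix):
    """Classic forgetting measure: acc_matrix[i, j] is accuracy on task j
    after training on task i. For each earlier task, take the drop from
    its best accuracy at any point to its final accuracy, then average.
    (The standard definition, given here only as a baseline sketch.)"""
    A = np.asarray(acc_matrix, dtype=float)
    T = A.shape[0]
    drops = [A[:T - 1, j].max() - A[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))
```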
1 code implementation • 7 Jan 2021 • Bartłomiej Twardowski, Paweł Zawistowski, Szymon Zaborowski
Session-based recommenders, used for making predictions out of users' uninterrupted sequences of actions, are attractive for many applications.
1 code implementation • NeurIPS 2020 • Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost Van de Weijer
We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems.
no code implementations • 4 Jul 2020 • Marc Masana, Bartłomiej Twardowski, Joost Van de Weijer
The influence of class orderings in the evaluation of incremental learning has received very little attention.
2 code implementations • CVPR 2020 • Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer
The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.
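The head expansion described above is the standard mechanism in class-incremental classification; a minimal sketch (class names and initialisation scale are illustrative):

```python
import numpy as np

class IncrementalHead:
    """Linear classifier whose weight matrix grows as new classes
    arrive; previously learned rows are kept untouched (a simplified
    sketch of the standard class-incremental head expansion)."""
    def __init__(self, feat_dim):
        self.feat_dim = feat_dim
        self.W = np.zeros((0, feat_dim))
        self.b = np.zeros(0)

    def add_classes(self, n_new, rng=None):
        rng = np.random.default_rng(rng)
        # append freshly initialised rows for the new classes
        new_rows = 0.01 * rng.standard_normal((n_new, self.feat_dim))
        self.W = np.vstack([self.W, new_rows])
        self.b = np.concatenate([self.b, np.zeros(n_new)])

    def logits(self, feats):
        return feats @ self.W.T + self.b
```

In this setup forgetting typically shows up as a bias towards the most recently added rows, which is why many methods calibrate or rebalance the expanded head.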