no code implementations • 23 Feb 2024 • Nader Asadi, Mahdi Beitollahi, Yasser Khalil, Yinchuan Li, Guojun Zhang, Xi Chen
Parameter-efficient fine-tuning has become the standard approach for adapting large language and vision models to downstream tasks at low cost.
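The snippet does not describe the paper's specific method, but parameter-efficient fine-tuning is commonly illustrated with a LoRA-style low-rank adapter: the pretrained weight is frozen and only a small low-rank correction is trained. The sketch below is a hypothetical minimal example, not the authors' implementation; the class name and parameters are assumptions.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style layer: frozen weight W plus a trainable
    low-rank update B @ A. Illustrative sketch only."""

    def __init__(self, W, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W  # frozen pretrained weight, shape (out, in)
        # Trainable low-rank factors; B starts at zero so the initial
        # output matches the frozen model exactly.
        self.A = rng.normal(scale=0.01, size=(rank, W.shape[1]))
        self.B = np.zeros((W.shape[0], rank))
        self.alpha = alpha

    def forward(self, x):
        # Frozen path plus scaled low-rank correction.
        return self.W @ x + self.alpha * (self.B @ (self.A @ x))
```

Because `B` is initialized to zero, fine-tuning starts from the pretrained model's behavior and only the small factors `A` and `B` receive gradient updates.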
no code implementations • 2 Feb 2024 • Yasser H. Khalil, Amir H. Estiri, Mahdi Beitollahi, Nader Asadi, Sobhan Hemati, Xu Li, Guojun Zhang, Xi Chen
In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure.
no code implementations • 20 Nov 2023 • Farzad Salajegheh, Nader Asadi, Soroush Saryazdi, Sudhir Mudur
Our claim is that DAS's ability to pay increased attention to relevant features results in performance improvements when added to popular CNNs for Image Classification and Object Detection.
Ranked #1 on Object Detection on MSCOCO
1 code implementation • 26 Mar 2023 • Nader Asadi, MohammadReza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point.
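Continually evolved class prototypes with prediction in a shared latent space can be sketched as follows. This is a hedged illustration of the general prototype-classifier idea, not the paper's exact update rule; the EMA momentum and function names are assumptions.

```python
import numpy as np

def update_prototype(proto, embedding, momentum=0.9):
    """Evolve a class prototype toward a new embedding via an
    exponential moving average (illustrative update rule)."""
    return momentum * proto + (1.0 - momentum) * embedding

def predict(embedding, prototypes):
    """Nearest-prototype prediction: return the class whose
    prototype is closest in the latent space."""
    classes = list(prototypes)
    dists = [np.linalg.norm(embedding - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]
```

Because prediction only needs the current prototypes, the model can classify at any point during the stream, which matches the "prediction at any point" property described above.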
no code implementations • CVPR 2022 • MohammadReza Davari, Nader Asadi, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Continual Learning research typically focuses on tackling the phenomenon of catastrophic forgetting in neural networks.
no code implementations • 24 Mar 2022 • Nader Asadi, Sudhir Mudur, Eugene Belilovsky
Recent work studies the supervised online continual learning setting where a learner receives a stream of data whose class distribution changes over time.
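A standard component in this online continual learning setting is a replay buffer filled by reservoir sampling, which keeps a uniform sample of the stream under a fixed memory budget. The sketch below shows that generic mechanism under stated assumptions; it is not claimed to be the method of this particular paper.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity replay buffer filled by reservoir sampling:
    after seeing n items, each has probability capacity/n of being
    stored. Generic illustration for online continual learning."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            # Replace a stored item with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        """Draw up to k stored items for a replay mini-batch."""
        return self.rng.sample(self.data, min(k, len(self.data)))
```

Replaying such samples alongside incoming data is the usual way to counter the shifting class distribution mentioned above.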
3 code implementations • ICLR 2022 • Lucas Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream, and new classes must be distinguished from previous ones.
no code implementations • 18 Sep 2019 • Nader Asadi, Amir M. Sarfi, Mehrdad Hosseinzadeh, Zahra Karimpour, Mahdi Eftekhari
In this work, we propose a learning framework to improve the shape bias property of self-supervised methods.
Ranked #51 on Domain Generalization on PACS
no code implementations • 1 Jul 2019 • Nader Asadi, AmirMohammad Sarfi, Mehrdad Hosseinzadeh, Sahba Tahsini, Mahdi Eftekhari
Our method can be applied to any layer of any arbitrary model without any modification or additional training.