1 code implementation • 2 May 2024 • Samir Khaki, Ahmad Sajedi, Kai Wang, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
To address these challenges in dataset distillation, we propose the ATtentiOn Mixer (ATOM) module, which efficiently distills large datasets by using a mixture of channel-wise and spatial-wise attention in the feature-matching process.
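A minimal sketch of how channel-wise and spatial-wise attention could be mixed in a feature-matching loss, assuming PyTorch. The pooling exponent `p`, the normalization, and the plain MSE matching objective are illustrative assumptions here, not ATOM's exact formulation.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat: torch.Tensor, p: int = 4) -> torch.Tensor:
    # feat: (B, C, H, W) -> (B, H*W); pool |activation|^p over channels.
    a = feat.abs().pow(p).mean(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def channel_attention(feat: torch.Tensor, p: int = 4) -> torch.Tensor:
    # feat: (B, C, H, W) -> (B, C); pool |activation|^p over spatial positions.
    a = feat.abs().pow(p).flatten(2).mean(dim=2)
    return F.normalize(a, dim=1)

def mixed_attention_matching_loss(real_feat, syn_feat) -> torch.Tensor:
    # Match the batch-mean of both attention views between real and
    # synthetic features extracted by the same network.
    loss = real_feat.new_zeros(())
    for attn in (spatial_attention, channel_attention):
        loss = loss + F.mse_loss(attn(real_feat).mean(0), attn(syn_feat).mean(0))
    return loss
```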
1 code implementation • 2 Jan 2024 • Ahmad Sajedi, Samir Khaki, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
We validate the effectiveness of our framework through experimentation with datasets from the computer vision and medical imaging domains.
2 code implementations • ICCV 2023 • Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that captures the information of a larger real dataset, so that a model trained on the synthetic set ultimately achieves test accuracy comparable to one trained on the whole dataset.
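As a concrete illustration of this setup, the sketch below learns the synthetic images as free parameters by matching mean features between real and synthetic batches under freshly initialized networks. The tiny embedding network and the simple mean-feature MSE objective are stand-ins assumed for illustration, not the specific matching objective of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill(real_images: torch.Tensor, num_synthetic: int,
            steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    # The synthetic set itself is the learnable parameter.
    syn = torch.randn(num_synthetic, *real_images.shape[1:], requires_grad=True)
    opt = torch.optim.SGD([syn], lr=lr)
    for _ in range(steps):
        # A freshly initialized embedding network each step, so the
        # synthetic set does not overfit to one network's features.
        net = nn.Sequential(
            nn.Conv2d(real_images.shape[1], 32, 3, padding=1),
            nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        loss = F.mse_loss(net(real_images).mean(0), net(syn).mean(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn.detach()
```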
1 code implementation • 8 Jul 2023 • Ahmad Sajedi, Samir Khaki, Konstantinos N. Plataniotis, Mahdi S. Hosseini
However, they fail to design an end-to-end training framework, leading to high computational complexity.
no code implementations • 12 Jun 2023 • Ahmad Sajedi, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
This paper presents a new distance metric to compare two continuous probability density functions.
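The paper's metric itself is not reproduced here; as background on the setting, the snippet below shows a standard example of such a distance (the Hellinger distance) evaluated by numerical integration between two Gaussian densities.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def hellinger(pdf_p, pdf_q, lo=-20.0, hi=20.0) -> float:
    # H(p, q)^2 = 1 - \int sqrt(p(x) q(x)) dx  (Bhattacharyya coefficient).
    bc, _ = quad(lambda x: np.sqrt(pdf_p(x) * pdf_q(x)), lo, hi)
    return float(np.sqrt(max(0.0, 1.0 - bc)))

p = norm(loc=0.0, scale=1.0).pdf
q = norm(loc=1.0, scale=2.0).pdf
print(hellinger(p, q))  # ~0.39 for these two Gaussians
```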
no code implementations • 17 Jul 2022 • Ahmad Sajedi, Yuri A. Lawryshyn, Konstantinos N. Plataniotis
In classification tasks with a small number of classes, or in binary detection, the amount of information transferred from the teacher to the student is restricted, thus limiting the utility of knowledge distillation.
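A small numerical illustration of this restriction, assuming the information content of a soft label is measured by its Shannon entropy: a K-class teacher output carries at most log2(K) bits per sample, so a binary teacher is capped at 1 bit. The probabilities below are made up for illustration.

```python
import numpy as np

def entropy_bits(p) -> float:
    # Shannon entropy of a probability vector, in bits.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(entropy_bits([0.9, 0.1]))        # binary soft label: at most 1 bit
print(entropy_bits(np.full(10, 0.1)))  # 10-way uniform: ~3.32 bits
```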
no code implementations • 12 Sep 2021 • Ahmad Sajedi, Konstantinos N. Plataniotis
These results show that the extra subclass knowledge (i.e., 0.4656 label bits per training sample in our experiment) provides additional information about the teacher's generalization, and that subclass knowledge distillation (SKD) can therefore exploit this information to improve student performance.
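One way such a per-sample figure could be quantified (a hedged sketch; the paper's exact definition may differ) is as the sample-weighted average entropy of the subclass assignment within each class. The subclass counts below are purely illustrative.

```python
import numpy as np

def subclass_bits_per_sample(subclass_counts_per_class) -> float:
    # subclass_counts_per_class: one array per class, giving how many
    # training samples fall into each subclass of that class.
    total, bits = 0.0, 0.0
    for counts in subclass_counts_per_class:
        counts = np.asarray(counts, dtype=float)
        p = counts / counts.sum()
        h = -(p[p > 0] * np.log2(p[p > 0])).sum()  # entropy in bits
        bits += h * counts.sum()
        total += counts.sum()
    return bits / total

# e.g., two classes, each split unevenly into two subclasses
print(subclass_bits_per_sample([[900, 100], [800, 200]]))  # ~0.60 bits/sample
```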
no code implementations • 16 Apr 2021 • Hossam Amer, Ahmed H. Salamah, Ahmad Sajedi, En-hui Yang
Our offline selections yield CNN inference time savings of up to 9% and compression ratios (CR) of up to 10x.