Search Results for author: Yuri A. Lawryshyn

Found 5 papers, 3 papers with code

DataDAM: Efficient Dataset Distillation with Attention Matching

2 code implementations · ICCV 2023 · Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

Emerging research on dataset distillation aims to reduce training costs by creating a small synthetic set that contains the information of a larger real dataset and ultimately achieves test accuracy equivalent to a model trained on the whole dataset.

Continual Learning · Neural Architecture Search
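
The abstract above describes dataset distillation as compressing a large real dataset into a small synthetic set, and the paper's title points to matching attention between real and synthetic data. Below is a minimal sketch of that idea, matching spatial attention maps of real and synthetic batches across feature layers; the feature shapes, layer choice, and loss form are illustrative assumptions, not the paper's exact DataDAM configuration.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a conv feature map (N, C, H, W) into a normalized
    spatial attention map (N, H*W) by summing squared channels."""
    attn = feat.pow(2).sum(dim=1).flatten(1)  # (N, H*W)
    return F.normalize(attn, dim=1)

def attention_matching_loss(real_feats, syn_feats):
    """MSE between batch-averaged attention maps of real and synthetic
    data, summed over the chosen feature layers."""
    loss = 0.0
    for fr, fs in zip(real_feats, syn_feats):
        loss = loss + F.mse_loss(spatial_attention(fr).mean(0),
                                 spatial_attention(fs).mean(0))
    return loss

# Illustrative usage: features from two hypothetical intermediate conv layers
real_feats = [torch.randn(64, 32, 16, 16), torch.randn(64, 64, 8, 8)]
syn_feats  = [torch.randn(10, 32, 16, 16), torch.randn(10, 64, 8, 8)]
print(f"attention-matching loss: {attention_matching_loss(real_feats, syn_feats).item():.4f}")
```

In practice the synthetic images would carry gradients, so minimizing this loss updates the distilled set rather than the network.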

Subclass Knowledge Distillation with Known Subclass Labels

no code implementations · 17 Jul 2022 · Ahmad Sajedi, Yuri A. Lawryshyn, Konstantinos N. Plataniotis

In classification tasks with a small number of classes or binary detection, the amount of information transferred from the teacher to the student is restricted, thus limiting the utility of knowledge distillation.

Binary Classification · Knowledge Distillation
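
The abstract above argues that with only a few classes the teacher's soft labels carry little information, and the title points to using known subclass labels to enrich the transfer. Below is a minimal sketch of a distillation loss over subclass logits; the temperature, weighting, and subclass head are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def subclass_kd_loss(student_sub_logits, teacher_sub_logits,
                     subclass_labels, T=4.0, alpha=0.5):
    """Combine cross-entropy on the known subclass labels with a soft
    KL term distilling the teacher's subclass distribution."""
    ce = F.cross_entropy(student_sub_logits, subclass_labels)
    kd = F.kl_div(F.log_softmax(student_sub_logits / T, dim=1),
                  F.softmax(teacher_sub_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd

# Illustrative usage: a binary task expanded into 4 known subclasses
student_logits = torch.randn(8, 4, requires_grad=True)
teacher_logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))
loss = subclass_kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The binary prediction can then be recovered by mapping each subclass back to its parent class at inference time.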

Maximum Mutation Reinforcement Learning for Scalable Control

2 code implementations · 24 Jul 2020 · Karush Suri, Xiao Qi Shi, Konstantinos N. Plataniotis, Yuri A. Lawryshyn

Advances in Reinforcement Learning (RL) have demonstrated data efficiency and optimal control over large state spaces at the cost of scalable performance.

Reinforcement Learning (RL)
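
The title combines evolutionary mutation with reinforcement learning for scalable control. Below is a minimal sketch of Gaussian parameter-space mutation over a population of policy networks; the network shape, mutation scale, and selection rule are illustrative assumptions and not the paper's actual algorithm.

```python
import copy
import torch
import torch.nn as nn

def mutate(policy: nn.Module, sigma: float = 0.1) -> nn.Module:
    """Return a copy of the policy with Gaussian noise added to every
    parameter (parameter-space mutation)."""
    child = copy.deepcopy(policy)
    with torch.no_grad():
        for p in child.parameters():
            p.add_(sigma * torch.randn_like(p))
    return child

def evolve(population, fitness_fn, sigma=0.1):
    """Keep the fitter half of the population and refill it with
    mutated copies of those elites."""
    scored = sorted(population, key=fitness_fn, reverse=True)
    elite = scored[: len(scored) // 2]
    return elite + [mutate(p, sigma) for p in elite]

# Illustrative usage with a placeholder fitness function
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
population = [mutate(policy) for _ in range(8)]
fitness = lambda p: float(p(torch.zeros(1, 4)).sum())  # stand-in for episode return
population = evolve(population, fitness)
```

In an actual RL setting the fitness function would be the episode return collected by rolling out each policy in the environment.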
