1 code implementation • 10 Mar 2022 • Chenhongyi Yang, Mateusz Ochal, Amos Storkey, Elliot J. Crowley
Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing KD baselines.
1 code implementation • ICLR Workshop Learning_to_Learn 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang
Meta-Learning (ML) has proven to be a useful tool for training Few-Shot Learning (FSL) algorithms by exposing models to batches of tasks sampled from a meta-dataset.
1 code implementation • 7 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Amos Storkey, Jose Vazquez, Sen Wang
Few-Shot Learning (FSL) algorithms are commonly trained through Meta-Learning (ML), which exposes models to batches of tasks sampled from a meta-dataset to mimic tasks seen during evaluation.
no code implementations • 1 Jan 2021 • Mateusz Ochal, Massimiliano Patacchiola, Jose Vazquez, Amos Storkey, Sen Wang
Few-shot learning aims to train models on a limited number of labeled samples from a support set in order to generalize to unseen samples from a query set.
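The support/query episode structure described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `sample_episode` and the flat `(features, labels)` dataset layout are assumptions for the example.

```python
import numpy as np

def sample_episode(features, labels, n_way=5, k_shot=1, q_queries=5, rng=None):
    """Sample one N-way K-shot episode: a small labelled support set and a
    disjoint query set of unseen samples from the same classes.
    Hypothetical helper for illustration only."""
    rng = rng or np.random.default_rng()
    # Pick N classes for this episode, then relabel them 0..N-1.
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for episode_label, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_x.append(features[idx[:k_shot]])                     # K labelled shots
        query_x.append(features[idx[k_shot:k_shot + q_queries]])     # held-out queries
        support_y += [episode_label] * k_shot
        query_y += [episode_label] * q_queries
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))
```

A model is then fitted (or adapted) on the support set and evaluated on the query set, which is how few-shot performance is typically measured.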
no code implementations • 10 May 2020 • Mateusz Ochal, Jose Vazquez, Yvan Petillot, Sen Wang
Deep convolutional neural networks generally perform well in underwater object recognition tasks on both optical and sonar images.
2 code implementations • 15 Apr 2020 • Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, Amos Storkey
Both few-shot and continual learning have seen substantial progress in recent years due to the introduction of dedicated benchmarks.