Search Results for author: Kenneth Borup

Found 3 papers, 3 papers with code

Distilling from Similar Tasks for Transfer Learning on a Budget

1 code implementation · ICCV 2023 · Kenneth Borup, Cheng Perng Phoo, Bharath Hariharan

To alleviate this, we propose DistillWeighted, a weighted multi-source distillation method that distills multiple source models, each trained on a different domain, into a single efficient model, weighting every source by its relevance to the target task (a sketch of the idea follows below).

Transfer Learning
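The following is a minimal sketch of relevance-weighted multi-source distillation, assuming PyTorch. The function name `distill_weighted`, the temperature `T`, and the way relevance weights are supplied are illustrative assumptions, not the paper's exact recipe.

```python
# A minimal sketch of weighted multi-source distillation, assuming PyTorch.
# Hypothetical helper; relevance weights are assumed precomputed for the
# target task and normalized to sum to 1.
import torch
import torch.nn.functional as F

def distill_weighted(student, sources, weights, loader, epochs=1, lr=1e-3, T=4.0):
    """Train `student` to match a relevance-weighted mixture of source models.

    sources: list of frozen teacher models trained on different domains
    weights: per-source relevance scores for the target task (sum to 1)
    """
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for model in sources:
        model.eval()
    for _ in range(epochs):
        for x, _ in loader:  # labels unused: distillation needs only inputs
            with torch.no_grad():
                # Relevance-weighted mixture of the teachers' softened
                # predictive distributions.
                mix = sum(w * F.softmax(m(x) / T, dim=-1)
                          for w, m in zip(weights, sources))
            s_logp = F.log_softmax(student(x) / T, dim=-1)
            loss = F.kl_div(s_logp, mix, reduction="batchmean") * T * T
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

Mixing the teachers' softened distributions before the KL term is one natural way to realize "weighted by relevance"; an alternative is to weight per-teacher KL losses instead, which differs only in how the teachers' disagreements are averaged.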

Self-Distillation for Gaussian Process Regression and Classification

1 code implementation · 5 Apr 2023 · Kenneth Borup, Lars Nørvang Andersen

We propose two approaches that extend the notion of knowledge distillation to Gaussian Process Regression (GPR) and Gaussian Process Classification (GPC): a data-centric and a distribution-centric approach (a sketch of the data-centric variant follows below).

Classification · GPR · +2
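Below is a minimal sketch of the data-centric flavour of self-distillation for GP regression, assuming scikit-learn: each round refits a fresh GP on the previous posterior mean at the training inputs. The kernel, noise level `alpha`, and number of rounds are illustrative assumptions, and the distribution-centric variant (reusing the full posterior as the next prior) is not shown.

```python
# A minimal sketch of *data-centric* self-distillation for GP regression,
# assuming scikit-learn. Refitting on the teacher's posterior mean at the
# training inputs is one plausible reading of the data-centric approach;
# the paper's exact construction involves more care.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(40)  # noisy observations

gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.04).fit(X, y)
for step in range(3):
    # Each round: the current GP acts as teacher, and a fresh GP (the
    # student) is fit to the teacher's posterior mean predictions.
    y = gp.predict(X)
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.04).fit(X, y)
```

Repeating this loop progressively smooths the fit, which is the regularization-like behaviour self-distillation is known for.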

Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation

1 code implementation · NeurIPS 2021 · Kenneth Borup, Lars N. Andersen

Knowledge distillation is classically a procedure in which a neural network is trained on the outputs of another network, along with the original targets, in order to transfer knowledge between architectures (the classical objective is sketched below).

Self-Knowledge Distillation
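For reference, here is a minimal sketch of the classical distillation objective the abstract describes, assuming PyTorch; the mixing weight `alpha` and temperature `T` are conventional hyperparameters, not values from the paper.

```python
# A minimal sketch of the classical knowledge-distillation objective:
# a convex combination of the ground-truth loss and a loss on the
# teacher's softened outputs. `alpha` and `T` are conventional choices.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=0.5, T=2.0):
    # Ground-truth term: standard cross-entropy with the original labels.
    ce = F.cross_entropy(student_logits, targets)
    # Distillation term: KL divergence between the temperature-softened
    # student and teacher distributions (scaled by T^2, as is standard).
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    return alpha * ce + (1.0 - alpha) * kl
```

Setting `alpha = 0` gives pure (self-)distillation, while `alpha > 0` reintroduces the ground-truth term whose dampening effect on the distillation-induced regularization the paper analyzes.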
