1 code implementation • ICCV 2023 • Kenneth Borup, Cheng Perng Phoo, Bharath Hariharan
To alleviate this, we propose a weighted multi-source distillation method (named DistillWeighted) that distills multiple source models, each trained on a different domain, into a single efficient model, weighting each source by its relevance to the target task.
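A minimal sketch of how such a relevance-weighted multi-teacher distillation loss could look in PyTorch; the function name, the fixed per-source weights, and the temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_multi_source_kd_loss(student_logits, source_logits_list, weights, temperature=4.0):
    """Distill several source (teacher) models into one student, weighting each
    source's KL term by its pre-computed relevance to the target task."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss = 0.0
    for w, teacher_logits in zip(weights, source_logits_list):
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        loss = loss + w * F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    # Scale by T^2 so gradients stay comparable across temperatures.
    return loss * temperature ** 2
```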
1 code implementation • 5 Apr 2023 • Kenneth Borup, Lars Nørvang Andersen
We propose two approaches to extend the notion of knowledge distillation to Gaussian Process Regression (GPR) and Gaussian Process Classification (GPC): a data-centric approach and a distribution-centric approach.
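As a rough illustration of the data-centric idea, one can refit a student GP on the teacher GP's posterior predictions rather than on the original labels. The sketch below uses scikit-learn with synthetic data and illustrative kernels; it is an assumption about the general idea, not the paper's exact method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)

# Teacher: a flexible GP fit to the original noisy targets.
teacher = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)

# Data-centric distillation (sketch): the student is trained on the teacher's
# posterior mean at the inputs, i.e. on pseudo-targets instead of raw labels.
pseudo_targets = teacher.predict(X)
student = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, pseudo_targets)
```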
1 code implementation • NeurIPS 2021 • Kenneth Borup, Lars N. Andersen
Knowledge distillation is classically a procedure in which a neural network is trained on the outputs of another network, alongside the original targets, in order to transfer knowledge between the two architectures.
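For reference, the classical distillation objective combines a hard-label cross-entropy term with a softened teacher-matching term; the PyTorch sketch below shows this standard form, with alpha and the temperature as illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def classic_kd_loss(student_logits, teacher_logits, targets, alpha=0.5, temperature=4.0):
    # Cross-entropy against the original (hard) targets.
    hard_loss = F.cross_entropy(student_logits, targets)
    # KL divergence between temperature-softened student and teacher distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard_loss + (1 - alpha) * soft_loss
```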