no code implementations • NeurIPS 2021 • Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeong-In Yu, Byung-Gon Chun
Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution.
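The trade-off the abstract refers to can be illustrated with standard, publicly documented PyTorch APIs: the same model runs eagerly (imperative, easy to debug) or is compiled into a static graph via `torch.jit.script` (symbolic, optimizable as a whole). This is a minimal sketch of the general idea only, not the system proposed in the paper; the model and shapes are arbitrary illustrations.

```python
import torch

# A model written in ordinary imperative (eager) PyTorch.
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(784, 256)
        self.fc2 = torch.nn.Linear(256, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = MLP()
x = torch.randn(32, 784)

# Imperative execution: each op runs immediately, easy to inspect.
eager_out = model(x)

# Symbolic graph execution: the same program is compiled into a
# static graph that can be optimized and executed as a whole.
graph_model = torch.jit.script(model)
graph_out = graph_model(x)

assert torch.allclose(eager_out, graph_out)
```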
1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang
Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.
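For reference, this is a minimal sketch of the standard distillation objective (Hinton et al.), in which the student matches the teacher's temperature-softened output distribution alongside the ordinary supervised loss. The temperature `T` and mixing weight `alpha` are illustrative hyperparameters; this is the generic technique, not the specific method of the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=4.0, alpha=0.9):
    # Soft targets: student mimics the teacher's softened distribution.
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```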