no code implementations • ICLR 2019 • Daeyoung Choi, Wonjong Rhee, Kyungeun Lee, Changho Shin
It has been common to argue or imply that a regularizer can be used to alter a statistical property of a hidden layer's representation and thus improve generalization or performance of deep networks.
1 code implementation • NeurIPS 2019 (under review) • Hyunghun Cho, Yongjin Kim, Eunjung Lee, Daeyoung Choi, YongJae Lee, Wonjong Rhee
The performance of deep neural networks (DNN) is very sensitive to the particular choice of hyper-parameters.
no code implementations • 8 Nov 2018 • Daeyoung Choi, Kyungeun Lee, Duhun Hwang, Wonjong Rhee
In this study, the effects of eight representation regularization methods are investigated, including two newly developed rank regularizers (RR).
1 code implementation • 25 Sep 2018 • Daeyoung Choi, Wonjong Rhee
Motivated by the idea, we design two class-wise regularizers that explicitly utilize class information: class-wise Covariance Regularizer (cw-CR) and class-wise Variance Regularizer (cw-VR).
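The paper's exact formulation of cw-VR is not reproduced here, but the core idea, penalizing the variance of hidden activations within each class, can be sketched in a few lines. This is a minimal NumPy illustration under the assumption that the penalty is the per-feature within-class variance, summed over features and averaged over classes; the function name and averaging scheme are illustrative, not the authors' implementation.

```python
import numpy as np

def class_wise_variance_penalty(h, labels):
    """Hypothetical sketch of a class-wise variance regularizer (cw-VR).

    h      : (N, D) array of hidden-layer activations for a mini-batch
    labels : (N,) array of integer class ids

    For each class, compute the variance of each activation dimension
    over that class's samples, sum over dimensions, then average over
    classes. Minimizing this pulls same-class representations together.
    """
    classes = np.unique(labels)
    penalty = 0.0
    for c in classes:
        hc = h[labels == c]                 # activations of class c
        penalty += hc.var(axis=0).sum()     # within-class variance per feature
    return penalty / len(classes)
```

In training, a term like `lambda_vr * class_wise_variance_penalty(h, y)` would be added to the task loss; cw-CR would analogously penalize the off-diagonal entries of each class's activation covariance matrix.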
no code implementations • ICLR 2018 • Daeyoung Choi, Changho Shin, Hyunghun Cho, Wonjong Rhee
The performance of a Deep Neural Network (DNN) heavily depends on the characteristics of its hidden layer representations.