1 code implementation • 19 Dec 2019 • Woohyung Chun, Sung-Min Hong, Junho Huh, Inyup Kang
We propose a scheme that mitigates the adversarial perturbation $\epsilon$ in an adversarial example $X_{adv} = X \pm \epsilon$ (where $X$ is a benign sample) by subtracting the estimated perturbation $\hat{\epsilon}$ from $X + \epsilon$ and adding $\hat{\epsilon}$ to $X - \epsilon$.
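The correction described above can be sketched in a few lines: given a signed perturbation estimator (hypothetical here; the paper's actual estimator is not specified), subtracting its output moves $X + \epsilon$ down and $X - \epsilon$ up, both toward the benign $X$.

```python
import numpy as np

def mitigate(x_adv, estimate_perturbation):
    """Move an adversarial example back toward the benign sample.

    estimate_perturbation is a hypothetical signed estimator: it should
    return roughly +eps_hat for X + eps and -eps_hat for X - eps, so a
    single subtraction covers both cases in X_adv = X +/- eps.
    """
    eps_hat = estimate_perturbation(x_adv)
    return x_adv - eps_hat

# Toy illustration: pretend benign samples lie on integer values, so the
# perturbation estimate is simply the distance to the nearest integer.
toy_estimator = lambda xa: xa - np.round(xa)
```

This is only a sketch under the stated assumptions; in practice the quality of the mitigation depends entirely on how well $\hat{\epsilon}$ approximates the true $\epsilon$.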
no code implementations • 31 Oct 2018 • Doyun Kim, Han Young Yim, Sanghyuck Ha, Changgwun Lee, Inyup Kang
As edge applications using convolutional neural network (CNN) models grow, it is becoming necessary to introduce dedicated hardware accelerators in which network parameters and feature-map data are represented with limited precision.
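A minimal sketch of the limited-precision representation mentioned above, assuming a symmetric fixed-point format (the bit widths and rounding mode are illustrative, not taken from the paper):

```python
import numpy as np

def quantize(x, n_bits=8, frac_bits=4):
    """Simulate fixed-point storage of parameters or feature-map data.

    Values are rounded to the nearest multiple of 2**-frac_bits and
    clipped to the signed range representable in n_bits.
    """
    scale = 2.0 ** frac_bits
    qmax = 2 ** (n_bits - 1) - 1       # e.g. +127 for 8 bits
    q = np.clip(np.round(x * scale), -qmax - 1, qmax)
    return q / scale
```

An accelerator would keep the integer codes `q`; the division back by `scale` here only simulates the value the hardware effectively computes with.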
no code implementations • 9 May 2018 • Woohyung Chun, Sung-Min Hong, Junho Huh, Inyup Kang
We propose a method to sanitize the input feature maps (IFMs) fed into the layers of convolutional neural networks (CNNs). The method introduces a degree of sanitization that allows an application using a CNN to control the privacy loss, represented as the ratio of the probabilistic accuracies on the original and sanitized IFMs.
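The two ingredients of the abstract can be sketched as follows. The noise-based sanitization transform is an assumption (the paper's exact transform is not given here); the privacy-loss ratio follows the abstract's definition.

```python
import numpy as np

def sanitize_ifm(ifm, degree, rng=None):
    """Hypothetical sanitization: perturb the IFM with zero-mean Gaussian
    noise whose scale grows with the sanitization degree."""
    rng = np.random.default_rng(0) if rng is None else rng
    return ifm + degree * rng.standard_normal(ifm.shape)

def privacy_loss(acc_original, acc_sanitized):
    """Privacy loss as the ratio of probabilistic accuracies
    (sanitized vs. original), per the abstract's definition."""
    return acc_sanitized / acc_original
```

A larger `degree` degrades the downstream accuracy on the sanitized IFM, driving the ratio below 1; the application picks the degree that trades accuracy for privacy.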