Search Results for author: Jaeha Kung

Found 4 papers, 1 paper with code

One-Spike SNN: Single-Spike Phase Coding with Base Manipulation for ANN-to-SNN Conversion Loss Minimization

no code implementations · 30 Jan 2024 · Sangwoo Hwang, Jaeha Kung

In this work, we propose a single-spike phase coding as an encoding scheme that minimizes the number of spikes to transfer data between SNN layers.
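As a rough illustration of the idea (not the authors' exact scheme), phase coding lets a single spike carry an activation's magnitude through its timing: each phase slot within a coding window represents a power of a chosen base, and one spike in the best-matching slot replaces a train of rate-coded spikes. A minimal sketch, with all function names and the slot-weighting assumption being mine:

```python
import numpy as np

def single_spike_phase_encode(x, base=2, n_phases=8):
    """Illustrative single-spike phase coding: each activation in [0, 1]
    is represented by ONE spike whose phase (time-slot index) carries its
    magnitude as a power of `base`. A sketch, not the paper's algorithm."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Slot p carries weight base**-(p+1); fire in the slot whose weight
    # is closest to the activation value.
    weights = base ** -(np.arange(1, n_phases + 1, dtype=float))
    phases = np.abs(x[..., None] - weights).argmin(axis=-1)
    return phases  # one spike time (phase index) per activation

def single_spike_phase_decode(phases, base=2):
    # Invert the slot-to-weight mapping used by the encoder.
    return base ** -(np.asarray(phases, dtype=float) + 1.0)
```

Raising `base` trades temporal resolution for a shorter coding window, which is the kind of knob the "base manipulation" in the title refers to.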

LightNorm: Area and Energy-Efficient Batch Normalization Hardware for On-Device DNN Training

no code implementations · 4 Nov 2022 · Seock-Hwan Noh, JunSang Park, Dahoon Park, Jahyun Koo, Jeik Choi, Jaeha Kung

Thus, in this work, we conduct a detailed analysis of the batch normalization layer to efficiently reduce its runtime overhead during on-device training.
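For context, the overhead in question comes from the per-channel statistics that batch normalization must compute over every mini-batch before it can normalize. A minimal sketch of the standard BN forward pass (textbook formulation, not LightNorm's hardware design):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Standard batch normalization over axis 0 (the batch dimension).
    The mean/variance reductions below are the runtime cost that
    BN-specific hardware aims to shrink."""
    mean = x.mean(axis=0)            # per-channel mean over the batch
    var = x.var(axis=0)              # per-channel variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta      # learnable scale and shift
```

The two reduction passes (mean, then variance) dominate the cost, which is why on-device training accelerators treat BN as a distinct optimization target.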

FlexBlock: A Flexible DNN Training Accelerator with Multi-Mode Block Floating Point Support

no code implementations · 13 Mar 2022 · Seock-Hwan Noh, Jahyun Koo, SeungHyun Lee, Jongse Park, Jaeha Kung

While several prior works have proposed such multi-precision support for DNN accelerators, they focus only on inference; moreover, when training is considered, their core utilization is suboptimal at fixed precisions and for specific layer types.
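For readers unfamiliar with the number format, block floating point (BFP) stores one shared exponent per block of values and a small fixed-point mantissa per element, so multiply-accumulates reduce to cheap integer arithmetic. A minimal sketch of BFP quantization (illustrative only, not FlexBlock's multi-mode format):

```python
import numpy as np

def to_bfp(x, mantissa_bits=8):
    """Quantize a block of values to block floating point: one shared
    exponent for the whole block, integer mantissas per element.
    Illustrative sketch; ignores mantissa saturation/rounding-mode details."""
    shared_exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-30)))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.round(x / scale).astype(int)
    return mantissas, shared_exp, scale

def from_bfp(mantissas, scale):
    # Dequantize: every element in the block reuses the same scale.
    return mantissas * scale
```

Because the exponent is amortized over the block, changing `mantissa_bits` (and the block size) gives the precision knob that a multi-mode BFP accelerator exploits during training.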

ZeBRA: Precisely Destroying Neural Networks with Zero-Data Based Repeated Bit Flip Attack

1 code implementation · 1 Nov 2021 · Dahoon Park, Kon-Woo Kwon, Sunghoon Im, Jaeha Kung

Many prior works on adversarial weight attacks require not only the weight parameters but also the training or test dataset to search for vulnerable bits to attack.
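To see why a single well-chosen bit matters, consider flipping one exponent bit of a float32 weight: a small value can jump by dozens of orders of magnitude. A minimal sketch of the bit-flip mechanics (this only shows the effect of a flip; it is not ZeBRA's zero-data search algorithm):

```python
import struct

def flip_bit(value, bit):
    """Flip one bit in the IEEE 754 binary32 representation of a weight.
    Illustrates why single bit flips can destroy a network; the hard part
    an attack must solve is *which* bit to flip."""
    (as_int,) = struct.unpack('<I', struct.pack('<f', value))
    as_int ^= 1 << bit                       # toggle the chosen bit
    (flipped,) = struct.unpack('<f', struct.pack('<I', as_int))
    return flipped

# Flipping the top exponent bit (bit 30) of 0.5 yields 2**127, ~1.7e38:
huge = flip_bit(0.5, 30)
```

One such flip in a critical weight saturates downstream activations, which is why finding vulnerable bits without any dataset access is the notable part of the attack.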
