Search Results for author: Jun Nishikawa

Found 3 papers, 0 papers with code

n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

no code implementations • 22 Mar 2021 • Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya

Moreover, we use a two-stage fine-tuning algorithm to recover the accuracy drop caused by introducing bit-level sparsity.

Object Detection +1
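The abstract snippet above only mentions powers-of-two quantization with bit-level sparsity; the paper's actual n-hot algorithm is not detailed in this listing. As a rough, hedged illustration of the general idea (approximating each weight by a sum of at most n signed powers of two), here is a minimal sketch. The function name, exponent range, and greedy decomposition are assumptions for illustration, not the authors' method.

```python
import numpy as np

def quantize_n_hot(w, n=2, max_exp=0, min_exp=-7):
    """Illustrative sketch (assumed, not the paper's algorithm):
    greedily approximate each weight as a sum of at most `n` signed
    powers of two within an assumed exponent range."""
    q = np.zeros_like(w, dtype=np.float64)
    for _ in range(n):
        residual = w - q
        sign = np.sign(residual)
        mag = np.abs(residual)
        # Nearest power-of-two exponent for the remaining residual, clipped to range.
        exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))), min_exp, max_exp)
        term = sign * (2.0 ** exp)
        term[mag < 2.0 ** (min_exp - 1)] = 0.0  # residual too small to represent
        q += term
    return q

weights = np.array([0.37, -0.08, 0.56, 0.001])
print(quantize_n_hot(weights, n=2))  # e.g. 0.37 -> 0.5 - 0.125 = 0.375
```

Hardware-friendly formats like this replace multiplications by shift-and-add operations, which is the usual motivation for powers-of-two quantization.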

Filter Pre-Pruning for Improved Fine-tuning of Quantized Deep Neural Networks

no code implementations • 13 Nov 2020 • Jun Nishikawa, Ryoji Ikegaya

Third, we propose a fine-tuning workflow for quantized DNNs that uses the proposed pruning method (PfQ).

Quantization
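The listing does not describe PfQ's actual pruning criterion, only that filters are pruned before fine-tuning the quantized network. As a hedged sketch of that general workflow, the snippet below zeroes out convolution filters with the smallest L1 norms prior to quantization-aware fine-tuning; the keep_ratio, the L1 criterion, and the function name are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

def prune_small_filters(conv: nn.Conv2d, keep_ratio: float = 0.75) -> None:
    """Illustrative sketch (assumed criterion, not PfQ): zero the output
    filters with the smallest L1 norms before quantized fine-tuning."""
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))   # one L1 norm per output filter
        k = max(1, int(keep_ratio * norms.numel()))    # number of filters to keep
        keep = torch.topk(norms, k).indices
        mask = torch.zeros_like(norms, dtype=torch.bool)
        mask[keep] = True
        conv.weight[~mask] = 0.0                       # prune the remaining filters
        if conv.bias is not None:
            conv.bias[~mask] = 0.0

# Prune every conv layer, then run quantization-aware fine-tuning on the result.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        prune_small_filters(m, keep_ratio=0.75)
```

Pruning before quantized fine-tuning is shown here only as a workflow outline; the selection of which filters to prune is the part the paper itself defines.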
