no code implementations • 22 Mar 2021 • Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya
Moreover, we use a two-stage fine-tuning algorithm to recover the accuracy lost when bit-level sparsity is introduced.
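The snippet mentions bit-level sparsity in quantized weights. A minimal sketch of the idea, assuming signed 8-bit uniform quantization and a crude sparsification rule (zeroing the k least-significant bits) that are illustrative choices, not the paper's exact method:

```python
import numpy as np

def to_int8(w, scale):
    # Uniform symmetric quantization to signed 8-bit integers (an assumed scheme).
    return np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def bit_sparsity(q):
    # Fraction of zero bits in the two's-complement representation of the weights.
    bits = np.unpackbits(q.view(np.uint8))
    return 1.0 - bits.mean()

def drop_low_bits(q, k):
    # Crude bit-level sparsification: clear the k least-significant bits of each weight.
    mask = np.int8(~((1 << k) - 1))
    return (q & mask).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=1024).astype(np.float32)
q = to_int8(w, scale=0.01)
q_sparse = drop_low_bits(q, k=2)
```

Clearing bits can only remove set bits, so `bit_sparsity(q_sparse)` is never lower than `bit_sparsity(q)`; a two-stage fine-tuning pass would then retrain the model to compensate for the induced error.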
no code implementations • 13 Nov 2020 • Jun Nishikawa, Ryoji Ikegaya
Third, we propose a fine-tuning workflow for quantized DNNs that uses the proposed pruning method (PfQ).
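The entry describes a prune-then-quantize-then-fine-tune workflow. A minimal sketch under assumed choices (global magnitude pruning and symmetric uniform quantization; the paper's PfQ criterion and fine-tuning schedule are not reproduced here):

```python
import numpy as np

def magnitude_prune(w, ratio):
    # Zero out roughly the smallest-magnitude fraction `ratio` of the weights
    # (a common baseline criterion, assumed here for illustration).
    thresh = np.quantile(np.abs(w), ratio)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize(w, n_bits=8):
    # Symmetric uniform quantization: round to the nearest representable level.
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=256).astype(np.float32)
w_pruned = magnitude_prune(w, ratio=0.5)   # stage 1: prune
w_q = quantize(w_pruned)                   # stage 2: quantize
# Stage 3 (fine-tuning the quantized model) would follow in a full workflow.
```

Note that pruned weights stay exactly zero after quantization, which is what lets pruning reduce the burden on the subsequent quantization and fine-tuning steps.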