FFConv: Fast Factorized Convolutional Neural Network Inference on Encrypted Data

6 Feb 2021 · Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li

Homomorphic Encryption (HE) allows computations on encrypted data (ciphertext) without decrypting it first, enabling secure but prohibitively slow Convolutional Neural Network (CNN) inference for privacy-preserving applications in the cloud. To reduce inference latency, one approach is to pack multiple messages into a single ciphertext, which reduces the number of ciphertexts and supports massive parallelism of Homomorphic Multiply-Accumulate (HMA) operations between ciphertexts. Although packing speeds up HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures. In this paper, we propose a low-rank factorization method named FFConv dedicated to efficient ciphertext packing, reducing both the rotation overhead and the number of HMA operations. FFConv approximates a d × d convolution layer with low-rank factorized convolutions, in which a d × d low-rank convolution with fewer channels is followed by a 1 × 1 convolution that restores the channels. The d × d low-rank convolution with DensePack leads to significantly fewer rotation operations, while the rotation overhead of the 1 × 1 convolution with ConvPack is close to zero. To our knowledge, FFConv is the first work capable of reducing the rotation overhead incurred by DensePack and ConvPack simultaneously, without introducing additional special blocks into the HECNN inference pipeline. Compared to the prior art LoLa and Falcon, our method reduces inference latency by up to 88% and 21%, respectively, with comparable accuracy on MNIST and CIFAR-10.
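To make the factorization concrete, below is a minimal plaintext sketch of the structure the abstract describes: a d × d convolution with a reduced channel count (the rank) followed by a 1 × 1 convolution that restores the output channels. The class name LowRankConv, the rank value, and the padding/stride choices are illustrative assumptions, and the HE-specific parts (DensePack/ConvPack layouts and encrypted evaluation) are not shown.

```python
import torch
import torch.nn as nn

class LowRankConv(nn.Module):
    """Plaintext sketch of an FFConv-style factorized layer (names assumed):
    a d x d convolution with a reduced channel count `rank`,
    followed by a 1 x 1 convolution that restores the output channels."""

    def __init__(self, in_channels, out_channels, d=3, rank=8, stride=1):
        super().__init__()
        # d x d low-rank convolution (evaluated with DensePack in the paper)
        self.spatial = nn.Conv2d(in_channels, rank, kernel_size=d,
                                 stride=stride, padding=d // 2, bias=False)
        # 1 x 1 convolution restoring channels (evaluated with ConvPack in the paper)
        self.pointwise = nn.Conv2d(rank, out_channels, kernel_size=1, bias=True)

    def forward(self, x):
        return self.pointwise(self.spatial(x))

if __name__ == "__main__":
    layer = LowRankConv(in_channels=32, out_channels=64, d=3, rank=8)
    x = torch.randn(1, 32, 16, 16)
    print(layer(x).shape)  # torch.Size([1, 64, 16, 16])
```

With rank well below the original channel count, the d × d convolution touches far fewer channels, which is what drives the reduction in rotations and HMA operations reported in the paper.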
