Welcome to the resource topic for 2024/883
Title: Low-Latency Linear Transformations with Small Key Transmission for Private Neural Network on Homomorphic Encryption
Authors: Byeong-Seo Min, Joon-Woo Lee
Abstract: In the field of Artificial Intelligence (AI), convolution operations have primarily been used in Convolutional Neural Networks (CNNs). However, their utility is growing with the appearance of convolution-integrated transformers and state space models in which convolution is a constituent element. In the field of private AI, a generalized algorithm, multiplexed parallel convolution, was recently proposed to implement CNNs based on the residue number system variant of the Cheon-Kim-Kim-Song homomorphic encryption scheme (RNS-CKKS). Multiplexed parallel convolution is highly applicable, but its usage has been partly limited because it requires many rotation operations. In this paper, we propose rotation optimized convolution, which reduces the rotations required for multiplexed parallel convolution, thus lowering latency, enhancing usability, and additionally decreasing the number of required rotation keys. We further reduce the size of the rotation keys by applying the hierarchical rotation key system and our proposed small level key system. We also propose a new form of matrix-vector multiplication, called Parallel Baby-Step Giant-Step matrix-vector multiplication, which also reduces the number of rotations. In our experiments, rotation optimized convolution achieved up to a 70% reduction in execution time and a 29× reduction in rotation keys using our method. Our proposed matrix-vector multiplication method also reduced execution time by up to 64%.
ePrint: https://eprint.iacr.org/2024/883
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.
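As a first piece of explanation material: the paper's Parallel Baby-Step Giant-Step matrix-vector multiplication builds on the standard baby-step giant-step (BSGS) diagonal method for homomorphic matrix-vector products, where the cost driver is the number of ciphertext rotations. Below is a minimal plain-Python/NumPy sketch of that baseline BSGS method only (not the authors' parallel variant), simulating ciphertext rotations with cyclic shifts; the function name `bsgs_matvec` and the split parameters `t1`, `t2` are illustrative choices, not identifiers from the paper.

```python
# Plain-Python sketch (no HE library) of the baseline baby-step giant-step
# diagonal method for matrix-vector products over packed, rotatable vectors.
# Rotations are simulated with cyclic shifts; in an HE setting each distinct
# rotation amount needs its own rotation key, which is why fewer rotations
# also means fewer/smaller keys.
import numpy as np

def rot(v, i):
    """Cyclic left rotation by i slots: rot(v, i)[j] = v[(j + i) % n]."""
    return np.roll(v, -i)

def bsgs_matvec(M, v, t1, t2):
    """Compute M @ v with roughly t1 + t2 rotations instead of n - 1.

    Requires n = t1 * t2. d_i is the i-th generalized diagonal of M,
    d_i[j] = M[j, (j + i) % n], so that M @ v = sum_i d_i * rot(v, i).
    """
    n = len(v)
    assert M.shape == (n, n) and t1 * t2 == n
    diag = lambda i: np.array([M[j, (j + i) % n] for j in range(n)])

    # Baby steps: t1 - 1 rotations of the input vector, reused for every k.
    baby = [rot(v, j) for j in range(t1)]

    result = np.zeros(n)
    for k in range(t2):
        # Inner sum uses pre-rotated diagonals (plaintext side, free in HE).
        inner = sum(rot(diag(k * t1 + j), -k * t1) * baby[j] for j in range(t1))
        # Giant step: one rotation per outer term (t2 - 1 rotations in total).
        result += rot(inner, k * t1)
    return result

# Quick check against a direct product.
n, t1, t2 = 16, 4, 4
rng = np.random.default_rng(0)
M, v = rng.standard_normal((n, n)), rng.standard_normal(n)
assert np.allclose(bsgs_matvec(M, v, t1, t2), M @ v)
```

The sketch only illustrates the rotation-count structure (about t1 + t2 rotations and the corresponding small set of distinct rotation amounts) that the paper's parallel variant and key-size optimizations improve upon; for the actual algorithms, see the ePrint paper linked above.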