[Resource Topic] 2023/1462: High-precision RNS-CKKS on fixed but smaller word-size architectures: theory and application

Welcome to the resource topic for 2023/1462

Title:
High-precision RNS-CKKS on fixed but smaller word-size architectures: theory and application

Authors: Rashmi Agrawal, Jung Ho Ahn, Flavio Bergamaschi, Ro Cammarota, Jung Hee Cheon, Fillipe D. M. de Souza, Huijing Gong, Minsik Kang, Duhyeong Kim, Jongmin Kim, Hubert de Lassus, Jai Hyun Park, Michael Steiner, Wen Wang

Abstract:

A prevalent issue in the residue number system (RNS) variant of the Cheon-Kim-Kim-Song (CKKS) homomorphic encryption (HE) scheme is the challenge of efficiently achieving high precision on hardware architectures with a fixed but smaller word size of bit-length W, especially when the scaling factor satisfies \log\Delta > W.
In this work, we introduce an efficient solution termed composite scaling. In this approach, we group multiple RNS primes as q_\ell := \prod_{j=0}^{t-1} q_{\ell,j} such that \log q_{\ell,j} < W for 0 \le j < t, and use each composite q_\ell in the rescaling procedure as \mathsf{ct}\mapsto \lfloor \mathsf{ct} / q_\ell\rceil. Here, the number of primes, denoted by t, is termed the composition degree. This strategy contrasts with the traditional rescaling method in RNS-CKKS, where each q_\ell is chosen as a single \log\Delta-bit prime, a method we designate as single scaling.
To achieve higher precision in single scaling, where \log\Delta > W, one would either need a novel hardware architecture with word size W' > \log\Delta or have to resort to relatively inefficient solutions rooted in multi-precision arithmetic. This problem does not arise with composite scaling: the larger the composition degree t, the greater the precision attainable with RNS-CKKS across an extensive range of secure parameters tailored for workload deployment.
We have integrated composite scaling RNS-CKKS into both the OpenFHE and Lattigo libraries, providing a concrete implementation of the method and applying it to up-to-date workloads, specifically logistic regression training and convolutional neural network inference. Our experiments demonstrate that the single and composite scaling approaches are functionally equivalent, both theoretically and practically.
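As a piece of explanation material, below is a minimal, self-contained Python sketch of the composite-scaling idea described in the abstract; it is not the paper's OpenFHE/Lattigo implementation. It picks t word-size primes whose product plays the role of the rescaling modulus q_\ell and applies the rounding division \mathsf{ct}\mapsto \lfloor \mathsf{ct} / q_\ell\rceil to a single toy coefficient. The parameter values W, log_delta and t are illustrative assumptions, not parameters from the paper.

```python
def next_prime(n: int) -> int:
    """Smallest prime >= n (trial division; adequate for toy word-size primes)."""
    def is_prime(m: int) -> bool:
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True
    while not is_prime(n):
        n += 1
    return n

W = 32          # assumed fixed word size of the target architecture (bits)
log_delta = 50  # assumed scaling-factor bit length, with log(Delta) > W
t = 2           # composition degree: number of word-size primes per level

# Single scaling would need one ~50-bit rescaling prime, which does not fit
# in a 32-bit word.  Composite scaling instead uses t primes of roughly
# log_delta / t bits each, whose product approximates Delta.
bits_per_prime = log_delta // t
primes = []
p = 1 << bits_per_prime
for _ in range(t):
    p = next_prime(p + 1)
    assert p.bit_length() <= W     # each factor q_{ell,j} is below 2^W
    primes.append(p)

q_ell = 1
for p in primes:
    q_ell *= p                     # composite rescaling modulus q_ell

# Rescaling ct -> round(ct / q_ell), shown on a single toy integer coefficient
# rather than on a full RNS-CKKS ciphertext polynomial.
ct_coeff = 123456789012345678901234567
rescaled = (ct_coeff + q_ell // 2) // q_ell   # rounding division

print(f"composite q_ell has {q_ell.bit_length()} bits (factors: {primes})")
print(f"round(ct_coeff / q_ell) = {rescaled}")
```

The only point of the sketch is that each factor q_{\ell,j} fits in a W-bit word even though \log q_\ell \approx \log\Delta > W; the actual scheme performs this rescaling coefficient-wise on RNS-represented ciphertext polynomials.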

ePrint: https://eprint.iacr.org/2023/1462

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.