Welcome to the resource topic for 2023/1637
Title: Algorithmic Views of Vectorized Polynomial Multipliers – NTRU
Authors: Han-Ting Chen, Yi-Hua Chung, Vincent Hwang, Bo-Yin Yang
Abstract: The lattice-based post-quantum cryptosystem NTRU is used by Google to protect its internal communication. In NTRU, polynomial multiplication is one of the bottlenecks. In this paper, we explore the interactions between polynomial multiplication, Toeplitz matrix–vector products, and vectorization with architectural insights. For a unital commutative ring R, a positive integer n, and an element \zeta \in R, we reveal the benefit of vector-by-scalar multiplication instructions while multiplying in R[x] / \langle x^n - \zeta \rangle.
We aim at designing an algorithm that exploits no algebraic or number-theoretic properties of n and \zeta. An obvious way is to multiply in R[x] and reduce modulo x^n - \zeta. Since the product in R[x] is a polynomial of degree at most 2n − 2, one usually chooses a polynomial modulus g such that (i) deg(g) \geq 2n − 1, and (ii) there exists a well-studied fast polynomial multiplication algorithm f for multiplying in R[x] / \langle g \rangle.
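As a baseline for the approach the abstract describes (multiply in R[x], then reduce modulo x^n − \zeta), here is a minimal sketch for R = Z_q. The function name and the schoolbook inner loop are illustrative assumptions, not the paper's algorithm; the paper's contribution is precisely to avoid this naive route.

```python
# Hedged sketch (not the paper's method): multiply a, b in Z_q[x]/<x^n - zeta>
# by first forming the full product in Z_q[x] (degree <= 2n-2, schoolbook),
# then folding the high part back via the relation x^n = zeta.
def polymul_mod(a, b, n, zeta, q):
    assert len(a) == n and len(b) == n
    full = [0] * (2 * n - 1)              # product in Z_q[x]
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            full[i + j] = (full[i + j] + ai * bj) % q
    res = list(full[:n])                  # low part, degrees 0..n-1
    for i in range(n, 2 * n - 1):         # fold: x^i = zeta * x^{i-n}
        res[i - n] = (res[i - n] + zeta * full[i]) % q
    return res
```

For example, multiplying x by x^2 in Z_17[x]/<x^3 − 2> folds x^3 back to the constant 2.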
We deviate from common approaches and point out a novel insight involving dual modules and vector-by-scalar multiplications. Conceptually, we relate the module-theoretic duals of R[x] / \langle x^n - \zeta \rangle and R[x] / \langle g \rangle to Toeplitz matrix–vector products, and demonstrate the benefit of computing Toeplitz matrix–vector products with vector-by-scalar multiplication instructions. This greatly reduces register pressure and allows us to multiply with essentially none of the permutation instructions that are common in vectorized implementations.
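To make the matrix–vector view concrete: multiplication by a fixed polynomial a in Z_q[x]/<x^n − \zeta> is a linear map, so it can be written as a matrix–vector product with a \zeta-twisted circulant matrix (a structured matrix of the Toeplitz family). The sketch below is an illustrative reconstruction of that view only; the function names are assumptions, and the paper's actual construction goes through dual modules rather than building the matrix explicitly.

```python
# Hedged sketch: multiplication by a fixed a in Z_q[x]/<x^n - zeta> as a
# matrix-vector product. Column j of M holds a * x^j reduced mod x^n - zeta,
# so M @ b equals a * b in the quotient ring.
def twisted_circulant(a, n, zeta, q):
    M = [[0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            k = i - j
            # coefficient of x^i in a * x^j: a[k] if no wrap,
            # zeta * a[k + n] if the term wrapped past x^n.
            M[i][j] = a[k] % q if k >= 0 else (zeta * a[k + n]) % q
    return M

def matvec(M, b, q):
    return [sum(m * x for m, x in zip(row, b)) % q for row in M]
```

Each row of M is read off diagonally from the previous one, which is the Toeplitz structure the vector-by-scalar multiplication instructions exploit.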
We implement the ideas for the NTRU parameter sets ntruhps2048677 and ntruhrss701 on a Cortex-A72 implementing the Armv8.0-A architecture with the single-instruction-multiple-data (SIMD) technology Neon. For polynomial multiplications, our implementation is 2.18× and 2.23× faster than the state-of-the-art optimized implementation for ntruhps2048677 and ntruhrss701, respectively. We also vectorize the polynomial inversions and sorting networks by employing existing techniques and translating AVX2-optimized implementations into Neon. Compared to the state-of-the-art optimized implementation, our key generation, encapsulation, and decapsulation for ntruhps2048677 are 7.67×, 2.48×, and 1.77× faster, respectively. For ntruhrss701, our key generation, encapsulation, and decapsulation are 7.99×, 1.47×, and 1.56× faster, respectively.
ePrint: https://eprint.iacr.org/2023/1637
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.