[Resource Topic] 2020/870: Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN

Welcome to the resource topic for 2020/870

Title: Smoothing Out Binary Linear Codes and Worst-case Sub-exponential Hardness for LPN

Authors: Yu Yu, Jiang Zhang

Abstract:

Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 90's. It further inspired the Learning with Errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks for post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) on balanced linear codes at the extremely low noise rate \frac{\log^2 n}{n} implies the quasi-polynomial hardness of LPN at the extremely high noise rate 1/2 - 1/poly(n). It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness.

We start with a simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19] and carry out the worst-case to average-case reduction for LPN, which is the main focus of this work. We first expand the underlying binary linear codes (of the worst-case NCP) beyond the balanced codes considered in [BLVW19] to another class of codes (in some sense dual to balanced codes). At the core of our reduction is a new variant of the smoothing lemma (for both binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits tradeoffs for a wider spectrum of parameter choices.

In addition to the worst-case hardness result obtained in [BLVW19], we show that for any constant 0<c<1 the constant-noise LPN problem is (T=2^{\Omega(n^{1-c})}, \epsilon=2^{-\Omega(n^{\min(c,1-c)})}, q=2^{\Omega(n^{\min(c,1-c)})})-hard assuming that the NCP (on either code) at the low noise rate \tau=n^{-c} is (T'=2^{\Omega(\tau n)}, \epsilon'=2^{-\Omega(\tau n)}, m=2^{\Omega(\tau n)})-hard in the worst case, where T, \epsilon, q and m denote time complexity, success rate, sample complexity, and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions would need constant-noise LPN with (T=2^{\omega(\sqrt{n})}, \epsilon=2^{-\omega(\sqrt{n})}, q=2^{\sqrt{n}})-hardness (Yu et al., CRYPTO 2016 & ASIACRYPT 2019), which is almost (up to an arbitrary \omega(1) factor in the exponent) what is reducible from the worst-case NCP when c = 0.5. We leave it as an open problem whether this gap can be closed or whether there is an inherent separation.
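For readers less familiar with the problem statement, below is a minimal, hypothetical Python sketch (not taken from the paper) of the LPN sample distribution over GF(2), with dimension n, sample complexity q, and noise rate tau as in the abstract; all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch of the LPN(n, tau) sample distribution over GF(2).
# Names and parameter choices are illustrative assumptions, not code from the paper.
import numpy as np

def lpn_samples(n, q, tau, seed=None):
    """Return (A, b, s) where b = A @ s + e mod 2, with secret s in {0,1}^n,
    uniform public matrix A in {0,1}^{q x n}, and Bernoulli(tau) noise e."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=n)           # secret vector
    A = rng.integers(0, 2, size=(q, n))      # uniform public matrix
    e = (rng.random(q) < tau).astype(int)    # each noise bit is 1 with probability tau
    b = (A @ s + e) % 2                      # noisy parities
    return A, b, s

# Toy instance in the constant-noise regime discussed in the abstract (tau constant in n).
A, b, s = lpn_samples(n=128, q=512, tau=0.125, seed=0)
```

The decision version of the problem asks to distinguish (A, b) from (A, uniform); the (T, \epsilon, q)-hardness notation in the abstract then bounds the running time, success rate, and number of samples available to any such distinguisher.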

ePrint: https://eprint.iacr.org/2020/870

Talk: https://www.youtube.com/watch?v=_E2ngzX8go8

Slides: https://iacr.org/submit/files/slides/2021/crypto/crypto2021/28/slides.pdf

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.