Welcome to the resource topic for 2022/1744
Title: Worst and Average Case Hardness of Decoding via Smoothing Bounds
Authors: Thomas Debris-Alazard, Nicolas Resch
Abstract: In this work, we consider the worst- and average-case hardness of the decoding problems that are the basis for code-based cryptography. By a decoding problem, we mean one whose inputs are of the form $(\mathbf{G}, \mathbf{m} \mathbf{G} + \mathbf{t})$ for a matrix $\mathbf{G}$ that generates a code and a noise vector $\mathbf{t}$, and the algorithm's goal is to recover $\mathbf{m}$.
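To make the setup concrete, here is a minimal sketch (in Python with NumPy; the toy parameters `k`, `n`, `w` are chosen purely for illustration and are not taken from the paper) of sampling such a decoding instance over $\mathbb{F}_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, w = 10, 30, 4  # message length, block length, noise weight (toy values)

# Random generator matrix G of a binary [n, k] code.
G = rng.integers(0, 2, size=(k, n), dtype=np.uint8)

# Secret message m and a noise vector t of Hamming weight w.
m = rng.integers(0, 2, size=k, dtype=np.uint8)
t = np.zeros(n, dtype=np.uint8)
t[rng.choice(n, size=w, replace=False)] = 1

# The decoding instance is the pair (G, y) with y = m*G + t over F_2.
y = (m @ G + t) % 2
```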
We consider a natural strategy for creating a reduction to an average-case problem: from our input we simulate a Learning Parity with Noise (LPN) oracle, where we recall that LPN is essentially an average-case decoding problem with no a priori lower bound on the rate of the code. More formally, the oracle $\mathcal{O}_{\mathbf{x}}$ outputs independent samples of the form $\langle \mathbf{x}, \mathbf{a} \rangle + e$, where $\mathbf{a}$ is a uniformly random vector and $e$ is a noise bit. Such an approach is implicit in the previous worst-case to average-case reductions for coding problems (Brakerski et al., Eurocrypt 2019; Yu and Zhang, CRYPTO 2021).
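As a rough illustration of what such an oracle outputs (a sketch of the LPN sampler itself, not of the paper's reduction; the function name `lpn_oracle` and the noise-rate parameter `tau` are ours):

```python
import numpy as np

def lpn_oracle(x, tau, rng):
    """One LPN sample (a, <x, a> + e): a uniformly random, e ~ Bernoulli(tau)."""
    a = rng.integers(0, 2, size=len(x), dtype=np.uint8)
    e = rng.random() < tau
    return a, (int(x @ a) + int(e)) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=16, dtype=np.uint8)  # the secret
a, b = lpn_oracle(x, tau=0.1, rng=rng)           # one noisy sample
```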
To analyze the effectiveness of this reduction, we use a smoothing bound recently derived by Debris-Alazard et al. (IACR ePrint 2022), which quantifies the simulation error of this reduction. It is worth noting that this latter work crucially uses a bound on the weight distribution of the code generated by $\mathbf{G}$, known as the second linear programming bound. Our approach, which is Fourier-analytic in nature, applies to any smoothing distribution (so long as it is radial); for our purposes, the best choice appears to be Bernoulli (although for the analysis it is most effective to study the uniform distribution over a sphere, and then translate the bound back to the Bernoulli distribution via a truncation trick). Our approach works naturally when reducing from a worst-case instance as well as from an average-case instance.
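The simulation-error question above is closely tied to how quickly accumulated Bernoulli noise flattens toward uniform. As a quick numerical sanity check of that intuition (this is the classical piling-up lemma, not the paper's Fourier-analytic smoothing bound), the bias of a XOR of $w$ independent Bernoulli($\tau$) bits is $(1 - 2\tau)^w$:

```python
import numpy as np

def xor_bias(tau, w, trials=200_000, seed=1):
    """Empirical bias E[(-1)^(e_1 + ... + e_w)] for e_i ~ Bernoulli(tau) i.i.d."""
    rng = np.random.default_rng(seed)
    bits = rng.random((trials, w)) < tau
    parity = bits.sum(axis=1) % 2
    return 1.0 - 2.0 * parity.mean()

tau, w = 0.05, 20
print(xor_bias(tau, w))        # empirical bias
print((1.0 - 2.0 * tau) ** w)  # piling-up lemma: (1 - 2*tau)^w, about 0.12 here
```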
While we are unable to improve the parameters of the worst-case to average-case reductions of Brakerski et al. or Yu and Zhang, we think our work highlights two important points. First, in analyzing the average-case to average-case reduction we run into inherent limitations of this reduction template: essentially, it appears hopeless to reduce to an LPN instance whose noise rate is more than inverse-polynomially biased away from uniform. Second, we uncover a surprising weakness in the second linear programming bound: it is essentially useless in the regime of parameters where the rate of the code is inverse polynomial in the block length. By highlighting these shortcomings, we hope to stimulate the development of new techniques for reductions between cryptographic decoding problems.
ePrint: https://eprint.iacr.org/2022/1744
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.