Welcome to the resource topic for 2015/769
Title: On the Hardness of Learning with Rounding over Small Modulus
Authors: Andrej Bogdanov, Siyao Guo, Daniel Masny, Silas Richelson, Alon Rosen
Abstract: We show the following reductions from the learning with errors problem (LWE) to the learning with rounding problem (LWR): (1) learning the secret and (2) distinguishing samples from random strings is at least as hard for LWR as it is for LWE for efficient algorithms if the number of samples is no larger than O(q/Bp), where q is the LWR modulus, p is the rounding modulus, and the noise is sampled from any distribution supported over the set {-B,…,B}. Our second result generalizes a theorem of Alwen, Krenn, Pietrzak and Wichs (CRYPTO 2013) and provides an alternate proof of it. Unlike Alwen et al., we do not impose any number-theoretic restrictions on the modulus q. The first result also extends to variants of LWR and LWE over polynomial rings. As additional results we show that (3) distinguishing any number of LWR samples from random strings is of equivalent hardness to LWE whose noise distribution is uniform over the integers in the range [-q/2p,…,q/2p) provided q is a multiple of p, and (4) the “noise flooding” technique for converting faulty LWE noise to a discrete Gaussian distribution can be applied whenever q = \Omega(B\sqrt{m}). All our reductions preserve sample complexity and have time complexity at most polynomial in q, the dimension, and the number of samples.
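To make the difference between the two problems concrete, here is a toy sketch (not from the paper; all parameter values and function names are illustrative) of generating LWE versus LWR samples. LWE perturbs the inner product ⟨a, s⟩ with additive noise from {-B,…,B}, while LWR replaces the noise with the deterministic rounding function that maps x in Z_q to the nearest multiple of q/p, reduced to Z_p:

```python
import random

# Toy parameters (illustrative only): modulus q, rounding modulus p,
# dimension n, noise bound B.
q, p, n, B = 257, 4, 8, 2

def round_to_p(x):
    # Deterministic rounding from Z_q to Z_p: scale by p/q, round to
    # the nearest integer, reduce mod p.
    return round(p * x / q) % p

def lwe_sample(s):
    # LWE sample: (a, <a, s> + e mod q) with e drawn from {-B, ..., B}.
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-B, B)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample(s):
    # LWR sample: (a, round_to_p(<a, s> mod q)) -- no explicit noise;
    # the "error" is the information lost to rounding.
    a = [random.randrange(q) for _ in range(n)]
    b = round_to_p(sum(ai * si for ai, si in zip(a, s)) % q)
    return a, b

s = [random.randrange(q) for _ in range(n)]
a, b = lwr_sample(s)
assert 0 <= b < p
```

The paper's first two reductions say, roughly, that as long as an attacker sees at most O(q/Bp) such LWR samples, breaking them is no easier than breaking the corresponding LWE samples.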
ePrint: https://eprint.iacr.org/2015/769
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.