Welcome to the resource topic for 2023/1050
Title: SNARGs for Monotone Policy Batch NP
Authors: Zvika Brakerski, Maya Farber Brodsky, Yael Tauman Kalai, Alex Lombardi, Omer Paneth
Abstract: We construct a succinct non-interactive argument ($\mathsf{SNARG}$) for the class of monotone policy batch $\mathsf{NP}$ languages under the Learning with Errors ($\mathsf{LWE}$) assumption. This class is a subclass of $\mathsf{NP}$ that is associated with a monotone function $f:\{0,1\}^k\rightarrow\{0,1\}$ and an $\mathsf{NP}$ language $\mathcal{L}$, and contains instances $(x_1,\ldots,x_k)$ such that $f(b_1,\ldots,b_k)=1$, where $b_j=1$ if and only if $x_j\in \mathcal{L}$. Our $\mathsf{SNARG}$s are arguments of knowledge in the non-adaptive setting, and satisfy a new notion of somewhere extractability against adaptive adversaries.
This is the first $\mathsf{SNARG}$ under standard hardness assumptions for a subclass of $\mathsf{NP}$ that is not known to have a (computational) non-signaling $\mathsf{PCP}$ with small locality.
Indeed, our approach necessarily departs from the known framework for constructing $\mathsf{SNARG}$s dating back to [Kalai-Raz-Rothblum, STOC '13].
Our construction combines existing quasi-arguments for $\mathsf{NP}$ (based on batch arguments for $\mathsf{NP}$) with a novel ingredient which we call a predicate-extractable hash ($\mathsf{PEH}$) family. This notion generalizes that of a somewhere extractable hash: whereas a somewhere extractable hash allows extraction of a single input coordinate, our $\mathsf{PEH}$ extracts a global property of the input. We view this primitive as being of independent interest, and believe that it will find other applications.
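To make the class definition concrete, here is a minimal sketch (not from the paper) of deciding membership in a monotone policy batch $\mathsf{NP}$ language when full witnesses are available. The toy $\mathsf{NP}$ relation `R` and the majority policy `f` are hypothetical placeholders chosen purely for illustration; the point is only that each instance $x_j$ contributes a bit $b_j$ and the policy $f$ is evaluated on those bits.

```python
# Illustrative sketch, assuming a toy NP relation and a sample
# monotone policy; neither comes from the paper itself.

def R(x, w):
    # Toy NP relation: the witness w = (a, b) is a pair of
    # nontrivial factors of x, certifying that x is composite.
    a, b = w
    return 1 < a and 1 < b and a * b == x

def majority(bits):
    # A monotone policy f: {0,1}^k -> {0,1}. Monotonicity means
    # flipping any input bit from 0 to 1 never flips f from 1 to 0.
    return int(2 * sum(bits) > len(bits))

def in_batch_language(xs, ws, f=majority):
    # b_j = 1 iff the j-th witness certifies x_j in L; the batch
    # instance (x_1, ..., x_k) is accepted iff f(b_1, ..., b_k) = 1.
    bits = [int(R(x, w)) for x, w in zip(xs, ws)]
    return f(bits) == 1

xs = [15, 21, 17]              # 17 is prime, so it has no valid witness
ws = [(3, 5), (3, 7), (1, 17)]
print(in_batch_language(xs, ws))  # two of three bits are 1, majority holds
```

A $\mathsf{SNARG}$ for this class lets the prover convince the verifier of exactly this statement succinctly, without sending the witnesses `ws`.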
ePrint: https://eprint.iacr.org/2023/1050
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.