Welcome to the resource topic for
**2024/227**

**Title:**
Adaptively Sound Zero-Knowledge SNARKs for UP

**Authors:**
Surya Mathialagan, Spencer Peters, Vinod Vaikuntanathan

**Abstract:**

We study succinct non-interactive arguments (SNARGs) and succinct non-interactive arguments of knowledge (SNARKs) for the class $\mathsf{UP}$ in the reusable designated-verifier model. $\mathsf{UP}$ is an expressive subclass of $\mathsf{NP}$ consisting of all $\mathsf{NP}$ languages where each instance has at most one witness; a designated-verifier SNARG (dvSNARG) is one where verification of the SNARG proof requires a private verification key; and such a dvSNARG is reusable if soundness holds even against a malicious prover with oracle access to the (private) verification algorithm.
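As a toy illustration (not from the paper) of the unique-witness condition that defines $\mathsf{UP}$: for a generator $g$ of $\mathbb{Z}_p^*$, the discrete-logarithm relation assigns each instance exactly one witness. The parameters below are hypothetical and far too small to be cryptographic.

```python
# Toy UP-style relation (illustrative only, not from the paper):
# instance y, witness x with g^x = y (mod p). Since g generates Z_p^*,
# every y in Z_p^* has exactly one witness x in {0, ..., p-2}.
p, g = 101, 2  # hypothetical tiny parameters; 2 generates Z_101^*

def verify(instance_y: int, witness_x: int) -> bool:
    """NP-style verifier: accept iff g^x = y (mod p)."""
    return pow(g, witness_x, p) == instance_y

# Exhaustively confirm witness uniqueness for one instance.
witnesses = [x for x in range(p - 1) if verify(5, x)]
```

Here `witnesses` contains a single element, matching the "at most one witness per instance" requirement; for cryptographically large $p$, finding that witness is the discrete-logarithm problem, so the language is nontrivial.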

Our main results are as follows.

(1) A reusably and adaptively sound zero-knowledge (zk) dvSNARG for $\mathsf{UP}$, from subexponential LWE and evasive LWE (a relatively new but popular variant of LWE). Our SNARGs achieve very short proofs of length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error.

(2) A generic transformation that lifts any "Sahai-Waters-like" (zk) SNARG to an adaptively sound (zk) SNARG, in the designated-verifier setting. In particular, this shows that the Sahai-Waters SNARG for $\mathsf{NP}$ is adaptively sound in the designated-verifier setting, assuming subexponential hardness of the underlying assumptions. The resulting SNARG proofs have length $(1 + o(1)) \cdot \lambda$ bits for $2^{-\lambda}$ soundness error. Our result sidesteps the Gentry-Wichs barrier for adaptive soundness by employing an exponential-time security reduction.

(3) A generic transformation that lifts any adaptively sound (zk) SNARG for $\mathsf{UP}$ to an adaptively sound (zk) SNARK for $\mathsf{UP}$, while preserving zero-knowledge. The resulting SNARK achieves the strong notion of black-box extraction. There are barriers to achieving such SNARKs for all of $\mathsf{NP}$ from falsifiable assumptions, so our restriction to $\mathsf{UP}$ is, in a sense, necessary.

Applying transformation (3) to our SNARG for $\mathsf{UP}$ from evasive LWE (result (1)), we obtain a reusably and adaptively sound designated-verifier zero-knowledge SNARK for $\mathsf{UP}$ from subexponential LWE and evasive LWE. Moreover, applying both (2) and (3) to the Sahai-Waters SNARG, we obtain the same result from LWE, subexponentially secure one-way functions, and subexponentially secure indistinguishability obfuscation. Both constructions have succinct proofs of size $\mathsf{poly}(\lambda)$. These are the first SNARK constructions (even in the designated-verifier setting) for a non-trivial subset of $\mathsf{NP}$ from (sub-exponentially) falsifiable assumptions.

**ePrint:**
https://eprint.iacr.org/2024/227

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

**Example resources include:**
implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.