Welcome to the resource topic for 2024/1603
Title: Boosting SNARKs and Rate-1 Barrier in Arguments of Knowledge
Authors: Jiaqi Cheng, Rishab Goyal
Abstract: We design a generic compiler to boost any non-trivial succinct non-interactive argument of knowledge (SNARK) to full succinctness. Our results come in two flavors:
For any constant $\epsilon > 0$, any SNARK with proof size $|\pi| < \frac{|\omega|}{\lambda^\epsilon} + \mathsf{poly}(\lambda, |x|)$ can be upgraded to a fully succinct SNARK, where all system parameters (such as proof/CRS sizes and setup/verifier run-times) grow as fixed polynomials in $\lambda$, independent of the witness size.
Under the additional assumption that the underlying SNARK has an \emph{efficient} knowledge extractor, we further improve our result to upgrade any non-trivial SNARK. For example, we show how to design fully succinct SNARKs from SNARKs with proofs of length $|\omega| - \Omega(\lambda)$, or $\frac{|\omega|}{1 + \epsilon} + \mathsf{poly}(\lambda, |x|)$ for any constant $\epsilon > 0$.
Our result reduces the long-standing challenge of designing fully succinct SNARKs to designing \emph{arguments of knowledge that beat the trivial construction}. It also establishes the optimality of rate-1 arguments of knowledge (such as NIZKs [Gentry-Groth-Ishai-Peikert-Sahai-Smith; JoC’15] and BARGs [Devadas-Goyal-Kalai-Vaikuntanathan, Paneth-Pass; FOCS’22]), and suggests that any further improvement is tantamount to designing fully succinct SNARKs, and thus requires bypassing established black-box barriers [Gentry-Wichs; STOC’11].
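To get an intuition for how weak the "non-trivial" requirement is, here is a small back-of-the-envelope sketch (not from the paper) that plugs hypothetical numbers into the proof-size thresholds quoted in the abstract. The function name, placeholder poly term, and parameter values below are illustrative assumptions only.

```python
# Back-of-the-envelope sketch (not from the paper): evaluate the proof-size
# thresholds quoted in the abstract at concrete, hypothetical numbers, to see
# how little compression already counts as a "non-trivial" SNARK.
# All names and parameter choices here are illustrative assumptions.

def proof_size_thresholds(witness_bits, lam, eps=0.5, poly_term=None):
    """Proof-size bounds from the abstract, evaluated at concrete numbers.

    poly_term stands in for the additive poly(lambda, |x|) term; lam**2 is an
    arbitrary placeholder, not a value taken from the paper.
    """
    if poly_term is None:
        poly_term = lam ** 2
    return {
        "trivial (send the witness)": witness_bits,
        "flavor 1: |w| / lambda^eps + poly": witness_bits / lam ** eps + poly_term,
        "flavor 2: |w| - Omega(lambda)": witness_bits - lam,  # using constant 1
        "flavor 2: |w| / (1 + eps) + poly": witness_bits / (1 + eps) + poly_term,
    }

if __name__ == "__main__":
    lam = 128               # security parameter
    witness_bits = 10 ** 6  # a 1-Mbit witness
    for name, size in proof_size_thresholds(witness_bits, lam).items():
        print(f"{name:38s} ~ {size:>12,.0f} bits")
```

Under the stated assumptions, any argument system whose proofs beat the "trivial" row, even by the modest margins in the other rows, falls within the scope of the paper's compiler.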
ePrint: https://eprint.iacr.org/2024/1603
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.