Welcome to the resource topic for 2016/175
Title: Online/Offline OR Composition of Sigma Protocols
Authors: Michele Ciampi, Giuseppe Persiano, Alessandra Scafuro, Luisa Siniscalchi, Ivan Visconti
Abstract: Proofs of partial knowledge allow a prover to prove knowledge of witnesses for k out of n instances of NP languages. Cramer, Schoenmakers and Damgård [CDS94] provided an efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge for any NP language, by cleverly combining n executions of Sigma-protocols for that language. This transform assumes that all n instances are fully specified before the proof starts, and thus directly rules out the possibility of choosing some of the instances after the first round. Very recently, Ciampi et al. [CPS+16] provided an improved transform where one of the instances can be specified in the last round. They focus on (1, 2)-proofs of partial knowledge with the additional feature that one instance is defined in the last round, and could be adaptively chosen by the verifier. They left as an open question the existence of an efficient (1, 2)-proof of partial knowledge where no instance is known in the first round. More generally, they left open the question of constructing an efficient (k, n)-proof of partial knowledge where knowledge of all n instances can be postponed. Indeed, this property is achieved only by inefficient constructions requiring NP reductions [LS90].

In this paper we focus on the question of achieving adaptive-input proofs of partial knowledge. We provide, through a transform, the first efficient construction of a 3-round public-coin witness-indistinguishable (k, n)-proof of partial knowledge where all instances can be decided in the third round. Our construction enjoys adaptive-input witness indistinguishability. Additionally, the proof of knowledge property is preserved even if the adversarial prover selects instances adaptively in the last round, as long as our transform is applied to a proof of knowledge belonging to the widely used class of proofs of knowledge described in [Mau15, CD98]. Since knowledge of instances and witnesses is not needed before the last round, the first round can be precomputed, and in the online/offline setting our performance is similar to that of [CDS94]. Our new transform relies on the DDH assumption (in contrast to the transforms of [CDS94, CPS+16], which are unconditional). We also show how to strengthen the transform of [CPS+16] so that it also achieves adaptive soundness, when the underlying combined protocols belong to the class of protocols described in [Mau15, CD98].
ePrint: https://eprint.iacr.org/2016/175
Talk: https://www.youtube.com/watch?v=-4rz4QLVcjY
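The abstract builds on the CDS94 OR-composition of Sigma-protocols. As a quick refresher (this is not the paper's construction), below is a minimal Python sketch of a (1, 2)-proof of partial knowledge obtained by CDS94-style composition of Schnorr's Sigma-protocol, over a toy group with p = 23, q = 11, g = 2; all parameters and variable names are illustrative, and a real implementation would use a large prime-order group and a cryptographic RNG. The sketch also shows why the classic transform needs both instances fixed before the first round: the simulated first message depends on the instance whose witness is unknown, which is precisely the limitation the paper addresses.

```python
# Toy sketch of a CDS94-style (1,2) OR-composition of Schnorr's Sigma-protocol.
# Illustrative only: toy parameters, non-cryptographic randomness.
import random

# Toy group: subgroup of order q = 11 in Z_23^*, generated by g = 2.
p, q, g = 23, 11, 2

def schnorr_commit():
    r = random.randrange(q)
    return r, pow(g, r, p)                 # randomness r, first message a = g^r

def schnorr_respond(r, x, c):
    return (r + c * x) % q                 # z = r + c*x mod q

def schnorr_verify(h, a, c, z):
    return pow(g, z, p) == (a * pow(h, c, p)) % p

def schnorr_simulate(h):
    # HVZK simulator: pick (c, z) first, then solve for the first message a.
    c, z = random.randrange(q), random.randrange(q)
    a = (pow(g, z, p) * pow(h, q - c, p)) % p   # a = g^z * h^{-c}
    return a, c, z

# Statement: "I know x with h[0] = g^x OR h[1] = g^x"; witness held for index b.
x, b = 7, 0
h = [pow(g, x, p), pow(g, 5, p)]           # prover only knows the witness for h[0]

# Round 1: note that BOTH instances must already be fixed here, because the
# simulated first message depends on the instance without a known witness.
r, a_known = schnorr_commit()
a_sim, c_sim, z_sim = schnorr_simulate(h[1 - b])
a = [None, None]; a[b], a[1 - b] = a_known, a_sim

# Round 2: verifier sends a single random challenge c.
c = random.randrange(q)

# Round 3: split the challenge so that the two shares add up to c (mod q).
c_known = (c - c_sim) % q
z_known = schnorr_respond(r, x, c_known)
cs = [None, None]; zs = [None, None]
cs[b], cs[1 - b] = c_known, c_sim
zs[b], zs[1 - b] = z_known, z_sim

# Verification: both transcripts accept and the challenge shares sum to c.
assert (cs[0] + cs[1]) % q == c
assert all(schnorr_verify(h[i], a[i], cs[i], zs[i]) for i in range(2))
print("OR-proof verified")
```

In the paper's online/offline setting the goal is to push all instance-dependent work to the third round; in the plain CDS94 composition sketched above, already the simulated commitment in round 1 requires the second instance.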
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.