Welcome to the resource topic for 2024/162
Title: Zero-Knowledge Proofs of Training for Deep Neural Networks
Authors: Kasra Abbaszadeh, Christodoulos Pappas, Dimitrios Papadopoulos, Jonathan Katz
Abstract: A zero-knowledge proof of training (zkPoT) enables a party to prove that they have correctly trained a committed model based on a committed dataset, without revealing any additional information about the model or the dataset. An ideal zkPoT should offer provable security and privacy guarantees, succinct proof size and verifier runtime, and practical prover efficiency. In this work, we present Kaizen, a zkPoT targeted at deep neural networks (DNNs) that achieves all of the above at once. In particular, our construction enables a prover to iteratively train their model via the (mini-batch) gradient-descent algorithm, where the number of iterations need not be fixed in advance; at the end of each iteration, the prover generates a commitment to the trained model together with a succinct zkPoT attesting to the correctness of the entire training process. The proof size and verifier time are independent of the number of iterations.
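As a loose illustration of this workflow, here is a minimal sketch of the per-iteration loop, assuming a model represented as a flat list of weights; `grad_fn`, `commit`, `prove_iteration`, and `compose` are hypothetical placeholders rather than Kaizen's actual interface.

```python
# Rough sketch of the per-iteration prove-and-commit loop described above.
# All callables passed in (grad_fn, commit, prove_iteration, compose) are
# hypothetical placeholders, not Kaizen's actual API.

def train_with_proofs(model, batches, lr, grad_fn, commit, prove_iteration, compose):
    proof = None                                        # nothing proven yet
    for batch in batches:                               # iteration count not fixed in advance
        grad = grad_fn(model, batch)                    # mini-batch gradient
        model = [w - lr * g for w, g in zip(model, grad)]  # gradient-descent update
        cm = commit(model)                              # commitment to the trained model
        step = prove_iteration(model, batch, cm)        # proof for this iteration
        proof = compose(proof, step)                    # fold into one succinct zkPoT
        yield cm, proof                                 # covers all training done so far
```

The caller receives, after every iteration, a model commitment plus a single proof whose size does not grow with the number of iterations, mirroring the guarantee stated in the abstract.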
Kaizen relies on two essential building blocks to achieve both prover efficiency and verification succinctness. First, we construct an optimized GKR-style (sumcheck-based) proof system for the gradient-descent algorithm with concretely efficient prover cost; this scheme allows the prover to generate a proof for each iteration of the training process. Then, we recursively compose these proofs across multiple iterations to attain succinctness. As a contribution of independent interest, we propose a framework for recursive composition of GKR-style proofs, together with techniques such as aggregatable polynomial commitment schemes, to minimize the recursion overhead.
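To give a feel for the "sumcheck-based" building block mentioned here, below is a toy version of the generic sumcheck protocol over a small prime field, proving that a table of values sums to a claimed total. This is a textbook sketch, not Kaizen's optimized gradient-descent prover; the field modulus is an arbitrary choice, and the final equality check stands in for the polynomial-commitment opening a real system would use.

```python
# Toy sumcheck for a multilinear polynomial given by a table of 2^n values.
# Generic textbook protocol only; not Kaizen's optimized prover.

import random

P = 2**61 - 1  # toy prime modulus (arbitrary choice for illustration)

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (length 2^n) at `point`."""
    vals = [v % P for v in table]
    for r in point:  # fold one variable at a time
        half = len(vals) // 2
        vals = [(vals[i] + r * (vals[half + i] - vals[i])) % P for i in range(half)]
    return vals[0]

def sumcheck(table):
    """Run the prover/verifier interaction proving that sum(table) is correct."""
    n = len(table).bit_length() - 1          # number of variables / rounds
    claim = sum(table) % P                   # the prover's claimed sum
    vals, point = [v % P for v in table], []
    for _ in range(n):
        half = len(vals) // 2
        g0 = sum(vals[:half]) % P            # univariate message g_i(0)
        g1 = sum(vals[half:]) % P            # univariate message g_i(1)
        assert (g0 + g1) % P == claim        # verifier's round consistency check
        r = random.randrange(P)              # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P     # g_i(r) becomes the next claim
        vals = [(vals[i] + r * (vals[half + i] - vals[i])) % P for i in range(half)]
        point.append(r)
    # Final check: in a real system this is a polynomial-commitment opening.
    assert claim == mle_eval(table, point)
    return True
```

For example, `sumcheck([3, 1, 4, 1, 5, 9, 2, 6])` runs three rounds over an 8-entry table and returns `True`.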
Benchmarks indicate that Kaizen can handle VGG-11, a large model with 10 million parameters, at batch size 16. The prover runtime is 22 minutes per iteration, which is 43× faster than generic recursive proofs, and the prover memory overhead is at least 224× lower. Independent of the number of iterations, and hence of the size of the dataset, the proof size is 1.36 megabytes and the verifier runtime is only 103 milliseconds.
ePrint: https://eprint.iacr.org/2024/162
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.