Welcome to the resource topic for 2024/703
Title: An Efficient and Extensible Zero-knowledge Proof Framework for Neural Networks
Authors: Tao Lu, Haoyu Wang, Wenjie Qu, Zonghui Wang, Jinye He, Tianyang Tao, Wenzhi Chen, Jiaheng Zhang
Abstract: In recent years, cloud vendors have started to supply paid services for data analysis by providing interfaces to their well-trained neural network models. However, customers lack tools to verify whether the outcomes supplied by cloud vendors are correct inferences from particular models, in the face of lazy or malicious vendors. The cryptographic primitive called zero-knowledge proof (ZKP) addresses this problem: it enables the outcomes to be verified without leaking information about the models. Unfortunately, existing ZKP schemes for neural networks have high computational overheads, especially for the non-linear layers.
In this paper, we propose an efficient and extensible ZKP framework for neural networks. Our work improves the performance of the proofs for non-linear layers. Compared to previous works relying on bit decomposition, we convert complex non-linear relations into range and exponent relations, which significantly reduces the number of constraints required to prove non-linear layers. Moreover, we adopt a modular design to make our framework compatible with more neural networks. Specifically, we propose two enhanced range and lookup proofs as basic building blocks; they efficiently prove the satisfaction of range and exponent relations. We then constrain the correct computation of primitive non-linear operations using a small number of range and exponent relations. Finally, we build our ZKP framework from these primitive operations up to entire neural networks, offering the flexibility to extend to various neural networks.
We implement our ZKPs for convolutional and transformer neural networks. The evaluation results show that our work achieves over 168.6× (up to 477.2×) speedup for separated non-linear layers and a 41.4× speedup for the entire ResNet-101 convolutional neural network, compared with the state-of-the-art work Mystique. In addition, our work can prove GPT-2, a transformer neural network with 117 million parameters, in 287.1 seconds, achieving a 35.7× speedup over ZKML, a state-of-the-art work supporting transformer neural networks.
ePrint: https://eprint.iacr.org/2024/703
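As a first explanation resource: below is a minimal, hypothetical sketch (in Python) of the kind of reduction the abstract describes, turning non-linear operations into range and lookup (exponent) relations instead of bit decomposition. It is not taken from the paper or its implementation; the scaling factor, bit width, and helper names (`in_range`, `check_relu`, `check_exp`) are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's code): reducing non-linear layer
# operations over fixed-point values to range and lookup relations.
import math

SCALE = 2 ** 8       # assumed fixed-point scaling factor
RANGE_BITS = 16      # assumed bit width for range relations


def in_range(v, bits=RANGE_BITS):
    """Range relation: v lies in [0, 2^bits)."""
    return 0 <= v < 2 ** bits


def check_relu(x, y):
    """y = ReLU(x) as one multiplicative constraint plus two range relations:
         y * (y - x) = 0       (y equals 0 or x)
         y      in [0, 2^k)    (y >= 0)
         y - x  in [0, 2^k)    (y >= x)
    """
    return y * (y - x) == 0 and in_range(y) and in_range(y - x)


# Exponent relation via lookup: the prover claims e = round(exp(x / SCALE) * SCALE)
# for a quantized x; correctness is reduced to membership of (x, e) in a
# precomputed table, which a lookup argument can prove without re-computing exp.
EXP_TABLE = {x: round(math.exp(x / SCALE) * SCALE) for x in range(-8 * SCALE, 1)}


def check_exp(x, e):
    """Lookup relation: (x, e) must be a row of the exponent table."""
    return EXP_TABLE.get(x) == e


if __name__ == "__main__":
    assert check_relu(x=300, y=300)        # positive input keeps its value
    assert check_relu(x=-300, y=0)         # negative input maps to 0
    assert not check_relu(x=-300, y=-300)  # negative output violates the range relation
    assert check_exp(-SCALE, EXP_TABLE[-SCALE])
```

The intended takeaway is that each activation is reduced to a handful of range memberships and one table lookup, which enhanced range and lookup proofs can batch efficiently, rather than decomposing every value into bits.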
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.