[Resource Topic] 2021/087: ZEN: An Optimizing Compiler for Verifiable, Zero-Knowledge Neural Network Inferences

Welcome to the resource topic for 2021/087

ZEN: An Optimizing Compiler for Verifiable, Zero-Knowledge Neural Network Inferences

Authors: Boyuan Feng, Lianke Qin, Zhenfei Zhang, Yufei Ding, Shumo Chu


We present ZEN, the first optimizing compiler that generates efficient verifiable, zero-knowledge neural network inference schemes. ZEN generates two schemes: ZEN$_{acc}$ and ZEN$_{infer}$. ZEN$_{acc}$ proves the accuracy of a committed neural network model; ZEN$_{infer}$ proves a specific inference result. Used in combination, these verifiable computation schemes ensure both the privacy of sensitive user data and the confidentiality of the neural network models. However, expressing these schemes directly in zkSNARKs incurs prohibitive computational cost. As an optimizing compiler, ZEN introduces two kinds of optimizations to address this issue. First, ZEN incorporates a new neural network quantization algorithm with two R1CS-friendly optimizations, which allows the model to be expressed in zkSNARKs with fewer constraints and minimal accuracy loss. Second, ZEN introduces a SIMD-style optimization, namely stranded encoding, that can encode multiple 8-bit integers in large finite field elements without overwhelming extraction cost. Combining these optimizations, ZEN produces verifiable neural network inference schemes with $5.43\times$ to $22.19\times$ ($15.35\times$ on average) fewer R1CS constraints.
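As a resource for readers, here is a minimal sketch of the general packing idea behind SIMD-style encodings like stranded encoding: several 8-bit values are placed at distinct "digit" positions of one large integer (standing in for a large field element), so that a single big-integer multiplication computes many 8-bit products at once, with a dot product recoverable from one digit. This is a classical packing trick used for illustration only; the digit base `B = 2**24`, the strand width `K = 4`, and the helper names are assumptions of this sketch, not ZEN's actual encoding.

```python
# Illustrative sketch (NOT ZEN's exact stranded encoding): pack K 8-bit
# values into one big integer with base B = 2**24 digits. B is chosen so
# that every digit of the product stays below B: each product digit is a
# sum of at most K terms, each at most 255**2, and K * 255**2 < 2**24.

K = 4          # number of 8-bit values packed per strand (assumed here)
SHIFT = 24     # bits per digit; base B = 2**SHIFT
B = 1 << SHIFT

def pack_lo(vals):
    """Pack values at ascending digit positions: sum(v_i * B**i)."""
    return sum(v << (SHIFT * i) for i, v in enumerate(vals))

def pack_hi(vals):
    """Pack values at descending digit positions: sum(v_j * B**(K-1-j))."""
    return sum(v << (SHIFT * (K - 1 - j)) for j, v in enumerate(vals))

def dot_from_product(prod):
    """Extract the digit at B**(K-1), which collects sum(a_i * b_i).

    In pack_lo(a) * pack_hi(b), the term a_i * b_j lands at digit
    i + (K-1-j); only the pairs with i == j contribute to digit K-1.
    """
    return (prod >> (SHIFT * (K - 1))) % B

a = [17, 250, 3, 99]
b = [201, 5, 118, 42]
product = pack_lo(a) * pack_hi(b)   # one big multiplication
assert dot_from_product(product) == sum(x * y for x, y in zip(a, b))
```

One multiplication of packed values thus yields the 4-element dot product in a single extractable digit, which is the kind of amortization that makes SIMD-style encodings attractive inside a large prime field, where each field element has far more bits than an 8-bit quantized value needs.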

ePrint: https://eprint.iacr.org/2021/087

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.