[Resource Topic] 2023/1729: CompactTag: Minimizing Computation Overheads in Actively-Secure MPC for Deep Neural Networks

Welcome to the resource topic for 2023/1729

Title:
CompactTag: Minimizing Computation Overheads in Actively-Secure MPC for Deep Neural Networks

Authors: Yongqin Wang, Pratik Sarkar, Nishat Koti, Arpita Patra, Murali Annavaram

Abstract:

Secure Multiparty Computation (MPC) protocols enable several parties to securely evaluate a circuit, even in the presence of an adversary who maliciously corrupts all but one of the parties. These protocols are constructed in the well-known secret-sharing-based paradigm (SPDZ and SPD$\mathbb{Z}_{2^k}$), which ensures security against a malicious adversary by computing Message Authentication Code (MAC) tags on the input shares and then evaluating the circuit over these shares and tags. However, this tag computation adds a significant runtime overhead, particularly for machine learning (ML) applications with computationally intensive linear layers, such as convolutions and fully connected layers.
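For readers unfamiliar with the SPDZ family, here is a minimal plaintext-Python sketch of authenticated secret sharing: every value x is additively shared, and each party also holds an additive share of the tag α·x under a secret global MAC key α, so linear operations can be applied locally to shares and tags alike. All names here are illustrative; real SPD$\mathbb{Z}_{2^k}$ keeps the MAC in a larger ring $\mathbb{Z}_{2^{k+s}}$ and secret-shares α itself, details elided in this sketch.

```python
import random

K = 64
MOD = 1 << K  # arithmetic over Z_{2^64}

def share(value, n_parties, mod=MOD):
    """Additively share `value` among n_parties over Z_mod."""
    shares = [random.randrange(mod) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % mod)
    return shares

def reconstruct(shares, mod=MOD):
    return sum(shares) % mod

# Global MAC key alpha (itself secret-shared in a real protocol).
alpha = random.randrange(MOD)

x = 42
x_shares = share(x, 3)
# Tag on x: additive shares of alpha * x.
tag_shares = share((alpha * x) % MOD, 3)

# Local linear operation: each party multiplies its own share
# of the value and of the tag by a public constant c.
c = 7
y_shares = [(c * s) % MOD for s in x_shares]
y_tag_shares = [(c * t) % MOD for t in tag_shares]

# MAC check (done via a secure opening in the real protocol):
y = reconstruct(y_shares)
assert reconstruct(y_tag_shares) == (alpha * y) % MOD
```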

To alleviate the tag-computation overhead, we introduce CompactTag, a lightweight algorithm for generating MAC tags that is specifically tailored to linear layers in ML. Linear-layer operations in ML, including convolutions, can be transformed into Toeplitz matrix multiplications. For the multiplication of two matrices of dimensions $T_1 \times T_2$ and $T_2 \times T_3$, SPD$\mathbb{Z}_{2^k}$ requires $O(T_1 \cdot T_2 \cdot T_3)$ local multiplications for the tag computation. In contrast, CompactTag requires only $O(T_1 \cdot T_2 + T_1 \cdot T_3 + T_2 \cdot T_3)$ local multiplications, resulting in a substantial performance boost for various ML models.
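To make the two claims above concrete, the following hedged sketch (plaintext NumPy, no secret sharing or MACs) shows the standard im2col unfolding that turns a 2D convolution into a single matrix multiplication, and then tallies the local-multiplication counts the abstract attributes to the two tag-computation strategies. The layer dimensions at the end are illustrative, not taken from the paper.

```python
import numpy as np

def im2col(image, kh, kw):
    """Unfold the kh x kw patches of `image` into the rows of a matrix,
    so that convolution becomes a single matrix product."""
    H, W = image.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((out_h * out_w, kh * kw), dtype=image.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = image[i:i + kh, j:j + kw].ravel()
    return cols

rng = np.random.default_rng(0)
image = rng.integers(0, 10, size=(6, 6))
kernel = rng.integers(0, 10, size=(3, 3))

# Convolution expressed as a Toeplitz-style matrix multiplication.
A = im2col(image, 3, 3)                       # shape (16, 9)
conv_as_matmul = (A @ kernel.ravel()).reshape(4, 4)

# Reference: direct 2D cross-correlation (what ML calls convolution).
direct = np.array([[(image[i:i + 3, j:j + 3] * kernel).sum()
                    for j in range(4)] for i in range(4)])
assert np.array_equal(conv_as_matmul, direct)

# Local-multiplication counts for tag computation on a T1 x T2 times
# T2 x T3 matrix product, as stated in the abstract. The dimensions
# below are made up for illustration.
T1, T2, T3 = 512, 4608, 256
naive = T1 * T2 * T3                    # SPDZ2k-style tag computation
compact = T1*T2 + T1*T3 + T2*T3         # CompactTag
print(f"naive: {naive:,}  compact: {compact:,}  ratio: {naive/compact:.0f}x")
```

Note that the raw operation-count ratio can far exceed the 23× wall-clock speedup reported below, since tag computation is only one part (roughly 30%) of the online phase.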

We empirically compared our protocol to the SPD$\mathbb{Z}_{2^k}$ protocol on various ML circuits, including ResNet, Transformer, and VGG16 training and inference. SPD$\mathbb{Z}_{2^k}$ dedicates around 30% of its online runtime to tag computation. CompactTag speeds up this tag-computation bottleneck by up to 23×, yielding up to 1.47× total online-phase runtime speedups for various ML workloads.

ePrint: https://eprint.iacr.org/2023/1729

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.