Welcome to the resource topic for 2024/1953
Title: Truncation Untangled: Scaling Fixed-Point Arithmetic for Privacy-Preserving Machine Learning to Large Models and Datasets
Authors: Christopher Harth-Kitzerow, Georg Carle
Abstract: Fixed-point arithmetic (FPA) is essential to enable practical Privacy-Preserving Machine Learning (PPML). When multiplying two fixed-point numbers, truncation is required to ensure that the product maintains the correct precision. While multiple truncation schemes based on Secure Multiparty Computation (MPC) have been proposed, it has remained underexplored which of these schemes offers the best trade-off between accuracy and efficiency on common PPML datasets and models.
In this work, we study several stochastic and exact truncation approaches from the MPC literature that require different slack sizes, i.e., additional bits required by each secret share to ensure correctness. We provide novel, improved constructions for each truncation approach in the semi-honest 3-PC and malicious 4-PC settings, which reduce communication and round complexity by up to a factor of three. Moreover, we propose a truncation scheme that introduces no communication overhead in the online phase and exactly matches the accuracy of plaintext floating-point PyTorch inference of VGG-16 on ImageNet, achieving over 80% accuracy using shares with a bitlength of only 32. This is the first time that high PPML accuracy has been demonstrated on ImageNet.
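As a quick plaintext illustration of the core problem the abstract describes: multiplying two numbers that each carry f fractional bits yields a product with 2f fractional bits, which must be truncated back to f. A minimal sketch (the bitlength f = 13 is an illustrative choice, not a value taken from the paper):

```python
# Plaintext sketch of fixed-point multiplication and truncation.
# F = 13 fractional bits is an illustrative choice, not from the paper.

F = 13           # fractional bits of precision
SCALE = 1 << F   # fixed-point scaling factor 2^F

def encode(x: float) -> int:
    """Encode a real number as a fixed-point integer with F fractional bits."""
    return round(x * SCALE)

def decode(v: int) -> float:
    """Decode a fixed-point integer back to a real number."""
    return v / SCALE

a, b = encode(1.5), encode(2.25)
prod = a * b          # the raw product carries 2*F fractional bits
trunc = prod >> F     # truncation restores F fractional bits
print(decode(trunc))  # 3.375, as expected
```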
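The stochastic truncation approaches studied in the paper trade an occasional rounding error for cheaper protocols. In plaintext terms, the truncated value is rounded probabilistically so that it is correct in expectation; a hedged sketch of that idea (this models the rounding behavior only, not the actual MPC share manipulation):

```python
import random

def truncate_exact(v: int, f: int) -> int:
    """Deterministic truncation: drop the f lowest bits (rounds toward -inf)."""
    return v >> f

def truncate_stochastic(v: int, f: int) -> int:
    """Probabilistic rounding: round up with probability proportional to the
    value of the dropped low bits, so the result is unbiased in expectation."""
    low = v & ((1 << f) - 1)                  # the f bits being discarded
    return (v >> f) + (random.random() * (1 << f) < low)
```

In the MPC setting, truncation is of course performed on secret shares rather than plaintext values, which is why the schemes compared in the paper require slack bits, i.e., extra headroom in each share to ensure the shifted shares remain correct.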
ePrint: https://eprint.iacr.org/2024/1953
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.