[Resource Topic] 2023/1890: Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries

Welcome to the resource topic for 2023/1890

Title:
Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries

Authors: Tianpei Lu, Bingsheng Zhang, Lichun Li, Kui Ren

Abstract:

Privacy-preserving machine learning (PPML) techniques have gained significant popularity in recent years. These protocols have been widely adopted in many real-world security-sensitive machine learning scenarios, e.g., medical care and finance. In this work, we introduce \mathsf{Aegis}, a high-performance PPML platform built on top of a maliciously secure 3-PC framework over the ring \mathbb{Z}_{2^\ell}. In particular, we propose a novel 2-round secure comparison (a.k.a., sign bit extraction) protocol in the preprocessing model. The communication of its semi-honest version is only 25% of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (S&P 2023); both the communication and round complexity of its malicious version are approximately 50% of the SOTA (BLAZE) by Patra and Suresh (NDSS 2020), for \ell=64.
Moreover, the communication of our maliciously secure inner product protocol is merely 3\ell bits, a 50% reduction from the SOTA (Swift) by Koti et al. (USENIX 2021).
Finally, the resulting ReLU and MaxPool PPML protocols outperform the SOTA by 4\times in the semi-honest setting and 10\times in the malicious setting, respectively.
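As a small explanation aid (the protocols themselves are in the paper): the sketch below only shows the cleartext semantics of the building blocks named in the abstract, i.e., what sign bit extraction over \mathbb{Z}_{2^\ell} computes and how ReLU and MaxPool reduce to it. It involves no secret sharing or interaction, and the function names are ours, not taken from the paper.

```python
# Cleartext semantics only: NOT the Aegis protocol (no secret sharing,
# no interaction), just what the secure building blocks compute over
# the ring Z_{2^ell} with ell = 64.

ELL = 64
MOD = 1 << ELL

def to_ring(x: int) -> int:
    """Embed a signed integer into Z_{2^ell} (two's-complement style)."""
    return x % MOD

def sign_bit(x: int) -> int:
    """Sign-bit extraction: the most significant bit of x in Z_{2^ell},
    which is 1 iff x encodes a negative value."""
    return (x >> (ELL - 1)) & 1

def less_than(a: int, b: int) -> int:
    """Comparison a < b reduces to the sign bit of a - b in the ring."""
    return sign_bit(to_ring(a - b))

def relu(x: int) -> int:
    """ReLU via the sign bit; in MPC this becomes one comparison
    plus one multiplication on shared values."""
    return (1 - sign_bit(to_ring(x))) * to_ring(x) % MOD

def maxpool(xs):
    """MaxPool over a window: pairwise maxima, each costing one
    comparison plus one selection."""
    m = to_ring(xs[0])
    for x in xs[1:]:
        b = less_than(m, x)                      # 1 if m < x
        m = (b * to_ring(x) + (1 - b) * m) % MOD
    return m

# Sanity checks on signed values embedded in the ring.
assert less_than(-3, 5) == 1 and less_than(5, -3) == 0
assert relu(-7) == 0 and relu(3) == 3
assert maxpool([-2, 9, 4]) == 9
```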
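Similarly, the claim that a shared inner product can cost only 3\ell bits is easiest to see with the standard 3-party replicated secret sharing pattern, where all cross terms are computed locally and only the final resharing is communicated. The following simulated, semi-honest sketch is our own toy code (not the paper's maliciously secure protocol); it shows that each party sends a single ring element regardless of the vector length.

```python
# Simulated 3-party replicated secret sharing over Z_{2^ell}, semi-honest,
# no networking. Illustrates why inner-product communication is
# independent of the vector length; NOT the maliciously secure protocol.

import secrets

ELL = 64
MOD = 1 << ELL

def share(x):
    """Split x into replicated shares: party i holds (x_i, x_{i+1})."""
    x0, x1 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    s = [x0, x1, (x - x0 - x1) % MOD]
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]

def reconstruct(shares):
    """Recover x by summing the three distinct additive shares."""
    return (shares[0][0] + shares[1][0] + shares[2][0]) % MOD

def inner_product(xs, ys):
    """Shared inner product: every cross term is computed locally;
    only the final resharing needs one ring element per party."""
    x_sh = [share(x) for x in xs]
    y_sh = [share(y) for y in ys]

    # Local phase: party i accumulates its additive share of <x, y>.
    z = [0, 0, 0]
    for xi, yi in zip(x_sh, y_sh):
        for i in range(3):
            (a0, a1), (b0, b1) = xi[i], yi[i]
            z[i] = (z[i] + a0 * b0 + a0 * b1 + a1 * b0) % MOD

    # Resharing phase: party i masks z_i and sends it to one neighbour,
    # i.e. 3 * ell bits of total communication, independent of len(xs).
    r = [secrets.randbelow(MOD) for _ in range(3)]
    masked = [(z[i] + r[i] - r[(i + 1) % 3]) % MOD for i in range(3)]
    return [(masked[i], masked[(i + 1) % 3]) for i in range(3)]

xs, ys = [3, 5, 7, 11], [2, 4, 6, 8]
assert reconstruct(inner_product(xs, ys)) == sum(a * b for a, b in zip(xs, ys)) % MOD
```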

ePrint: https://eprint.iacr.org/2023/1890

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.