[Resource Topic] 2019/1365: FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning

Welcome to the resource topic for 2019/1365

Title: FLASH: Fast and Robust Framework for Privacy-preserving Machine Learning

Authors: Megha Byali, Harsh Chaudhari, Arpita Patra, Ajith Suresh

Abstract:

Privacy-preserving machine learning (PPML) via Secure Multi-party Computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pairwise private channels, we propose FLASH, an efficient four-party PPML framework over the ring $\mathbb{Z}_{2^{\ell}}$, the first of its kind among PPML frameworks to achieve the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). State-of-the-art ML frameworks such as ABY3 by Mohassel et al. (ACM CCS'18) and SecureNN by Wagh et al. (PETS'19) operate in the three-party setting with one malicious corruption but achieve only the *weaker* security guarantee of *abort*. We demonstrate PPML with real-time efficiency using the following custom-made tools, which overcome the limitations of the aforementioned state of the art: (a) a *dot product* protocol whose cost is independent of the vector size, unlike ABY3, SecureNN, and ASTRA by Chaudhari et al. (ACM CCSW'19), all of which depend linearly on the vector size; and (b) a *truncation* protocol that is constant round and free of circuits such as the Ripple Carry Adder (RCA), unlike ABY3, whose truncation uses such circuits and whose round complexity is of the order of their depth. We then exhibit the application of our FLASH framework to the secure server-aided prediction of vital algorithms: Linear Regression, Logistic Regression, Deep Neural Networks, and Binarized Neural Networks. We substantiate our theoretical claims through improved benchmarks for the aforementioned algorithms compared with the current best framework, ABY3. All the protocols are implemented over a 64-bit ring in both LAN and WAN settings. Our experiments demonstrate that, for the MNIST dataset, the improvement in throughput ranges from $11\times$ to $1395\times$ across LAN and WAN together.
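To make the abstract's two headline tools more concrete, here is a small illustrative sketch. It is *not* the FLASH protocol itself (FLASH uses a 4-party mirrored sharing with guaranteed output delivery); it uses plain additive secret sharing over $\mathbb{Z}_{2^{64}}$ to show (a) why a dot product can be opened with communication independent of the vector length, and (b) a classic two-party local-shift truncation (SecureML-style) that is constant round and circuit-free. All function names and the fixed-point parameter below are our own assumptions.

```python
# Toy sketch, NOT the FLASH protocol: plain additive sharing over
# Z_{2^64} to illustrate two ideas named in the abstract.
import random

RING = 1 << 64        # 64-bit ring, matching the paper's benchmarks
FRAC = 13             # fixed-point fractional bits (assumed parameter)

def share(x, n):
    """Split x into n additive shares over Z_{2^64}."""
    s = [random.randrange(RING) for _ in range(n - 1)]
    s.append((x - sum(s)) % RING)
    return s

def reconstruct(s):
    return sum(s) % RING

def encode(f):
    """Fixed-point encode a float into the ring (two's complement)."""
    return int(round(f * (1 << FRAC))) % RING

def decode(x, frac=FRAC):
    """Decode a ring element back to a signed float."""
    if x >= RING // 2:
        x -= RING
    return x / (1 << frac)

# (a) Dot product whose opening cost is independent of vector size.
# We cheat on the multiplications (each product is shared directly),
# because the point is only the communication pattern: every party adds
# its product shares LOCALLY and contributes a single ring element to
# the opening, no matter how long the vectors are.
xs, ys = [0.5, -1.25, 2.0], [1.0, 0.5, -0.75]
prods = [share(encode(a) * encode(b) % RING, n=4) for a, b in zip(xs, ys)]
local = [sum(col) % RING for col in zip(*prods)]   # one element per party
print(decode(reconstruct(local), frac=2 * FRAC))   # -1.625

# (b) Circuit-free truncation (classic 2-party local-shift trick;
# FLASH's own 4-party truncation differs but is likewise constant
# round). Each party shifts its share locally; the result is correct
# up to one unit of least precision, except with negligible probability.
p = encode(1.5) * encode(2.5) % RING      # product: 2*FRAC fraction bits
a0 = random.randrange(RING)
a1 = (p - a0) % RING
t0 = a0 >> FRAC
t1 = (RING - ((RING - a1) >> FRAC)) % RING
print(decode(reconstruct([t0, t1])))      # ~3.75 (error <= 2^-13)
```

The take-away is the pattern, not the protocol: for the dot product, each party folds all product shares into a single ring element before opening, and for truncation, every share is shifted locally, so neither step needs an RCA-style Boolean circuit or rounds proportional to its depth.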

ePrint: https://eprint.iacr.org/2019/1365

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.