[Resource Topic] 2022/663: SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning

Welcome to the resource topic for 2022/663

Title:
SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning

Authors: Harsh Chaudhari, Matthew Jagielski, and Alina Oprea

Abstract:

Secure multiparty computation (MPC) has been proposed to allow multiple mutually distrustful data owners to jointly train machine learning (ML) models on their combined data. However, the datasets used for training ML models might be under the control of an adversary mounting a data poisoning attack, and MPC prevents inspecting training sets to detect poisoning. We show that multiple MPC frameworks for private ML training are susceptible to backdoor and targeted poisoning attacks. To mitigate this, we propose SafeNet, a framework for building ensemble models in MPC with formal guarantees of robustness to data poisoning attacks. We extend the security definition of private ML training to account for poisoning and prove that our SafeNet design satisfies the definition. We demonstrate SafeNet’s efficiency, accuracy, and resilience to poisoning on several machine learning datasets and models. For instance, SafeNet reduces backdoor attack success from 100% to 0% for a neural network model, while achieving 39× faster training and 36× less communication than the four-party MPC framework of Dalskov et al.
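As a related explanation resource: the core robustness idea behind SafeNet's ensemble design, majority voting over models trained on disjoint owner datasets, so a bounded number of poisoned members cannot flip the ensemble's prediction, can be sketched in plain Python. This is a minimal illustrative sketch only (no MPC, no real training); all names and the toy classifiers are assumptions, not the paper's implementation.

```python
# Illustrative sketch of majority-vote ensemble robustness (not the
# paper's code): with 7 members and only 3 poisoned, the honest
# majority determines the output, so a backdoor in the minority
# of members cannot change the ensemble's prediction.
from collections import Counter

def ensemble_predict(models, x):
    """Return the majority vote of the member models on input x."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Toy setup: honest members compute the parity of x; poisoned
# members are backdoored to always output the attacker's label 1.
honest = [lambda x: x % 2 for _ in range(4)]
poisoned = [lambda x: 1 for _ in range(3)]
models = honest + poisoned

print(ensemble_predict(models, 4))  # honest majority outputs 0
```

The same counting argument underlies the formal guarantee: as long as fewer than half the data owners are corrupted, each of their poisoned models contributes only one vote and the ensemble's prediction is unchanged.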

ePrint: https://eprint.iacr.org/2022/663

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.