Welcome to the resource topic for 2025/1729
Title: GuardianMPC: Backdoor-resilient Neural Network Computation
Authors: Mohammad Hashemi, Domenic Forte, Fatemeh Ganji
Abstract: The rapid growth of deep learning (DL) has raised serious concerns about users’ data and neural network (NN) models’ security and privacy, particularly the risk of backdoor insertion when outsourcing the training or employing pre-trained models. To ensure resilience against such backdoor attacks, this work presents GuardianMPC, a novel framework leveraging secure multiparty computation (MPC). GuardianMPC is built upon garbled circuits (GC) within the LEGO protocol framework to accelerate oblivious inference on FPGAs in the presence of malicious adversaries that can manipulate the model weights and/or insert a backdoor in the architecture of a pre-trained model. In this regard, GuardianMPC is the first to offer private function evaluation in the LEGO family. GuardianMPC also supports private training to effectively counter backdoor attacks targeting NN model architectures and parameters. With optimized pre-processing, GuardianMPC significantly accelerates the online phase, achieving up to 13.44× faster computation than its software counterparts. Our experimental results for multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) assess GuardianMPC’s time complexity and scalability across diverse NN model architectures. Interestingly, GuardianMPC does not adversely affect the training accuracy, as opposed to many existing private training frameworks. These results confirm GuardianMPC as a high-performance, model-agnostic solution for secure NN computation with robust security and privacy guarantees.
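As background for readers new to the GC building block the abstract mentions: a minimal, illustrative sketch of classical Yao-style gate garbling for a single AND gate. This is not the paper's LEGO-based or FPGA construction; all names and the zero-padding trick for row detection are assumptions chosen for simplicity, and oblivious transfer of input labels is omitted.

```python
import hashlib
import secrets

LABEL_LEN = 16  # 128-bit wire labels

def prf(a: bytes, b: bytes) -> bytes:
    """Derive a 32-byte one-time pad from the two input-wire labels."""
    return hashlib.sha256(a + b).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(x, y))

def garble_and():
    """Garbler: pick two random labels per wire, encrypt the AND truth table."""
    wires = {w: (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))
             for w in ("a", "b", "c")}
    rows = []
    for bit_a in (0, 1):
        for bit_b in (0, 1):
            out_label = wires["c"][bit_a & bit_b]
            # Zero padding lets the evaluator recognise the one row
            # it can decrypt (a stand-in for point-and-permute).
            plaintext = out_label + bytes(LABEL_LEN)
            rows.append(xor(prf(wires["a"][bit_a], wires["b"][bit_b]), plaintext))
    secrets.SystemRandom().shuffle(rows)  # hide which row encodes which inputs
    return wires, rows

def evaluate(rows, label_a: bytes, label_b: bytes) -> bytes:
    """Evaluator: holds exactly one label per input wire, learns one output label."""
    pad = prf(label_a, label_b)
    for row in rows:
        plaintext = xor(pad, row)
        if plaintext[LABEL_LEN:] == bytes(LABEL_LEN):  # padding checks out
            return plaintext[:LABEL_LEN]
    raise ValueError("no row decrypted cleanly")

wires, rows = garble_and()
for bit_a in (0, 1):
    for bit_b in (0, 1):
        out = evaluate(rows, wires["a"][bit_a], wires["b"][bit_b])
        assert out == wires["c"][bit_a & bit_b]
```

The evaluator never learns the other parties' input bits, only opaque labels; a full circuit chains gates by reusing output labels as downstream input labels. LEGO-style protocols additionally cut-and-choose over pre-garbled gates to resist malicious garblers, which is the setting GuardianMPC targets.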
ePrint: https://eprint.iacr.org/2025/1729
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.