[Resource Topic] 2024/081: SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness

Welcome to the resource topic for 2024/081

Title: SuperFL: Privacy-Preserving Federated Learning with Efficiency and Robustness

Authors: Yulin Zhao, Hualin Zhou, Zhiguo Wan

Abstract:

Federated Learning (FL) enables collaborative model training without sharing local training data. However, existing FL aggregation approaches suffer from inefficiency, privacy vulnerabilities, and a lack of defenses against poisoning attacks, severely impacting the overall performance and reliability of model training. To address these challenges, we propose SuperFL, an efficient two-server aggregation scheme that is both privacy-preserving and secure against poisoning attacks. The two semi-honest servers $\mathcal{S}_0$ and $\mathcal{S}_1$ collaborate: the shuffle server $\mathcal{S}_0$ is in charge of privacy-preserving random clustering, while the analysis server $\mathcal{S}_1$ is responsible for robustness detection, identifying and filtering malicious model updates. Our scheme employs a novel combination of homomorphic encryption and proxy re-encryption to realize secure server-to-server collaboration. We also utilize a novel sparse matrix projection compression technique to significantly reduce communication overhead. To resist poisoning attacks, we introduce a dual-filter algorithm based on a trusted root, which combines dimensionality reduction and norm calculation to identify malicious model updates.
Extensive experiments validate the efficiency and robustness of our scheme. SuperFL achieves compression ratios ranging from $5\times$ to $40\times$ across different models while maintaining model accuracy comparable to the baseline. Notably, our solution shows a maximum accuracy decrease of no more than $2\%$ on MNIST and $6\%$ on CIFAR-10 under specific compression ratios and in the presence of malicious clients.
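
To make the abstract's compression and filtering ideas a bit more concrete, here is a minimal plaintext sketch in NumPy/SciPy. It is not the paper's construction: the function names, the projection parameters (`density`, `norm_factor`, `cos_thresh`), and the use of a sparse Rademacher-style projection are assumptions chosen for illustration, and the real scheme runs its detection on protected data via homomorphic encryption and proxy re-encryption rather than in the clear.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def make_projection(d, k, density=0.01, seed=0):
    """Shared k x d sparse random (Rademacher-style) projection matrix -- illustrative only."""
    rng = np.random.default_rng(seed)
    signs = lambda n: rng.choice([-1.0, 1.0], size=n)
    P = sparse_random(k, d, density=density, random_state=rng, data_rvs=signs)
    return P.tocsr() / np.sqrt(k * density)   # scale so norms are roughly preserved

def compress(update, P):
    """Client side: project a flattened d-dimensional model update down to k dimensions."""
    return P @ update

def norm_filter(compressed_updates, compressed_root, norm_factor=3.0, cos_thresh=0.0):
    """Toy stand-in for a trusted-root filter: keep updates whose norm and
    direction stay close to the trusted root update, in the compressed space."""
    root_norm = np.linalg.norm(compressed_root)
    kept = []
    for i, z in enumerate(compressed_updates):
        cos = z @ compressed_root / (np.linalg.norm(z) * root_norm + 1e-12)
        if np.linalg.norm(z) <= norm_factor * root_norm and cos >= cos_thresh:
            kept.append(i)
    return kept

# Tiny demo: nine honest updates plus one large-norm "poisoned" update.
d, k = 20_000, 2_000                              # roughly 10x compression
P = make_projection(d, k)
rng = np.random.default_rng(1)
root = rng.standard_normal(d)                     # trusted root update
clients = [root + 0.1 * rng.standard_normal(d) for _ in range(9)]
clients.append(50.0 * rng.standard_normal(d))     # malicious client
z_root = compress(root, P)
z_clients = [compress(u, P) for u in clients]
print(norm_filter(z_clients, z_root))             # the malicious index 9 is dropped
```

Because the random projection approximately preserves norms and inner products, the checks can run on the much smaller projected vectors, which is the intuition behind combining compression with dimensionality-reduced anomaly detection.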

ePrint: https://eprint.iacr.org/2024/081

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.