Welcome to the resource topic for 2024/671
Title: Exploiting Internal Randomness for Privacy in Vertical Federated Learning
Authors: Yulian Sun, Li Duan, Ricardo Mendes, Derui Zhu, Yue Xia, Yong Li, Asja Fischer
Abstract: Vertical Federated Learning (VFL) is becoming a standard collaborative learning paradigm with various practical applications. Randomness is essential to enhancing privacy in VFL, but introducing too much external randomness often leads to an intolerable performance loss. Instead, as has been demonstrated for other federated learning settings, leveraging internal randomness, as provided by variational autoencoders (VAEs), can be beneficial. However, the resulting privacy has so far not been quantified, nor has the approach been investigated for VFL.
We therefore propose a novel differential privacy estimate, denoted distance-based empirical local differential privacy (dELDP). It allows us to empirically bound the DP parameters of concrete components, quantifying the internal randomness with appropriate distance and sensitivity metrics. We apply dELDP to investigate the DP of VAEs and observe values up to ε ≈ 6.4 with δ = 2⁻³². Moreover, to link the dELDP parameters to the privacy of full VFL systems that include VAEs in practice, we conduct comprehensive experiments on robustness against state-of-the-art privacy attacks. The results illustrate that the VAE-based system is effective against feature reconstruction attacks and outperforms other privacy-enhancing methods for VFL, especially when the adversary holds 75% of the features in a label inference attack.
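As a small explanation resource (not taken from the paper's code): the "internal randomness" of a VAE comes from the reparameterization step, where fresh Gaussian noise is drawn on every forward pass of the encoder. The sketch below illustrates this in PyTorch; class names, layer sizes, and the VFL framing are illustrative assumptions, and the noise shown here is presumably what dELDP is designed to quantify.

```python
# Minimal sketch: where a VAE's internal randomness enters.
# The encoder outputs (mu, log_var); sampling z = mu + sigma * eps injects
# Gaussian noise into the representation that a party would share in a
# split/VFL setting. Names and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, in_dim=32, latent_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu_head = nn.Linear(64, latent_dim)
        self.logvar_head = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, log_var = self.mu_head(h), self.logvar_head(h)
        # Reparameterization trick: eps is the internal randomness.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps
        return z, mu, log_var

if __name__ == "__main__":
    enc = VAEEncoder()
    x = torch.randn(4, 32)
    z1, _, _ = enc(x)
    z2, _, _ = enc(x)
    # Same input, different latent codes on each pass.
    print((z1 - z2).abs().mean())
```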
ePrint: https://eprint.iacr.org/2024/671
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.