Welcome to the resource topic for 2023/1665
Title: Model Stealing Attacks On FHE-based Privacy-Preserving Machine Learning through Adversarial Examples
Authors: Bhuvnesh Chaturvedi, Anirban Chakraborty, Ayantika Chatterjee, Debdeep Mukhopadhyay
Abstract: Classic MLaaS solutions suffer from privacy-related risks, since the user is required to send unencrypted data to the server hosting the MLaaS. To alleviate this problem, a thriving line of research has emerged, called Privacy-Preserving Machine Learning (PPML) or secure MLaaS, which uses cryptographic techniques to preserve the privacy of both the client's input and the server's output. However, these implementations do not take into consideration the possibility that known attacks in classic MLaaS settings transfer to PPML settings. In this paper, we demonstrate that existing model-extraction attacks based on adversarial examples can be transferred to PPML settings, owing to the relaxed constraints on the adversary's abilities. We show a working example of an end-to-end attack on an image-processing application built using a popular FHE-based framework, namely Concrete-ML. We successfully create a cloned model with just 5,000 queries, which is, in fact, 10× fewer than the size of the victim model's training set, while incurring only a 7% loss in accuracy. Further, we incorporate a well-known defense strategy against such attacks and show that our attack is still able to clone the model. Finally, we evaluate the different defense rationales that exist in the literature and observe that such model-stealing attacks are difficult to prevent in secure MLaaS settings.
ePrint: https://eprint.iacr.org/2023/1665
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.
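
To get the discussion started, here is a minimal sketch of the kind of substitute-model extraction with Jacobian-based dataset augmentation that the abstract alludes to. This is not the paper's code: the victim oracle, seed set size, query budget, and substitute architecture below are illustrative placeholders, and the victim is modeled as a plain black-box prediction function rather than an FHE-backed Concrete-ML service. The point of transferability is that, from the attacker's side, the only difference in the PPML setting is that queries are encrypted before being sent.

```python
# A minimal sketch (not the paper's implementation) of Papernot-style
# substitute-model extraction: label a small seed set via the victim's
# prediction API, train a local substitute, then use Jacobian-based
# dataset augmentation to craft further queries near the decision boundary.
# The victim here is a hypothetical local stand-in; in the paper's setting
# it would be an FHE-backed Concrete-ML service answering encrypted queries.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# --- Hypothetical victim oracle (stand-in for the FHE-hosted model) ---------
digits = load_digits()
X, y = digits.data.astype(np.float32) / 16.0, digits.target
victim = LogisticRegression(max_iter=2000).fit(X[:1500], y[:1500])

def victim_predict(queries: np.ndarray) -> np.ndarray:
    """Black-box label oracle; only hard labels are returned."""
    return victim.predict(queries)

# --- Local substitute model --------------------------------------------------
substitute = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

def train_substitute(Xs, ys, epochs=30):
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    Xt, yt = torch.tensor(Xs), torch.tensor(ys, dtype=torch.long)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(substitute(Xt), yt).backward()
        opt.step()

def jacobian_augment(Xs, lam=0.1):
    """Perturb each point along the sign of the gradient of the predicted
    class score w.r.t. the input (Jacobian-based dataset augmentation)."""
    Xt = torch.tensor(Xs, requires_grad=True)
    scores = substitute(Xt)
    top = scores.gather(1, scores.argmax(dim=1, keepdim=True)).sum()
    top.backward()
    return np.clip(Xs + lam * np.sign(Xt.grad.numpy()), 0.0, 1.0)

# --- Extraction loop: seed queries, then a few augmentation rounds -----------
seed = X[1500:1550]                      # small unlabeled seed set
Xq, yq = seed, victim_predict(seed)      # first batch of victim queries
for _ in range(4):
    train_substitute(Xq, yq)
    Xnew = jacobian_augment(Xq)          # synthetic points near the boundary
    Xq = np.concatenate([Xq, Xnew])
    yq = np.concatenate([yq, victim_predict(Xnew)])

train_substitute(Xq, yq)
with torch.no_grad():
    preds = substitute(torch.tensor(X[1550:])).argmax(dim=1).numpy()
print("clone agreement with victim on held-out data:",
      (preds == victim.predict(X[1550:])).mean())
```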