Welcome to the resource topic for 2025/1899
Title: CoupledNets: Resisting Feature Snooping Attacks on Neural Processing Units through Noise Injection into Models
Authors: Sachintha Kavishan Jayarathne, Seetal Potluri
Abstract: Feature snooping has been shown to be highly effective for stealing cost-sensitive models executing on neural processing units. Existing model obfuscation defenses protect the weights directly, but do not protect the intermediate features, which indirectly carry information about the weights. This paper proposes CoupledNets, the first model obfuscation defense that protects the intermediate features during inference. The obfuscation is performed during the training phase by injecting noise, customized around the theme of neuron coupling, so as to make cryptanalysis mathematically impossible during the inference phase. Implemented across a wide range of neural network architectures and datasets, CoupledNets demonstrated, on average, a greater than 80% drop in the accuracy of the obfuscated model, with little impact on functional accuracy and training times.
ePrint: https://eprint.iacr.org/2025/1899
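As a starter resource, here is a minimal PyTorch sketch of the general training-time noise-injection idea described in the abstract. The coupling scheme below (antisymmetric noise on randomly paired neurons) is purely a hypothetical illustration, not the paper's actual CoupledNets construction; the `CoupledNoise` module and all its parameters are assumptions made for this example.

```python
# Hypothetical sketch only: NOT the paper's CoupledNets construction.
# Illustrates the general idea of injecting noise into coupled neuron
# pairs during training so that individual intermediate features are
# perturbed while the network learns to preserve functional accuracy.
import torch
import torch.nn as nn

class CoupledNoise(nn.Module):
    """Adds antisymmetric noise to randomly coupled neuron pairs.

    Each coupled pair (i, j) receives (+eps, -eps), so the noise cancels
    within a pair while each individual feature remains perturbed.
    """
    def __init__(self, width: int, sigma: float = 0.1):
        super().__init__()
        assert width % 2 == 0, "width must be even to form neuron pairs"
        self.sigma = sigma
        # Fixed random pairing of neurons (a hypothetical coupling scheme).
        self.register_buffer("perm", torch.randperm(width))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:           # noise is injected only while training
            return x
        eps = self.sigma * torch.randn(x.shape[0], x.shape[1] // 2,
                                       device=x.device)
        noise = torch.cat([eps, -eps], dim=1)  # antisymmetric pair noise
        return x + noise[:, self.perm]         # scatter onto coupled pairs

# Usage: drop the module after a hidden layer of an ordinary classifier.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    CoupledNoise(256),                 # training-time feature obfuscation
    nn.Linear(256, 10),
)
model.train()
out = model(torch.randn(8, 784))       # hidden features are perturbed here
```

The antisymmetric pairing makes the injected noise cancel within each coupled pair while individual activations stay noisy; whether this resembles the paper's actual coupling mechanism is speculation until the authors publish details or code.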
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.