[Resource Topic] 2021/167: Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

Welcome to the resource topic for 2021/167

Title: Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware

Authors: Seetal Potluri, Aydin Aysu

Abstract:

Stealing trained machine learning (ML) models is a new and growing concern due to the model’s development cost. Existing work on ML model extraction either applies a mathematical attack or exploits hardware vulnerabilities such as side-channel leakage. This paper shows a new style of attack, for the first time, on ML models running on embedded devices by abusing the scan-chain infrastructure. We illustrate that having coarse-grained scan-chain access to non-linear layer outputs is sufficient to steal ML models. To that end, we propose a novel small-signal analysis inspired attack that applies small perturbations to the input signals, identifies the quiescent operating points, and selectively activates certain neurons. We then couple this with a Linear Constraint Satisfaction based approach to efficiently extract model parameters such as weights and biases. We conduct and automate our attack on neural network inference topologies defined in earlier works. The results show that our attack outperforms mathematical model extraction proposed in CRYPTO 2020, USENIX 2020, and ICML 2020 by an increase in accuracy of 2^20.7x, 2^50.7x, and 2^33.9x, respectively, and a reduction in queries by 2^6.5x, 2^4.6x, and 2^14.2x, respectively.
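As a related explanation material (not the paper's exact algorithm), here is a minimal Python sketch of the general idea of recovering a neuron's parameters from linear constraints: assuming an attacker can observe a single ReLU neuron's output for chosen inputs (e.g., via coarse-grained scan-chain access), small perturbations around an operating point where the neuron is active yield linear equations y = w·x + b that can be solved for the weights and bias. The names `observe_neuron`, `w_true`, and `b_true` are hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "victim" neuron parameters the attacker wants to recover.
n_inputs = 4
w_true = rng.normal(size=n_inputs)
b_true = rng.normal()

def observe_neuron(x):
    """Simulated coarse-grained observation of the neuron's ReLU output."""
    return max(0.0, float(w_true @ x + b_true))

# Find an operating point where the neuron is active (output > 0), then apply
# small perturbations around it so the ReLU stays in its linear region.
x0 = rng.normal(size=n_inputs)
while observe_neuron(x0) == 0.0:
    x0 = rng.normal(size=n_inputs)

queries, outputs = [], []
for _ in range(8 * n_inputs):
    x = x0 + 0.01 * rng.normal(size=n_inputs)  # small input perturbation
    y = observe_neuron(x)
    if y > 0.0:                                # keep only active (linear) responses
        queries.append(np.append(x, 1.0))      # augment with 1 for the bias term
        outputs.append(y)

# Solve the linear constraints A @ [w; b] = y in the least-squares sense.
A = np.array(queries)
y = np.array(outputs)
params, *_ = np.linalg.lstsq(A, y, rcond=None)
w_est, b_est = params[:-1], params[-1]

print("weight error:", np.max(np.abs(w_est - w_true)))
print("bias error:  ", abs(b_est - b_true))
```

This toy example only covers a single neuron with directly observable output; the paper addresses the harder setting of full network topologies with only coarse-grained access to non-linear layer outputs.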

ePrint: https://eprint.iacr.org/2021/167

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.