[Resource Topic] 2024/913: SoK: Is Neural Network Hardware Vulnerable to Model Reverse Engineering?

Welcome to the resource topic for 2024/913

Title:
SoK: Is Neural Network Hardware Vulnerable to Model Reverse Engineering?

Authors: Seetal Potluri, Farinaz Koushanfar

Abstract:

There has been significant progress over the past seven years in model reverse engineering (RE) for neural network (NN) hardware. Although knowledge in this area has been systematized in an overall sense, the treatment from the hardware perspective has been far from adequate. To bridge this gap, this paper systematically categorizes the types of NN hardware prevalently used in industry and academia, along with the model RE attacks and defenses published in each category. Further, we sub-categorize existing NN model RE attacks based on criteria including the degree of hardware parallelism; threat vectors such as side channels, fault injection, scan-chain attacks, and system-level attacks; the type of asset under attack; the type of NN; and exact versus approximate recovery.

We make important technical observations and identify key open research directions. Subsequently, we discuss the state-of-the-art defenses against NN model RE, identify categorization criteria, and compare the existing works based on these criteria. We note significant qualitative gaps among defenses and recommend important open research directions for protecting NN models. Finally, we discuss limitations of existing work in terms of the types of models for which security evaluations or defenses have been proposed, and pose open problems concerning the protection of practically expensive model IPs.

ePrint: https://eprint.iacr.org/2024/913

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.