Welcome to the resource topic for 2022/1087
Title: I Know What Your Layers Did: Layer-wise Explainability of Deep Learning Side-channel Analysis
Authors: Guilherme Perin, Lichao Wu, Stjepan Picek
Abstract: Masked cryptographic implementations can be vulnerable to higher-order attacks. For instance, deep neural networks have proven effective for second-order profiling side-channel attacks even in a black-box setting (no prior knowledge of masks and implementation details). While such attacks have been successful, no explanations were provided for understanding why a variety of deep neural networks can (or cannot) learn high-order leakages and what the limitations are. In other words, we lack the explainability of how neural network layers combine (or not) unknown and random secret shares, which is a necessary step to defeat, e.g., Boolean masking countermeasures.
In this paper, we use information-theoretic metrics to explain the internal activities of deep neural network layers. We propose a novel methodology for the explainability of deep learning-based profiling side-channel analysis to understand the processing of secret masks. Inspired by the Information Bottleneck theory, our explainability methodology uses perceived information to explain and detect the different phenomena that occur in deep neural networks, such as fitting, compression, and generalization. We provide experimental results on masked AES datasets showing where, what, and why deep neural networks learn relevant features from input trace sets while compressing irrelevant ones, including noise. This paper opens new perspectives for the understanding of the role of different neural network layers in profiling side-channel attacks.
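As a piece of explanation material (not taken from the paper itself): the perceived-information metric mentioned in the abstract is commonly estimated in deep learning SCA from a model's predicted label probabilities on a held-out trace set. Below is a minimal NumPy sketch of that standard empirical estimator; the function name, the uniform-label assumption, and the usage example are mine, and the paper's layer-wise procedure may differ in detail.

```python
import numpy as np

def perceived_information(pred_probs, labels, num_classes=256):
    """Empirical perceived-information estimate, in bits.

    pred_probs : (N, num_classes) predicted probabilities for N validation traces.
    labels     : (N,) correct intermediate-value label for each trace.
    Assumes uniformly distributed labels, so H(Y) = log2(num_classes).
    """
    eps = 1e-36  # avoid log2(0) when the model assigns zero probability to the true label
    p_correct = pred_probs[np.arange(len(labels)), labels]
    # PI(Y; L) ~= H(Y) + (1/N) * sum_i log2 p_model(y_i | l_i)
    return np.log2(num_classes) + np.mean(np.log2(p_correct + eps))

# Hypothetical usage: in a layer-wise study, one could attach a small read-out
# head to each layer's activations and compare the resulting PI estimates to see
# at which depth information about the (masked) intermediate value survives.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(256), size=1000)   # stand-in for model predictions
    labels = rng.integers(0, 256, size=1000)
    print(f"PI estimate: {perceived_information(probs, labels):.3f} bits")
```

With random predictions as above, the estimate sits near 0 bits; a model that has learned to combine the shares of a Boolean-masked value would push it toward log2(256) = 8 bits.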
ePrint: https://eprint.iacr.org/2022/1087
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.