Welcome to the resource topic for 2022/852
Title: Making Biased DL Models Work: Message and Key Recovery Attacks on Saber Using Amplitude-Modulated EM Emanations
Authors: Ruize Wang, Kalle Ngo, and Elena Dubrova
Abstract: Creating a good deep learning (DL) model is an art which requires expertise in DL and a large set of labeled data for training neural networks. Neither is readily available. In this paper, we introduce a method which enables us to achieve good results with bad DL models. We use simple multilayer perceptron (MLP) networks, trained on a small dataset, which make strongly biased predictions if used without the proposed method. The core idea is to extend the attack dataset so that at least one of its traces has the ground-truth label towards which the models are biased. The effectiveness of the presented method is demonstrated by attacking an ARM Cortex-M4 CPU implementation of Saber KEM, a finalist of the NIST post-quantum cryptography standardization project, on an nRF52832 system-on-chip supporting Bluetooth 5, using amplitude-modulated EM emanations. Previous attacks on Saber KEM based on amplitude-modulated EM emanations could not recover its messages with a sufficiently high probability. We recover messages with probability 1 from the profiling device and with probability 0.74 from a different device. Using messages recovered from chosen ciphertexts, we extract the secret key of Saber KEM.
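To make the core idea from the abstract concrete, here is a minimal sketch (not code from the paper) of one way a known-label reference trace could compensate for a biased classifier. It assumes the MLP outputs a score per trace for the bias label and re-thresholds all attack traces against the score of one appended trace whose ground-truth label equals the bias label; the function name and threshold rule are hypothetical illustrations of the idea, not the authors' implementation.

```python
import numpy as np

def recover_labels(scores, ref_score):
    """Re-threshold biased model scores against a reference score.

    scores:    model scores for the attack traces (one per trace)
    ref_score: score of the extra trace whose ground-truth label is
               the label the model is biased towards; it anchors the
               decision boundary instead of a fixed 0.5 cut-off
    """
    return (scores >= ref_score).astype(int)

# Toy usage: a model biased towards label 1 scores everything high,
# so a fixed 0.5 threshold would label every trace as 1. Comparing
# against the known-label reference trace separates the two classes.
attack_scores = np.array([0.91, 0.91, 0.97, 0.95, 0.90])
reference_score = 0.94  # score of the appended known-label trace
print(recover_labels(attack_scores, reference_score))  # -> [0 0 1 1 0]
```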
ePrint: https://eprint.iacr.org/2022/852
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.