[Resource Topic] 2023/006: Exploring multi-task learning in the context of two masked AES implementations

Welcome to the resource topic for 2023/006

Title:
Exploring multi-task learning in the context of two masked AES implementations

Authors: Thomas Marquet, Elisabeth Oswald

Abstract:

This paper investigates different ways of applying multi-task learning in the context of two masked AES implementations (via the ASCADv1 and ASCADv2 databases). We propose novel ideas: jointly using multiple single-task models (also known as multi-target learning), custom layers (enabling the use of multi-task learning without the need for information about the randomness), and hierarchical multi-task models (based on the idea of encoding the hierarchy flow directly into a multi-task learning model). Our work provides comparisons with existing approaches to deep learning, delivers a first attack using multi-task models trained without access to the randomness, and establishes a new best attack for the ASCADv2 dataset.
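As a starting resource, here is a minimal, hypothetical sketch (not the authors' code) of the recombination step that underlies multi-target/multi-task attacks on a Boolean-masked implementation: one model (or task head) outputs a probability distribution over the mask r, another over the masked intermediate s XOR r, and the two are marginalised into scores for the unmasked value s. Array sizes and names are illustrative.

```python
import numpy as np

def recombine_xor(p_masked, p_mask):
    """Marginalise out the Boolean mask: P(s) = sum_r P(s ^ r) * P(r).

    p_masked : distribution over the masked intermediate (s XOR r)
    p_mask   : distribution over the mask r
    Returns a distribution over the unmasked intermediate s.
    """
    n = p_masked.shape[0]
    p_s = np.zeros(n)
    for r in range(n):
        for s in range(n):
            p_s[s] += p_masked[s ^ r] * p_mask[r]
    return p_s

# Toy example over 4 values instead of the 256 values of an AES byte.
rng = np.random.default_rng(0)
p_masked = rng.dirichlet(np.ones(4))  # e.g. softmax output of one task head
p_mask = rng.dirichlet(np.ones(4))    # e.g. softmax output of another head
scores = recombine_xor(p_masked, p_mask)
print(scores.sum())  # a valid distribution: sums to 1
```

In a real attack these per-trace scores would be accumulated (typically as log-likelihoods) over many traces to rank key-byte candidates; the paper's multi-task models produce the two distributions from a single shared network rather than two separate models.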

ePrint: https://eprint.iacr.org/2023/006

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.