[Resource Topic] 2023/611: A Comparison of Multi-task learning and Single-task learning Approaches

Welcome to the resource topic for 2023/611

Title: A Comparison of Multi-task learning and Single-task learning Approaches

Authors: Thomas Marquet, Elisabeth Oswald

Abstract:

In this paper, we provide experimental evidence for the benefits of multi-task learning in the context of masked AES implementations (via the ASCADv1-r and ASCADv2 databases). We develop an approach for comparing single-task and multi-task approaches rather than comparing specific resulting models: we do this by training many models with random hyperparameters (instead of comparing a few highly tuned models). We find that multi-task learning has significant practical advantages that make it an attractive option in the context of device evaluations: the multi-task approach quickly leads to performant networks, in particular in situations where knowledge of internal randomness is not available during training.
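The comparison methodology described in the abstract (sampling many random hyperparameter configurations per approach and comparing the resulting score distributions, rather than a few tuned models) can be sketched as follows. This is an illustrative sketch only: the search space, the `evaluate_approach` helper, and the dummy training function are hypothetical stand-ins, not the authors' actual setup.

```python
import random

# Hypothetical hyperparameter search space (illustrative, not from the paper).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 5e-4, 1e-3],
    "batch_size": [128, 256, 512],
    "dense_units": [64, 128, 256],
}

def sample_hyperparameters(rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def evaluate_approach(train_and_evaluate, n_models=100, seed=0):
    """Train many randomly configured models and collect their scores,
    so two approaches are compared as score distributions rather than
    as two individually tuned models."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_models):
        config = sample_hyperparameters(rng)
        scores.append(train_and_evaluate(config))
    return scores

# Placeholder for real model training (e.g. a network attacking ASCAD
# traces under a single-task or multi-task head); returns a dummy score.
def dummy_train_and_evaluate(config):
    return random.random()

if __name__ == "__main__":
    single_task_scores = evaluate_approach(dummy_train_and_evaluate)
    multi_task_scores = evaluate_approach(dummy_train_and_evaluate)
    print(len(single_task_scores), len(multi_task_scores))
```

In a real evaluation, `dummy_train_and_evaluate` would be replaced by training a single-task or multi-task network on the traces and reporting an attack metric (e.g. key rank or guessing entropy), and the two score distributions would then be compared.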

ePrint: https://eprint.iacr.org/2023/611

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.