Welcome to the resource topic for 2022/1737
Regularizers to the Rescue: Fighting Overfitting in Deep Learning-based Side-channel Analysis
Authors: Azade Rezaeezade, Lejla Batina

Abstract:
Despite the considerable achievements of deep learning-based side-channel analysis, overfitting remains a significant obstacle to finding optimized neural network models. This issue is not unique to the side-channel domain: regularization techniques are popular remedies for overfitting and have long been used across many domains.
At the same time, work in the side-channel domain shows only sporadic use of regularization techniques, and no systematic study has investigated their effectiveness. In this paper, we investigate the effectiveness of regularization by applying four powerful and easy-to-use techniques to six combinations of datasets, leakage models, and deep-learning topologies.
The investigated techniques are L_1, L_2, dropout, and early stopping. Our results show that while all of these techniques can improve performance in many cases, L_1 and L_2 are the most effective. Finally, if training time matters, early stopping is the best choice.
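As a starting resource: the four regularizers named in the abstract can be illustrated on a toy model without any deep-learning framework. The sketch below is not the paper's code; it trains logistic regression on synthetic data, with the dataset, hyperparameters, and `patience` value being illustrative assumptions, to show where each regularizer enters the training loop.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): logistic regression
# trained with the four regularizers the abstract names:
# L1 and L2 penalties, dropout, and early stopping.
rng = np.random.default_rng(0)

# Toy binary data: two informative features plus noise (an assumption,
# standing in for real side-channel traces and a leakage model).
n, d = 600, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:400], y[:400], X[400:], y[400:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(l1=0.0, l2=0.0, dropout=0.0, patience=None, epochs=200, lr=0.5):
    w = np.zeros(d)
    best_w, best_va, bad = w.copy(), np.inf, 0
    for _ in range(epochs):
        Xd = X_tr
        if dropout > 0:  # inverted dropout on the inputs
            mask = rng.random(X_tr.shape) >= dropout
            Xd = X_tr * mask / (1 - dropout)
        p = sigmoid(Xd @ w)
        grad = Xd.T @ (p - y_tr) / len(y_tr)
        grad += l1 * np.sign(w) + 2 * l2 * w  # L1 and L2 penalty gradients
        w -= lr * grad
        va = log_loss(w, X_va, y_va)
        if va < best_va:
            best_va, best_w, bad = va, w.copy(), 0
        elif patience is not None:
            bad += 1
            if bad >= patience:  # early stopping on validation loss
                break
    return best_w, best_va

w, va = train(l1=1e-3, l2=1e-3, dropout=0.1, patience=10)
print(f"validation loss: {va:.3f}")
```

In a deep-learning framework the same ideas map onto built-in components (e.g. kernel regularizers, dropout layers, and an early-stopping callback monitoring validation loss), which is how one would apply them to the network topologies studied in the paper.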
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.