[Resource Topic] 2016/465: Can Large Deviation Theory be Used for Estimating Data Complexity?

Welcome to the resource topic for 2016/465

Title:
Can Large Deviation Theory be Used for Estimating Data Complexity?

Authors: Subhabrata Samajder, Palash Sarkar

Abstract:

Statistical analyses of attacks on block ciphers have mostly used normal approximations. A few recent works have proposed doing away with normal approximations and instead using Chernoff and Hoeffding bounds to obtain rigorous bounds on the data complexities of several attacks. This opens up the question of whether even better general bounds can be obtained using the statistical theory of large deviations. In this note we examine this question. Our conclusion is that while in theory this is indeed possible, in general obtaining meaningful expressions for data complexity presents several difficulties. This leaves open the question of whether this can be done for specific attacks.
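
As a quick illustration of the kind of bound the abstract refers to, here is a minimal sketch (not from the paper; the function name, bias value, and error probability are illustrative assumptions) of how Hoeffding's inequality turns a distinguisher's bias into a rigorous data-complexity estimate:

```python
import math

def hoeffding_data_complexity(eps, beta):
    # Hypothetical helper, not from the paper.
    # Hoeffding's inequality for N i.i.d. samples in [0, 1]:
    #   Pr[|empirical mean - true mean| >= eps] <= 2 * exp(-2 * N * eps**2)
    # Requiring the right-hand side to be at most beta and solving for N
    # gives N >= ln(2 / beta) / (2 * eps**2).
    return math.ceil(math.log(2.0 / beta) / (2.0 * eps ** 2))

# Example: a statistic with bias 2^-10 and target error probability 1%;
# the bound gives roughly 2.8 million samples.
print(hoeffding_data_complexity(eps=2 ** -10, beta=0.01))
```

Replacing the Hoeffding tail with a sharper large-deviation rate function would, in principle, lower such estimates; the note's point is that extracting a usable closed-form expression for the data complexity from that rate function is where the difficulty lies.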

ePrint: https://eprint.iacr.org/2016/465

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.