[Resource Topic] 2023/1465: Too Close for Comfort? Measuring Success of Sampled-Data Leakage Attacks Against Encrypted Search

Welcome to the resource topic for 2023/1465

Title: Too Close for Comfort? Measuring Success of Sampled-Data Leakage Attacks Against Encrypted Search

Authors: Dominique Dittert, Thomas Schneider, Amos Treiber

Abstract:

The well-defined information leakage of Encrypted Search Algorithms (ESAs) is predominantly analyzed by crafting so-called leakage attacks. These attacks use adversarially known auxiliary data and the observed leakage to attack an ESA instance built on a user’s data. Known-data attacks require the auxiliary data to be a subset of the user’s data. In contrast, sampled-data attacks merely rely on auxiliary data that is, in some sense, statistically close to the user’s data and hence reflect a much more realistic attack scenario in which the auxiliary data stems from a publicly available data source rather than from the user’s private data.
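
To make the distinction concrete, here is a minimal Python sketch of how the adversary obtains auxiliary data in the two attack models. The corpora, names, and sizes are purely hypothetical placeholders, not data or code from the paper:

```python
import random

# Hypothetical corpora (names and sizes are illustrative, not from the paper).
user_docs = [f"user_doc_{i}" for i in range(1000)]        # the user's private documents
public_corpus = [f"public_doc_{i}" for i in range(5000)]  # e.g., a public text corpus

# Known-data attack: the auxiliary data is an actual subset of the user's data.
known_aux = random.sample(user_docs, k=100)
assert set(known_aux) <= set(user_docs)

# Sampled-data attack: the auxiliary data is merely drawn from a public source
# assumed to be statistically close to the user's data.
sampled_aux = random.sample(public_corpus, k=100)
assert not set(sampled_aux) & set(user_docs)
```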

Unfortunately, it is unclear what “statistically close” means in the context of sampled-data attacks. This leaves open how to measure whether data is close enough for attacks to become a considerable threat. Furthermore, sampled-data attacks have so far not been evaluated in the more realistic attack scenario where the auxiliary data stems from a source different from the one emulating the user’s data. Instead, auxiliary and user data have been emulated by splitting data from a single source into distinct training and testing sets. This leaves open whether, and how well, attacks work in this scenario with data from different sources.
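
The evaluation gap can likewise be sketched in a few lines. Assuming placeholder corpora (the abstract does not name the actual data sources), the same-source split used in prior work and the open cross-source setting look roughly like this:

```python
import random

# Placeholder corpora standing in for two real-world data sources.
source_a = [f"a_doc_{i}" for i in range(2000)]
source_b = [f"b_doc_{i}" for i in range(2000)]

def same_source_split(corpus, frac=0.5):
    """Prior evaluations: shuffle one corpus and split it into an
    auxiliary (training) half and a user (testing) half."""
    docs = corpus[:]
    random.shuffle(docs)
    cut = int(len(docs) * frac)
    return docs[:cut], docs[cut:]

# Same-source setting used so far in the literature.
aux_data, user_data = same_source_split(source_a)

# Cross-source setting the abstract identifies as unevaluated:
# auxiliary data from a different (but intuitively similar) source.
aux_cross, user_cross = source_b, source_a
```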

In this work, we address these open questions by providing a measurable metric for statistical closeness in encrypted keyword search. Using real-world data, we show a clear exponential relation between our metric and attack performance. We identify new datasets that are intuitively similar yet stem from different sources, and we find that they are not “close enough” for sampled-data attacks to perform well. Furthermore, we re-evaluate sampled-data keyword attacks under varying evaluation parameters and find that some evaluation choices can significantly affect the results.
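
As a rough illustration of what a closeness measure over keyword data could look like, the sketch below computes the total variation distance between the keyword frequency distributions of two document collections. This is a common textbook distance chosen purely for illustration; it is not claimed to be the metric defined in the paper, and the vocabulary and documents are made up:

```python
from collections import Counter

def keyword_distribution(docs, vocabulary):
    """Relative frequency of each keyword over a document collection."""
    counts = Counter(w for doc in docs for w in doc if w in vocabulary)
    total = sum(counts.values()) or 1
    return {w: counts[w] / total for w in vocabulary}

def total_variation(p, q):
    """TV distance between two distributions over the same vocabulary."""
    return 0.5 * sum(abs(p[w] - q[w]) for w in p)

# Toy data: each document is a list of keywords.
vocab = {"invoice", "meeting", "password", "budget"}
user_docs = [["invoice", "budget", "meeting"], ["password", "invoice"]]
aux_docs = [["meeting", "budget"], ["budget", "invoice", "meeting"]]

p = keyword_distribution(user_docs, vocab)
q = keyword_distribution(aux_docs, vocab)
print(f"TV distance: {total_variation(p, q):.3f}")  # smaller = "closer"
```

Under any such metric, a smaller distance would correspond to “closer” auxiliary data and, per the abstract’s findings, to markedly better attack performance.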

ePrint: https://eprint.iacr.org/2023/1465

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.