[Resource Topic] 2021/687: Towards Understanding Practical Randomness Beyond Noise: Differential Privacy and Mixup

Welcome to the resource topic for 2021/687

Title:
Towards Understanding Practical Randomness Beyond Noise: Differential Privacy and Mixup

Authors: Hanshen Xiao, Srinivas Devadas

Abstract:

Information-theoretic privacy relies on randomness. Representatively, Differential Privacy (DP) has emerged as the gold standard to quantify the individual privacy preservation provided by given randomness. However, almost all randomness in existing differentially private optimization and learning algorithms is restricted to noise perturbation. In this paper, we set out to provide a privacy analysis framework to understand the privacy guarantee produced by other randomness commonly used in optimization and learning algorithms (e.g., parameter randomness). We take mixup, a random linear aggregation of inputs, as a concrete example. Our contributions are twofold. First, we develop a rigorous analysis of the privacy amplification provided by mixup, applied either to samples or to updates, where we find that the hybrid structure of mixup and the Laplace Mechanism produces a new type of DP guarantee lying between Pure DP and Approximate DP. Such an average-case privacy amplification can produce tighter composition bounds. Second, both empirically and theoretically, we show that proper mixup comes almost free of utility compromise.
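Since this topic welcomes explanation materials, here is a minimal sketch of the two ingredients named in the abstract: sample-level mixup (a random linear aggregation of inputs) and the Laplace Mechanism. It is an illustration only, assuming NumPy; the function names, the Beta-distributed mixing weight, and the `sensitivity`/`epsilon` parameters are illustrative choices, not the paper's exact construction or analysis.

```python
import numpy as np

def mixup_samples(x, y, alpha=1.0, rng=None):
    """Standard sample-level mixup: randomly pair samples and mix them
    with a Beta(alpha, alpha)-distributed weight (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # random mixing coefficient
    perm = rng.permutation(len(x))          # random pairing of inputs
    x_mix = lam * x + (1 - lam) * x[perm]   # linear aggregation of inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # matching aggregation of labels
    return x_mix, y_mix

def laplace_perturb(v, sensitivity, epsilon, rng=None):
    """Classic Laplace Mechanism: add Laplace noise with scale
    sensitivity / epsilon to each coordinate of v."""
    rng = np.random.default_rng() if rng is None else rng
    return v + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=v.shape)

# Toy usage with hypothetical data: 4 samples, 3 features, one-hot labels.
x = np.random.default_rng(0).normal(size=(4, 3))
y = np.eye(2)[[0, 1, 0, 1]]
x_mix, y_mix = mixup_samples(x, y)
noisy_stat = laplace_perturb(x_mix.mean(axis=0), sensitivity=1.0, epsilon=1.0)
```

The paper studies how the randomness in the first step amplifies the privacy guarantee of the second; the sketch above only shows the mechanics, not the resulting DP bounds.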

ePrint: https://eprint.iacr.org/2021/687

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.