[Resource Topic] 2018/820: Privacy Loss Classes: The Central Limit Theorem in Differential Privacy

Welcome to the resource topic for 2018/820

Privacy Loss Classes: The Central Limit Theorem in Differential Privacy

Authors: David Sommer, Sebastian Meiser, Esfandiar Mohammadi


Quantifying the privacy loss of a privacy-preserving mechanism on potentially sensitive data is a complex and well-researched topic; the de facto standard privacy measures are \varepsilon-differential privacy (DP) and its versatile relaxation, (\varepsilon,\delta)-approximate differential privacy (ADP). Recently, novel variants of (A)DP have focused on giving tighter privacy bounds under continual observation. In this paper we unify many previous works via the \emph{privacy loss distribution} (PLD) of a mechanism. We show that for non-adaptive mechanisms, the privacy loss under sequential composition undergoes a convolution and will converge to a Gauss distribution (the central limit theorem for DP). We derive several relevant insights. First, we can now characterize mechanisms by their \emph{privacy loss class}, i.e., by the Gauss distribution to which their PLD converges, which allows us to give novel ADP bounds for mechanisms based on their privacy loss class. Second, we derive \emph{exact} analytical guarantees for the approximate randomized response mechanism, as well as an \emph{exact} analytical closed formula for the Gauss mechanism that, given \varepsilon, calculates \delta such that the mechanism is (\varepsilon,\delta)-ADP (not an over-approximating bound).
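As a hedged illustration of the kind of closed formula the abstract mentions for the Gauss mechanism: the commonly cited exact expression (the paper's parameterization may differ in details) for the Gaussian mechanism adding N(0, \sigma^2) noise to a sensitivity-\Delta query is \delta(\varepsilon) = \Phi(\Delta/(2\sigma) - \varepsilon\sigma/\Delta) - e^{\varepsilon}\,\Phi(-\Delta/(2\sigma) - \varepsilon\sigma/\Delta), where \Phi is the standard normal CDF. A minimal sketch:

```python
from math import erf, exp, sqrt

def std_normal_cdf(x: float) -> float:
    # Standard normal CDF, expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gauss_adp_delta(eps: float, sigma: float, sensitivity: float = 1.0) -> float:
    """Exact delta such that the Gaussian mechanism with noise scale sigma
    (and query sensitivity `sensitivity`) is (eps, delta)-ADP, using the
    closed-form expression above (an assumption; check against the paper)."""
    a = sensitivity / (2.0 * sigma)        # Delta / (2 sigma)
    b = eps * sigma / sensitivity          # eps sigma / Delta
    return std_normal_cdf(a - b) - exp(eps) * std_normal_cdf(-a - b)
```

For example, `gauss_adp_delta(1.0, 1.0)` gives roughly 0.127, and increasing the noise scale `sigma` (or the target \varepsilon) drives \delta down, as expected; unlike the classical analytic bound for the Gaussian mechanism, this is an exact value rather than an over-approximation.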

ePrint: https://eprint.iacr.org/2018/820

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.