[Resource Topic] 2023/392: Locally Covert Learning

Welcome to the resource topic for 2023/392

Title:
Locally Covert Learning

Authors: Justin Holmgren, Ruta Jawale

Abstract:

The goal of a covert learning algorithm is to learn a function f by querying it, while ensuring that an adversary who sees all queries and their responses is unable to (efficiently) learn any more about f than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across k servers and we only limit what is learnable by any k - 1 colluding servers.
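As an illustration of the local-covertness idea (our sketch, not code from the paper), a query point can be distributed across k servers via XOR secret sharing; the following Python snippet exhaustively checks that the joint view of any k - 1 servers is uniform and therefore independent of the queried point:

```python
from itertools import product
from collections import Counter

def xor_shares(x, rand_bits, k):
    """Split the query point x (a tuple of bits) into k XOR-shares,
    using explicit randomness for the first k-1 shares; the last share
    is chosen so that all k shares XOR back to x."""
    n = len(x)
    free = [tuple(rand_bits[j * n:(j + 1) * n]) for j in range(k - 1)]
    last = list(x)
    for s in free:
        last = [a ^ b for a, b in zip(last, s)]
    return free + [tuple(last)]

k, n = 3, 2
x = (1, 0)  # the point we secretly want to query

# Enumerate all randomness and record the joint view of servers 0 and 2
# (one free share together with the dependent share).
views = Counter()
for r in product([0, 1], repeat=(k - 1) * n):
    sh = xor_shares(x, r, k)
    views[(sh[0], sh[2])] += 1

# Every possible pair of 2-bit shares occurs equally often, i.e. the two
# servers' joint view is uniform and reveals nothing about x.
assert len(views) == 2 ** (2 * n)
assert len(set(views.values())) == 1
```

The same check passes for any choice of x and for any subset of k - 1 servers, which is exactly the property local covertness demands.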

For any constant k, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and for computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of O(\log n)-juntas, and only with k = 2 servers (Ishai et al., Crypto 2019).

Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by k-tuples in which any k - 1 components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with k.
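To make the k-tuple structure concrete, here is a small Python sketch (our illustration, not the paper's algorithm) for the easy special case of a linear Boolean function f(x) = <a, x> mod 2: each coefficient a_i is recovered by XORing f's values over a k-tuple of queries that XORs to the unit vector e_i, while any k - 1 components of each tuple remain jointly uniform.

```python
import secrets

def xor_vec(u, v):
    return [a ^ b for a, b in zip(u, v)]

def shared_queries(x, k):
    """Return a k-tuple of query points XORing to x; any k-1 of them
    are jointly uniform, so k-1 colluding servers learn nothing about x."""
    shares = [[secrets.randbits(1) for _ in x] for _ in range(k - 1)]
    last = list(x)
    for s in shares:
        last = xor_vec(last, s)
    return shares + [last]

def learn_linear(f, n, k):
    """Recover the coefficient vector a of a linear f(x) = <a, x> mod 2,
    querying f only at shared points. Since f is linear,
    f(q_1) ^ ... ^ f(q_k) = f(q_1 ^ ... ^ q_k) = f(e_i) = a_i."""
    a = []
    for i in range(n):
        e_i = [1 if j == i else 0 for j in range(n)]
        qs = shared_queries(e_i, k)  # server j only ever sees qs[j]
        a.append(sum(f(q) for q in qs) % 2)
    return a

secret = [1, 0, 1, 1]
f = lambda q: sum(s * b for s, b in zip(secret, q)) % 2
assert learn_linear(f, len(secret), 3) == secret
```

For general Fourier-sparse f, the Goldreich-Levin-style algorithm instead estimates correlations statistically from many such tuples, and, as noted above, the number of queries needed grows exponentially with k.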

ePrint: https://eprint.iacr.org/2023/392

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.