[Resource Topic] 2024/723: $\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning

Welcome to the resource topic for 2024/723

Title:
$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning

Authors: Harish Karthikeyan, Antigoni Polychroniadou

Abstract:

Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting, where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is One-shot Private Aggregation ($\mathsf{OPA}$), where clients speak only once (or even choose not to speak) per aggregation evaluation. Since every client communicates just once per aggregation, this streamlines the management of dropouts and dynamic client participation, in contrast to state-of-the-art protocols that require multiple rounds per aggregation.
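To make the one-shot flow concrete, below is a minimal toy sketch (not the paper's protocol): each client sends a single masked message, and the server only sums. The trusted-dealer mask setup is purely illustrative; in $\mathsf{OPA}$, mask cancellation comes from a distributed key-homomorphic PRF, with no dealer.

```python
import secrets

# Toy sketch of the one-shot flow (not OPA itself): each client sends a
# single masked message and the server just sums. The "dealer" below is
# for illustration only; in OPA, mask cancellation comes from a
# distributed key-homomorphic PRF, with no trusted dealer.
M = 2**32                       # aggregation modulus
inputs = [7, 11, 3]             # private client inputs

# Illustrative dealer: additive masks summing to 0 mod M.
masks = [secrets.randbelow(M) for _ in inputs[:-1]]
masks.append(-sum(masks) % M)

# Each client speaks exactly once: one message c_i = x_i + r_i mod M.
messages = [(x + r) % M for x, r in zip(inputs, masks)]

# The server aggregates; the masks cancel and only the sum is revealed.
assert sum(messages) % M == sum(inputs) % M  # 21
```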

We initiate the study of $\mathsf{OPA}$ in several ways. First, we formalize the model and present a security definition. Second, we construct $\mathsf{OPA}$ protocols based on the class group, DCR, and LWR assumptions. Third, we demonstrate $\mathsf{OPA}$ with two applications: private stream aggregation and privacy-preserving federated learning. Specifically, $\mathsf{OPA}$ can be used as a key building block to enable privacy-preserving federated learning in which, critically, each client speaks only once. This is a sharp departure from prior multi-round protocols, whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We leverage $\mathsf{OPA}$ to develop a streaming variant named $\mathsf{SOPA}$, which serves as the building block for privacy-preserving federated learning. We use $\mathsf{SOPA}$ to construct logistic regression classifiers for two datasets.
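As a rough illustration of how a secure-sum primitive slots into a federated learning round, the sketch below uses a hypothetical `secure_sum` placeholder standing in for $\mathsf{SOPA}$, plus fixed-point quantization of client gradients; all names and parameters here are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy federated-learning round built on a secure-sum primitive. The
# `secure_sum` below is a plain placeholder for a protocol like SOPA:
# in a real deployment each term would arrive masked, and the server
# would never see an individual client's update.
SCALE, M = 2**16, 2**32

def quantize(v):
    """Fixed-point encode a float vector into Z_M."""
    return np.round(v * SCALE).astype(np.int64) % M

def dequantize(q, n_clients):
    """Recenter to signed values, then decode and average."""
    q = np.where(q > M // 2, q - M, q)
    return q.astype(np.float64) / SCALE / n_clients

def secure_sum(updates):
    # Placeholder for the secure-aggregation protocol.
    return np.sum(updates, axis=0) % M

client_grads = [np.array([0.5, -1.0]), np.array([0.1, 0.2])]
agg = secure_sum([quantize(g) for g in client_grads])
avg_grad = dequantize(agg, len(client_grads))  # ~[0.3, -0.4]
```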

A new distributed key-homomorphic PRF is at the core of our construction of $\mathsf{OPA}$. This key component addresses shortcomings of the DDH- and LWR-based constructions in the work of Boneh et al. (CRYPTO, 2013), making it a contribution of independent interest. Moreover, we present new distributed key-homomorphic PRFs based on the class group, DCR, and LWR assumptions.
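For intuition about key homomorphism, here is a toy DDH-style PRF candidate $F_k(x) = H(x)^k$ in a small prime-order group, in the spirit of Boneh et al. (CRYPTO, 2013). The parameters are deliberately tiny and insecure, and this is a generic sketch, not the distributed construction from the paper.

```python
import hashlib

# Toy parameters: safe prime p = 2q + 1; the PRF ranges over the
# order-q subgroup of quadratic residues mod p. Sizes are NOT secure.
q = 509
p = 2 * q + 1  # 1019, also prime

def H(x: bytes) -> int:
    """Hash x into the order-q subgroup (square a nonzero element)."""
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

def F(k: int, x: bytes) -> int:
    """Key-homomorphic PRF candidate: F_k(x) = H(x)^k mod p."""
    return pow(H(x), k, p)

# Key homomorphism: F_{k1}(x) * F_{k2}(x) = F_{(k1 + k2) mod q}(x),
# which lets per-client PRF masks cancel when the keys sum to zero.
k1, k2, x = 123, 456, b"label-7"
assert (F(k1, x) * F(k2, x)) % p == F((k1 + k2) % q, x)
```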

ePrint: https://eprint.iacr.org/2024/723

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.