Welcome to the resource topic for
**2018/854**

**Title:**

Universal Multi-Party Poisoning Attacks

**Authors:**
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

**Abstract:**

In this work, we demonstrate universal multi-party poisoning attacks that adapt and apply to any multi-party learning process with an arbitrary interaction pattern between the parties. More generally, we introduce and study (k,p)-poisoning attacks in which an adversary controls k \in [m] of the parties, and for each corrupted party P_i, the adversary submits some poisoned data T'_i on behalf of P_i that is still “(1-p)-close” to the correct data T_i (e.g., a 1-p fraction of T'_i is still honestly generated). We prove that for any “bad” property B of the final trained hypothesis h (e.g., h failing on a particular test example or having “large” risk) that has an arbitrarily small constant probability of happening without the attack, there always is a (k,p)-poisoning attack that increases the probability of B from \mu to \mu^{1-p \cdot k/m} = \mu + \Omega(p \cdot k/m). Our attack only uses clean labels, and it is online. More generally, we prove that for any bounded function f(x_1,\dots,x_n) \in [0,1] defined over an n-step random process x = (x_1,\dots,x_n), an adversary who can override each of the n blocks with *even dependent* probability p can increase the expected output by at least \Omega(p \cdot \mathrm{Var}[f(x)]).
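To get intuition for the block-override model in the abstract, here is a minimal toy simulation (not the paper's actual attack, which uses a more careful biasing strategy): the process is n independent Bernoulli(q) blocks, f is the fraction of ones, and a hypothetical adversary who gets to override each block with probability p greedily sets it to 1. The names `run_process` and `estimate` are illustrative, not from the paper.

```python
import random

def run_process(n, q, p, attack, rng):
    """One run of an n-step random process: each block x_i ~ Bernoulli(q).

    If attack is True, the adversary may override each block independently
    with probability p, greedily pushing it toward increasing f.
    Returns f(x_1, ..., x_n) = mean of the blocks, a value in [0, 1].
    """
    xs = []
    for _ in range(n):
        x = 1 if rng.random() < q else 0
        if attack and rng.random() < p:
            x = 1  # greedy override: the choice that maximizes f here
        xs.append(x)
    return sum(xs) / n

def estimate(n, q, p, attack, trials, seed):
    """Monte-Carlo estimate of E[f(x)] with or without the attack."""
    rng = random.Random(seed)
    return sum(run_process(n, q, p, attack, rng) for _ in range(trials)) / trials
```

In this toy setting the attacked mean is q + p(1-q) versus q without the attack, so e.g. with q = 0.5 and p = 0.2 the expected output rises by about 0.1, comfortably above the theorem's general \Omega(p \cdot \mathrm{Var}[f(x)]) lower bound (which must hold for every bounded f, not just this easy one).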

**ePrint:**
https://eprint.iacr.org/2018/854

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

**Example resources include:**
implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.