I’m trying to understand the advantage of the hypercube MPC-in-the-head with N “main parties” and dimension D, compared to repeating an N-party protocol D times with N-1 shares in each repetition derived from a random seed.
Soundness: should be 1/N^D in both cases
Computation: With the hypercube, it seems to be O(N^D) for the prover (to construct the main party shares), then O(ND) for the verifier. With repetition, it would be O(ND) for both.
Communication: Hypercube-MitH seems to send O(D log N) communication via the “sibling path” approach. With repetition, one could still use a random seed for all but one party, so this would also be O(D log N) seeds and shares.
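To make the apparent tie concrete, here is a quick Python tally under my reading of the two settings; the parameter choices (N = 4, D = 4) and the unit-cost accounting are my own assumptions:

```python
from math import log2

# Assumed parameters: N parties per dimension/repetition, D dimensions/repetitions.
N, D = 4, 4

# Soundness error: both settings let a cheating prover survive with probability (1/N)^D.
soundness_err_hypercube = (1 / N) ** D
soundness_err_repetition = (1 / N) ** D

# Seeds revealed via sibling paths: log2 of the total leaf count in both cases.
seeds_hypercube = log2(N ** D)   # one path in a single big GGM tree
seeds_repetition = D * log2(N)   # one path per repetition, D repetitions

assert soundness_err_hypercube == soundness_err_repetition
assert seeds_hypercube == seeds_repetition  # both come to 8 seeds' worth here
```

On this accounting the two settings look identical, which is exactly what prompted the question.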
Am I misunderstanding something, or is the advantage in the constant factors?
The way I’ve found it helpful to think about this is to keep the number of simulated parties separate in the two settings: the Prover simulates n parties per dimension in the Hypercube setting (D·n in total), and N in the normal one. To achieve the same soundness error, we then need to pick D such that N = n^D.
Taking N = 256 and n = 2 as an example (so D = 8), in the normal setting the Prover needs to simulate 256 online phases of the MPC, whereas the Hypercube Prover only needs to simulate D·n = 16 online phases.
It is true that both settings have to obtain 256 seeds at the beginning, so there is no computational advantage in the “offline” phase of seed generation. The advantage only kicks in for the MPC protocol evaluation.
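A short sanity check of the counts above (Python; N, n, D as in the example):

```python
from math import log2

N, n = 256, 2
D = round(log2(N) / log2(n))   # N = n**D, so D = 8 here
assert n ** D == N

leaf_seeds = N                 # both settings expand 256 leaf seeds up front
normal_online_sims = N         # the normal Prover simulates all 256 parties online
hypercube_online_sims = D * n  # the Hypercube Prover simulates n parties per dimension

print(normal_online_sims, hypercube_online_sims)  # -> 256 16
```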
Thanks for the response. I was still a bit confused, because there is no reason to keep N the same for the normal and hypercube settings. That is, a normal MPC-in-the-head could use N = 2 and repeat 8 times, achieving the same soundness error as the hypercube with N = 256, n = 2, D = 8, while also only simulating 16 online phases.
More precisely, if the hypercube uses N=n^D parties and T repetitions, while a normal protocol uses N' parties and T' repetitions, we get n^{DT}=N'^{T'} if they achieve the same soundness. I could always take N'=n and T'=DT to do the same amount of online computation, I think.
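A quick check that the choice N' = n, T' = DT matches both the soundness error and the online work; the concrete values (n = 2, D = 8, T = 4) are ones I picked for illustration:

```python
n, D, T = 2, 8, 4        # hypercube: n**D parties per repetition, T repetitions
Np, Tp = n, D * T        # matched normal protocol: N' = n parties, T' = D*T repetitions

# Same soundness: n^{DT} == N'^{T'}
assert n ** (D * T) == Np ** Tp

# Same online work: the hypercube simulates T*D*n party runs, the normal protocol T'*N'
assert T * D * n == Tp * Np   # 64 == 64
```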
I convinced myself that the advantage comes from the parts of the communication that need to be sent directly rather than derived from seeds: these parts can be shared across dimensions in the hypercube, but not across repetitions in the normal setting. So with the same computational budget, communication goes down. I know that’s the opposite perspective from the paper and the talk at Eurocrypt, but it makes more sense to me.
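For my own bookkeeping, here is how I tally the direct (non-seed-derived) communication; the sizes λ and S are placeholders I made up, and the model — one shared direct part in the hypercube versus one per repetition — is my reading of the argument above, not something taken from the paper:

```python
from math import log2

lam = 128    # assumed seed size in bits
S = 1024     # assumed size of the material that must be sent directly
             # (e.g. the auxiliary share of the unopened party), in bits
n, D = 2, 8  # n = 2 parties per dimension, D = 8 dimensions, so n**D = 256 leaves

# Hypercube: one sibling path in the big GGM tree, and the direct part is
# shared across all D dimensions, so it is sent only once.
hypercube_comm = D * log2(n) * lam + S

# Repetition (n parties, D repetitions): the same total number of revealed
# seeds, but each repetition carries its own independent direct part.
repetition_comm = D * (log2(n) * lam + S)

assert hypercube_comm < repetition_comm
assert repetition_comm - hypercube_comm == (D - 1) * S   # saving: (D-1)*S bits
```

With these (made-up) sizes the seed communication is identical, and the entire gap is the (D − 1)·S of direct material that the repeated protocol cannot amortize.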