# [Resource Topic] 2023/455: Tri-State Circuits: A Better Model of Computation for Garbling

Welcome to the resource topic for 2023/455

Title:
Tri-State Circuits: A Better Model of Computation for Garbling

Authors: David Heath, Vladimir Kolesnikov, Rafail Ostrovsky

Abstract:

We introduce and formalize tri-state circuits (TSCs). TSCs form a natural model of computation that, to our knowledge, has not been considered by theorists. The model captures a surprising combination of simplicity and power. TSCs are simple in the sense that they allow only three wire values (0, 1, and undefined, denoted \mathcal{Z}) and three types of fan-in-two gates; they are powerful in the sense that their statically placed gates fire (execute) eagerly as their inputs become defined, implying an order of execution that depends on the input. This dynamic behavior is sufficient to efficiently evaluate RAM programs.
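To make the eager-firing semantics concrete, here is a minimal, illustrative Python sketch of a tri-state-style evaluator. The gate set shown (XOR, AND, and a tri-state buffer) is an assumption chosen for illustration and may differ from the paper's exact three gate types; the point is only that gates with statically placed wiring fire as soon as their inputs become defined, so the firing order depends on the data.

```python
# Illustrative sketch only: a tiny eager evaluator over three wire
# values (0, 1, and undefined Z). The gate set is hypothetical and
# not taken from the paper.

Z = "Z"  # the undefined wire value

def xor_gate(a, b):
    # Defined only once both inputs are defined.
    return Z if (a is Z or b is Z) else a ^ b

def and_gate(a, b):
    return Z if (a is Z or b is Z) else a & b

def buffer_gate(data, ctrl):
    # Tri-state buffer: passes `data` through while `ctrl` = 1;
    # otherwise the output remains undefined.
    return data if (ctrl == 1 and data is not Z) else Z

def evaluate(gates, wires):
    """Repeatedly fire any gate whose output becomes defined.
    Which gates fire, and in what order, depends on which wires
    become defined first -- i.e., on the input data itself."""
    fired, progress = set(), True
    while progress:
        progress = False
        for i, (op, a, b, out) in enumerate(gates):
            if i in fired:
                continue
            val = op(wires[a], wires[b])
            if val is not Z:
                wires[out] = val
                fired.add(i)
                progress = True
    return wires

# Example: wire "x" reaches "out" only because selector "s" is 1.
wires = {"x": 1, "s": 1, "out": Z}
gates = [(buffer_gate, "x", "s", "out")]
print(evaluate(gates, wires)["out"])  # prints 1
```

Note how an undefined control wire simply leaves the buffer's output at \mathcal{Z}: no gate fires, and evaluation of that branch is skipped rather than performed, which is the intuition behind avoiding a full memory scan per access.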

We construct a TSC that emulates T steps of any RAM program and that has only O(T \cdot \log^3 T \cdot \log \log T ) gates. Contrast this with the reduction to Boolean circuits, where the best known approach scans all of memory on each access, incurring quadratic cost. Thus, while simple, TSCs have expressive power that closely approximates the RAM model.

We connect TSCs with Garbled Computation (GC). TSCs capture the power of garbling far better than Boolean Circuits, offering a significantly more expressive model of computation while leaving the per-gate cost of evaluation essentially unchanged.

Our most exciting explicit result is authenticated Garbled RAM (GRAM), an approach to constant-round maliciously-secure 2PC of RAM programs. Let \lambda denote the computational security parameter. We first extend authenticated garbling to TSCs; then, by simply plugging in our TSC-based RAM, we immediately obtain authenticated GRAM running at cost O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda), outperforming all prior work, including even GRAMs in the semi-honest setting. (Prior GRAMs measure per-access cost for a memory storing large w = \Omega(\log^2 T)-bit words; by this metric, our cost is O(w\cdot \log T \cdot \log\log T \cdot \lambda).)

As another highlight of the power of TSCs, we give a simple semi-honest garbling of TSCs based only on one-way functions. This yields standard-assumption-based GRAM at cost O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda), outperforming the best prior GRAM in this setting by more than a factor of \lambda.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.