
High-Level Overview

A summary of Binius's steps, from the verifier's perspective

Here, we present a high-level overview of our backend. Its stages reduce an input M3 instance into progressively simpler mathematical claims, culminating in claims about the evaluations of certain multilinear polynomials.

In this overview, we take the verifier's perspective, and not the prover's. The verifier's view attests to a protocol's soundness, while the prover's view attests to its completeness.

Interaction, Fiat–Shamir, and Reductions

Binius operates within the Fiat–Shamir paradigm for noninteractive proofs. Our protocol proceeds through multiple broad phases, each encompassing multiple rounds of prover–verifier interaction. As is standard practice, our protocol replaces interactions with invocations of the random oracle.

We achieve this using a transcript. The transcript tracks all data sent by the prover to the verifier; it hashes this data, automating the calculation of the verifier's challenges. During verification, the verifier uses the transcript to absorb the prover's data and produce the correct challenge sequence.
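To make this concrete, here is a minimal transcript sketch in Python. The hash function, labels, and serialization below are illustrative assumptions, not Binius's actual transcript format; the point is only that both parties absorb the same bytes and therefore derive the same challenges.

```python
# A minimal Fiat-Shamir transcript sketch (illustrative only).
import hashlib

class Transcript:
    def __init__(self):
        self.state = hashlib.sha256()

    def absorb(self, data: bytes):
        # The prover writes this data; the verifier later absorbs the same bytes.
        self.state.update(data)

    def challenge(self, label: bytes) -> int:
        # Derive a pseudorandom challenge from everything absorbed so far.
        h = self.state.copy()
        h.update(label)
        return int.from_bytes(h.digest(), "big")

# Both parties run the same sequence of absorbs, hence derive the same challenges.
t = Transcript()
t.absorb(b"commitment bytes go here")
r = t.challenge(b"zerocheck-point")
```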

Our protocol's phases are essentially cryptographic reductions. Each phase reduces the truth of claims of one sort to the truth of claims of a simpler sort.

The M3 Instance

Before beginning any given Binius proof, the prover and verifier must formalize the structure of the statement being proved. Binius operates in the M3 arithmetization paradigm, as we've already explained. For our purposes now, the prover and verifier must agree on the instance at hand—that is, on its tables, constraints, channels and flushing rules. As we will see shortly, it is not difficult to reexpress all of these objects in mathematical terms, using objects like multilinear polynomials, multisets and vanishing claims.

Taking for granted the M3 abstraction, we proceed into the verifier's algorithms proper. In what follows, we sketch the steps by which the verifier may evaluate whether the prover's witness satisfies the M3 instance, by reducing its satisfaction to successively simpler claims.

Receiving Commitments

Before it generates any randomness, the verifier must receive the prover's commitments and absorb them into its transcript. Each "commitment" represents a massive multilinear polynomial—or equivalently, a column in a table—which, for efficiency's sake, is hidden from the verifier (part of the M3 instance's witness). For the sake of succinctness, instead of conveying these columns directly to the verifier, the prover rather conveys commitments to them; these commitments "bottle up" the polynomials they represent. Concretely, each commitment looks like a single hash digest. As soon as the prover forks over these commitments to the verifier, it becomes effectively bound to the witness polynomials they conceal. We discuss our multilinear polynomial commitment scheme further in this site area's cryptography section.
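The toy sketch below illustrates only the shape of such a commitment: a single digest that binds an entire column. It uses a plain Merkle root over serialized entries as a stand-in; Binius's actual commitment scheme is the one described in the cryptography section.

```python
# Toy illustration of "a commitment is a single hash digest" (NOT Binius's PCS).
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# A committed "column": 8 field elements, serialized as bytes.
column = [int.to_bytes(x, 16, "big") for x in [3, 1, 4, 1, 5, 9, 2, 6]]
commitment = merkle_root(column)   # a single 32-byte digest binds the whole column
```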

Zerocheck

The prover's commitments, discussed above, represent the values with which it has opted to populate the M3 instance's various tables. To each of these tables are associated various constraints: multivariate polynomials which must hold identically, row-wise, on the table's rows (by definition of the M3 instance). The verifier's first task, conceptually, is to make sure that the relevant constraints actually hold row-wise on the prover's tables.

The zerocheck is the key tool for this problem. The zerocheck protocol reduces vanishing claims for composites of multilinears to evaluation claims on the underlying multilinears. Though we explain that protocol in great detail in our page dedicated to the matter, we sketch it here.

The classical zerocheck technique is essentially a thin wrapper around the sumcheck. Basically, if we want to check that the $\ell$-variate multivariate composite polynomial $C(X_0, \ldots , X_{\ell - 1})$ is such that $C(v) = 0$ holds for each $v \in \mathcal{B}_\ell$, then it's enough to define $C$'s multilinear extension:

\begin{equation*}\widetilde{C}(X_0, \ldots , X_{\ell - 1}) \coloneqq \sum_{v \in \mathcal{B}_\ell} C(v) \cdot \widetilde{\texttt{eq}}(X_0, \ldots , X_{\ell - 1}, v_0, \ldots , v_{\ell - 1}),\end{equation*}

and check whether $\widetilde{C}(r) \stackrel{?}= 0$ holds, for a point $r \gets \mathcal{T}_\tau^\ell$ sampled randomly by the verifier. The key trick is that the claim $\widetilde{C}(r) \stackrel{?}= 0$ itself, in turn, amounts to a sumcheck claim, and can be checked using that protocol. Here,

\begin{equation*}\widetilde{\texttt{eq}}(X_0, \ldots , X_{\ell - 1}, Y_0, \ldots , Y_{\ell - 1}) \coloneqq \prod_{i = 0}^{\ell - 1} \left( (1 - X_i) \cdot (1 - Y_i) + X_i \cdot Y_i \right)\end{equation*}

is the equality indicator polynomial, the unique, $2 \cdot \ell$-variate multilinear polynomial whose restriction to $\mathcal{B}_\ell \times \mathcal{B}_\ell$ is the equality indicator function. More precisely, the sumcheck protocol reduces the above sum claim—that is, $0 \stackrel{?}= \sum_{v \in \mathcal{B}_\ell} C(v) \cdot \widetilde{\texttt{eq}}(r, v)$—to an evaluation claim on the underlying polynomial $C$ (that is, to a claim of the form $C(r') \stackrel{?}= s$, where the values $r'$ and $s$ arise during the sumcheck protocol).
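As a sanity check of the identity above, the following Python sketch builds a toy constraint that vanishes on the cube and confirms that its multilinear extension evaluates to zero at a random point. The prime field and the A·B − C constraint are illustrative only; Binius itself works over binary tower fields.

```python
# Numerical check of the zerocheck identity over a toy prime field.
import random

p = 2**31 - 1
ell = 3

def eq_tilde(xs, ys):
    # The equality indicator polynomial eq~, evaluated at (xs, ys).
    acc = 1
    for x, y in zip(xs, ys):
        acc = acc * ((1 - x) * (1 - y) + x * y) % p
    return acc

def bits(v, k):
    return [(v >> i) & 1 for i in range(k)]

# Three "columns" given by their values on the cube; the row-wise constraint is A*B - C = 0.
A = [random.randrange(p) for _ in range(2**ell)]
B = [random.randrange(p) for _ in range(2**ell)]
C = [a * b % p for a, b in zip(A, B)]

def constraint(v):
    return (A[v] * B[v] - C[v]) % p        # vanishes on every row by construction

# The multilinear extension of a cube-vanishing composite is identically zero,
# so its evaluation at a random point r is zero.
r = [random.randrange(p) for _ in range(ell)]
mle_at_r = sum(constraint(v) * eq_tilde(r, bits(v, ell)) for v in range(2**ell)) % p
assert mle_at_r == 0
```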

Because we work over small fields, we in fact adopt a rather sophisticated variant of the zerocheck, called the univariate skip. Very roughly, this variant treats the 0th round in a special way, moving as much work as possible into that round.

The Channel-Balancing Argument

At the center of each M3 instance is a plurality of abstract objects called channels, which must balance if the proof is to pass. The verifier's next few steps involve establishing that the prover's channels balance. More precisely, the verifier must reduce the balancing of these channels to further evaluation claims. (These latter claims will accumulate, for now, alongside those already output by the zerocheck reduction sketched above.)

Under the hood, each channel amounts to a pair of dueling "multisets", abstract lists populated by the instance's flushing rules. To check whether the relevant channels balance, the verifier must therefore run a handful of multiset equality checks. The theory behind these checks is developed in Protocols 4.17 and 4.20 of [DP23], and is discussed further here. For now, we sketch the idea. We write $(T_0, \ldots , T_{\mu - 1})$ and $(U_0, \ldots , U_{\mu - 1})$ for a single multiset instance; that is, we want to check that

\begin{equation*}\left\{ (T_0(v), \ldots , T_{\mu - 1}(v)) \mid v \in \mathcal{B}_\ell \right\} \stackrel{?}= \left\{ (U_0(v), \ldots , U_{\mu - 1}(v)) \mid v \in \mathcal{B}_\ell \right\},\end{equation*}

where both sides are viewed as multi-subsets of $\mathcal{T}_\tau^\mu$.

What [DP23, § 4] tells us is that to check this claim, it suffices (up to some negligible soundness error) for the verifier to sample a fingerprinting challenge $s$ and a permutation challenge $r$, and to check instead that the virtual polynomials:

\begin{equation*}T \coloneqq r + \sum_{i = 0}^{\mu - 1} s^i \cdot T_i \text{ and } U \coloneqq r + \sum_{i = 0}^{\mu - 1} s^i \cdot U_i\end{equation*}

have identical grand products. That is, it's enough to check that $\prod_{v \in \mathcal{B}_\ell} T(v) \stackrel{?}= \prod_{v \in \mathcal{B}_\ell} U(v)$.
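The following sketch runs this reduction numerically over a toy prime field (not Binius's actual field): two permuted multisets of tuples are fingerprinted with random challenges s and r, and their grand products are compared.

```python
# Sketch of the multiset-equality reduction from [DP23, § 4]:
# random fingerprinting plus a grand-product comparison.
import random

p = 2**31 - 1
mu, ell = 2, 3
n = 2**ell

# Two multisets of mu-tuples that are equal up to permutation (rows of T vs. rows of U).
T_rows = [(random.randrange(p), random.randrange(p)) for _ in range(n)]
U_rows = T_rows[:]
random.shuffle(U_rows)

# Verifier's challenges: fingerprinting challenge s, permutation challenge r.
s = random.randrange(p)
r = random.randrange(p)

def fingerprint(row):
    # r + sum_i s^i * row_i collapses a tuple into a single field element.
    return (r + sum(pow(s, i, p) * x for i, x in enumerate(row))) % p

prod_T = prod_U = 1
for row in T_rows:
    prod_T = prod_T * fingerprint(row) % p
for row in U_rows:
    prod_U = prod_U * fingerprint(row) % p

assert prod_T == prod_U   # equal multisets yield equal grand products
```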

The GKR-Based Grand Product Argument

In light of the above reduction, the balancing of the M3 instance's channels has now been reduced to an equality of grand products, or products of the values of certain polynomials over the cube. The verifier's next step is to further reduce the grand product claims at issue to evaluation claims. To achieve this, the verifier undertakes a "grand product argument", itself based on the GKR protocol.

We describe our GKR-based grand product argument in detail here; for now, we roughly sketch the idea. We write $T$ for a multilinear whose grand product we want to ascertain—that is, we want to check whether $\prod_{v \in \mathcal{B}_\ell} T(v) \stackrel{?}= a$, let's say. To do this, the prover and verifier both imagine a height-$\ell$, balanced binary tree, whose leaves are the values $\left\{T(v) \mid v \in \mathcal{B}_\ell \right\}$, and whose intermediate nodes are defined to be the products of their children. Clearly, it suffices for the verifier to check whether the tree's root node is $a$. The idea of GKR is that for each tree layer $k \in \{0, \ldots , \ell\}$, we can view the $2^k$ values of that layer's nodes as the values of some multilinear $T_k$ over the cube $\mathcal{B}_k$. Moreover, for each $k \in \{0, \ldots , \ell - 1\}$ and each evaluation point $r_k \in \mathcal{T}_\tau^k$, we obtain the following expression for $T_k$, in terms of $T_{k + 1}$:

\begin{equation*}T_k(r_k) = \sum_{v \in \mathcal{B}_k} T_{k + 1}(v \mathbin{\Vert} 0) \cdot T_{k + 1}(v \mathbin{\Vert} 1) \cdot \widetilde{\texttt{eq}}(v, r_k).\end{equation*}

By sumchecking this thing, the verifier can reduce the claim $T_k(r_k) \stackrel{?}= a_k$ to the further claim $T_{k + 1}(r_{k + 1}) \stackrel{?}= a_{k + 1}$, for some further values $r_{k + 1}$ and $a_{k + 1}$ obtained during the sumcheck. By performing this repeatedly, we get a way to "unwind" the initial grand product claim $a_0 \coloneqq a \stackrel{?}= T_0() = \prod_{v \in \mathcal{B}_\ell} T(v)$ into an evaluation claim of the form $a_\ell \stackrel{?}= T_\ell(r_\ell) = T(r_\ell)$.
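The sketch below builds the product tree for a toy instance and numerically checks the layer recursion above at a random point. The prime field, the child-indexing convention, and the explicit multilinear-extension evaluation are illustrative stand-ins for what the sumcheck handles in the real protocol.

```python
# Sketch of the GKR grand-product layer identity over a toy prime field.
# Layer k holds 2^k values; each node is the product of its two children in layer k+1.
import random
from math import prod

p = 2**31 - 1
ell = 3

leaves = [random.randrange(1, p) for _ in range(2**ell)]   # values T(v), v in B_ell

# Build the layers; after reversing, layers[k] has 2^k entries and layers[0][0] is the root.
layers = [leaves]
while len(layers[-1]) > 1:
    prev = layers[-1]
    layers.append([prev[2*i] * prev[2*i + 1] % p for i in range(len(prev) // 2)])
layers.reverse()
assert layers[0][0] == prod(leaves) % p    # the root is the grand product a

def eq_tilde(xs, ys):
    acc = 1
    for x, y in zip(xs, ys):
        acc = acc * ((1 - x) * (1 - y) + x * y) % p
    return acc

def bits(v, k):
    return [(v >> i) & 1 for i in range(k)]

def mle_eval(values, point):
    # Evaluate the multilinear extension of `values` (indexed by the cube) at `point`.
    k = len(point)
    return sum(values[v] * eq_tilde(point, bits(v, k)) for v in range(2**k)) % p

# Check T_k(r_k) = sum_v T_{k+1}(child 0 of v) * T_{k+1}(child 1 of v) * eq~(v, r_k),
# where the two children of node v in the next layer are indexed 2v and 2v + 1.
k = 2
r_k = [random.randrange(p) for _ in range(k)]
lhs = mle_eval(layers[k], r_k)
rhs = sum(layers[k + 1][2*v] * layers[k + 1][2*v + 1] * eq_tilde(bits(v, k), r_k)
          for v in range(2**k)) % p
assert lhs == rhs
```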

We've thus reduced all of our grand product claims to evaluation claims, which is what we wanted.

Evalcheck

Throughout the zerocheck protocol and the channel-balancing phases above, the verifier's goal was to progressively reduce the prover's M3 satisfaction to evaluation claims on multilinears. Here, we have finally reached that point. At this point in the protocol, we pass into a new phase, called evalcheck, in which we begin processing these evaluation claims.

The main problem is that the multilinears we want to evaluate are, in general, committed or virtual. Roughly, the verifier's task during evalcheck will be to progressively "unroll" the definitions of the virtual multilinears at hand, thereby reducing its running evaluation claims on them to claims on further, committed, multilinears.

Virtual multilinears play an essential role in Binius. For further detail, we refer to our page devoted exclusively to them, as well as to [DP23, § 4.1]. Here, we sketch the idea. Given access to various underlying concrete, committed multilinears, the verifier may use these as "building blocks" to construct further, virtual polynomials in its head. For example, given evaluation access to $t(X_0, \ldots , X_{\ell - 1})$, the verifier can "simulate" evaluation access to its scaling $t'(X_0, \ldots , X_{\ell - 1}) \coloneqq g \cdot t(X_0, \ldots , X_{\ell - 1})$ (here, $g \in \mathcal{T}_\tau$ is some constant scalar). How? Easily: if the verifier wants to learn the value of $t'$ at $r \in \mathcal{T}_\tau^\ell$, say, it may query the underlying polynomial $t(r)$, and then locally multiply the result by $g$.
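A minimal sketch of this simulation follows, over a toy prime field and with a stand-in evaluation oracle for t; both are illustrative assumptions.

```python
# Minimal sketch of a "virtual" multilinear: the verifier simulates evaluation
# access to t' := g * t using only evaluation access to the committed multilinear t.
import random

p = 2**31 - 1
ell = 3
g = 7                                                    # a fixed public scalar

t_vals = [random.randrange(p) for _ in range(2**ell)]    # the committed column t

def eq_tilde(xs, ys):
    acc = 1
    for x, y in zip(xs, ys):
        acc = acc * ((1 - x) * (1 - y) + x * y) % p
    return acc

def query_t(r):
    # Stand-in for the evaluation oracle the verifier obtains for t.
    return sum(t_vals[v] * eq_tilde(r, [(v >> i) & 1 for i in range(ell)])
               for v in range(2**ell)) % p

def query_t_prime(r):
    # The virtual polynomial t' = g * t is never materialized: the verifier
    # queries t and scales the answer locally.
    return g * query_t(r) % p
```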

As trivial as this idea sounds, it rapidly gets serious in Binius. For example, as we explain in [DP23, § 4.3], it is possible, given some fixed multilinear $t(X_0, \ldots , X_{\ell - 1})$, to construct a further virtual multilinear—called $\texttt{shift}_{\ell, o}(t)(X_0, \ldots , X_{\ell - 1})$—whose values on the cube $\mathcal{B}_\ell$ are just those of $t$, but circularly rotated (by $o$ steps in this case). Moreover, the verifier may reduce the problem of evaluating $\texttt{shift}_{\ell, o}(t)$ at some arbitrary point $r \in \mathcal{T}_\tau^\ell$ to that of evaluating the underlying polynomial $t$ at some further point $r' \in \mathcal{T}_\tau^\ell$, up to the cost of doing an $\ell$-round sumcheck.

For a full explanation of how this gets done, we refer to our dedicated page on the matter. For now, we sketch it. Indeed, we get an expression:

\begin{equation*}\texttt{shift}_{\ell, o}(t)(X_0, \ldots , X_{\ell - 1}) = \sum_{v \in \mathcal{B}_\ell} t(v) \cdot \texttt{s-ind}_{\ell, o}(v_0, \ldots , v_{\ell - 1}, X_0, \ldots , X_{\ell - 1}).\end{equation*}

Here, $\texttt{s-ind}_{\ell, o}$ is a special, $2 \cdot \ell$-variate multilinear whose restriction to $\mathcal{B}_{\ell} \times \mathcal{B}_{\ell}$ is the circular shift indicator function (i.e., it's 1 on $(u, v)$ if and only if $u$ and $v$ are $o$ arithmetic steps apart, and 0 otherwise). The above expression shows that, for each evaluation point $r \in \mathcal{T}_\tau^\ell$, the evaluation $\texttt{shift}_{\ell, o}(t)(r)$ can itself be expressed as the sum over the cube of the product polynomial $t(X_0, \ldots , X_{\ell - 1}) \cdot \texttt{s-ind}_{\ell, o}(X_0, \ldots , X_{\ell - 1}, r_0, \ldots , r_{\ell - 1})$. By the aid of a sumcheck, we can reduce this sum to, in turn, evaluation claims on the underlying polynomials $t$ and $\texttt{s-ind}_{\ell, o}$. The essential, and non-trivial, point is that $\texttt{s-ind}_{\ell, o}$ itself has an efficient circuit representation, and can be evaluated locally at any point by the verifier in just $O(\ell)$ time. So the verifier just needs to query $t$ once.
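The following sketch checks this identity numerically over a toy prime field; the rotation direction is an arbitrary illustrative choice, and s-ind is computed directly from its definition, whereas in the real protocol the verifier evaluates it via its O(ℓ) circuit.

```python
# Numerical check of the shift identity: shift_{ell,o}(t)(r) = sum_v t(v) * s-ind_{ell,o}(v, r).
import random

p = 2**31 - 1
ell, o = 3, 1
n = 2**ell

def eq_tilde(xs, ys):
    acc = 1
    for x, y in zip(xs, ys):
        acc = acc * ((1 - x) * (1 - y) + x * y) % p
    return acc

def bits(v):
    return [(v >> i) & 1 for i in range(ell)]

t_vals = [random.randrange(p) for _ in range(n)]
shifted = [t_vals[(j - o) % n] for j in range(n)]    # t's values, circularly rotated by o

def s_ind(u, r):
    # Multilinear extension (in its last ell variables) of the indicator
    # "position u, shifted by o, lands at position .", evaluated at r.
    return eq_tilde(bits((u + o) % n), r)

r = [random.randrange(p) for _ in range(ell)]
lhs = sum(shifted[j] * eq_tilde(bits(j), r) for j in range(n)) % p   # shift(t)(r)
rhs = sum(t_vals[u] * s_ind(u, r) for u in range(n)) % p
assert lhs == rhs
```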

This idea gives us lots of mileage, since, by setting the offset $o \coloneqq 1$ for example, we can shift a column by one step—and match the output of one row to the input of the next, let's say.

In any case, during evalcheck, the verifier runs various sumchecks, progressively reducing evaluation claims on virtual things like $\texttt{shift}_{\ell, o}(t)$ to evaluation claims on committed underlying polynomials like $t$ itself. We describe evalcheck in full here.

The Opening Phase

We're now finally ready to do some cryptography! At this point, the verifier has reduced the problem of checking the satisfaction of the prover's M3 instance to that of checking the evaluations of various committed multilinears at various points. Here, the verifier is going to directly evaluate these latter evaluation claims, by invoking our core polynomial commitment scheme (PCS). The full resource for this phase of the protocol appears here.

Binius uses the multilinear polynomial commitment scheme described in [DP24]. The thing that makes it fairly tricky is that we must perform an involved sort of batching, which operates over polynomials defined not just over different coefficient fields, but moreover of different lengths (i.e., numbers of variables). The key idea that we use relates to ring-switching, an idea expounded at length in [DP24, § 4].

We summarize the idea in the following way. We write $t$ for an $\ell$-variate multilinear over the small field $\mathcal{T}_\iota$, say, and suppose that we're interested in the evaluation claim $t(r) \stackrel{?}= s$. We moreover write $t'$ for $t$'s "packing". The packed polynomial $t'$ is a reinterpretation of $t$ as a polynomial over the huge field $\mathcal{T}_\tau$. Its coefficients are larger, but it has fewer of them—it has only $\ell'$ variables, let's say, as opposed to $\ell$.
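The sketch below illustrates packing at the bit level: 2^ℓ one-bit values are regrouped into 2^ℓ′ machine integers, each standing in for one element of a larger field. The field sizes, the bit-ordering, and the use of plain integers are illustrative assumptions; the actual packing in Binius uses the tower-field structure described in [DP24].

```python
# Bit-level illustration of "packing" a small-field multilinear into fewer, larger coefficients.
import random

ell, kappa = 7, 4              # 2^7 = 128 one-bit values -> 2^3 = 8 chunks of 16 bits each
ell_prime = ell - kappa

t_bits = [random.randrange(2) for _ in range(2**ell)]   # t over the 1-bit field, on the cube

def pack(bits, kappa):
    chunk = 2**kappa
    # Each chunk of 2^kappa bits is read as one element of the larger field.
    return [sum(bits[j*chunk + i] << i for i in range(chunk))
            for j in range(len(bits) // chunk)]

t_packed = pack(t_bits, kappa)          # t': fewer, larger coefficients
assert len(t_packed) == 2**ell_prime    # t' has only ell' variables
```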

As is explained in [DP24, § 4], it turns out that, up to a bit of interaction—in which the prover sends the verifier a "tensor algebra" element, and the verifier samples some random challenges—the prover and verifier may mutually reduce their initial evaluation claim $t(r) \stackrel{?}= s$ to a sum claim, involving $t'$ and a further $\ell'$-variate multilinear, $A$. Here, $A$ is "transparent", in that it's succinctly evaluable locally by the verifier.

To unwind what that means, our sum claim looks like:

\begin{equation*}s_0 \stackrel{?}= \sum_{v \in \mathcal{B}_{\ell'}} t'(v) \cdot A(v).\end{equation*}

We've already seen a few times that the sumcheck can be used to reduce this thing to an evaluation of $t'$. Instead of proceeding that way, though, we're first going to batch the respective sum claims—i.e., each of the above form—that come from all of our initial evaluation claims. Finally, we're going to integrate this batched claim into FRI-Binius's FRI-folding procedure, so that we only have to run that procedure once. The full details are available in our cryptography section.