
The Procedure

Expressing small-field claims using tensors and sums

Here, without further ado, we describe the steps of ring-switching. We have an $\ell$-variate polynomial $t(X_0, \ldots, X_{\ell - 1})$ fixed and committed to; we are going to give a protocol to evaluate its $\kappa$-fold refinement $t'(X_0, \ldots, X_{\ell + \kappa - 1})$—specifically, to process an evaluation claim of the form $s \stackrel{?}= t'(r_0, \ldots, r_{\ell + \kappa - 1})$.

Our goal will be to reduce that initial evaluation claim to a sum claim on $t(X_0, \ldots , X_{\ell - 1})$—i.e., one of the form $s_0 \stackrel{?}= \sum_{v \in \mathcal{B}_\ell} t(v) \cdot A(v)$, where $A(X_0, \ldots , X_{\ell - 1})$ is some further, transparent, multilinear.

The Steps

Here is the algorithm.

  • By embedding the original (packed) multilinear $t(X_0, \ldots , X_{\ell - 1})$ componentwise horizontally into $A$, we can express it as an $\ell$-variate multilinear over $A$—$\varphi_1(t)$, let's say. The prover computes and sends to the verifier

    \begin{equation*}\hat{s} \coloneqq \varphi_1(t)(\varphi_0(r_\kappa), \ldots, \varphi_0(r_{\ell + \kappa - 1})).\end{equation*}
    • The element $\hat{s}$ is an algebra element, since the entire evaluation above takes place in the algebra. Thus, concretely, $\hat{s}$ looks like a $2^\kappa \times 2^\kappa$ array of $\mathcal{T}_\iota$-elements.
    • It is a bit tricky, but a very important and doable exercise, to study concretely what $\hat{s}$ looks like. In fact, its columns $\left( \hat{s}_v \right)_{v \in \mathcal{B}_\kappa}$ will be exactly the partial evaluations $\hat{s}_v = t'(v_0, \ldots, v_{\kappa - 1}, r_\kappa, \ldots , r_{\ell + \kappa - 1})$. Roughly, this is because the vertically embedded elements $\varphi_0(r_\kappa), \ldots, \varphi_0(r_{\ell + \kappa - 1})$ will act "slice-wise" on the cells of the coefficients $\varphi_1(t)(w)$, for $w \in \mathcal{B}_\ell$ (recall that these coefficients are horizontally embedded). We prove this fact rigorously in our paper (see [DP24, Thm. 4.2]).
  • Writing $\left( \hat{s}_v \right)_{v \in \mathcal{B}_\kappa}$ for $\hat{s}$'s columns, the verifier checks:

    \begin{equation*}s \stackrel{?}= \sum_{v \in \mathcal{B}_\kappa} \hat{s}_v \cdot \widetilde{\texttt{eq}}(r_0, \ldots, r_{\kappa - 1}, v_0, \ldots , v_{\kappa - 1}).\end{equation*}
    • By the discussion just above, we can immediately see why this check is complete. Indeed, taking on faith for now that $\hat{s}_v = t'(v_0, \ldots, v_{\kappa - 1}, r_\kappa, \ldots , r_{\ell + \kappa - 1})$ will hold for each $v \in \mathcal{B}_\kappa$ if the prover is honest, we thus see that the right-hand side above will equal $\sum_{v \in \mathcal{B}_\kappa} t'(v_0, \ldots, v_{\kappa - 1}, r_\kappa, \ldots , r_{\ell + \kappa - 1}) \cdot \widetilde{\texttt{eq}}(r_0, \ldots, r_{\kappa - 1}, v_0, \ldots , v_{\kappa - 1}) = t'(r_0, \ldots , r_{\ell + \kappa - 1})$; this quantity in turn equals $s$ precisely when the prover's initial evaluation claim is true.
  • The verifier samples row-batching scalars $(r''_0, \ldots , r''_{\kappa - 1}) \gets \mathcal{T}_\tau^\kappa$. The verifier ships these off to the prover. Meanwhile, locally, it decomposes $\hat{s}$ into rows $\left( \hat{s}_u \right)_{u \in \mathcal{B}_\kappa}$, and computes the row-combination:

    \begin{equation*}s_0 \coloneqq \sum_{u \in \mathcal{B}_\kappa} \hat{s}_u \cdot \widetilde{\texttt{eq}}(r''_0, \ldots , r''_{\kappa - 1}, u_0, \ldots , u_{\kappa - 1}).\end{equation*}

    In parallel, the verifier conjures into its head an odd sort of multilinear $A(X_0, \ldots , X_{\ell - 1})$, defined in the following way. For each $w \in \mathcal{B}_\ell$, we have a basis-decomposition:

    \begin{equation*}\widetilde{\texttt{eq}}(r_\kappa, \ldots , r_{\ell + \kappa - 1}, w_0, \ldots , w_{\ell - 1}) = \sum_{u \in \mathcal{B}_\kappa} A_{w, u} \cdot \beta_u.\end{equation*}

    I.e., whatever the left-hand $\mathcal{T}_\tau$-element above happens to be, surely it has a coordinate decomposition with respect to the usual $\mathcal{T}_\iota$-basis of $\mathcal{T}_\tau$; that coordinate decomposition is what appears on the right. Finally, the verifier defines $A(X_0, \ldots , X_{\ell - 1})$ to be the multilinear extension of the function $A : \mathcal{B}_\ell \to \mathcal{T}_\tau$ that sends:

    \begin{equation*}A : w \mapsto \sum_{u \in \mathcal{B}_\kappa} A_{w, u} \cdot \widetilde{\texttt{eq}}(r''_0, \ldots , r''_{\kappa - 1}, u_0, \ldots , u_{\kappa - 1}).\end{equation*}
    • If we work extremely concretely, we can see clearly what $A(X_0, \ldots , X_{\ell - 1})$ is. Essentially, there are two steps. First, we take the tensor-expansion $\left( \widetilde{\texttt{eq}}(w_0, \ldots, w_{\ell - 1}, r_\kappa, \ldots , r_{\ell + \kappa - 1}) \right)_{w \in \mathcal{B}_\ell}$. This looks like a long, length-$2^\ell$ vector of $\mathcal{T}_\tau$-elements; we can imagine this thing going horizontally. By decomposing each component of this vector along the standard $\mathcal{T}_\iota$-basis of $\mathcal{T}_\tau$, we can rather express this thing as a $2^\kappa \times 2^\ell$ matrix with entries in $\mathcal{T}_\iota$. By row-combining this $\mathcal{T}_\iota$-matrix using the length-$2^\kappa$ tensor expansion $\left( \widetilde{\texttt{eq}}(r''_0, \ldots , r''_{\kappa - 1}, u_0, \ldots , u_{\kappa - 1}) \right)_{u \in \mathcal{B}_\kappa}$, we finally get one last long, horizontal vector of $\mathcal{T}_\tau$-elements, again of length $2^\ell$. This vector is exactly the vector of values that $A(X_0, \ldots , X_{\ell - 1})$ takes on the cube.
    • This might sound crazy, but the verifier isn't doing any of these computations. Indeed, that wouldn't be succinct at all! Rather, the verifier is simply defining this polynomial in its head right now, not evaluating it anywhere yet (let alone on the entire cube).
    • Indeed, as it turns out, the verifier will ultimately evaluate $A(X_0, \ldots , X_{\ell - 1})$ at exactly one point—that point will show up during the processing of the sum claim we're about to output. Details are here.
  • We're done! The verifier now outputs the sum claim

    \begin{equation*}s_0 \stackrel{?}= \sum_{v \in \mathcal{B}_\ell} t(v) \cdot A(v).\end{equation*}
    • Above, we agreed that we wanted to output a sum claim of the above form for which $A(X_0, \ldots , X_{\ell - 1})$ is transparent. It's a far-from-obvious fact—which we prove in our paper [DP24]—that $A(X_0, \ldots , X_{\ell - 1})$ is indeed transparent: the verifier can evaluate it at any point $(r'_0, \ldots , r'_{\ell - 1})$ by doing just $O(\ell)$ $\mathcal{T}_\tau$-multiplications.
    • This latter fact is where the tensor algebra really shines. The verifier's succinct evaluation procedure for $A(X_0, \ldots , X_{\ell - 1})$ will involve the tensor algebra's multiplicative structure in a crucial way. In fact, it appears essential: we don't know how to write down a succinct evaluation procedure for $A(X_0, \ldots , X_{\ell - 1})$ that doesn't use the tensor algebra!
    • For now, we defer the proof that $A(X_0, \ldots , X_{\ell - 1})$ is transparent, referring to our paper.
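
The completeness of the column-combination check in the steps above can be sanity-checked in code. Here is a minimal sketch, which works over a small prime field as a hypothetical stand-in for the tower field $\mathcal{T}_\tau$ (the tensor algebra itself is not modeled): we build a random refined multilinear $t'$ from its values on the cube, form the partial evaluations $\hat{s}_v = t'(v_0, \ldots, v_{\kappa-1}, r_\kappa, \ldots)$ that an honest prover's columns would carry, and confirm that the eq-weighted column combination recovers $t'(r_0, \ldots, r_{\ell+\kappa-1})$.

```python
import random

P = 2**31 - 1  # small prime field, a stand-in for the tower field T_tau

def eq(xs, ys):
    """Multilinear eq~ polynomial: prod_i (x_i*y_i + (1-x_i)*(1-y_i)) mod P."""
    out = 1
    for x, y in zip(xs, ys):
        out = out * ((x * y + (1 - x) * (1 - y)) % P) % P
    return out

def evaluate(values, point):
    """Evaluate the multilinear with the given cube values at `point`.
    values[i] is the value at the Boolean point given by i's bits (v_0 = low bit)."""
    n = len(point)
    assert len(values) == 1 << n
    return sum(values[i] * eq([(i >> k) & 1 for k in range(n)], point)
               for i in range(1 << n)) % P

random.seed(0)
KAPPA, ELL = 2, 3                       # t' has ELL + KAPPA variables
tp = [random.randrange(P) for _ in range(1 << (ELL + KAPPA))]  # t' on the cube
r = [random.randrange(P) for _ in range(ELL + KAPPA)]          # evaluation point

s = evaluate(tp, r)                     # the claimed evaluation s = t'(r)

# Honest prover: the columns of s-hat carry the partial evaluations
# s_v = t'(v_0, ..., v_{KAPPA-1}, r_KAPPA, ..., r_{ELL+KAPPA-1}).
cols = [evaluate(tp, [(v >> k) & 1 for k in range(KAPPA)] + r[KAPPA:])
        for v in range(1 << KAPPA)]

# Verifier's check: s == sum_v s_v * eq(r_0..r_{KAPPA-1}, v).
rhs = sum(cols[v] * eq(r[:KAPPA], [(v >> k) & 1 for k in range(KAPPA)])
          for v in range(1 << KAPPA)) % P
assert rhs == s
```

The final assertion is exactly the multilinearity argument from the completeness discussion: summing the partial evaluations against the eq-tensor of the first $\kappa$ coordinates re-evaluates $t'$ at the full point.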

Some Intuitions and Remarks

In a rigorous treatment, there are a few things we would have to establish even to show that this protocol is complete, let alone secure. And for the sum claim we just output to make sense, $A(X_0, \ldots , X_{\ell - 1})$ should of course be transparent—a fact which we're again going to punt on.

For now, though, we address a key fact: why is the sum claim $s_0 \stackrel{?}= \sum_{v \in \mathcal{B}_\ell} t(v) \cdot A(v)$ even true (assuming the prover is honest)?

An algebra-valued sumcheck

Here is the rough idea. Back in its very first message, the prover was supposed to compute $\hat{s} \coloneqq \varphi_1(t)(\varphi_0(r_\kappa), \ldots, \varphi_0(r_{\ell + \kappa - 1}))$. By multilinearly expanding this thing's right-hand side, we get an identity of $A$-elements, which should hold if the prover is honest:

\begin{equation*}\hat{s} \stackrel{?}= \sum_{w \in \mathcal{B}_\ell} \varphi_1(t)(w) \cdot \widetilde{\texttt{eq}}(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell + \kappa - 1}), w_0, \ldots , w_{\ell - 1}).\end{equation*}

Intuitively, what we would like to do is sumcheck this thing. The sumcheck would be over $A$-valued multilinears. Our challenges $(r'_0, \ldots , r'_{\ell - 1})$ would come from the usual field $\mathcal{T}_\tau$; before using these, though, we will implicitly embed them horizontally (i.e., we will really use $(\varphi_1(r'_0), \ldots , \varphi_1(r'_{\ell - 1}))$). Thus, it's an odd kind of sumcheck, where the polynomials are over $A$, but the challenges come from the subring $\varphi_1(\mathcal{T}_\tau) \subset A$. It turns out that this sumcheck variant is secure.
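
As a reference point, here is a minimal sketch of an ordinary sumcheck for a claim of this shape, $S \stackrel{?}= \sum_w f(w) \cdot g(w)$. It runs over a small prime field as a stand-in and deliberately does not model the tensor algebra $A$, the embeddings $\varphi_0, \varphi_1$, or the subring-challenge twist described above; it only shows the round structure being invoked.

```python
import random

P = 2**31 - 1  # small prime field, a stand-in for the structures in the text

def fold(vals, r):
    """Fix the top variable of a multilinear (given by its cube values) to r."""
    half = len(vals) // 2
    return [(vals[i] + r * (vals[half + i] - vals[i])) % P for i in range(half)]

def interp3(ys, x):
    """Evaluate the degree-<=2 polynomial taking values ys at 0, 1, 2 at point x."""
    y0, y1, y2 = ys
    inv2 = pow(2, P - 2, P)  # inverse of 2 mod P
    return (y0 * (x - 1) * (x - 2) * inv2
            - y1 * x * (x - 2)
            + y2 * x * (x - 1) * inv2) % P

def sumcheck(f, g):
    """Sumcheck for sum_w f(w) * g(w); prover and verifier share one loop here."""
    claim = sum(a * b for a, b in zip(f, g)) % P
    challenges = []
    while len(f) > 1:
        # Prover: round polynomial p(X), sent via its values at X = 0, 1, 2.
        ys = [sum(a * b for a, b in zip(fold(f, x), fold(g, x))) % P
              for x in (0, 1, 2)]
        # Verifier: consistency check p(0) + p(1) == claim, then a challenge.
        assert (ys[0] + ys[1]) % P == claim
        r = random.randrange(P)
        claim = interp3(ys, r)
        f, g = fold(f, r), fold(g, r)
        challenges.append(r)
    # Reduced claim: an evaluation of f * g at the challenge point.
    assert claim == f[0] * g[0] % P
    return claim, challenges

random.seed(1)
f = [random.randrange(P) for _ in range(8)]
g = [random.randrange(P) for _ in range(8)]
claim, rs = sumcheck(f, g)
```

In the actual protocol, `f` would be the $A$-valued table $\varphi_1(t)(w)$, `g` the eq-table, and the challenge `r` would be drawn from $\mathcal{T}_\tau$ and embedded via $\varphi_1$ before folding.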

Reducing to an evaluation

What's the point? If we can do this sumcheck, we're in great shape, since, as of its very end, we will have reduced the sum claim to an evaluation claim at the challenge point $(\varphi_1(r'_0), \ldots , \varphi_1(r'_{\ell - 1}))$: namely, to learning

\begin{equation*}\varphi_1(t)(\varphi_1(r'_0), \ldots , \varphi_1(r'_{\ell - 1})) \cdot \widetilde{\texttt{eq}}(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell + \kappa - 1}), \varphi_1(r'_0), \ldots , \varphi_1(r'_{\ell - 1})).\end{equation*}

The right-hand factor above is succinct: the verifier can compute it by doing $O(\ell)$ operations in $A$ (in fact, $2 \cdot \ell$ $\mathcal{T}_\tau$-operations are enough, as we argue in our paper [DP24, Rem. 4.4]). The left-hand factor, on the other hand, is $\varphi_1(t)(\varphi_1(r'_0), \ldots , \varphi_1(r'_{\ell - 1})) = \varphi_1\left( t(r'_0, \ldots , r'_{\ell - 1}) \right)$. But this is just the same as learning $t(r'_0, \ldots , r'_{\ell - 1})$! We thus successfully reduced to an evaluation of $t(X_0, \ldots , X_{\ell - 1})$, which is what we wanted.

Row-combination

Now, purely for efficiency reasons, we can get some gains by batching the rows of the above sumcheck, and running it over $\mathcal{T}_\tau$, as opposed to over $A$ itself. It turns out that this batching process is secure, for standard reasons. So, the sum claim we actually output in the ring-switching reduction above is nothing other than that which we obtain from the $A$-valued sum claim $\hat{s} \stackrel{?}= \sum_{w \in \mathcal{B}_\ell} \varphi_1(t)(w) \cdot \widetilde{\texttt{eq}}(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell + \kappa - 1}), w_0, \ldots , w_{\ell - 1})$ above upon batching that claim's rows (i.e., using the batching scalars $(r''_0, \ldots , r''_{\kappa - 1})$). As for the left-hand side, we saw already that $s_0 \coloneqq \sum_{u \in \mathcal{B}_\kappa} \hat{s}_u \cdot \widetilde{\texttt{eq}}(r''_0, \ldots , r''_{\kappa - 1}, u_0, \ldots , u_{\kappa - 1})$, so that's not hard. The trickier claim is that the multilinear $A(X_0, \ldots , X_{\ell - 1})$ we defined above comes from row-batching the $A$-valued multilinear $\widetilde{\texttt{eq}}(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell + \kappa - 1}), X_0, \ldots , X_{\ell - 1})$. But even this fact becomes relatively explainable if you stare long enough at how the multilinear $A(X_0, \ldots , X_{\ell - 1})$ is constructed.
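
The two-step construction of $A$'s cube values can be played out concretely in miniature. The sketch below takes $\kappa = 1$ and uses GF(4) as a hypothetical stand-in for $\mathcal{T}_\tau$, with its subfield GF(2) standing in for $\mathcal{T}_\iota$ and the basis $\{1, x\}$ playing the role of $(\beta_u)_u$; the field choices, point values, and names are ours, for illustration only. It tensor-expands $\widetilde{\texttt{eq}}$ at $(r_\kappa, \ldots, r_{\ell+\kappa-1})$, decomposes each entry over the subfield basis, and row-combines with the eq-tensor of the batching scalar $r''$.

```python
# GF(4) = GF(2)[x]/(x^2 + x + 1); element a in {0,1,2,3} encodes bits a0 + a1*x.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
def mul(a, b): return MUL[a][b]
def add(a, b): return a ^ b          # addition is XOR in characteristic 2

def eq_tensor(point):
    """Tensor expansion (eq~(point, w))_{w in B_n}, built one variable at a time."""
    out = [1]
    for r in point:
        out = [mul(t, add(1, r)) for t in out] + [mul(t, r) for t in out]
    return out

KAPPA, ELL = 1, 3
r_high = [2, 3, 1]                   # (r_KAPPA, ..., r_{ELL+KAPPA-1}) in GF(4)
r_batch = [3]                        # row-batching scalar r'' in GF(4)

# Step 1: length-2^ELL vector of GF(4) elements.
vec = eq_tensor(r_high)
assert len(vec) == 1 << ELL

# Step 2: decompose each entry over the GF(2)-basis {1, x} of GF(4),
# giving a 2^KAPPA x 2^ELL matrix A[u][w] with entries in GF(2).
A = [[v & 1 for v in vec], [(v >> 1) & 1 for v in vec]]

# Sanity check: the decomposition recombines (beta_0 = 1, beta_1 = x = 2).
assert all(add(A[0][w], mul(A[1][w], 2)) == vec[w] for w in range(len(vec)))

# Step 3: row-combine with the eq-tensor of r'', yielding the values that
# A(X_0, ..., X_{ELL-1}) takes on the cube (a length-2^ELL GF(4) vector).
weights = eq_tensor(r_batch)         # (eq~(r'', u))_{u in B_KAPPA}
A_on_cube = [add(mul(A[0][w], weights[0]), mul(A[1][w], weights[1]))
             for w in range(1 << ELL)]
```

A useful invariant to note: the entries of any eq-tensor sum to $1$ (each factor contributes $r_i + (1 + r_i) = 1$ in characteristic 2), which the test below exploits.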

Efficiency

If you stare at what's written above, you'll see that the amount of work for the verifier is actually very little. In fact, the verifier has only to receive the $A$-element $\hat{s}$, which takes $2^\iota \cdot 2^{2 \cdot \kappa} = 2^\tau \cdot 2^\kappa$ bits to represent (a few kilobytes in the worst case). Moreover, the verifier has to compute two different combinations of $\hat{s}$: a column-combination and a row-combination. Both combination vectors are tensors (i.e., of $(r_0, \ldots , r_{\kappa - 1})$ and $(r''_0, \ldots , r''_{\kappa - 1})$, respectively). Each of these tensors takes $\Theta(2^\kappa)$ work to compute; the combinations themselves each amount to $\Theta(2^\kappa)$ multiplications in the tower field $\mathcal{T}_\tau$.
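
The verifier's work can be sketched end to end, again over a small prime field as a stand-in (the $\mathcal{T}_\iota$-structure of $\hat{s}$'s cells is elided, so $\hat{s}$ appears as a plain $2^\kappa \times 2^\kappa$ array of field elements): each tensor expansion costs $\Theta(2^\kappa)$ multiplications, built one coordinate at a time, and each combination is a small matrix-vector product.

```python
import random

P = 2**31 - 1   # prime field stand-in for T_tau

def eq_tensor(point):
    """(eq~(point, v))_{v in B_k}, built with Theta(2^k) multiplications."""
    out = [1]
    for r in point:
        out = [t * (1 - r) % P for t in out] + [t * r % P for t in out]
    return out

random.seed(2)
KAPPA = 3
s_hat = [[random.randrange(P) for _ in range(1 << KAPPA)]
         for _ in range(1 << KAPPA)]            # s-hat as a 2^k x 2^k array
r_low = [random.randrange(P) for _ in range(KAPPA)]    # (r_0, ..., r_{k-1})
r_batch = [random.randrange(P) for _ in range(KAPPA)]  # (r''_0, ..., r''_{k-1})

# Column combination: combine the columns of s-hat with eq(r_low, v),
# componentwise; this is what the verifier compares against s.
col_weights = eq_tensor(r_low)
col_comb = [sum(s_hat[u][v] * col_weights[v] for v in range(1 << KAPPA)) % P
            for u in range(1 << KAPPA)]

# Row combination: combine the rows of s-hat with eq(r'', u), componentwise;
# this is the s_0 side of the output sum claim.
row_weights = eq_tensor(r_batch)
row_comb = [sum(s_hat[u][v] * row_weights[u] for u in range(1 << KAPPA)) % P
            for v in range(1 << KAPPA)]
```

Both loops touch each of the $2^{2\kappa}$ cells once, matching the $\Theta(2^\kappa)$-multiplications-per-combination count when the cells are read as $2^\kappa$ tower-field elements.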

Batching

We've already discussed batching in the context of our commitment scheme. Ring-switching has its own batching story, which is quite complicated. As a conceptual starting point, we can imagine a non-batched version, in which the parties carry out the above procedure independently for each refined evaluation claim at issue (the prover sends all resulting tensor algebra elements to the verifier). Upon processing each of these tensor algebra elements independently, the verifier would then output many sum claims (all pertaining to huge-field polynomials of different lengths).

Even in this non-batched version, our next step will certainly be batched. That is, provided that the underlying polynomials $t_0, \ldots , t_{n - 1}$ were committed to using our batched commitment procedure, we will have a way to process all of the resulting sum claims in one go.

With this said, in practice we want to carry out a further sort of batching at the ring-switching level, whereby fewer tensor-algebra elements have to get sent. We have a batching scheme which works in the following rough way. We fix a large number $n$ (say, thousands) of claims. Each consists of an $\ell_j$-variate polynomial $t_j(X_0, \ldots , X_{\ell_j - 1})$ over $\mathcal{T}_\tau$, a refinement factor $\kappa_j$, and an evaluation claim $t'_j(r_{j, 0}, \ldots , r_{j, \ell_j + \kappa_j - 1}) \stackrel{?}= s_j$. We first coalesce these claims into "groups", by collecting into a single group all of those claims which share a common refinement factor $\kappa_j$ and a common prefix $(r_{j, 0}, \ldots , r_{j, \kappa_j - 1})$. That is, we group together evaluation claims for which the refinement factor $\kappa$ is the same and the first $\kappa$ components of the evaluation point are the same. We write $m$ for the resulting number of groups (i.e., the number of distinct refinement-factor–prefix pairs exhibited by the claims list). Our batching scheme lets us send the verifier just $m$ tensor-algebra elements, though it still entails sending $n$ field elements. This ends up being efficient enough for our purposes.
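
The grouping step itself is straightforward to sketch. In the snippet below, the claim tuples and their field elements (modeled as small integers) are hypothetical, invented purely to illustrate the refinement-factor-and-prefix key:

```python
from collections import defaultdict

# Each claim: (num_vars ell_j, refinement factor kappa_j, eval point r_j, value s_j),
# with len(r_j) == ell_j + kappa_j. The concrete values below are made up.
claims = [
    (3, 2, (5, 7, 1, 2, 9), 11),
    (4, 2, (5, 7, 3, 3, 4, 8), 12),   # same kappa and prefix (5, 7): same group
    (3, 1, (5, 7, 1, 2), 13),         # kappa differs: its own group
    (3, 2, (6, 7, 0, 0, 0), 14),      # prefix differs: its own group
]

# Group by the (kappa_j, prefix) pair.
groups = defaultdict(list)
for ell, kappa, r, s in claims:
    assert len(r) == ell + kappa
    groups[(kappa, r[:kappa])].append((ell, r, s))

m = len(groups)   # number of tensor-algebra elements the prover must send
n = len(claims)   # number of field elements sent alongside
```

With these made-up claims, the four claims fall into three groups, so the prover would send three tensor-algebra elements and four field elements.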