The Procedure
Here, without further ado, we describe the steps of ring-switching. We have an -variate polynomial fixed and committed to; we are going to give a protocol to evaluate its -fold refinement ; specifically, to process an evaluation claim of the form .
Our goal will be to reduce that initial evaluation claim to a sum claim on , i.e., one of the form , where is some further, transparent multilinear.
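To fix ideas, here is one way to write the shapes of these two claims. The symbols are chosen just for this illustration and are not necessarily the ones used elsewhere: $P$ is the committed, $\ell$-variate packed multilinear over the large field; $t$ is its $\kappa$-fold refinement over the small field; $s$ and $s_0$ are the claimed values; and $A$ is the transparent multilinear produced by the reduction.

$$
\underbrace{t(r_0,\dots,r_{\ell+\kappa-1}) \;=\; s}_{\text{input: evaluation claim on the refinement}}
\qquad\longrightarrow\qquad
\underbrace{\sum_{v\,\in\,\{0,1\}^{\ell}} P(v)\cdot A(v) \;=\; s_0}_{\text{output: sum claim on the committed polynomial}}
$$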
The Steps
Here is the algorithm.
-
By embedding the original (packed) multilinear componentwise horizontally into , we can express it as an -variate multilinear over , let's say. The prover computes and sends to the verifier
- The element is an algebra element, since the entire evaluation above takes place in the algebra. Thus, concretely, looks like a array of -elements.
- It is a bit tricky, but a very important and doable exercise, to study concretely what looks like. In fact, its columns will be exactly the partial evaluations . Roughly, this is because the vertically embedded elements will act "slice-wise" on the cells of the coefficients , for (recall that these coefficients are horizontally embedded). We prove this fact rigorously in our paper (see [DP24, Thm. 4.2]).
-
Writing for its columns, the verifier checks:
- By the discussion just above, we can immediately see why this check is complete. Indeed, taking on faith for now that will hold for each if the prover is honest, we thus see that the right-hand side above will equal ; this quantity in turn equals precisely when the prover's initial evaluation claim is true.
-
The verifier samples row-batching scalars and ships these off to the prover. Meanwhile, locally, it decomposes into rows and computes the row-combination:
In parallel, the verifier conjures into its head an odd sort of multilinear , defined in the following way. For each , we have a basis-decomposition:
I.e., whatever the left-hand -element above happens to be, surely it has a coordinate decomposition with respect to the usual -basis of ; that coordinate decomposition is what appears on the right. Finally, the verifier defines to be the multilinear extension of the function that sends:
- If we work extremely concretely, we can see clearly what is. Essentially, there are a few steps. First, we take the tensor-expansion . This looks like a long, length- vector of -elements; we can imagine this thing going horizontally. By decomposing each component of this vector along the standard -basis of , we can instead express this thing as a matrix with entries in . By row-combining this matrix using the length- tensor expansion , we finally get one last long, horizontal vector of -elements, again of length . This vector is exactly the vector of values that takes on the cube. (A toy numerical version of this computation appears in the sketch just after this list.)
- This might sound crazy, but the verifier isn't doing any of these computations. Indeed, that wouldn't be succinct at all! Rather, the verifier is simply defining this polynomial in its head right now, not evaluating it anywhere yet (let alone on the entire cube).
- Indeed, as it turns out, the verifier will ultimately evaluate at exactly one point—that point will show up during the processing of the sum claim we're about to output. Details are here.
-
We're done! The verifier now outputs the sum claim
- Above, we agreed that we wanted to output a sum claim of the above form for which is transparent. It is a far-from-obvious fact, which we prove in our paper [DP24], that is indeed transparent: the verifier can evaluate it at any point by doing just -multiplications.
- This latter fact is where the tensor algebra really shines. Indeed, the verifier's succinct evaluation procedure for will involve the tensor algebra's multiplicative structure in a crucial way. In fact, it appears essential: we don't know how to write down a succinct evaluation procedure for that doesn't use the tensor algebra!
- For now, we defer the proof that is transparent, referring to our paper.
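To make the data flow of these steps concrete, here is a minimal, self-contained Python sketch of the reduction on a toy instance. All notation and parameters are of our own choosing for illustration only: the small field is $\mathbb{F}_2$, the large field is $\mathbb{F}_{16}$ (so $\kappa = 2$, and tensor-algebra elements are $4 \times 4$ bit matrices), and the packed polynomial has $\ell = 2$ variables. The sketch numerically checks the column property noted above, the verifier's column-combination check, and the final sum claim obtained after row-batching; it omits the commitment and the subsequent sumcheck entirely.

```python
# Toy ring-switching data flow. Everything here (field sizes, names, layout conventions)
# is an illustrative choice, not the scheme's actual parameters:
#   small field K = GF(2);  large field L = GF(16) = GF(2)[x]/(x^4 + x + 1);
#   kappa = 2 (so [L:K] = 4);  the packed polynomial P has ell = 2 variables.
import random

KAPPA, ELL = 2, 2
DIM = 1 << KAPPA                                   # [L : K] = 4

def mul(a, b):
    """Multiplication in GF(16); elements are encoded as 4-bit integers."""
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):                      # reduce modulo x^4 + x + 1
        if (r >> i) & 1:
            r ^= 0b10011 << (i - 4)
    return r

def eq_tensor(rs):
    """Tensor expansion of eq: entry v is prod_i (r_i if v_i else 1 - r_i); in char 2, 1 - r = 1 ^ r."""
    out = [1]
    for r in rs:
        out = [mul(o, 1 ^ r) for o in out] + [mul(o, r) for o in out]
    return out

def mle_eval(vals, rs):
    """Evaluate the multilinear extension of vals (listed over the Boolean cube) at rs."""
    acc = 0
    for v, e in zip(vals, eq_tensor(rs)):
        acc ^= mul(v, e)
    return acc

random.seed(0)
# The refinement t: an (ell + kappa)-variate multilinear over K, given by its 16 cube values.
t_vals = [random.randint(0, 1) for _ in range(1 << (ELL + KAPPA))]
# Packing: P(w) gathers the 2^kappa values t(v, w) along the K-basis (1, x, x^2, x^3) of L.
P_vals = [0] * (1 << ELL)
for w in range(1 << ELL):
    for v in range(DIM):
        P_vals[w] ^= t_vals[v + (w << KAPPA)] << v

# The input claim: t(r) = s, at a random point r with large-field coordinates.
r = [random.randint(0, 15) for _ in range(ELL + KAPPA)]
r_low, r_high = r[:KAPPA], r[KAPPA:]
s = mle_eval(t_vals, r)

# Prover: s_hat = sum_w P(w) (x) eq(r_high, w), an element of L (x)_K L, stored as a
# DIM x DIM matrix over K; column j holds the K-coordinates of its j-th slice.
e_high = eq_tensor(r_high)
s_hat = [[0] * DIM for _ in range(DIM)]
for w in range(1 << ELL):
    for j in range(DIM):
        if (P_vals[w] >> j) & 1:                   # j-th K-coordinate of P(w)
            for i in range(DIM):
                s_hat[i][j] ^= (e_high[w] >> i) & 1

def column(j):                                     # column j of s_hat, read as an L-element
    return sum(s_hat[i][j] << i for i in range(DIM))

def row(u):                                        # row u of s_hat, read as an L-element
    return sum(s_hat[u][j] << j for j in range(DIM))

# Column property: column v of the honest s_hat is the partial evaluation t(v, r_high).
for v in range(DIM):
    slice_v = [t_vals[v + (w << KAPPA)] for w in range(1 << ELL)]
    assert column(v) == mle_eval(slice_v, r_high)

# Verifier's check: column-combination by eq(r_low, .) must recover the claimed value s.
lhs = 0
for v, e in enumerate(eq_tensor(r_low)):
    lhs ^= mul(e, column(v))
assert lhs == s

# Row-batching: sample batching scalars, fold the rows of s_hat, and define A on the cube.
r_batch = [random.randint(0, 15) for _ in range(KAPPA)]
e_batch = eq_tensor(r_batch)
s0 = 0
for u in range(DIM):
    s0 ^= mul(e_batch[u], row(u))

# A(w) = sum_u eq(r_batch, u) * (u-th K-coordinate of eq(r_high, w)).
# (In the real protocol the verifier never tabulates A on the cube; we do so here only
#  to check the output claim numerically.)
A_vals = [0] * (1 << ELL)
for w in range(1 << ELL):
    for u in range(DIM):
        if (e_high[w] >> u) & 1:
            A_vals[w] ^= e_batch[u]

# The output sum claim of ring-switching: the sum over the cube of P(w) * A(w) equals s0.
acc = 0
for w in range(1 << ELL):
    acc ^= mul(P_vals[w], A_vals[w])
assert acc == s0
print("toy ring-switching checks passed")
```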
Some Intuitions and Remarks
In a rigorous treatment, there are a few things we would have to establish even to show that this thing is complete, let alone secure. And for the sum claim we just output to make sense, should of course be transparent, a fact which we're again going to punt on.
For now, though, we address a key question: why is the sum claim even true (assuming, that is, that the prover is honest)?
An algebra-valued sumcheck
Here is the rough idea. Back in its very first message, the prover was supposed to compute . By multilinearly expanding this thing's right-hand side, we get an identity of -elements, which should hold if the prover is honest:
Intuitively, what we would like to do is sumcheck this thing. The sumcheck would be over -valued multilinears. Our challenges would come from the usual field ; before using these, though, we will implicitly embed them horizontally (i.e., we will really use ). Thus, it's an odd kind of sumcheck, where the polynomials are over , but the challenges come from the subring . It turns out that this sumcheck variant is secure.
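In the improvised notation of the sketch above, and writing $\varphi_0$ and $\varphi_1$ for the horizontal and vertical embeddings of the large field into the tensor algebra (again, symbols of our own choosing), the identity in question reads roughly as follows: the honestly computed element decomposes over the cube of the committed polynomial, and that decomposition is exactly the sum one would like to sumcheck.

$$
\hat{s}
\;=\;
\sum_{w \,\in\, \{0,1\}^{\ell}}
\varphi_0\big(P(w)\big)\cdot
\varphi_1\Big(\widetilde{\mathrm{eq}}\big(r_\kappa,\dots,r_{\ell+\kappa-1},\,w\big)\Big).
$$

Each summand here is a product of two algebra-valued multilinears in $w$; the sumcheck challenges are large-field elements, used only after being embedded horizontally via $\varphi_0$.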
Reducing to an evaluation
What's the point? If we can do this sumcheck, we're in great shape, since, by its end, we will have reduced the sum claim to an evaluation claim at the challenge point : namely, to learning
The right-hand factor above is succinct: the verifier can compute it by doing operations in (in fact, -operations are enough, as we argue in our paper [DP24, Rem. 4.4]). The left-hand factor, on the other hand, is . But this is just the same as learning ! We thus successfully reduced to an evaluation of , which is what we wanted.
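Still in the same improvised notation, the product being discussed here, evaluated at the horizontally embedded sumcheck challenges $r'_0, \dots, r'_{\ell-1}$, looks roughly like:

$$
\underbrace{\varphi_0(P)\big(\varphi_0(r'_0),\dots,\varphi_0(r'_{\ell-1})\big)}_{=\;\varphi_0\left(P(r'_0,\dots,r'_{\ell-1})\right)}
\;\cdot\;
\underbrace{\widetilde{g}\big(\varphi_0(r'_0),\dots,\varphi_0(r'_{\ell-1})\big)}_{\text{verifier-computable}},
\qquad
g : w \mapsto \varphi_1\Big(\widetilde{\mathrm{eq}}\big(r_\kappa,\dots,r_{\ell+\kappa-1},\,w\big)\Big),
$$

where $\widetilde{g}$ denotes the multilinear extension of $g$ over the algebra. The right-hand factor is the succinctly computable one; the left-hand factor pins down the value $P(r'_0,\dots,r'_{\ell-1})$, since $\varphi_0$ is an injective ring map.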
Row-combination
Now, purely for efficiency reasons, we can get some gains by batching the rows of the above sumcheck, and running it over , as opposed to over itself. It turns out that this batching process is secure, for standard reasons. So, the sum claim we actually output in the ring-switching reduction above is nothing other than the one we obtain from the -valued sum claim above upon batching that claim's rows (i.e., using the batching scalars ). As for the left-hand side, we saw already that , so that's not hard. The trickier claim is that the multilinear we defined above comes from row-batching the -valued multilinear . But even this fact becomes fairly clear if you stare long enough at how the multilinear is constructed; we sketch one way to see it below.
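For what it's worth, here is one short way to see that claim, again in our improvised notation. Write $a_{w,u}$ for the $u$-th small-field coordinate of $\widetilde{\mathrm{eq}}(r_\kappa,\dots,r_{\ell+\kappa-1},w)$, and $\hat{s}^{(u)}$ for the $u$-th row of the honest $\hat{s}$, read as a large-field element; the identity for $\hat{s}$ above gives $\hat{s}^{(u)} = \sum_{w} P(w)\, a_{w,u}$, and so:

$$
\sum_{u \,\in\, \{0,1\}^{\kappa}} \widetilde{\mathrm{eq}}(r''_0,\dots,r''_{\kappa-1},\,u)\cdot \hat{s}^{(u)}
\;=\;
\sum_{w \,\in\, \{0,1\}^{\ell}} P(w)\cdot
\underbrace{\sum_{u \,\in\, \{0,1\}^{\kappa}} \widetilde{\mathrm{eq}}(r''_0,\dots,r''_{\kappa-1},\,u)\, a_{w,u}}_{\text{the value of the batched multilinear at } w}.
$$

The inner sum is exactly the cube value assigned to $w$ in the verifier's definition above, so row-batching the algebra-valued claim lands precisely on the output sum claim.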
Efficiency
If you stare at what's written above, you'll see that the amount of work for the verifier is actually very little. In fact, the verifier has only to receive the -element , which takes bits to represent (a few kilobytes in the worst case). Moreover, the verifier has to compute two different combinations of : a column-combination and a row-combination. Both combination vectors are tensors (i.e., of and , respectively). Each of these tensors takes work to compute; the combinations themselves each amount to multiplications in the tower field .
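As a sanity check on these counts, here is a tiny field-agnostic sketch; ordinary Python integers stand in for tower-field elements, since only the number of multiplications matters, and the parameter choice (a refinement factor of $7$, i.e., a 128-bit field over the 1-bit field) is ours. The tensor expansion is computed by repeated doubling in about twice its length's worth of multiplications, and each combination of the received algebra element then costs one multiplication per column (or row).

```python
# Field-agnostic operation count; plain Python ints stand in for tower-field elements,
# since only the number of multiplications matters. Parameters are illustrative.
MULS = 0

def fmul(a, b):
    global MULS
    MULS += 1
    return a * b

def eq_tensor(scalars):
    """Tensor expansion prod_i (1 - r_i, r_i), built by repeated doubling."""
    out = [1]
    for r in scalars:
        out = [fmul(o, 1 - r) for o in out] + [fmul(o, r) for o in out]
    return out

KAPPA = 7                                     # e.g. a 128-bit field over the 1-bit field
weights = eq_tensor(list(range(2, 2 + KAPPA)))
print("tensor expansion:", MULS, "multiplications, length", len(weights))   # ~2 * 2^kappa

columns = list(range(1, 1 + (1 << KAPPA)))    # stand-ins for the 2^kappa columns of s_hat
MULS = 0
combination = sum(fmul(w, c) for w, c in zip(weights, columns))
print("one combination:", MULS, "multiplications")                          # 2^kappa
```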
Batching
We've already discussed batching in the context of our commitment scheme. Ring-switching has its own batching story, which is quite complicated. As a conceptual starting point, we can imagine a non-batched version, in which the parties carry out the above procedure independently for each refined evaluation claim at issue (the prover sends all resulting tensor algebra elements to the verifier). Upon processing each of these tensor algebra elements independently, the verifier would then output many sum claims (all pertaining to huge-field polynomials of different lengths).
Even in this non-batched version, our next step will certainly be batched. That is, provided that the underlying polynomials were committed to using our batched commitment procedure, we will have a way to process all of the resulting sum claims in one go.
With this said, in practice we want to carry out a further sort of batching at the ring-switching level, whereby fewer tensor-algebra elements have to get sent. We have a batching scheme which works in the following rough way. We fix a huge number of claims (say, thousands). Each consists of an -variate polynomial over , a refinement factor , and an evaluation claim . We first coalesce these claims into "groups", by collecting into a single group all of those claims which share a common refinement factor and a common prefix . That is, we group together evaluation claims for which the refinement factor is the same and the first components of the evaluation point are the same (this grouping step is sketched below). We write for the resulting number of groups (i.e., the number of distinct refinement-factor–prefix pairs exhibited by the claims list). Our batching scheme lets us send to the verifier just tensor-algebra elements, though it entails sending field elements. This ends up being efficient enough for our purposes.
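As a rough illustration of just that grouping step, here is a small sketch; the claim record layout, field names, and prefix length are placeholders of our own, not actual interfaces, and everything downstream of the grouping is omitted.

```python
from collections import defaultdict

def group_claims(claims, prefix_len):
    """Coalesce hypothetical claim records (num_vars, refinement_factor, eval_point, value)
    by the pair (refinement factor, shared prefix of the evaluation point)."""
    groups = defaultdict(list)
    for num_vars, kappa, point, value in claims:
        groups[(kappa, tuple(point[:prefix_len]))].append((num_vars, kappa, point, value))
    return groups

# The number of groups governs how many tensor-algebra elements end up being sent.
claims = [
    (10, 3, (7, 7, 7, 1, 2), 5),        # same refinement factor and prefix as the next claim
    (12, 3, (7, 7, 7, 9, 9, 4), 8),
    (10, 2, (7, 7, 7, 1, 2), 3),        # different refinement factor: its own group
]
print(len(group_claims(claims, prefix_len=3)))   # -> 2
```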