Evaluating Piecewise Multilinears
Here, we drop another bit of interesting mathematical content, to prepare the way for our batched commitment scheme. The idea again has to do with piecewise multilinears. We already saw that our batched commitment scheme starts with individual multilinear polynomials $t_0, \ldots, t_{m-1}$ over the huge field $\mathcal{T}$, generally allowed to be of different sizes. To commit to those polynomials, that procedure concatenates the evaluation tables of all of the given polynomials into a huge list, and reinterprets that list as a large multilinear polynomial (possibly after zero-padding the master list to a power-of-two length).
As we also mentioned, that procedure can also be viewed in a geometric way, using the idea of a piecewise function. That is, it takes its input multilinears $t_0, \ldots, t_{m-1}$, with numbers of variables $\ell_0, \ldots, \ell_{m-1}$, say. Each of these individual multilinears defines "a $\mathcal{B}_{\ell_i}$'s worth" of data. The master multilinear $t$ that comes out of our commitment procedure is essentially a piecewise multilinear on the huge cube $\mathcal{B}_\ell$ (here, $\ell$ is minimal with $2^\ell \geq \sum_{i = 0}^{m - 1} 2^{\ell_i}$), which takes the values that the individual polynomials $t_i$ respectively take on various disjoint subcubes $\mathcal{B}_{\ell_i} \subseteq \mathcal{B}_\ell$.
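To make the packing step concrete, here is a minimal Python sketch, with purely illustrative names of our own (`pack`, `tables`); it assumes the tables arrive sorted by descending size, an ordering we will rely on below.

```python
# A minimal sketch of the packing step: concatenate the evaluation tables
# of t_0, ..., t_{m-1} (sorted by descending size), then zero-pad the
# master list out to the next power-of-two length.

def pack(tables):
    """tables[i] is t_i's value table on B_{ell_i}, of length 2**ell_i."""
    assert all(len(a) >= len(b) for a, b in zip(tables, tables[1:]))
    master = [v for table in tables for v in table]   # concatenate everything
    ell = max(len(master) - 1, 0).bit_length()        # smallest ell with 2**ell >= total
    return master + [0] * (2**ell - len(master))      # zero-pad to length 2**ell
```

Because the sizes are weakly decreasing powers of two, each $t_i$'s table automatically lands on a subcube of $\mathcal{B}_\ell$ aligned to its own size.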
Here, we set out with a basic question: let's say that we fix some evaluation point $(r_0, \ldots, r_{\ell - 1}) \in \mathcal{T}^\ell$. How do the individual evaluations

$$t_i(r_0, \ldots, r_{\ell_i - 1}), \qquad i \in \{0, \ldots, m - 1\},$$

of the "facets", on appropriate prefixes of the point $(r_0, \ldots, r_{\ell - 1})$, relate to the evaluation of the whole multilinear $t$
at $(r_0, \ldots, r_{\ell - 1})$?
The Idea
The idea here isn't especially mysterious mathematically, but it involves some interesting algorithmic aspects. We imagine the components $r_0, r_1, \ldots$ coming to us one by one. The idea is to work "starting at the small end". As of the end of the $j$th round, some among the individual polynomials might become "freshly ready"—namely, those for which $\ell_i = j + 1$. We recall that these things will be "placed" on $(j + 1)$-dimensional subcubes of $\mathcal{B}_\ell$.
Upon receiving $r_j$, our goal will be to "bootstrap" all outstanding subcubes of $j$ dimensions into $(j + 1)$-dimensional subcubes. We do this by taking linear combinations with $1 - r_j$ and $r_j$.
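Concretely, suppose that two claims $s_{\mathrm{lo}}$ and $s_{\mathrm{hi}}$ record the values, at the prefix $(r_0, \ldots, r_{j - 1})$, of the piecewise function's restrictions to two adjacent $j$-dimensional subcubes. Then the value of their common $(j + 1)$-dimensional parent cube at the prefix $(r_0, \ldots, r_j)$ is

$$s' := (1 - r_j) \cdot s_{\mathrm{lo}} + r_j \cdot s_{\mathrm{hi}},$$

the multilinear interpolation of the two halves in the new variable; when the upper half is zero padding, this degenerates to $s' = (1 - r_j) \cdot s_{\mathrm{lo}}$.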
In general, as of the end of round $j$, we will have some mix of freshly available, $(j + 1)$-dimensional subcubes, as well as possibly $j$-dimensional subcubes left over from previous rounds. Our goal will be to "coalesce" or "smooth" everything $j$-dimensional into $(j + 1)$-dimensional things, by doing a pass over the trailing end of the data.
We will keep doing passes like this, as $j$ grows from $0$ to $\ell - 1$.
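As a stepping stone toward the algorithm below, here is one such smoothing pass in isolation, as a Python sketch on a plain array (the names and the stand-in modulus are our own; the full protocol below threads a linked list through the claims instead):

```python
# One smoothing pass (round j), sketched on a plain list: fold a trailing
# run of j-dimensional claim values pairwise into (j + 1)-dimensional ones,
# pairing a dangling final claim with the implicit zero padding.

def smooth_pass(claims, r_j, modulus):
    """claims: the trailing j-dimensional values, in cube order."""
    out = []
    for k in range(0, len(claims), 2):
        lo = claims[k]
        hi = claims[k + 1] if k + 1 < len(claims) else 0  # zero padding
        out.append(((1 - r_j) * lo + r_j * hi) % modulus)
    return out
```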
The Algorithm
We sketch the algorithm here. We assume as usual that $\ell_0 \geq \ell_1 \geq \cdots \geq \ell_{m - 1}$. We imagine that we start with a list of "claim" objects $s_0, \ldots, s_{m - 1}$. From the verifier's viewpoint, each $s_i$ will eventually contain the evaluation of $t_i(r_0, \ldots, r_{\ell_i - 1})$—that is, as of the end of the $(\ell_i - 1)$st round, these things will become ready. In each round $j$, our job will be to receive any such objects—i.e., those for which $\ell_i - 1 = j$, or in other words $\ell_i = j + 1$. Moreover, we will "lift" everything which is not yet $(j + 1)$-dimensional to the $(j + 1)$-dimensional level, by smoothing trailing subcubes.
To keep track of all the relevant information, we bake a linked-list structure into the claims $s_0, \ldots, s_{m - 1}$. To begin with, we initialize $\operatorname{next}(i) := i + 1$, for each $i \in \{0, \ldots, m - 1\}$. (Claims with $\ell_i = 0$ pertain to constant polynomials; we may imagine the prover sending those particular values at the outset.)
Here are the steps; a code sketch of the whole loop follows the list.
- For each round $j \in \{0, \ldots, \ell - 1\}$:
    - The verifier samples $r_j \leftarrow \mathcal{T}$ and sends it to the prover.
    - For each claim index $i$ for which $\ell_i = j + 1$ on the nose, the prover sends a claimed evaluation $s_i := t_i(r_0, \ldots, r_j)$ to the verifier.
    - For the same set of $i$ (i.e., those for which $\ell_i = j + 1$), the verifier marks the claims $s_i$ as received, recording the prover's claimed values.
    - The verifier now writes $i_0$ for the minimum among those $i$ for which $\ell_i \leq j$ holds (if no such $i$ exists, the verifier skips the loop below). The verifier initializes $i := i_0$, and then runs the following loop. While $i \neq m$:
        - The verifier sets $i^+ := \operatorname{next}(i)$.
        - If the next element $i^+$ exists (i.e., if $i^+ \neq m$), then the verifier sets $s' := (1 - r_j) \cdot s_i + r_j \cdot s_{i^+}$, and moreover cuts $i^+$ out of the list, by overwriting $\operatorname{next}(i) := \operatorname{next}(i^+)$. Otherwise, the verifier sets $s' := (1 - r_j) \cdot s_i$ (folding the dangling claim with the zero padding).
        - In any case, the verifier now overwrites $s_i := s'$, and also updates $i := \operatorname{next}(i)$.
- At this point, the verifier outputs the final value $s_0$.
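The following self-contained Python sketch implements the verifier's loop above; a large prime field stands in for the huge field $\mathcal{T}$, and the names (`coalesce`, `facets`, and so on) are hypothetical choices of our own rather than anything prescribed by the scheme.

```python
# Verifier-side coalescing, following the steps above.  We model the huge
# field T by the integers modulo a large prime P, and we use the index m
# itself as the end-of-list sentinel for the next(i) pointers.

P = 2**61 - 1  # a large prime, standing in for the huge field's size

def coalesce(ells, facets, rs):
    """Fold the facet claims into one claim about the master multilinear.

    ells   -- variable counts of t_0, ..., t_{m-1}, sorted descending;
    facets -- facets[i] is the claimed value t_i(r_0, ..., r_{ells[i] - 1});
    rs     -- the evaluation point (r_0, ..., r_{ell - 1}).
    """
    m = len(ells)
    s = [None] * m                     # the claim objects s_i
    nxt = list(range(1, m + 1))        # next(i) pointers; m ends the list
    for i in range(m):
        if ells[i] == 0:               # constant claims arrive at the outset
            s[i] = facets[i]
    for j, r in enumerate(rs):         # round j: the verifier samples r_j
        for i in range(m):
            if ells[i] == j + 1:       # freshly ready claims come in
                s[i] = facets[i]
        ready = [i for i in range(m) if ells[i] <= j]
        if not ready:                  # nothing to lift yet
            continue
        i = min(ready)                 # i_0: head of the trailing j-dimensional run
        while i != m:
            ip = nxt[i]                # i^+, the next claim in the list
            if ip != m:                # merge two adjacent j-cubes...
                s[i] = ((1 - r) * s[i] + r * s[ip]) % P
                nxt[i] = nxt[ip]       # ...and cut i^+ out of the list
            else:                      # dangling claim: fold with the padding
                s[i] = ((1 - r) * s[i]) % P
            i = nxt[i]
    return s[0]                        # the claimed t(r_0, ..., r_{ell - 1})
```

Note that the pass in round $j$ only ever touches claims with $\ell_i \leq j$: the freshly received, $(j + 1)$-dimensional claims first get folded in round $j + 1$, using randomness the prover hadn't yet seen when it sent them.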
Security Claim
Now, during the course of the above algorithm, the prover either did or didn't supply the correct evaluation claims $s_i$ to the verifier. Our completeness claim is that if these claims were all valid, then the verifier's final output will be equal to nothing other than $t(r_0, \ldots, r_{\ell - 1})$, where, again, $t$ is defined in the piecewise way we already discussed. Conversely, if even a single one of these claims was wrong, then the verifier's final output will not be equal to $t(r_0, \ldots, r_{\ell - 1})$, with high probability over the verifier's choice of the point $(r_0, \ldots, r_{\ell - 1})$. In fact, it can be shown that this soundness error is at most $\frac{\ell}{|\mathcal{T}|}$.
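As a quick sanity check of the completeness direction, the snippet below (reusing `pack` and `coalesce` from the sketches above, with sizes and seed chosen arbitrarily) verifies that honest facet evaluations coalesce to exactly the direct evaluation of the packed master multilinear.

```python
import random

def mle_eval(table, rs, modulus=P):
    """Evaluate the multilinear extension of `table` at rs (X_0 is the low bit)."""
    vals = list(table)
    for r in rs:
        vals = [((1 - r) * vals[k] + r * vals[k + 1]) % modulus
                for k in range(0, len(vals), 2)]
    return vals[0]

random.seed(0)
ells = [3, 2, 2, 0]                 # descending sizes, as assumed throughout
tables = [[random.randrange(P) for _ in range(2**l)] for l in ells]
master = pack(tables)               # 8 + 4 + 4 + 1 = 17 values, padded to 32
rs = [random.randrange(P) for _ in range(len(master).bit_length() - 1)]

facets = [mle_eval(table, rs[:l]) for table, l in zip(tables, ells)]
assert coalesce(ells, facets, rs) == mle_eval(master, rs)
```

Corrupting any single entry of `facets` should then make the assertion fail for all but a negligible fraction of choices of `rs`, in line with the soundness bound above.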