Informal Explanation

An informal sketch of our MPT M3 instance

Here, we sketch our arithmetization construction in rough terms. We recall the statement and witness at hand. That is, both parties have a state root, together with a list of key–value pairs. The prover moreover has a series of MPT proofs, corresponding to the key–value pairs in the statement.

The Memory Block

We allow the prover to commit to a huge, unstructured block of byte-valued memory. The prover will begin by placing all applicable MPT nodes into this memory—simply by writing them into arbitrary, disjoint places. In general, certain nodes will appear multiple times (i.e., in multiple different MPT proofs). Here, the prover can deduplicate: it need only write each distinct node once, somewhere into memory, no matter how many of the MPT proofs at issue that node appears in. The prover will moreover write all of the keys involved in the statement into further places in this memory block.
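
For concreteness, here is a minimal Rust sketch of this layout step (hypothetical names and types, not the actual prover code): nodes are appended to a flat byte block, and a lookup table ensures that a node appearing in several MPT proofs is written only once.

```rust
use std::collections::HashMap;

/// A toy model of the prover's flat, byte-valued memory block.
struct MemoryBlock {
    bytes: Vec<u8>,
    /// Maps a node's RLP encoding to the offset at which it was already written.
    node_offsets: HashMap<Vec<u8>, usize>,
}

impl MemoryBlock {
    fn new() -> Self {
        Self { bytes: Vec::new(), node_offsets: HashMap::new() }
    }

    /// Writes an RLP-encoded MPT node into memory, deduplicating: a node that
    /// appears in multiple proofs is stored only once. Returns its byte offset.
    fn write_node(&mut self, node: &[u8]) -> usize {
        if let Some(&offset) = self.node_offsets.get(node) {
            return offset; // already somewhere in memory
        }
        let offset = self.bytes.len();
        self.bytes.extend_from_slice(node);
        self.node_offsets.insert(node.to_vec(), offset);
        offset
    }

    /// Writes one of the statement's keys into a further place in memory.
    fn write_key(&mut self, key: &[u8]) -> usize {
        let offset = self.bytes.len();
        self.bytes.extend_from_slice(key);
        offset
    }
}
```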

At the core of our construction is a datatype called a state. A state is essentially a bundle consisting of an "RLP pointer" and a "key pointer". The RLP pointer is integer-valued; at all times, it will point to some position within the prover's memory block. In fact, it will point at all times to the beginning of some RLP-encoded value (at least if the prover is honest). The key pointer is nibble-valued. This means that it contains not just a byte location in memory, but also a boolean value indicating whether it points to the 0th or 1st nibble within that byte. In other words, it will point to an individual nibble, as opposed to a byte. If the prover is honest, each state's key pointer will point to a nibble within one of the statement's various keys.
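
To fix ideas, the following Rust sketch models the state datatype with hypothetical names (the real construction represents these pointers as committed columns, not native structs); it assumes the 0th nibble of a byte is its high-order nibble.

```rust
/// A nibble-valued pointer: a byte position in memory plus which nibble of that byte.
#[derive(Clone, Copy, PartialEq, Eq)]
struct KeyPtr {
    byte_index: usize,
    /// true = the 0th (assumed high-order) nibble, false = the 1st (low-order) nibble.
    first_nibble: bool,
}

impl KeyPtr {
    /// Reads the nibble this pointer designates out of `memory`.
    fn read_nibble(&self, memory: &[u8]) -> u8 {
        let byte = memory[self.byte_index];
        if self.first_nibble { byte >> 4 } else { byte & 0x0f }
    }

    /// Advances the pointer by one nibble.
    fn advance(&mut self) {
        if self.first_nibble {
            self.first_nibble = false;
        } else {
            self.first_nibble = true;
            self.byte_index += 1;
        }
    }
}

/// The state bundle: an RLP pointer into the memory block plus a key pointer.
#[derive(Clone, Copy, PartialEq, Eq)]
struct State {
    /// Offset of the start of an RLP-encoded value in the memory block.
    rlp_ptr: usize,
    /// Position of the next unconsumed nibble of one of the statement's keys.
    key_ptr: KeyPtr,
}
```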

The Boundary Conditions

Our core channel will be the state channel. At the very beginning, the verifier will take the various key–value pairs to be proven, and use them to kick off some boundary conditions. For the purposes of this exposition, we in fact describe a simplified version of our construction—developed further in the next page—in which forking is absent. In this subsection's final page, we relax this simplification.

Thus, for now, we have the following boundary conditions. For each key–value pair in the statement, the verifier will:

  • Push the master state consisting of a pointer to the state root hash and a pointer to the beginning of the appropriate key.
  • Pull the terminal state consisting of a pointer to the appropriate value and a pointer to the end of the appropriate key.

The verifier can check that the pointers above indeed point where they should, by manually reading tiny subsets of the prover's memory block.
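
As a rough illustration of these checks, the sketch below (hypothetical layout and names; the nibble half of the key pointer is omitted) has the verifier read small slices of the committed memory block and compare them against the statement:

```rust
/// Checks that a slice of memory starting at `ptr` equals `expected`.
fn points_at(memory: &[u8], ptr: usize, expected: &[u8]) -> bool {
    memory.get(ptr..ptr + expected.len()) == Some(expected)
}

/// Hypothetical boundary check for one key-value pair of the statement.
fn check_boundary_pointers(
    memory: &[u8],
    state_root: &[u8], // the 32-byte state root from the statement
    root_ptr: usize,   // the master state's RLP pointer
    key: &[u8],        // one key from the statement
    key_ptr: usize,    // the master state's key pointer (byte offset; nibble bit omitted)
    value_rlp: &[u8],  // the RLP-encoded value claimed for that key
    value_ptr: usize,  // the terminal state's RLP pointer
) -> bool {
    points_at(memory, root_ptr, state_root)        // master state points at the root hash
        && points_at(memory, key_ptr, key)         // ... and at the beginning of the key
        && points_at(memory, value_ptr, value_rlp) // terminal state points at the value
}
```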

Balancing the State Channel

The prover's job will be to somehow establish a path between each pair of states above. We will achieve soundness by ensuring that the prover can only progress by making "legal unwinding steps". In each unwinding step, a few things will happen. First, the prover will advance the relevant state's key pointer by one or more nibbles. At the same time, the prover will nondeterministically perform one "unwinding" step, by advancing the state's RLP pointer from the hash of one node to the hash of another node. We will use flushing rules and constraints to ensure that each time the prover "consumes" any nibble of its key, it also takes a correct, corresponding unwinding step. That is, we will allow the prover to pull a state and push a new state, while also introducing flushing rules that ensure the legality of each such step.
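
The following toy sketch, with hypothetical names and the two substeps fused into one, shows why this bookkeeping balances: the boundary conditions push the master state and pull the terminal state, each unwinding step pulls the state it starts from and pushes the state it produces, and the chain cancels telescopically. (The legality of each step is enforced separately, by the flushing rules and constraints discussed below; balance alone does not enforce legality.)

```rust
use std::collections::HashMap;

/// A state, flattened for brevity: (RLP pointer, key pointer as a nibble index).
type State = (usize, usize);

/// Models the state channel as a signed multiset: pushes add, pulls subtract,
/// and the channel balances when every count returns to zero.
fn balances(master: State, terminal: State, steps: &[(State, State)]) -> bool {
    let mut counts: HashMap<State, i64> = HashMap::new();
    // Boundary conditions (verifier side).
    *counts.entry(master).or_insert(0) += 1;   // push the master state
    *counts.entry(terminal).or_insert(0) -= 1; // pull the terminal state
    // Unwinding steps (prover side): each pulls the old state and pushes the new one.
    for &(old, new) in steps {
        *counts.entry(old).or_insert(0) -= 1;
        *counts.entry(new).or_insert(0) += 1;
    }
    counts.values().all(|&c| c == 0)
}

fn main() {
    let master = (0, 0);      // points at the root hash, no key nibbles consumed yet
    let terminal = (400, 64); // points at the value, all 64 key nibbles consumed
    // A chain of intermediate states; each consecutive pair is one unwinding step.
    let chain = [master, (100, 1), (250, 17), terminal];
    let steps: Vec<(State, State)> = chain.windows(2).map(|w| (w[0], w[1])).collect();
    assert!(balances(master, terminal, &steps));
}
```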

Actually, each unwinding step will be performed in two substeps. In the first, called a hash transition, the prover will replace a state that points to a digest with a state that points to that digest's preimage, without advancing its key pointer. By pushing appropriate columns into a hashing channel, we can "create an outstanding liability" that ensures the validity of the prover's hash-unwinding (i.e., that the data residing at its nondeterministically supplied preimage pointer actually hashes to the digest the prover started with).
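
A sketch of this substep, with hypothetical record and function names (the real construction pushes committed columns rather than native values), might look as follows; the pushed record is the outstanding liability that a downstream table must later discharge.

```rust
/// Hypothetical record pushed into the hashing channel by a hash transition:
/// "the `preimage_len` bytes at `preimage_ptr` hash to `digest`".
struct HashClaim {
    preimage_ptr: usize,
    preimage_len: usize,
    digest: [u8; 32],
}

/// One hash transition: the RLP pointer moves from a 32-byte digest to the
/// prover's nondeterministically supplied preimage pointer, the key pointer
/// stays put, and a liability is pushed into the hashing channel.
fn hash_transition(
    memory: &[u8],
    state: (usize, usize), // (rlp_ptr pointing at a digest, key pointer as a nibble index)
    preimage_ptr: usize,   // prover-supplied pointer to the claimed preimage
    preimage_len: usize,   // prover-supplied length of the claimed preimage
    hash_channel: &mut Vec<HashClaim>, // stand-in for the hashing channel
) -> (usize, usize) {
    // The digest the current state points at.
    let mut digest = [0u8; 32];
    digest.copy_from_slice(&memory[state.0..state.0 + 32]);
    // Outstanding liability: a downstream table must check that the bytes at
    // `preimage_ptr` really hash to `digest` before this claim can be cancelled.
    hash_channel.push(HashClaim { preimage_ptr, preimage_len, digest });
    // New state: the RLP pointer now points at the preimage; key pointer unchanged.
    (preimage_ptr, state.1)
}
```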

In the second substep, the prover will perform a node transition. In this step, the prover will advance its key pointer; it will also replace a state whose RLP pointer points to the beginning of some node with a state whose RLP pointer points to a child of that node. In fact, this substep itself has two cases, depending on which kind of node is at issue (it's either a branch node on the one hand, or an extension or a leaf node on the other). In each case, we push various liabilities, which ensure that the transition was made correctly.
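
The two cases can be sketched as follows, over a simplified, already-decoded node type with hypothetical names (the real construction reads the RLP encoding directly out of memory): a branch node consumes one nibble, which selects the child; an extension or leaf node consumes the whole run of nibbles on its path.

```rust
/// A decoded MPT node, simplified for illustration.
enum Node {
    /// Sixteen optional child pointers (RLP or hash pointers into memory).
    Branch { children: [Option<usize>; 16] },
    /// A run of path nibbles followed by a single child (extension) or value (leaf).
    ExtensionOrLeaf { path: Vec<u8>, child: usize },
}

/// One node transition: consume key nibbles and move the RLP pointer to a child
/// of the node. Returns the new (rlp_ptr, key nibble index), or None if illegal.
fn node_transition(node: &Node, key_nibbles: &[u8], key_pos: usize) -> Option<(usize, usize)> {
    match node {
        // Branch: the next key nibble (0..=15) selects which child to descend into.
        Node::Branch { children } => {
            let nibble = *key_nibbles.get(key_pos)? as usize;
            let child = (*children.get(nibble)?)?;
            Some((child, key_pos + 1))
        }
        // Extension/leaf: the node's path must match the next run of key nibbles,
        // all of which are consumed at once.
        Node::ExtensionOrLeaf { path, child } => {
            if key_nibbles.get(key_pos..key_pos + path.len()) == Some(path.as_slice()) {
                Some((*child, key_pos + path.len()))
            } else {
                None
            }
        }
    }
}
```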

We allow the prover to cancel all of the "side liabilities" that result from the above steps by creating further channels and tables. For example, we have a table by whose aid the prover can absorb hash liabilities; that table has a flushing rule that pulls from the hash channel. On the other hand, that table itself creates further liabilities, by pushing into a dedicated Keccak-ff channel. That latter channel, finally, gets pulled from by a Keccak-ff table—itself constrained in such a way as to guarantee that each of its rows represents a valid execution of the Keccak-ff permutation. In short, we use a "network" of progressively simpler channels and tables to reduce claims into simpler and simpler claims.

Use of Lasso

At various points during the above recipe, the verifier will need to "read" values from the prover's memory block. To achieve this, we use a Lasso-based ROM procedure, involving timestamps.

Indeed, we create a separate channel, called the memory channel—essentially the same channel that appears in our Lasso arithmetization. Many of our tables have flushing rules which pull from and push to the memory channel—with a timestamp incremented by 1 in the case of the push. These flushing rules live seamlessly alongside the logically oriented flushing rules described above and are treated just like them. Thus, our Lasso-based lookup naturally integrates into larger M3 instances.
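
As a rough model (hypothetical types; record layout and the boundary handling of the initial and final memory contents are omitted), a single read of the memory block pulls a record at some timestamp and pushes the same record with the timestamp incremented:

```rust
use std::collections::HashMap;

/// A memory-channel record: (address, byte value, timestamp).
type MemRecord = (usize, u8, u64);

/// Toy model of the memory channel as a signed multiset, just like the state channel.
#[derive(Default)]
struct MemoryChannel {
    counts: HashMap<MemRecord, i64>,
}

impl MemoryChannel {
    fn push(&mut self, rec: MemRecord) {
        *self.counts.entry(rec).or_insert(0) += 1;
    }
    fn pull(&mut self, rec: MemRecord) {
        *self.counts.entry(rec).or_insert(0) -= 1;
    }
    fn is_balanced(&self) -> bool {
        self.counts.values().all(|&c| c == 0)
    }
}

/// A read of `addr` claiming byte `value` at prover-supplied timestamp `ts`:
/// pull the old record and push it back with the timestamp incremented by 1.
fn read(channel: &mut MemoryChannel, addr: usize, value: u8, ts: u64) {
    channel.pull((addr, value, ts));
    channel.push((addr, value, ts + 1));
}
```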