
Benchmarks

How Binius measures up in key real-world tasks

Below we record benchmarks for common operations. Each benchmark verifies its operations in a single-table constraint system. All executions are parameterized for 100 bits of security using proven—not conjectured—proximity gap results. We use a Reed–Solomon rate of 1/2.

We report proving times below in the form <TRACE GEN TIME> + <PROVE TIME>: the witness-generation time followed by the SNARK-proving time. We report these times separately because, in many cases, we have not yet parallelized the trace generation.

All performance numbers are collected on an AWS c7i.4xlarge machine, which has an Intel Sapphire Rapids processor with 16 virtual CPU cores and 32 GiB RAM. All binaries are compiled and run in release mode, with the environment variable RUSTFLAGS="-C target-cpu=native".
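As a hypothetical invocation (the benchmark binary name `keccakf` is an illustrative assumption, not taken from this page), a benchmark might be built and run like so:

```shell
# Build and run a benchmark binary in release mode, targeting the
# native CPU. The binary name below is illustrative only.
RUSTFLAGS="-C target-cpu=native" cargo run --release --bin keccakf
```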

Hashing

In each hashing-based benchmark, we run enough operations to hash at least 1 MiB of data.¹

| Operation      | Count            | Prove (s)   | Verify (ms) | Proof Size (KiB) |
|----------------|------------------|-------------|-------------|------------------|
| Keccak-f       | 2¹³ permutations | 0.20 + 2.98 | 221         | 582              |
| Grøstl P       | 2¹⁴ permutations | 0.14 + 0.89 | 1,090       | 1,174            |
| Vision Mark-32 | 2¹⁴ permutations | 0.36 + 3.80 | 77          | 1,020            |
| SHA-256        | 2¹⁴ compressions | 0.17 + 4.26 | 508         | 773              |

Integer Operations

| Operation | Count     | Prove (s)    | Verify (ms) | Proof Size (KiB) |
|-----------|-----------|--------------|-------------|------------------|
| u32 add   | 4,000,000 | 0.03 + 1.58  | 12          | 529              |
| u32 mul²  | 1,000,000 | 1.85 + 11.73 | 41          | 819              |
| u32 xor   | 4,000,000 | 0.02 + 0.46  | 13          | 488              |
| u32 and   | 4,000,000 | 0.02 + 0.94  | 12          | 515              |
| u32 or    | 4,000,000 | 0.02 + 0.96  | 11          | 515              |

Footnotes

  1. The procedure by which we determine the number of operations required to hash 1 MiB of data depends on the structure of the hash function. Keccak and Vision use a sponge construction, meaning that each Keccak-f or Vision permutation absorbs a number of bytes equal to the sponge function's rate. For SHA-3 and Keccak-256, used in Ethereum, the rate is 136 bytes, so one Keccak-f permutation absorbs 136 bytes of input data. SHA-2 and Grøstl use Merkle–Damgård and (closely related) wide-pipe constructions, respectively, built on underlying compression functions. In both hash functions, each compression function processes one 64-byte message block. In the case of Grøstl, we have so far implemented only that hash function's P permutation. To process a 64-byte block of input data, Grøstl must apply P exactly once and Q exactly once. We thus fudge slightly: we act as if we were running both Ps and Qs, even though we are actually running twice as many Ps. We therefore count 64 bytes "hashed" for every two executions of the P permutation. In each Keccak-f benchmark, we run 2¹³ permutations; in each of the others, we run 2¹⁴.
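The sponge and compression-function accounting above can be checked with a few lines of arithmetic (a sketch using only the byte counts stated in the footnote):

```python
import math

MIB = 2**20  # 1 MiB of input data

# Sponge construction: each Keccak-f permutation absorbs
# rate = 136 bytes of input.
keccak_perms = math.ceil(MIB / 136)
assert keccak_perms == 7711          # fits within the 2**13 = 8,192 run
assert keccak_perms <= 2**13

# Merkle-Damgard: one SHA-256 compression per 64-byte block.
sha256_compressions = MIB // 64
assert sha256_compressions == 2**14  # exactly the 16,384 compressions run
```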

  2. This is a first attempt at u32 multiplication, using decomposition into 8-bit limbs and Lasso lookups for the 8-bit multiplications. We are working on implementing a better multiplication technique, explained in this post.
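As an illustration of the limb-decomposition approach (a sketch of the general technique only, not Binius's actual constraint system), the following splits each u32 into four 8-bit limbs and assembles the product from lookups into a precomputed 8-bit × 8-bit multiplication table:

```python
# Precompute the 8-bit x 8-bit product table; in the SNARK, each
# access to this table corresponds to one Lasso lookup.
MUL8 = [[a * b for b in range(256)] for a in range(256)]

def limbs(x: int) -> list[int]:
    """Decompose a u32 into four 8-bit limbs, least significant first."""
    return [(x >> (8 * i)) & 0xFF for i in range(4)]

def u32_mul(x: int, y: int) -> int:
    """Schoolbook multiplication over 8-bit limbs; returns the full u64 product."""
    acc = 0
    for i, xi in enumerate(limbs(x)):
        for j, yj in enumerate(limbs(y)):
            # Each 8-bit partial product is a single table lookup,
            # shifted into position by the combined limb index.
            acc += MUL8[xi][yj] << (8 * (i + j))
    return acc

assert u32_mul(0xDEADBEEF, 0x12345678) == 0xDEADBEEF * 0x12345678
```

The 16 partial products per multiplication are why this first attempt is comparatively expensive; fewer, wider lookups would reduce the per-operation cost.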