halo2
Usage
This repository contains the halo2_proofs and halo2_gadgets crates, which should be used directly.
Minimum Supported Rust Version
Requires Rust 1.60 or higher.
Minimum supported Rust version can be changed in the future, but it will be done with a minor version bump.
Controlling parallelism
halo2 currently uses rayon for parallel computation. The RAYON_NUM_THREADS environment variable can be used to set the number of threads.

You can disable rayon by disabling the "multicore" feature. Warning! Halo2 will lose access to parallelism if you disable the "multicore" feature. This will significantly degrade performance.
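For example, a prover can be pinned to four rayon threads by setting the environment variable when running it, e.g. `RAYON_NUM_THREADS=4 cargo test --release`. To opt out of rayon entirely, disable the "multicore" feature; a minimal Cargo.toml sketch follows (the version number is only illustrative, and turning off default features also turns off any other default features you may want to re-enable):

```toml
[dependencies]
# Turning off default features disables "multicore" (and with it rayon).
halo2_proofs = { version = "0.3", default-features = false }
```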
License
Licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Concepts
First we'll describe the concepts behind zero-knowledge proof systems; the arithmetization (kind of circuit description) used by Halo 2; and the abstractions we use to build circuit implementations.
Proof systems
The aim of any proof system is to be able to prove interesting mathematical or cryptographic statements.
Typically, in a given protocol we will want to prove families of statements that differ in their public inputs. The prover will also need to show that they know some private inputs that make the statement hold.
To do this we write down a relation, $\mathcal{R}$, that specifies which combinations of public and private inputs are valid.
The terminology above is intended to be aligned with the ZKProof Community Reference.
To be precise, we should distinguish between the relation $\mathcal{R}$, and its implementation to be used in a proof system. We call the latter a circuit.
The language that we use to express circuits for a particular proof system is called an arithmetization. Usually, an arithmetization will define circuits in terms of polynomial constraints on variables over a field.
The process of expressing a particular relation as a circuit is also sometimes called "arithmetization", but we'll avoid that usage.
To create a proof of a statement, the prover will need to know the private inputs, and also intermediate values, called advice values, that are used by the circuit.
We assume that we can compute advice values efficiently from the private and public inputs. The particular advice values will depend on how we write the circuit, not only on the high-level statement.
The private inputs and advice values are collectively called a witness.
Some authors use "witness" as just a synonym for private inputs. But in our usage, a witness includes advice, i.e. it includes all values that the prover supplies to the circuit.
For example, suppose that we want to prove knowledge of a preimage $x$ of a hash function $H$ for a digest $y$:
- The private input would be the preimage $x$.
- The public input would be the digest $y$.
- The relation would be $\{(x, y) : H(x) = y\}$.
- For a particular public input $y$, the statement would be: $\exists x : H(x) = y$.
- The advice would be all of the intermediate values in the circuit implementing the hash function. The witness would be $x$ and the advice.
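Outside any proof system, this relation is just a predicate on a (public, private) pair. A minimal plain-Rust sketch, with a hypothetical `hash` function standing in for $H$, makes the split between the two kinds of input explicit:

```rust
/// Hypothetical stand-in for the hash function H.
fn hash(preimage: &[u8]) -> [u8; 32] {
    unimplemented!("use your favourite hash function here")
}

/// The relation R: a (public, private) pair is in R iff the private
/// preimage hashes to the public digest.
fn relation_holds(digest: [u8; 32], preimage: &[u8]) -> bool {
    hash(preimage) == digest
}
```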
A Non-interactive Argument allows a prover to create a proof for a given statement and witness. The proof is data that can be used to convince a verifier that there exists a witness for which the statement holds. The security property that such proofs cannot falsely convince a verifier is called soundness.
A Non-interactive Argument of Knowledge (NARK) further convinces the verifier that the prover knew a witness for which the statement holds. This security property is called knowledge soundness, and it implies soundness.
In practice knowledge soundness is more useful for cryptographic protocols than soundness: if we are interested in whether Alice holds a secret key in some protocol, say, we need Alice to prove that she knows the key, not just that it exists.
Knowledge soundness is formalized by saying that an extractor, which can observe precisely how the proof is generated, must be able to compute the witness.
This property is subtle given that proofs can be malleable. That is, depending on the proof system it may be possible to take an existing proof (or set of proofs) and, without knowing the witness(es), modify it/them to produce a distinct proof of the same or a related statement. Higher-level protocols that use malleable proof systems need to take this into account.
Even without malleability, proofs can also potentially be replayed. For instance, we would not want Alice in our example to be able to present a proof generated by someone else, and have that be taken as a demonstration that she knew the key.
If a proof yields no information about the witness (other than that a witness exists and was known to the prover), then we say that the proof system is zero knowledge.
If a proof system produces short proofs —i.e. of length polylogarithmic in the circuit size— then we say that it is succinct. A succinct NARK is called a SNARK (Succinct Non-Interactive Argument of Knowledge).
By this definition, a SNARK need not have verification time polylogarithmic in the circuit size. Some papers use the term efficient to describe a SNARK with that property, but we'll avoid that term since it's ambiguous for SNARKs that support amortized or recursive verification, which we'll get to later.
A zk-SNARK is a zero-knowledge SNARK.
PLONKish Arithmetization
The arithmetization used by Halo 2 comes from PLONK, or more precisely its extension UltraPLONK that supports custom gates and lookup arguments. We'll call it PLONKish.
PLONKish circuits are defined in terms of a rectangular matrix of values. We refer to rows, columns, and cells of this matrix with the conventional meanings.
A PLONKish circuit depends on a configuration:
- A finite field $\mathbb{F}$, where cell values (for a given statement and witness) will be elements of $\mathbb{F}$.
- The number of columns in the matrix, and a specification of each column as being fixed, advice, or instance. Fixed columns are fixed by the circuit; advice columns correspond to witness values; and instance columns are normally used for public inputs (technically, they can be used for any elements shared between the prover and verifier).
- A subset of the columns that can participate in equality constraints.
- A maximum constraint degree.
- A sequence of polynomial constraints. These are multivariate polynomials over $\mathbb{F}$ that must evaluate to zero for each row. The variables in a polynomial constraint may refer to a cell in a given column of the current row, or a given column of another row relative to this one (with wrap-around, i.e. taken modulo $n$). The maximum degree of each polynomial is given by the maximum constraint degree.
- A sequence of lookup arguments defined over tuples of input expressions (which are multivariate polynomials as above) and table columns.

A PLONKish circuit also defines:

- The number of rows $n$ in the matrix. $n$ must correspond to the size of a multiplicative subgroup of $\mathbb{F}^\times$; typically a power of two.
- A sequence of equality constraints, which specify that two given cells must have equal values.
- The values of the fixed columns at each row.
From a circuit description we can generate a proving key and a verification key, which are needed for the operations of proving and verification for that circuit.
Note that we specify the ordering of columns, polynomial constraints, lookup arguments, and equality constraints, even though these do not affect the meaning of the circuit. This makes it easier to define the generation of proving and verification keys as a deterministic process.
Typically, a configuration will define polynomial constraints that are switched off and on by selectors defined in fixed columns. For example, a constraint $q \cdot p = 0$ can be switched off for a particular row $i$ by setting $q_i = 0$. In this case we sometimes refer to a set of constraints controlled by a set of selector columns that are designed to be used together, as a gate. Typically there will be a standard gate that supports generic operations like field multiplication and division, and possibly also custom gates that support more specialized operations.
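As a rough sketch of how this looks with the halo2_proofs API (the column names and the constrained expression here are purely illustrative):

```rust
use ff::Field; // the Field trait used by halo2_proofs
use halo2_proofs::{
    plonk::{Advice, Column, ConstraintSystem, Selector},
    poly::Rotation,
};

/// Configure a gate q * (a - b) = 0 that is switched off on any row
/// where the selector (backed by a fixed column) is zero.
fn configure_example_gate<F: Field>(
    meta: &mut ConstraintSystem<F>,
    a: Column<Advice>,
    b: Column<Advice>,
) -> Selector {
    let q = meta.selector();
    meta.create_gate("a equals b when selected", |meta| {
        let q = meta.query_selector(q);
        let a = meta.query_advice(a, Rotation::cur());
        let b = meta.query_advice(b, Rotation::cur());
        // When q = 0 this row is unconstrained; when q != 0 it forces a = b.
        vec![q * (a - b)]
    });
    q
}
```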
Chips
The previous section gives a fairly low-level description of a circuit. When implementing circuits we will typically use a higher-level API which aims for the desirable characteristics of auditability, efficiency, modularity, and expressiveness.
Some of the terminology and concepts used in this API are taken from an analogy with integrated circuit design and layout. As for integrated circuits, the above desirable characteristics are easier to obtain by composing chips that provide efficient pre-built implementations of particular functionality.
For example, we might have chips that implement particular cryptographic primitives such as a hash function or cipher, or algorithms like scalar multiplication or pairings.
In PLONKish circuits, it is possible to build up arbitrary logic just from standard gates that do field multiplication and addition. However, very significant efficiency gains can be obtained by using custom gates.
Using our API, we define chips that "know" how to use particular sets of custom gates. This creates an abstraction layer that isolates the implementation of a high-level circuit from the complexity of using custom gates directly.
Even if we sometimes need to "wear two hats", by implementing both a high-level circuit and the chips that it uses, the intention is that this separation will result in code that is easier to understand, audit, and maintain/reuse. This is partly because some potential implementation errors are ruled out by construction.
Gates in PLONKish circuits refer to cells by relative references, i.e. to the cell in a given column, and the row at a given offset relative to the one in which the gate's selector is set. We call this an offset reference when the offset is nonzero (i.e. offset references are a subset of relative references).
Relative references contrast with absolute references used in equality constraints, which can point to any cell.
The motivation for offset references is to reduce the number of columns needed in the configuration, which reduces proof size. If we did not have offset references then we would need a column to hold each value referred to by a custom gate, and we would need to use equality constraints to copy values from other cells of the circuit into that column. With offset references, we not only need fewer columns; we also do not need equality constraints to be supported for all of those columns, which improves efficiency.
In R1CS (another arithmetization which may be more familiar to some readers, but don't worry if it isn't), a circuit consists of a "sea of gates" with no semantically significant ordering. Because of offset references, the order of rows in a PLONKish circuit, on the other hand, is significant. We're going to make some simplifying assumptions and define some abstractions to tame the resulting complexity: the aim will be that, at the gadget level where we do most of our circuit construction, we will not have to deal with relative references or with gate layout explicitly.
We will partition a circuit into regions, where each region contains a disjoint subset of cells, and relative references only ever point within a region. Part of the responsibility of a chip implementation is to ensure that gates that make offset references are laid out in the correct positions in a region.
Given the set of regions and their shapes, we will use a separate floor planner to decide where (i.e. at what starting row) each region is placed. There is a default floor planner that implements a very general algorithm, but you can write your own floor planner if you need to.
Floor planning will in general leave gaps in the matrix, because the gates in a given row did not use all available columns. These are filled in —as far as possible— by gates that do not require offset references, which allows them to be placed on any row.
Chips can also define lookup tables. If more than one table is defined for the same lookup argument, we can use a tag column to specify which table is used on each row. It is also possible to perform a lookup in the union of several tables (limited by the polynomial degree bound).
Composing chips
In order to combine functionality from several chips, we compose them in a tree. The top-level chip defines a set of fixed, advice, and instance columns, and then specifies how they should be distributed between lower-level chips.
In the simplest case, each lower-level chip will use columns disjoint from those used by the other chips. However, it is allowed to share a column between chips. It is important to optimize the number of advice columns in particular, because that affects proof size.
The result (possibly after optimization) is a PLONKish configuration. Our circuit implementation will be parameterized on a chip, and can use any features of the supported lower-level chips via the top-level chip.
Our hope is that less expert users will normally be able to find an existing chip that supports the operations they need, or only have to make minor modifications to an existing chip. Expert users will have full control to do the kind of circuit optimizations that ECC is famous for 🙂.
Gadgets
When implementing a circuit, we could use the features of the chips we've selected directly. Typically, though, we will use them via gadgets. This indirection is useful because, for reasons of efficiency and limitations imposed by PLONKish circuits, the chip interfaces will often be dependent on low-level implementation details. The gadget interface can provide a more convenient and stable API that abstracts away from extraneous detail.
For example, consider a hash function such as SHA-256. The interface of a chip supporting SHA-256 might be dependent on internals of the hash function design, such as the separation between message schedule and compression function. The corresponding gadget interface can provide a more convenient and familiar update/finalize API, and can also handle parts of the hash function that do not need chip support, such as padding. This is similar to how accelerated instructions for cryptographic primitives on CPUs are typically accessed via software libraries, rather than directly.
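A hypothetical gadget interface along these lines might look as follows (this is an illustrative sketch, not the actual halo2_gadgets SHA-256 API):

```rust
use ff::Field;
use halo2_proofs::{circuit::Layouter, plonk::Error};

/// Hypothetical hash gadget interface. The underlying chip (message
/// schedule, compression function, ...) stays hidden behind it.
trait HashGadget<F: Field> {
    /// A digest, as cells assigned in the circuit.
    type Digest;

    /// Absorb message words. The gadget handles buffering and padding,
    /// which need no chip support.
    fn update(&mut self, layouter: impl Layouter<F>, words: &[F]) -> Result<(), Error>;

    /// Pad the remaining input, run the final compression, and return the digest.
    fn finalize(self, layouter: impl Layouter<F>) -> Result<Self::Digest, Error>;
}
```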
Gadgets can also provide modular and reusable abstractions for circuit programming at a higher level, similar to their use in libraries such as libsnark and bellman. As well as abstracting functions, they can also abstract types, such as elliptic curve points or integers of specific sizes.
User Documentation
You're probably here because you want to write circuits? Excellent!
This section will guide you through the process of creating circuits with halo2.
Developer tools
The halo2
crate includes several utilities to help you design and implement your
circuits.
Mock prover
halo2_proofs::dev::MockProver
is a tool for debugging circuits, as well as cheaply verifying
their correctness in unit tests. The private and public inputs to the circuit are
constructed as would normally be done to create a proof, but MockProver::run
instead
creates an object that will test every constraint in the circuit directly. It returns
granular error messages that indicate which specific constraint (if any) is not satisfied.
Circuit visualizations
The dev-graph
feature flag exposes several helper methods for creating graphical
representations of circuits.
On Debian systems, you will need the following additional packages:
sudo apt install cmake libexpat1-dev libfreetype6-dev libcairo2-dev
Circuit layout
halo2_proofs::dev::CircuitLayout
renders the circuit layout as a grid:
fn main() {
// Prepare the circuit you want to render.
// You don't need to include any witness variables.
let a = Fp::random(OsRng);
let instance = Fp::ONE + Fp::ONE;
let lookup_table = vec![instance, a, a, Fp::zero()];
let circuit: MyCircuit<Fp> = MyCircuit {
a: Value::unknown(),
lookup_table,
};
// Create the area you want to draw on.
// Use SVGBackend if you want to render to .svg instead.
use plotters::prelude::*;
let root = BitMapBackend::new("layout.png", (1024, 768)).into_drawing_area();
root.fill(&WHITE).unwrap();
let root = root
.titled("Example Circuit Layout", ("sans-serif", 60))
.unwrap();
halo2_proofs::dev::CircuitLayout::default()
// You can optionally render only a section of the circuit.
.view_width(0..2)
.view_height(0..16)
// You can hide labels, which can be useful with smaller areas.
.show_labels(false)
// Render the circuit onto your area!
// The first argument is the size parameter for the circuit.
.render(5, &circuit, &root)
.unwrap();
}
- Columns are laid out from left to right as instance, advice and fixed. The order of
columns is otherwise without meaning.
- Instance columns have a white background.
- Advice columns have a red background.
- Fixed columns have a blue background.
- Regions are shown as labelled green boxes (overlaying the background colour). A region may appear as multiple boxes if some of its columns happen to not be adjacent.
- Cells that have been assigned to by the circuit will be shaded in grey. If any cells are assigned to more than once (which is usually a mistake), they will be shaded darker than the surrounding cells.
Circuit structure
halo2_proofs::dev::circuit_dot_graph
builds a DOT graph string representing the given
circuit, which can then be rendered with a variety of layout programs. The graph is built
from calls to Layouter::namespace
both within the circuit, and inside the gadgets and
chips that it uses.
fn main() {
// Prepare the circuit you want to render.
// You don't need to include any witness variables.
let a = Fp::random(OsRng);
let instance = Fp::one() + Fp::one();
let lookup_table = vec![instance, a, a, Fp::zero()];
let circuit: MyCircuit<Fp> = MyCircuit {
a: Value::unknown(),
lookup_table,
};
// Generate the DOT graph string.
let dot_string = halo2_proofs::dev::circuit_dot_graph(&circuit);
// Now you can either handle it in Rust, or just
// print it out to use with command-line tools.
print!("{}", dot_string);
}
Cost estimator
The cost-model
binary takes high-level parameters for a circuit design, and estimates
the verification cost, as well as resulting proof size.
Usage: cargo run --example cost-model -- [OPTIONS] k
Positional arguments:
k 2^K bound on the number of rows.
Optional arguments:
-h, --help Print this message.
-a, --advice R[,R..] An advice column with the given rotations. May be repeated.
-i, --instance R[,R..] An instance column with the given rotations. May be repeated.
-f, --fixed R[,R..] A fixed column with the given rotations. May be repeated.
-g, --gate-degree D Maximum degree of the custom gates.
-l, --lookup N,I,T A lookup over N columns with max input degree I and max table degree T. May be repeated.
-p, --permutation N A permutation over N columns. May be repeated.
For example, to estimate the cost of a circuit with three advice columns and one fixed column (with various rotations), and a maximum gate degree of 4:
> cargo run --example cost-model -- -a 0,1 -a 0 -a 0,-1,1 -f 0 -g 4 11
Finished dev [unoptimized + debuginfo] target(s) in 0.03s
Running `target/debug/examples/cost-model -a 0,1 -a 0 -a 0,-1,1 -f 0 -g 4 11`
Circuit {
k: 11,
max_deg: 4,
advice_columns: 3,
lookups: 0,
permutations: [],
column_queries: 7,
point_sets: 3,
estimator: Estimator,
}
Proof size: 1440 bytes
Verification: at least 81.689ms
A simple example
Let's start with a simple circuit, to introduce you to the common APIs and how they are used. The circuit will take a public input $c$, and will prove knowledge of two private inputs $a$ and $b$ such that $c = \mathsf{constant} \cdot a^2 \cdot b^2$, for a fixed constant baked into the circuit.
Define instructions
Firstly, we need to define the instructions that our circuit will rely on. Instructions are the boundary between high-level gadgets and the low-level circuit operations. Instructions may be as coarse or as granular as desired, but in practice you want to strike a balance between an instruction being large enough to effectively optimize its implementation, and small enough that it is meaningfully reusable.
For our circuit, we will use three instructions:
- Load a private number into the circuit.
- Multiply two numbers.
- Expose a number as a public input to the circuit.
We also need a type for a variable representing a number. Instruction interfaces provide associated types for their inputs and outputs, to allow the implementations to represent these in a way that makes the most sense for their optimization goals.
trait NumericInstructions<F: Field>: Chip<F> {
/// Variable representing a number.
type Num;
/// Loads a number into the circuit as a private input.
fn load_private(&self, layouter: impl Layouter<F>, a: Value<F>) -> Result<Self::Num, Error>;
/// Loads a number into the circuit as a fixed constant.
fn load_constant(&self, layouter: impl Layouter<F>, constant: F) -> Result<Self::Num, Error>;
/// Returns `c = a * b`.
fn mul(
&self,
layouter: impl Layouter<F>,
a: Self::Num,
b: Self::Num,
) -> Result<Self::Num, Error>;
/// Exposes a number as a public input to the circuit.
fn expose_public(
&self,
layouter: impl Layouter<F>,
num: Self::Num,
row: usize,
) -> Result<(), Error>;
}
Define a chip implementation
For our circuit, we will build a chip that provides the above numeric instructions for a finite field.
/// The chip that will implement our instructions! Chips store their own
/// config, as well as type markers if necessary.
struct FieldChip<F: Field> {
config: FieldConfig,
_marker: PhantomData<F>,
}
Every chip needs to implement the Chip
trait. This defines the properties of the chip
that a Layouter
may rely on when synthesizing a circuit, as well as enabling any initial
state that the chip requires to be loaded into the circuit.
impl<F: Field> Chip<F> for FieldChip<F> {
type Config = FieldConfig;
type Loaded = ();
fn config(&self) -> &Self::Config {
&self.config
}
fn loaded(&self) -> &Self::Loaded {
&()
}
}
Configure the chip
The chip needs to be configured with the columns, permutations, and gates that will be required to implement all of the desired instructions.
/// Chip state is stored in a config struct. This is generated by the chip
/// during configuration, and then stored inside the chip.
#[derive(Clone, Debug)]
struct FieldConfig {
/// For this chip, we will use two advice columns to implement our instructions.
/// These are also the columns through which we communicate with other parts of
/// the circuit.
advice: [Column<Advice>; 2],
/// This is the public input (instance) column.
instance: Column<Instance>,
// We need a selector to enable the multiplication gate, so that we aren't placing
// any constraints on cells where `NumericInstructions::mul` is not being used.
// This is important when building larger circuits, where columns are used by
// multiple sets of instructions.
s_mul: Selector,
}
impl<F: Field> FieldChip<F> {
fn construct(config: <Self as Chip<F>>::Config) -> Self {
Self {
config,
_marker: PhantomData,
}
}
fn configure(
meta: &mut ConstraintSystem<F>,
advice: [Column<Advice>; 2],
instance: Column<Instance>,
constant: Column<Fixed>,
) -> <Self as Chip<F>>::Config {
meta.enable_equality(instance);
meta.enable_constant(constant);
for column in &advice {
meta.enable_equality(*column);
}
let s_mul = meta.selector();
// Define our multiplication gate!
meta.create_gate("mul", |meta| {
// To implement multiplication, we need three advice cells and a selector
// cell. We arrange them like so:
//
// | a0 | a1 | s_mul |
// |-----|-----|-------|
// | lhs | rhs | s_mul |
// | out | | |
//
// Gates may refer to any relative offsets we want, but each distinct
// offset adds a cost to the proof. The most common offsets are 0 (the
// current row), 1 (the next row), and -1 (the previous row), for which
// `Rotation` has specific constructors.
let lhs = meta.query_advice(advice[0], Rotation::cur());
let rhs = meta.query_advice(advice[1], Rotation::cur());
let out = meta.query_advice(advice[0], Rotation::next());
let s_mul = meta.query_selector(s_mul);
// Finally, we return the polynomial expressions that constrain this gate.
// For our multiplication gate, we only need a single polynomial constraint.
//
// The polynomial expressions returned from `create_gate` will be
// constrained by the proving system to equal zero. Our expression
// has the following properties:
// - When s_mul = 0, any value is allowed in lhs, rhs, and out.
// - When s_mul != 0, this constrains lhs * rhs = out.
vec![s_mul * (lhs * rhs - out)]
});
FieldConfig {
advice,
instance,
s_mul,
}
}
}
Implement chip traits
/// A variable representing a number.
#[derive(Clone)]
struct Number<F: Field>(AssignedCell<F, F>);
impl<F: Field> NumericInstructions<F> for FieldChip<F> {
type Num = Number<F>;
fn load_private(
&self,
mut layouter: impl Layouter<F>,
value: Value<F>,
) -> Result<Self::Num, Error> {
let config = self.config();
layouter.assign_region(
|| "load private",
|mut region| {
region
.assign_advice(|| "private input", config.advice[0], 0, || value)
.map(Number)
},
)
}
fn load_constant(
&self,
mut layouter: impl Layouter<F>,
constant: F,
) -> Result<Self::Num, Error> {
let config = self.config();
layouter.assign_region(
|| "load constant",
|mut region| {
region
.assign_advice_from_constant(|| "constant value", config.advice[0], 0, constant)
.map(Number)
},
)
}
fn mul(
&self,
mut layouter: impl Layouter<F>,
a: Self::Num,
b: Self::Num,
) -> Result<Self::Num, Error> {
let config = self.config();
layouter.assign_region(
|| "mul",
|mut region: Region<'_, F>| {
// We only want to use a single multiplication gate in this region,
// so we enable it at region offset 0; this means it will constrain
// cells at offsets 0 and 1.
config.s_mul.enable(&mut region, 0)?;
// The inputs we've been given could be located anywhere in the circuit,
// but we can only rely on relative offsets inside this region. So we
// assign new cells inside the region and constrain them to have the
// same values as the inputs.
a.0.copy_advice(|| "lhs", &mut region, config.advice[0], 0)?;
b.0.copy_advice(|| "rhs", &mut region, config.advice[1], 0)?;
// Now we can assign the multiplication result, which is to be assigned
// into the output position.
let value = a.0.value().copied() * b.0.value();
// Finally, we do the assignment to the output, returning a
// variable to be used in another part of the circuit.
region
.assign_advice(|| "lhs * rhs", config.advice[0], 1, || value)
.map(Number)
},
)
}
fn expose_public(
&self,
mut layouter: impl Layouter<F>,
num: Self::Num,
row: usize,
) -> Result<(), Error> {
let config = self.config();
layouter.constrain_instance(num.0.cell(), config.instance, row)
}
}
Build the circuit
Now that we have the instructions we need, and a chip that implements them, we can finally build our circuit!
/// The full circuit implementation.
///
/// In this struct we store the private input variables. We use `Value<F>` because
/// they won't have any value during key generation. During proving, if any of these
/// were `Value::unknown()` we would get an error.
#[derive(Default)]
struct MyCircuit<F: Field> {
constant: F,
a: Value<F>,
b: Value<F>,
}
impl<F: Field> Circuit<F> for MyCircuit<F> {
// Since we are using a single chip for everything, we can just reuse its config.
type Config = FieldConfig;
type FloorPlanner = SimpleFloorPlanner;
fn without_witnesses(&self) -> Self {
Self::default()
}
fn configure(meta: &mut ConstraintSystem<F>) -> Self::Config {
// We create the two advice columns that FieldChip uses for I/O.
let advice = [meta.advice_column(), meta.advice_column()];
// We also need an instance column to store public inputs.
let instance = meta.instance_column();
// Create a fixed column to load constants.
let constant = meta.fixed_column();
FieldChip::configure(meta, advice, instance, constant)
}
fn synthesize(
&self,
config: Self::Config,
mut layouter: impl Layouter<F>,
) -> Result<(), Error> {
let field_chip = FieldChip::<F>::construct(config);
// Load our private values into the circuit.
let a = field_chip.load_private(layouter.namespace(|| "load a"), self.a)?;
let b = field_chip.load_private(layouter.namespace(|| "load b"), self.b)?;
// Load the constant factor into the circuit.
let constant =
field_chip.load_constant(layouter.namespace(|| "load constant"), self.constant)?;
// We only have access to plain multiplication.
// We could implement our circuit as:
// asq = a*a
// bsq = b*b
// absq = asq*bsq
// c = constant*asq*bsq
//
// but it's more efficient to implement it as:
// ab = a*b
// absq = ab^2
// c = constant*absq
let ab = field_chip.mul(layouter.namespace(|| "a * b"), a, b)?;
let absq = field_chip.mul(layouter.namespace(|| "ab * ab"), ab.clone(), ab)?;
let c = field_chip.mul(layouter.namespace(|| "constant * absq"), constant, absq)?;
// Expose the result as a public input to the circuit.
field_chip.expose_public(layouter.namespace(|| "expose c"), c, 0)
}
}
Testing the circuit
halo2_proofs::dev::MockProver
can be used to test that the circuit is working correctly. The
private and public inputs to the circuit are constructed as they would be to create a proof,
but by passing them to MockProver::run
we get an object that can test every constraint
in the circuit, and tell us exactly what is failing (if anything).
// The number of rows in our circuit cannot exceed 2^k. Since our example
// circuit is very small, we can pick a very small value here.
let k = 4;
// Prepare the private and public inputs to the circuit!
let constant = Fp::from(7);
let a = Fp::from(2);
let b = Fp::from(3);
let c = constant * a.square() * b.square();
// Instantiate the circuit with the private inputs.
let circuit = MyCircuit {
constant,
a: Value::known(a),
b: Value::known(b),
};
// Arrange the public input. We expose the multiplication result in row 0
// of the instance column, so we position it there in our public inputs.
let mut public_inputs = vec![c];
// Given the correct public input, our circuit will verify.
let prover = MockProver::run(k, &circuit, vec![public_inputs.clone()]).unwrap();
assert_eq!(prover.verify(), Ok(()));
// If we try some other public input, the proof will fail!
public_inputs[0] += Fp::one();
let prover = MockProver::run(k, &circuit, vec![public_inputs]).unwrap();
assert!(prover.verify().is_err());
Full example
You can find the source code for this example here.
Lookup tables
In normal programs, you can trade memory for CPU to improve performance, by pre-computing and storing lookup tables for some part of the computation. We can do the same thing in halo2 circuits!
A lookup table can be thought of as enforcing a relation between variables, where the relation is expressed as a table. Assuming we have only one lookup argument in our constraint system, the total size of tables is constrained by the size of the circuit: each table entry costs one row, and it also costs one row to do each lookup.
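As a rough sketch of how a single-column lookup is configured with the halo2_proofs API (the column names are illustrative, and the exact method signatures vary slightly between halo2_proofs versions; for instance, recent versions take a name as the first argument to `lookup`):

```rust
use ff::Field;
use halo2_proofs::{
    plonk::{Advice, Column, ConstraintSystem, TableColumn},
    poly::Rotation,
};

/// Configure a lookup constraining every row of `value` to appear in `table`.
/// The table entries themselves are loaded later, during synthesis,
/// via `Layouter::assign_table`.
fn configure_table_lookup<F: Field>(
    meta: &mut ConstraintSystem<F>,
    value: Column<Advice>,
) -> TableColumn {
    let table = meta.lookup_table_column();
    meta.lookup(|meta| {
        let value = meta.query_advice(value, Rotation::cur());
        // Each table entry costs one row, and so does each lookup.
        vec![(value, table)]
    });
    table
}
```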
TODO
Gadgets
Tips and tricks
This section contains various ideas and snippets that you might find useful while writing halo2 circuits.
Small range constraints
A common constraint used in R1CS circuits is the boolean constraint: $b \cdot (1 - b) = 0$. This constraint can only be satisfied by $b = 0$ or $b = 1$.

In halo2 circuits, you can similarly constrain a cell to have one of a small set of values. For example, to constrain $a$ to the range $[0..5)$, you would create a gate of the form:

$$a \cdot (1 - a) \cdot (2 - a) \cdot (3 - a) \cdot (4 - a) = 0$$

while to constrain $a$ to be either 7 or 13, you would use:

$$(a - 7) \cdot (a - 13) = 0$$

The underlying principle here is that we create a polynomial constraint with roots at each value in the set of possible values we want to allow. In R1CS circuits, the maximum supported polynomial degree is 2 (due to all constraints being of the form $a \cdot b = c$). In halo2 circuits, you can use arbitrary-degree polynomials - with the proviso that higher-degree constraints are more expensive to use.

Note that the roots don't have to be constants; for example $(a - x) \cdot (a - y) \cdot (a - z) = 0$ will constrain $a$ to be equal to one of $\{x, y, z\}$ where the latter can be arbitrary polynomials, as long as the whole expression stays within the maximum degree bound.
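A sketch of the $[0..5)$ example as a halo2_proofs gate (the column and selector names are illustrative; note that the product below has degree 5, and the selector adds one more, so your configuration's degree bound must allow it):

```rust
use ff::PrimeField;
use halo2_proofs::{
    plonk::{Advice, Column, ConstraintSystem, Expression, Selector},
    poly::Rotation,
};

/// Constrain `a` to {0, 1, 2, 3, 4} on rows where `q_range` is enabled.
fn configure_range_check<F: PrimeField>(
    meta: &mut ConstraintSystem<F>,
    a: Column<Advice>,
) -> Selector {
    let q_range = meta.selector();
    meta.create_gate("a in [0..5)", |meta| {
        let q = meta.query_selector(q_range);
        let a = meta.query_advice(a, Rotation::cur());
        // Build a * (1 - a) * (2 - a) * (3 - a) * (4 - a): a polynomial
        // whose roots are exactly the allowed values.
        let range_check = (1u64..5).fold(a.clone(), |acc, i| {
            acc * (Expression::Constant(F::from(i)) - a.clone())
        });
        vec![q * range_check]
    });
    q_range
}
```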
Small set interpolation
We can use Lagrange interpolation to create a polynomial constraint that maps $f(X) \mapsto g(X)$ for small sets of $X$.

For instance, say we want to map a 2-bit value to a "spread" version interleaved with zeros. We first precompute the evaluations at each point:

$$00 \rightarrow 0000,\quad 01 \rightarrow 0001,\quad 10 \rightarrow 0100,\quad 11 \rightarrow 0101,$$

i.e. $0 \mapsto 0$, $1 \mapsto 1$, $2 \mapsto 4$, $3 \mapsto 5$.

Then, we construct the Lagrange basis polynomial for each point using the identity:

$$\ell_j(X) = \prod_{\substack{0 \leq m < k \\ m \neq j}} \frac{X - x_m}{x_j - x_m},$$

where $k$ is the number of data points. ($k = 4$ in our example above.)

Recall that the Lagrange basis polynomial $\ell_j(X)$ evaluates to $1$ at $X = x_j$ and $0$ at all other $x_i$, $i \neq j$.

Continuing our example, we get four Lagrange basis polynomials:

$$\ell_0(X) = \frac{(X - 1)(X - 2)(X - 3)}{(0 - 1)(0 - 2)(0 - 3)}, \qquad \ell_1(X) = \frac{X(X - 2)(X - 3)}{(1 - 0)(1 - 2)(1 - 3)},$$
$$\ell_2(X) = \frac{X(X - 1)(X - 3)}{(2 - 0)(2 - 1)(2 - 3)}, \qquad \ell_3(X) = \frac{X(X - 1)(X - 2)}{(3 - 0)(3 - 1)(3 - 2)}.$$

Our polynomial constraint is then

$$g(X) = 0 \cdot \ell_0(f(X)) + 1 \cdot \ell_1(f(X)) + 4 \cdot \ell_2(f(X)) + 5 \cdot \ell_3(f(X)).$$
Using halo2 in WASM
Since halo2 is written in Rust, you can compile it to WebAssembly (wasm), which will allow you to use the prover and verifier for your circuits in browser applications. This tutorial takes you through all you need to know to compile your circuits to wasm.
Throughout this tutorial, we will follow the repository for Zordle for reference, one of the first known webapps based on Halo 2 circuits. Zordle is ZK Wordle, where the circuit takes as advice values the player's input words and the player's share grid (the grey, yellow and green squares) and verifies that they match correctly. Therefore, the proof verifies that the player knows a "preimage" to the output share sheet, which can then be verified using just the ZK proof.
Circuit code setup
The first step is to create functions in Rust that will interface with the browser application. In the case of a prover, this will typically input some version of the advice and instance data, use it to generate a complete witness, and then output a proof. In the case of a verifier, this will typically input a proof and some version of the instance, and then output a boolean indicating whether the proof verified correctly or not.
In the case of Zordle, this code is contained in wasm.rs, and consists of two primary functions:
Prover
#[wasm_bindgen]
pub async fn prove_play(final_word: String, words_js: JsValue, params_ser: JsValue) -> JsValue {
// Steps:
// - Deserialise function parameters
// - Generate the instance and advice columns using the words
// - Instantiate the circuit and generate the witness
// - Generate the proving key from the params
// - Create a proof
}
While the specific inputs and their serialisations will depend on your circuit and webapp set up, it's useful to note the format in the specific case of Zordle since your use case will likely be similar:
This function takes as input the final_word
that the user aimed for, and the words they attempted to use (in the form of words_js
). It also takes as input the parameters for the circuit, which are serialized in params_ser
. We will expand on this in the Params section below.
Note that the function parameters are passed in wasm_bindgen-compatible formats: String and JsValue. The JsValue type comes from the wasm-bindgen library; values are converted to and from it using Serde. You can find many more details about this type and how to use it in the documentation here.
The output is a Vec<u8> converted to a JsValue using Serde. This is later passed in as input to the verifier function.
Verifier
#[wasm_bindgen]
pub fn verify_play(final_word: String, proof_js: JsValue, diffs_u64_js: JsValue, params_ser: JsValue) -> bool {
// Steps:
// - Deserialise function parameters
// - Generate the instance columns using the diffs representation of the columns
// - Generate the verifying key using the params
// - Verify the proof
}
Similar to the prover, this function takes in its inputs and outputs a boolean indicating whether the proof is valid. The diffs_u64_js object is a 2D JS array consisting of values for each cell that indicate the color: grey, yellow or green. These are used to assemble the instance columns for the circuit.
Params
Additionally, both the prover and verifier functions input params_ser
, a serialised form of the public parameters of the polynomial commitment scheme. These are passed in as input (instead of being regenerated in prove/verify functions) as a performance optimisation since these are constant based only on the circuit's value of K
. We can store these separately on a static web server and pass them in as input to the WASM. To generate the binary serialised form of these (separately outside the WASM functions), you can run something like:
fn write_params(K: u32) {
let mut params_file = File::create("params.bin").unwrap();
let params: Params<EqAffine> = Params::new(K);
params.write(&mut params_file).unwrap();
}
Later, we can read the params.bin
file from the web-server in Javascript in a byte-serialised format as a Uint8Array
and pass it to the WASM as params_ser
, which can be deserialised in Rust using the js_sys
library.
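On the Rust side, the counterpart of write_params is deserialising those bytes back into a Params object. A minimal sketch, assuming the bytes have already arrived from JS as a &[u8] (the types match the write_params snippet above):

```rust
use halo2_proofs::pasta::EqAffine;
use halo2_proofs::poly::commitment::Params;

/// Deserialise the byte-serialised params received from the web server via JS.
fn read_params(params_ser: &[u8]) -> Params<EqAffine> {
    Params::read(&mut std::io::Cursor::new(params_ser))
        .expect("params.bin should deserialise into Params<EqAffine>")
}
```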
Ideally, in future, instead of serialising the parameters we would be able to serialise and work directly with the proving key and the verifying key of the circuit, but that is currently not supported by the library, and tracked as issues #449 and #443.
Rust and WASM environment setup
Typically, Rust code is compiled to WASM using the wasm-pack
tool and is as simple as changing some build commands. In the case of halo2 prover/verifier functions however, we need to make some additional changes to the build process. In particular, there are two main changes:
- Parallelism: halo2 uses the rayon library for parallelism, which is not directly supported by WASM. However, the Chrome team has an adapter to enable rayon-like parallelism using Web Workers in the browser: wasm-bindgen-rayon. We'll use this to enable parallelism in our WASM prover/verifier.
- WASM max memory: The default memory limit for WASM with wasm-bindgen is set to 2GB, which is not enough to run the halo2 prover for large circuits (with K > 10 or so). We need to increase this limit to the maximum allowed by WASM (4GB!) to support larger circuits (up to K = 15 or so).
Firstly, add all the dependencies particular to your WASM interfacing functions to your Cargo.toml
file. You can restrict the dependencies to the WASM compilation by using the WASM target feature flag. In the case of Zordle, this looks like:
[target.'cfg(target_family = "wasm")'.dependencies]
getrandom = { version = "0.2", features = ["js"]}
wasm-bindgen = { version = "0.2.81", features = ["serde-serialize"]}
console_error_panic_hook = "0.1.7"
rayon = "1.5"
wasm-bindgen-rayon = { version = "1.0"}
web-sys = { version = "0.3", features = ["Request", "Window", "Response"] }
wasm-bindgen-futures = "0.4"
js-sys = "0.3"
Next, let's integrate wasm-bindgen-rayon
into our code. The README for the library has a great overview of how to do so. In particular, note the changes to the Rust compilation pipeline. You need to switch to a nightly version of Rust and enable support for WASM atomics. Additionally, remember to export the init_thread_pool
in Rust code.
Next, we will bump up the default 2GB max memory limit for wasm-pack
. To do so, add "-C", "link-arg=--max-memory=4294967296"
Rust flag to the wasm target in the .cargo/config
file. With the setup for wasm-bindgen-rayon
and the memory bump, the .cargo/config
file should now look like:
[target.wasm32-unknown-unknown]
rustflags = ["-C", "target-feature=+atomics,+bulk-memory,+mutable-globals", "-C", "link-arg=--max-memory=4294967296"]
...
Shoutout to @mattgibb who documented this esoteric change for increasing maximum memory in a random GitHub issue here.
Off-topic but it was quite surprising for me to learn that WASM has a hard maximum limitation of 4GB memory. This is because WASM currently has a 32-bit architecture, which was quite surprising to me for such a new, forward-facing assembly language. There are, however, some open proposals to move WASM to a larger address space.
Now that we have the Rust set up, you should be able to build a WASM package simply using wasm-pack build --target web --out-dir pkg
and use the output WASM package in your webapp.
Webapp setup
Zordle ships with a minimal React test client as an example (that simply adds WASM support to the default create-react-app
template). You can find the code for the test client here. I would recommend forking the test client for your own application and working from there.
The test client includes a clean WebWorker that interfaces with the Rust WASM package. Putting the interface in a WebWorker prevents blocking the main thread of the browser and allows for a clean interface from React/application logic. Check out halo-worker.ts
for the WebWorker code and see how you can interface with the web worker from React in App.tsx
.
If you've done everything right so far, you should now be able to generate proofs and verify them in browser! In the case of Zordle, proof generation for a circuit with K = 14
takes about a minute or so on my laptop. During proof generation, if you pop open the Chrome/Firefox task manager, you should additionally see something like this:
Zordle and its test-client set the parallelism to the number of cores available on the machine by default. If you would like to reduce this, you can do so by changing the argument to initThreadPool
.
If you'd prefer to use your own Worker/React setup, the code to fetch and serialise parameters, proofs and other instance and advice values may still be useful to look at!
Safari
Note that the wasm-bindgen-rayon
library is not supported by Safari because it spawns Web Workers from inside another Web Worker. According to the relevant Webkit issue, support for this feature had made it into Safari Technology Preview by November 2022, and indeed the Release Notes for Safari Technology Preview Release 155 claim support, so it is worth checking whether this has made it into Safari if that is important to you.
Debugging
Often, you'll run into issues with your Rust code and see that the WASM execution errors with Uncaught (in promise) RuntimeError: unreachable
, a wholly unhelpful error for debugging. This is because the code is compiled in release mode which strips out error messages as a performance optimisation. To debug, you can build the WASM package in debug mode using the flag --dev
with wasm-pack build
. This will build in debug mode, slowing down execution significantly but allowing you to see any runtime error messages in the browser console. Additionally, you can install the console_error_panic_hook
crate (as is done by Zordle) to also get helpful debug messages for runtime panics.
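For example, the panic hook can be installed once when the WASM module starts; a minimal sketch using the standard wasm-bindgen start hook and the console_error_panic_hook dependency already listed in the Cargo.toml above:

```rust
use wasm_bindgen::prelude::*;

/// Runs when the WASM module is instantiated. After this, any Rust panic is
/// reported in the browser console with a readable message instead of the
/// opaque "unreachable" error.
#[wasm_bindgen(start)]
pub fn init_panic_hook() {
    console_error_panic_hook::set_once();
}
```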
Credits
This guide was written by Nalin. Thanks additionally to Uma and Blaine for significant work on figuring out these steps. Feel free to reach out to me if you have trouble with any of these steps.
Developer Documentation
You want to contribute to the Halo 2 crates? Awesome!
This section covers information about our development processes and review standards, and useful tips for maintaining and extending the codebase.
Feature development
Sometimes feature development can require iterating on a design over time. It can be
useful to start using features in downstream crates early on to gain experience with the
APIs and functionality, that can feed back into the feature's design prior to it being
stabilised. To enable this, we follow a three-stage nightly -> beta -> stable
development pattern inspired by (but not identical to) the Rust compiler.
Feature flags
Each unstabilised feature has a default-off feature flag that enables it, of the form
unstable-*
. The stable API of the crates must not be affected when the feature flag is
disabled, except for specific complex features that will be considered on a case-by-case
basis.
Two meta-flags are provided to enable all features at a particular stabilisation level:
- beta enables all features at the "beta" stage (and implicitly all features at the "stable" stage).
- nightly enables all features at the "beta" and "nightly" stages (and implicitly all features at the "stable" stage), i.e. all features are enabled.
- When neither flag is enabled (and no feature-specific flags are enabled), then in effect only features at the "stable" stage are enabled.
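A downstream crate opts in via Cargo features; for example (the version number here is illustrative, and the exact feature names available depend on the release you use, so check the crate's Cargo.toml):

```toml
[dependencies]
# Opt into one specific unstabilised feature...
halo2_gadgets = { version = "0.3", features = ["unstable-sha256-gadget"] }

# ...or into every feature at a given stabilisation level:
# halo2_gadgets = { version = "0.3", features = ["beta"] }
# halo2_gadgets = { version = "0.3", features = ["nightly"] }
```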
Feature workflow
- If the maintainers have rough consensus that an experimental feature is generally desired, its initial implementation can be merged into the codebase optimistically behind a feature-specific feature flag with a lower standard of review. The feature's flag is added to the nightly feature flag set.
  - The feature will become usable by downstream published crates in the next general release of the halo2 crates.
  - Subsequent development and refinement of the feature can be performed in-situ via additional PRs, along with additional review.
  - If the feature ends up having bad interactions with other features (in particular, already-stabilised features), then it can be removed later without affecting the stable or beta APIs.
- Once the feature has had sufficient review, and is at the point where a halo2 user considers it production-ready (and is willing or planning to deploy it to production), the feature's feature flag is moved to the beta feature flag set.
- Once the feature has had review equivalent to the stable review policy, and there is rough consensus that the feature is useful to the wider halo2 userbase, the feature's feature flag is removed and the feature becomes part of the main maintained codebase.

For more complex features, the above workflow might be augmented with beta and nightly branches; this will be figured out once a feature requiring this is proposed as a candidate for inclusion.
In-progress features
Feature flag | Stage | Notes |
---|---|---|
unstable-sha256-gadget | nightly | The SHA-256 gadget and chip. |
Design
Note on Language
We use slightly different language than others to describe PLONK concepts. Here's the overview:
- We like to think of PLONK-like arguments as tables, where each column corresponds to a "wire". We refer to entries in this table as "cells".
- We like to call "selector polynomials" and so on "fixed columns" instead. We then refer specifically to a "selector constraint" when a cell in a fixed column is being used to control whether a particular constraint is enabled in that row.
- We call the other polynomials "advice columns" usually, when they're populated by the prover.
- We use the term "rule" to refer to a "gate" like $a \cdot b = c$.
- TODO: Check how consistent we are with this, and update the code and docs to match.
Proving system
The Halo 2 proving system can be broken down into five stages:
- Commit to polynomials encoding the main components of the circuit:
- Cell assignments.
- Permuted values and products for each lookup argument.
- Equality constraint permutations.
- Construct the vanishing argument to constrain all circuit relations to zero:
- Standard and custom gates.
- Lookup argument rules.
- Equality constraint permutation rules.
- Evaluate the above polynomials at all necessary points:
- All relative rotations used by custom gates across all columns.
- Vanishing argument pieces.
- Construct the multipoint opening argument to check that all evaluations are consistent with their respective commitments.
- Run the inner product argument to create a polynomial commitment opening proof for the multipoint opening argument polynomial.
These stages are presented in turn across this section of the book.
Example
To aid our explanations, we will at times refer to the following example constraint system:
- Four advice columns .
- One fixed column .
- Three custom gates:
tl;dr
The table below provides a (probably too) succinct description of the Halo 2 protocol. This description will likely be replaced by the Halo 2 paper and security proof, but for now serves as a summary of the following sub-sections.
Prover | Verifier | |
---|---|---|
Checks | ||
Constructs multipoint opening poly | ||
Then the prover and verifier:
- Construct as a linear combination of and using powers of ;
- Construct as the equivalent linear combination of and ; and
- Perform
TODO: Write up protocol components that provide zero-knowledge.
Lookup argument
Halo 2 uses the following lookup technique, which allows for lookups in arbitrary sets, and is arguably simpler than Plookup.
Note on Language
In addition to the general notes on language:
- We call the $Z(X)$ polynomial (the grand product argument polynomial for the permutation argument) the "permutation product" column.
Technique Description
For ease of explanation, we'll first describe a simplified version of the argument that ignores zero knowledge.
We express lookups in terms of a "subset argument" over a table with $2^k$ rows (numbered from 0), and columns $A$ and $S$.
The goal of the subset argument is to enforce that every cell in $A$ is equal to some cell in $S$. This means that more than one cell in $A$ can be equal to the same cell in $S$, and some cells in $S$ don't need to be equal to any of the cells in $A$.
- $S$ might be fixed, but it doesn't need to be. That is, we can support looking up values in either fixed or variable tables (where the latter includes advice columns).
- $A$ and $S$ can contain duplicates. If the sets represented by $A$ and/or $S$ are not naturally of size $2^k$, we extend $S$ with duplicates and $A$ with dummy values known to be in $S$.
- Alternatively we could add a "lookup selector" that controls which elements of the $A$ column participate in lookups. This would modify the occurrence of $A(X)$ in the permutation rule below to replace $A$ with, say, $S_0$ if a lookup is not selected.
Let $\ell_i$ be the Lagrange basis polynomial that evaluates to $1$ at row $i$ and $0$ otherwise.
We start by allowing the prover to supply permutation columns of and Let's call these and respectively. We can enforce that they are permutations using a permutation argument with product column with the rules:
i.e. provided that division by zero does not occur, we have for all :
This is a version of the permutation argument which allows and to be permutations of and respectively, but doesn't specify the exact permutations. and are separate challenges so that we can combine these two permutation arguments into one without worrying that they might interfere with each other.
The goal of these permutations is to allow and to be arranged by the prover in a particular way:
- All the cells of column are arranged so that like-valued cells are vertically adjacent to each other. This could be done by some kind of sorting algorithm, but all that matters is that like-valued cells are on consecutive rows in column and that is a permutation of
- The first row in a sequence of like values in is the row that has the corresponding value in Apart from this constraint, is any arbitrary permutation of
Now, we'll enforce that either or that using the rule
In addition, we enforce using the rule
(The term of the first rule here has no effect at row even though "wraps", because of the second rule.)
Together these constraints effectively force every element in (and thus ) to equal at least one element in (and thus ). Proof: by induction on prefixes of the rows.
Zero-knowledge adjustment
In order to achieve zero knowledge for the PLONK-based proof system, we will need the last rows of each column to be filled with random values. This requires an adjustment to the lookup argument, because these random values would not satisfy the constraints described above.
We limit the number of usable rows to We add two selectors:
- is set to on the last rows, and elsewhere;
- is set to only on row and elsewhere (i.e. it is set on the row in between the usable rows and the blinding rows).
We enable the constraints from above only for the usable rows:
The rules that are enabled on row remain the same:
Since we can no longer rely on the wraparound to ensure that the product becomes again at we would instead need to constrain to However, there is a potential difficulty: if any of the values or are zero for then it might not be possible to satisfy the permutation argument. This occurs with negligible probability over choices of and but is an obstacle to achieving perfect zero knowledge (because an adversary can rule out witnesses that would cause this situation), as well as perfect completeness.
To ensure both perfect completeness and perfect zero knowledge, we allow to be either zero or one:
Now if or are zero for some we can set for satisfying the constraint system.
Note that the challenges and are chosen after committing to and (and to and ), so the prover cannot force the case where some or is zero to occur. Since this case occurs with negligible probability, soundness is not affected.
Cost
- There is the original column $A$ and the fixed column $S$.
- There is a permutation product column $Z$.
- There are the two permutations $A'$ and $S'$.
- The gates are all of low degree.
Generalizations
Halo 2's lookup argument implementation generalizes the above technique in the following ways:
- and can be extended to multiple columns, combined using a random challenge.
and stay as single columns.
- The commitments to the columns of can be precomputed, then combined cheaply once the challenge is known by taking advantage of the homomorphic property of Pedersen commitments.
- The columns of can be given as arbitrary polynomial expressions using relative references. These will be substituted into the product column constraint, subject to the maximum degree bound. This potentially saves one or more advice columns.
- Then, a lookup argument for an arbitrary-width relation can be implemented in terms of a
subset argument, i.e. to constrain in each row, consider
as a set of tuples (using the method of the previous point), and check
that
- In the case where represents a function, this implicitly also checks that the inputs are in the domain. This is typically what we want, and often saves an additional range check.
- We can support multiple tables in the same circuit, by combining them into a single
table that includes a tag column to identify the original table.
- The tag column could be merged with the "lookup selector" mentioned earlier, if this were implemented.
These generalizations are similar to those in sections 4 and 5 of the Plookup paper. That is, the differences from Plookup are in the subset argument. This argument can then be used in all the same ways; for instance, the optimized range check technique in section 5 of the Plookup paper can also be used with this subset argument.
Permutation argument
Given that gates in halo2 circuits operate "locally" (on cells in the current row or defined relative rows), it is common to need to copy a value from some arbitrary cell into the current row for use in a gate. This is performed with an equality constraint, which enforces that the source and destination cells contain the same value.
We implement these equality constraints by constructing a permutation that represents the constraints, and then using a permutation argument within the proof to enforce them.
Notation
A permutation is a one-to-one and onto mapping of a set onto itself. A permutation can be factored uniquely into a composition of cycles (up to ordering of cycles, and rotation of each cycle).
We sometimes use cycle notation to write permutations. Let $(a\ b\ c)$ denote a cycle where $a$ maps to $b$, $b$ maps to $c$, and $c$ maps to $a$ (with the obvious generalization to arbitrary-sized cycles). Writing two or more cycles next to each other denotes a composition of the corresponding permutations. For example, $(a\ b)\ (c\ d)$ denotes the permutation that maps $a$ to $b$, $b$ to $a$, $c$ to $d$, and $d$ to $c$.
Constructing the permutation
Goal
We want to construct a permutation in which each subset of variables that are in an equality-constraint set form a cycle. For example, suppose that we have a circuit that defines the following equality constraints:

- $a \equiv b$
- $a \equiv c$
- $d \equiv e$

From this we have the equality-constraint sets $\{a, b, c\}$ and $\{d, e\}$. We want to construct the permutation:

$$(a\ b\ c)\ (d\ e)$$

which defines the mapping of $[a, b, c, d, e]$ to $[b, c, a, e, d]$.
Algorithm
We need to keep track of the set of cycles, which is a set of disjoint sets. Efficient data structures for this problem are known; for the sake of simplicity we choose one that is not asymptotically optimal but is easy to implement.
We represent the current state as:
- an array $\mathsf{mapping}$ for the permutation itself;
- an auxiliary array $\mathsf{aux}$ that keeps track of a distinguished element of each cycle;
- another array $\mathsf{sizes}$ that keeps track of the size of each cycle.

We have the invariant that for each element $x$ in a given cycle $C$, $\mathsf{aux}(x)$ points to the same element $c \in C$. This allows us to quickly decide whether two given elements $x$ and $y$ are in the same cycle, by checking whether $\mathsf{aux}(x) = \mathsf{aux}(y)$. Also, $\mathsf{sizes}(\mathsf{aux}(x))$ gives the size of the cycle containing $x$. (This is guaranteed only for $\mathsf{aux}(x)$, not for $x$.)

The algorithm starts with a representation of the identity permutation: for all $x$, we set $\mathsf{mapping}(x) = x$, $\mathsf{aux}(x) = x$, and $\mathsf{sizes}(x) = 1$.

To add an equality constraint $\mathit{left} \equiv \mathit{right}$:
1. Check whether $\mathit{left}$ and $\mathit{right}$ are already in the same cycle, i.e. whether $\mathsf{aux}(\mathit{left}) = \mathsf{aux}(\mathit{right})$. If so, there is nothing to do.
2. Otherwise, $\mathit{left}$ and $\mathit{right}$ belong to different cycles. Make $\mathit{left}$ the larger cycle and $\mathit{right}$ the smaller one, by swapping them iff $\mathsf{sizes}(\mathsf{aux}(\mathit{left})) < \mathsf{sizes}(\mathsf{aux}(\mathit{right}))$.
3. Set $\mathsf{sizes}(\mathsf{aux}(\mathit{left})) := \mathsf{sizes}(\mathsf{aux}(\mathit{left})) + \mathsf{sizes}(\mathsf{aux}(\mathit{right}))$.
4. Following the mapping around the right (smaller) cycle, for each element $x$ set $\mathsf{aux}(x) := \mathsf{aux}(\mathit{left})$.
5. Splice the smaller cycle into the larger one by swapping $\mathsf{mapping}(\mathit{left})$ with $\mathsf{mapping}(\mathit{right})$.
For example, given two disjoint cycles and :
A +---> B
^ +
| |
+ v
D <---+ C E +---> F
^ +
| |
+ v
H <---+ G
After adding constraint the above algorithm produces the cycle:
A +---> B +-------------+
^ |
| |
+ v
D <---+ C <---+ E F
^ +
| |
+ v
H <---+ G
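As a concrete illustration of the algorithm above, here is a minimal sketch (not the halo2 implementation) in which cells are flattened to indices `0..n`, and the three arrays are the mapping, the auxiliary array of distinguished elements, and the cycle sizes described earlier:

```rust
// Sketch of the equality-constraint cycle-merging state. `mapping` is the
// permutation, `aux` points to a distinguished element of each cycle, and
// `sizes` records cycle sizes (meaningful only at distinguished elements).
struct Cycles {
    mapping: Vec<usize>,
    aux: Vec<usize>,
    sizes: Vec<usize>,
}

impl Cycles {
    /// Start from the identity permutation: every cell is its own cycle.
    fn new(n: usize) -> Self {
        Cycles {
            mapping: (0..n).collect(),
            aux: (0..n).collect(),
            sizes: vec![1; n],
        }
    }

    /// Add the equality constraint `left == right`.
    fn add_constraint(&mut self, mut left: usize, mut right: usize) {
        // 1. If both cells are already in the same cycle, there is nothing to do.
        if self.aux[left] == self.aux[right] {
            return;
        }
        // 2. Make `left` belong to the larger cycle, swapping if necessary.
        if self.sizes[self.aux[left]] < self.sizes[self.aux[right]] {
            std::mem::swap(&mut left, &mut right);
        }
        // 3. The merged cycle's size is the sum of the two cycle sizes.
        let left_aux = self.aux[left];
        let right_size = self.sizes[self.aux[right]];
        self.sizes[left_aux] += right_size;
        // 4. Re-point every element of the smaller (right) cycle at the larger
        //    cycle's distinguished element.
        let mut cur = right;
        loop {
            self.aux[cur] = left_aux;
            cur = self.mapping[cur];
            if cur == right {
                break;
            }
        }
        // 5. Splice the smaller cycle into the larger one by swapping the
        //    images of `left` and `right` under the permutation.
        self.mapping.swap(left, right);
    }
}
```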
Broken alternatives
If we did not check whether and were already in the same cycle, then we could end up undoing an equality constraint. For example, if we have the following constraints:
and we tried to implement adding an equality constraint just using step 5 of the above algorithm, then we would end up constructing the cycle rather than the correct
Argument specification
We need to check a permutation of cells in columns, represented in Lagrange basis by polynomials
We will label each cell in those columns with a unique element of
Suppose that we have a permutation on these labels, in which the cycles correspond to equality-constraint sets.
If we consider the set of pairs , then the values within each cycle are equal if and only if permuting the label in each pair by yields the same set:
Since the labels are distinct, set equality is the same as multiset equality, which we can check using a product argument.
Let be a root of unity and let be a root of unity, where with odd and We will use as the label for the cell in the th row of the th column of the permutation argument.
We represent by a vector of polynomials such that
Notice that the identity permutation can be represented by the vector of polynomials such that
We will use a challenge to compress each pair to Just as in the product argument we used for lookups, we also use a challenge to randomize each term of the product.
Now given our permutation represented by over columns represented by we want to ensure that:
Here represents the unpermuted pair, and represents the permuted pair.
Let be such that and for :
Then it is sufficient to enforce the rules:
This assumes that the number of columns is such that the polynomial in the first rule above fits within the degree bound of the PLONK configuration. We will see below how to handle a larger number of columns.
The optimization used to obtain the simple representation of the identity permutation was suggested by Vitalik Buterin for PLONK, and is described at the end of section 8 of the PLONK paper. Note that the are all distinct quadratic non-residues, provided that the number of columns that are enabled for equality is no more than , which always holds in practice for the curves used in Halo 2.
Zero-knowledge adjustment
Similarly to the lookup argument, we need an adjustment to the above argument to account for the last rows of each column being filled with random values.
We limit the number of usable rows to We add two selectors, defined in the same way as for the lookup argument:
- is set to on the last rows, and elsewhere;
- is set to only on row and elsewhere (i.e. it is set on the row in between the usable rows and the blinding rows).
We enable the product rule from above only for the usable rows:
The rule that is enabled on row remains the same:
Since we can no longer rely on the wraparound to ensure that each product becomes again at we would instead need to constrain This raises the same problem that was described for the lookup argument. So we allow to be either zero or one:
which gives perfect completeness and zero knowledge.
Spanning a large number of columns
The halo2 implementation does not in practice limit the number of columns for which equality constraints can be enabled. Therefore, it must solve the problem that the above approach might yield a product rule with a polynomial that exceeds the PLONK configuration's degree bound. The degree bound could be raised, but this would be inefficient if no other rules require a larger degree.
Instead, we split the product across sets of columns, using product columns and we use another rule to copy the product from the end of one column set to the beginning of the next.
That is, for we have:
For simplicity this is written assuming that the number of columns enabled for equality constraints is a multiple of ; if not then the products for the last column set will have fewer than terms.
For the first column set we have:
For each subsequent column set, we use the following rule to copy to the start of the next column set, :
For the last column set, we allow to be either zero or one:
which gives perfect completeness and zero knowledge as before.
Circuit commitments
Committing to the circuit assignments
At the start of proof creation, the prover has a table of cell assignments that it claims satisfy the constraint system. The table has rows, and is broken into advice, instance, and fixed columns. We define as the assignment in the th row of the th fixed column. Without loss of generality, we'll similarly define to represent the advice and instance assignments.
We separate fixed columns here because they are provided by the verifier, whereas the advice and instance columns are provided by the prover. In practice, the commitments to instance and fixed columns are computed by both the prover and verifier, and only the advice commitments are stored in the proof.
To commit to these assignments, we construct Lagrange polynomials of degree for each column, over an evaluation domain of size (where is the th primitive root of unity):
- interpolates such that .
- interpolates such that .
We then create a blinding commitment to the polynomial for each column:
is constructed as part of key generation, using a blinding factor of . is constructed by the prover and sent to the verifier.
Committing to the lookup permutations
The verifier starts by sampling , which is used to keep individual columns within lookups independent. Then, the prover commits to the permutations for each lookup as follows:
- Given a lookup with input column polynomials and table column polynomials , the prover constructs two compressed polynomials
- The prover then permutes and according to the rules of the lookup argument, obtaining and .

The prover creates blinding commitments for all of the lookups and sends them to the verifier.
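As a rough illustration of the compression step in the first item above, here is a minimal sketch (not the halo2 API; `u64` wrapping arithmetic standing in for field arithmetic) of folding several lookup columns together row by row with powers of the verifier's challenge, Horner-style:

```rust
// Compress m columns into one: for each row, the result is
// theta^{m-1} * col_0 + theta^{m-2} * col_1 + ... + col_{m-1},
// accumulated as ((col_0 * theta + col_1) * theta + ...).
fn compress_columns(columns: &[Vec<u64>], theta: u64) -> Vec<u64> {
    let rows = columns.first().map(|c| c.len()).unwrap_or(0);
    let mut compressed = vec![0u64; rows];
    for column in columns {
        for (acc, &value) in compressed.iter_mut().zip(column.iter()) {
            // acc = acc * theta + value (wrapping arithmetic as a stand-in).
            *acc = acc.wrapping_mul(theta).wrapping_add(value);
        }
    }
    compressed
}
```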
After the verifier receives , , and , it samples challenges and that will be used in the permutation argument and the remainder of the lookup argument below. (These challenges can be reused because the arguments are independent.)
Committing to the equality constraint permutation
Let be the number of columns that are enabled for equality constraints.
Let be the maximum number of columns that can be accommodated by a column set without exceeding the PLONK configuration's maximum constraint degree.
Let be the number of “usable” rows as defined in the Permutation argument section.
Let
The prover constructs a vector of length such that for each column set and each row
The prover then computes a running product of , starting at , and a vector of polynomials that each have a Lagrange basis representation corresponding to a -sized slice of this running product, as described in the Permutation argument section.
The prover creates blinding commitments to each polynomial:
and sends them to the verifier.
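For intuition, here is a minimal sketch (with `u64` wrapping arithmetic standing in for field arithmetic) of the running-product computation used for the permutation product column: it starts at one and multiplies in the per-row term, so the last entry is the product of all terms.

```rust
// Running product: z[0] = 1 and z[i + 1] = z[i] * terms[i].
fn running_product(terms: &[u64]) -> Vec<u64> {
    let mut z = Vec::with_capacity(terms.len() + 1);
    z.push(1u64);
    for &t in terms {
        let last = *z.last().unwrap();
        z.push(last.wrapping_mul(t));
    }
    z
}
```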
Committing to the lookup permutation product columns
In addition to committing to the individual permuted lookups, for each lookup, the prover needs to commit to the permutation product column:
- The prover constructs a vector :
- The prover constructs a polynomial which has a Lagrange basis representation corresponding to a running product of , starting at .
and are used to combine the permutation arguments for and while keeping them independent. The important thing here is that the verifier samples and after the prover has created , , and (and thus committed to all the cell values used in lookup columns, as well as and for each lookup).
As before, the prover creates blinding commitments to each polynomial:
and sends them to the verifier.
Vanishing argument
Having committed to the circuit assignments, the prover now needs to demonstrate that the various circuit relations are satisfied:
- The custom gates, represented by polynomials .
- The rules of the lookup arguments.
- The rules of the equality constraint permutations.
Each of these relations is represented as a polynomial of degree (the maximum degree of any of the relations) with respect to the circuit columns. Given that the degree of the assignment polynomials for each column is , the relation polynomials have degree with respect to .
In our example, these would be the gate polynomials, of degree :
A relation is satisfied if its polynomial is equal to zero. One way to demonstrate this is to divide each polynomial relation by the vanishing polynomial , which is the lowest-degree polynomial that has roots at every . If a relation's polynomial is perfectly divisible by , it is equal to zero over the domain (as desired).
This simple construction would require a polynomial commitment per relation. Instead, we commit to all of the circuit relations simultaneously: the verifier samples , and then the prover constructs the quotient polynomial
where the numerator is a random linear combination of the circuit relations (random because the prover commits to the cell assignments before the verifier samples ).
- If the numerator polynomial (in formal indeterminate ) is perfectly divisible by , then with high probability all relations are satisfied.
- Conversely, if at least one relation is not satisfied, then with high probability will not equal the evaluation of the numerator at . In this case, the numerator polynomial would not be perfectly divisible by .
Committing to
has degree (because the divisor has degree ). However, the polynomial commitment scheme we use for Halo 2 only supports committing to polynomials of degree (which is the maximum degree that the rest of the protocol needs to commit to). Instead of increasing the cost of the polynomial commitment scheme, the prover splits into pieces of degree
and produces blinding commitments to each piece
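As a sketch of the splitting step, assuming the quotient polynomial is represented by its coefficient vector (lowest degree first) and `u64` coefficients stand in for field elements:

```rust
// Split h(X) into pieces of degree at most n - 1, so that
// h(X) = h_0(X) + X^n h_1(X) + X^{2n} h_2(X) + ... .
fn split_quotient(h_coeffs: &[u64], n: usize) -> Vec<Vec<u64>> {
    h_coeffs.chunks(n).map(|piece| piece.to_vec()).collect()
}
```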
Evaluating the polynomials
At this point, we have committed to all properties of the circuit. The verifier now wants to see if the prover committed to the correct polynomial. The verifier samples , and the prover produces the purported evaluations of the various polynomials at , for all the relative offsets used in the circuit, as well as .
In our example, this would be:
- ,
- ,
- , ...,
The verifier checks that these evaluations satisfy the form of :
Now content that the evaluations collectively satisfy the gate constraints, the verifier needs to check that the evaluations themselves are consistent with the original circuit commitments, as well as . To implement this efficiently, we use a multipoint opening argument.
Multipoint opening argument
Consider the commitments to polynomials . Let's say that and were queried at the point , while and were queried at both points and . (Here, is the primitive root of unity in the multiplicative subgroup over which we constructed the polynomials).
To open these commitments, we could create a polynomial for each point that we queried at (corresponding to each relative rotation used in the circuit). But this would not be efficient in the circuit; for example, would appear in multiple polynomials.
Instead, we can group the commitments by the sets of points at which they were queried:
For each of these groups, we combine them into a polynomial set, and create a single for that set, which we open at each rotation.
Optimization steps
The multipoint opening optimization takes as input:
- A random sampled by the verifier, at which we evaluate .
- Evaluations of each polynomial at each point of interest, provided by the prover:
These are the outputs of the vanishing argument.
The multipoint opening optimization proceeds as such:
- Sample random , to keep linearly independent.
- Accumulate polynomials and their corresponding evaluations according to the point set at which they were queried:
  - `q_polys`:
  - `q_eval_sets`: $[[a(x) + x_1 b(x)], [c(x) + x_1 d(x), c(\omega x) + x_1 d(\omega x)]]$

  NB: `q_eval_sets` is a vector of sets of evaluations, where the outer vector corresponds to the point sets, which in this example are and , and the inner vector corresponds to the points in each set.
- Interpolate each set of values in `q_eval_sets`:
  - `r_polys`:
- Construct `f_polys`, which check the correctness of `q_polys`:
  - `f_polys`:

  If , then should be a polynomial. If and then should be a polynomial.
- Sample random to keep the `f_polys` linearly independent.
- Construct .
- Sample random , at which we evaluate :
- Sample random to keep and `q_polys` linearly independent.
- Construct `final_poly`, which is the polynomial we commit to in the inner product argument.
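To illustrate the grouping step, here is a minimal sketch (hypothetical types, not the halo2 API; `u64` wrapping arithmetic standing in for field arithmetic) of accumulating polynomials by the point set at which they are queried, then folding each group with powers of the challenge referred to as the first sampled challenge above:

```rust
use std::collections::BTreeMap;

// Coefficient vectors stand in for real polynomials over the field; a point
// set is a sorted list of rotations, e.g. vec![0] or vec![0, 1].
type Poly = Vec<u64>;
type PointSet = Vec<i32>;

fn group_and_fold(queries: Vec<(PointSet, Poly)>, x1: u64) -> Vec<(PointSet, Poly)> {
    // Group polynomials that are queried at exactly the same point set.
    let mut groups: BTreeMap<PointSet, Vec<Poly>> = BTreeMap::new();
    for (points, poly) in queries {
        groups.entry(points).or_default().push(poly);
    }
    // Fold each group into q_i(X) = p_0(X) + x_1 p_1(X) + x_1^2 p_2(X) + ... .
    groups
        .into_iter()
        .map(|(points, polys)| {
            let len = polys.iter().map(|p| p.len()).max().unwrap_or(0);
            let mut q = vec![0u64; len];
            let mut power = 1u64;
            for p in polys {
                for (acc, c) in q.iter_mut().zip(p) {
                    *acc = acc.wrapping_add(power.wrapping_mul(c));
                }
                power = power.wrapping_mul(x1);
            }
            (points, q)
        })
        .collect()
}
```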
Inner product argument
Halo 2 uses a polynomial commitment scheme for which we can create polynomial commitment opening proofs, based around the Inner Product Argument.
TODO: Explain Halo 2's variant of the IPA.
It is very similar to from Appendix A.2 of BCMS20. See this comparison for details.
Comparison to other work
BCMS20 Appendix A.2
Appendix A.2 of BCMS20 describes a polynomial commitment scheme that is similar to the one described in BGH19 (BCMS20 being a generalization of the original Halo paper). Halo 2 builds on both of these works, and thus itself uses a polynomial commitment scheme that is very similar to the one in BCMS20.
The following table provides a mapping between the variable names in BCMS20, and the equivalent objects in Halo 2 (which builds on the nomenclature from the Halo paper):
| BCMS20 | Halo 2 |
| --- | --- |
| `msm` or | |
| `challenge_i` | |
| `s_poly` | |
| `s_poly_blind` | |
| `s_poly_commitment` | |
| `blind` / | |
Halo 2's polynomial commitment scheme differs from Appendix A.2 of BCMS20 in two ways:
- Step 8 of the algorithm computes a "non-hiding" commitment prior to the inner product argument, which opens to the same value as but is a commitment to a randomly-drawn polynomial. The remainder of the protocol involves no blinding. By contrast, in Halo 2 we blind every single commitment that we make (even for instance and fixed polynomials, though using a blinding factor of 1 for the fixed polynomials); this makes the protocol simpler to reason about. As a consequence of this, the verifier needs to handle the cumulative blinding factor at the end of the protocol, and so there is no need to derive an equivalent to at the start of the protocol.
  - is also an input to the random oracle for ; in Halo 2 we utilize a transcript that has already committed to the equivalent components of prior to sampling .
- The subroutine (Figure 2 of BCMS20) computes the initial group element by adding , which requires two scalar multiplications. Instead, we subtract from the original commitment , so that we're effectively opening the polynomial at the point to the value zero. The computation is more efficient in the context of recursion because is a fixed base (so we can use lookup tables).
Protocol Description
Preliminaries
We take as our security parameter, and unless explicitly noted all algorithms and adversaries are probabilistic (interactive) Turing machines that run in polynomial time in this security parameter. We use to denote a function that is negligible in .
Cryptographic Groups
We let denote a cyclic group of prime order . The identity of a group is written as . We refer to the scalars of elements in as elements in a scalar field of size . Group elements are written in capital letters while scalars are written in lowercase or Greek letters. Vectors of scalars or group elements are written in boldface, i.e. and . Group operations are written additively and the multiplication of a group element by a scalar is written .
We will often use the notation to describe the inner product of two like-length vectors of scalars . We also use this notation to represent the linear combination of group elements such as with , computed in practice by a multiscalar multiplication.
We use to describe a vector of length that contains only zeroes in .
Discrete Log Relation Problem. The advantage metric is defined with respect to the following game.
Given an -length vector of group elements, the discrete log relation problem asks for such that and yet , which we refer to as a non-trivial discrete log relation. The hardness of this problem is tightly implied by the hardness of the discrete log problem in the group as shown in Lemma 3 of [JT20]. Formally, we use the game defined above to capture this problem.
Interactive Proofs
Interactive proofs are a triple of algorithms . The algorithm produces as its output some public parameters commonly referred to by . The prover and verifier are interactive machines (with access to ) and we denote by an algorithm that executes a two-party protocol between them on inputs . The output of this protocol, a transcript of their interaction, contains all of the messages sent between and . At the end of the protocol, the verifier outputs a decision bit.
Zero knowledge Arguments of Knowledge
Proofs of knowledge are interactive proofs where the prover aims to convince the verifier that they know a witness such that for a statement and polynomial-time decidable relation . We will work with arguments of knowledge which assume computationally-bounded provers.
We will analyze arguments of knowledge through the lens of four security notions.
- Completeness: If the prover possesses a valid witness, can they always convince the verifier? It is useful to understand this property as it can have implications for the other security notions.
- Soundness: Can a cheating prover falsely convince the verifier of the correctness of a statement that is not actually correct? We refer to the probability that a cheating prover can falsely convince the verifier as the soundness error.
- Knowledge soundness: When the verifier is convinced the statement is correct, does the prover actually possess ("know") a valid witness? We refer to the probability that a cheating prover falsely convinces the verifier of this knowledge as the knowledge error.
- Zero knowledge: Does the verifier learn anything besides that which can be inferred from the correctness of the statement and the prover's knowledge of a valid witness?
First, we will visit the simple definition of completeness.
Perfect Completeness. An interactive argument has perfect completeness if for all polynomial-time decidable relations and for all non-uniform polynomial-time adversaries
Soundness
Complicating our analysis is that although our protocol is described as an interactive argument, it is realized in practice as a non-interactive argument through the use of the Fiat-Shamir transformation.
Public coin. We say that an interactive argument is public coin when all of the messages sent by the verifier are each sampled with fresh randomness.
Fiat-Shamir transformation. In this transformation an interactive, public coin argument can be made non-interactive in the random oracle model by replacing the verifier algorithm with a cryptographically strong hash function that produces sufficiently random looking output.
This transformation means that in the concrete protocol a cheating prover can easily "rewind" the verifier by forking the transcript and sending new messages to the verifier. Studying the concrete security of our construction after applying this transformation is important. Fortunately, we are able to follow a framework of analysis by Ghoshal and Tessaro ([GT20]) that has been applied to constructions similar to ours.
We will study our protocol through the notion of state-restoration soundness. In this model the (cheating) prover is allowed to rewind the verifier to any previous state it was in. The prover wins if they are able to produce an accepting transcript.
State-Restoration Soundness. Let be an interactive argument with verifier challenges and let the th challenge be sampled from . The advantage metric of a state restoration prover is defined with respect to the following game.
As shown in [GT20] (Theorem 1) state restoration soundness is tightly related to soundness after applying the Fiat-Shamir transformation.
Knowledge Soundness
We will show that our protocol satisfies a strengthened notion of knowledge soundness known as witness extended emulation. Informally, this notion states that for any successful prover algorithm there exists an efficient emulator that can extract a witness from it by rewinding it and supplying it with fresh randomness.
However, we must slightly adjust our definition of witness extended emulation to account for the fact that our provers are state restoration provers and can rewind the verifier. Further, to avoid the need for rewinding the state restoration prover during witness extraction we study our protocol in the algebraic group model.
Algebraic Group Model (AGM). An adversary is said to be algebraic if whenever it outputs a group element it also outputs a representation such that where is the vector of group elements that has seen so far. Notationally, we write to describe a group element enhanced with this representation. We also write to identify the component of the representation of that corresponds with . In other words,
The algebraic group model allows us to perform so-called "online" extraction for some protocols: the extractor can obtain the witness from the representations themselves for a single (accepting) transcript.
State Restoration Witness Extended Emulation. Let be an interactive argument for relation with challenges. For all non-uniform algebraic provers , extractors , and computationally unbounded distinguishers , the advantage metric is defined with respect to the following games.
Zero Knowledge
We say that an argument of knowledge is zero knowledge if the verifier also does not learn anything from their interaction besides that which can be learned from the existence of a valid . More formally,
Perfect Special Honest-Verifier Zero Knowledge. A public coin interactive argument has perfect special honest-verifier zero knowledge (PSHVZK) if for all polynomial-time decidable relations and for all and for all non-uniform polynomial-time adversaries there exists a probabilistic polynomial-time simulator such that where is the internal randomness of the verifier.
In this (common) definition of zero-knowledge the verifier is expected to act "honestly" and send challenges that correspond only with their internal randomness; they cannot adaptively respond to the prover based on the prover's messages. We use a strengthened form of this definition that forces the simulator to output a transcript with the same (adversarially provided) challenges that the verifier algorithm sends to the prover.
Protocol
Let be a primitive root of unity forming the domain with the vanishing polynomial over this domain. Let be positive integers with and . We present an interactive argument for the relation where are (multivariate) polynomials with degree in and has degree at most in any indeterminates .
returns .
For all :
- Let be the exhaustive set of integers (modulo ) such that appears as a term in .
- Let be a list of distinct sets of integers containing and the set .
- Let when .
Let denote the size of , and let denote the size of every without loss of generality.
In the following protocol, we take it for granted that each polynomial is defined such that blinding factors are freshly sampled by the prover and are each present as an evaluation of over the domain . In all of the following, the verifier's challenges cannot be zero or an element in , and some additional limitations are placed on specific challenges as well.
- and proceed in the following rounds of interaction, where in round (starting at )
- sets
- sends a hiding commitment where are the coefficients of the univariate polynomial and is some random, independently sampled blinding factor elided for exposition. (This elision notation is used throughout this protocol description to simplify exposition.)
- responds with a challenge .
- sets .
- sends a commitment where are the coefficients of a randomly sampled univariate polynomial of degree .
- computes univariate polynomial of degree .
- computes at most degree polynomials such that .
- sends commitments for all where denotes the vector of coefficients for .
- responds with challenge and computes .
- sets .
- sends and for all sends such that for all .
- For all and set to be the lowest degree univariate polynomial defined such that for all .
- responds with challenges and initializes .
- Starting at and ending at sets .
- finally sets .
- initializes .
- Starting at and ending at sets .
- finally sets .
- and initialize .
- Starting at and ending at and set .
- Finally and set and where is computed by as using the values provided by .
- sends where defines the coefficients of the polynomial
- responds with challenge .
- sends such that for all .
- responds with challenge .
- sets and
- sets .
- samples a random polynomial of degree with a root at and sends a commitment where defines the coefficients of .
- responds with challenges .
- sets .
- sets (where should correspond with the verifier's computed value ).
- Initialize as the coefficients of and and . and will interact in the following rounds, where in the th round starting in round and ending in round :
- sends and .
- responds with challenge chosen such that is nonzero.
- and set and .
- sets .
- sends and synthetic blinding factor computed from the elided blinding factors.
- accepts only if .
Zero-knowledge and Completeness
We claim that this protocol is perfectly complete. This can be verified by inspection of the protocol; given a valid witness the prover succeeds in convincing the verifier with probability .
We claim that this protocol is perfect special honest-verifier zero knowledge. We do this by showing that a simulator exists which can produce an accepting transcript that is equally distributed with a valid prover's interaction with a verifier with the same public coins. The simulator will act as an honest prover would, with the following exceptions:
- In step of the protocol chooses random degree polynomials (in ) .
- In step of the protocol chooses a random degree polynomial .
- In step of the protocol chooses a random degree polynomial .
- In step of the protocol uses its foreknowledge of the verifier's choice of to produce a degree polynomial conditioned only such that has a root at .
First, let us consider why this simulator always succeeds in producing an accepting transcript. lacks a valid witness and simply commits to random polynomials whenever knowledge of a valid witness would be required by the honest prover. The verifier places no conditions on the scalar values in the transcript. must only guarantee that the check in step of the protocol succeeds. It does so by using its knowledge of the challenge to produce a polynomial which interferes with to ensure it has a root at . The transcript will thus always be accepting due to perfect completeness.
In order to see why produces transcripts distributed identically to the honest prover, we will look at each piece of the transcript and compare the distributions. First, note that (just as the honest prover) uses a freshly random blinding factor for every group element in the transcript, and so we need only consider the scalars in the transcript. acts just as the prover does except in the mentioned cases so we will analyze each case:
- and an honest prover reveal openings of each polynomial , and at most one additional opening of each in step . However, the honest prover blinds their polynomials (in ) with random evaluations over the domain . Thus, the openings of at the challenge (which is prohibited from being or in the domain by the protocol) are distributed identically between and an honest prover.
- Neither nor the honest prover reveal as it is computed by the verifier. However, the honest prover may reveal --- which has a non-trivial relationship with --- were it not for the fact that the honest prover also commits to a random degree polynomial in step , producing a commitment and ensuring that in step when the prover sets the distribution of is uniformly random. Thus, is never revealed by the honest prover nor by .
- The expected value of is computed by the verifier (in step ) and so the simulator's actual choice of is irrelevant.
- is conditioned on having a root at , but otherwise no conditions are placed on and so the distribution of the degree polynomial is uniformly random whether or not has a root at . Thus, the distribution of produced in step is identical between and an honest prover. The synthetic blinding factor also revealed in step is a trivial function of the prover's other blinding factors and so is distributed identically between and an honest prover.
Notes:
- In an earlier version of our protocol, the prover would open each individual commitment at as part of the multipoint opening argument, and the verifier would confirm that a linear combination of these openings (with powers of ) agreed with the expected value of . This was done because it's more efficient in recursive proofs. However, it was unclear to us what the expected distribution of the openings of these commitments was, and so proving that the argument was zero-knowledge was difficult. Instead, we changed the argument so that the verifier computes a linear combination of the commitments and that linear combination is opened at . This avoided leaking .
- As mentioned, in step the prover commits to a random polynomial as a way of ensuring that is not revealed in the multiopen argument. This is done because it's unclear what the distribution of would be.
- Technically it's also possible for us to prove zero-knowledge with a simulator that uses its foreknowledge of the challenge to commit to an which agrees at to the value it will be expected to. This would obviate the need for the random polynomial in the protocol. This may make the analysis of zero-knowledge for the remainder of the protocol a little bit tricky though, so we didn't go this route.
- Group element blinding factors are technically not necessary after step in which the polynomial is completely randomized. However, it's simpler in practice for us to ensure that every group element in the protocol is randomly blinded to make edge cases involving the point at infinity harder.
- It is crucial that the verifier cannot challenge the prover to open polynomials at points in as otherwise the transcript of an honest prover will be forced to contain what could be portions of the prover's witness. We therefore restrict the space of challenges to include all elements of the field except and, for simplicity, we also prohibit the challenge of .
Witness-extended Emulation
Let be the interactive argument described above for relation and some group with scalar field . We can always construct an extractor such that for any non-uniform algebraic prover making at most queries to its oracle, there exists a non-uniform adversary with the property that for any computationally unbounded distinguisher
where .
Proof. We will prove this by invoking Theorem 1 of [GT20]. First, we note that the challenge space for all rounds is the same, i.e. . Theorem 1 requires us to define:
- for all partial transcripts such that .
- an extractor function that takes as input an accepting extended transcript and either returns a valid witness or fails.
- a function returning a probability.
We say that an accepting extended transcript contains "bad challenges" if and only if there exists a partial extended transcript , a challenge , and some sequence of prover messages and challenges such that .
Theorem 1 requires that , when given an accepting extended transcript that does not contain "bad challenges", returns a valid witness for that transcript except with probability bounded above by .
Our strategy is as follows: we will define , establish an upper bound on with respect to an adversary that plays the game, substitute these into Theorem 1, and then walk through the protocol to determine the upper bound of the size of . The adversary plays the game as follows: given the inputs , the adversary simulates the game to using the inputs from the game as public parameters. If manages to produce an accepting extended transcript , invokes a function on and returns its output. We shall define in such a way that for an accepting extended transcript that does not contain "bad challenges", always returns a valid witness whenever does not return a non-trivial discrete log relation. This means that the probability is no greater than , establishing our claim.
Helpful substitutions
We will perform some substitutions to aid in exposition. First, let us define the polynomial
so that we can write . The coefficient vector of is defined such that
where returns when the th bit of is set, and otherwise. We can also write .
Description of function
Recall that an accepting transcript is such that
By inspection of the representations of group elements with respect to (recall that is algebraic and so has them), we obtain the equalities
and the equalities
We define the linear-time function that returns the representation of
which is always a discrete log relation. If any of the equalities above are not satisfied, then this discrete log relation is non-trivial. This is the function invoked by .
The extractor function
The extractor function simply returns from the representation for . Due to the restrictions we will place on the space of bad challenges in each round, we are guaranteed to obtain polynomials such that vanishes over whenever the discrete log relation returned by the adversary's function is trivial. This trivially gives us that the extractor function succeeds with probability bounded above by as required.
Defining
Recall from before that the following equalities hold:
as well as the equality
For convenience let us introduce the following notation
so that we can rewrite the above (after expanding for ) as
We can combine these equations by multiplying both sides of each instance of the first equation by (because is never zero) and substituting for in the second equation, yielding the following equalities:
Lemma 1. If then it follows that for all transcripts that do not contain bad challenges.
Proof. It will be useful to introduce yet another abstraction defined starting with and then recursively defined for all integers such that This allows us to rewrite our above equalities as
We will now show that for all integers such that that whenever the following holds for that the same also holds for
For all integers such that we have that by the definition of . This gives us as no value in nor any challenge is zero. We can use this to relate one half of the equalities with the other half as follows:
Notice that can be rewritten as for all . Thus we can rewrite the above as
Now let us rewrite these equalities substituting with formal indeterminate .
Now let us rescale everything by to remove negative exponents.
This gives us triples of maximal degree- polynomials in that agree at despite having coefficients determined prior to the choice of . The probability that two of these polynomials would agree at and yet be distinct would be by the Schwartz-Zippel lemma and so by the union bound the probability that the three of these polynomials agree and yet any of them is distinct from another is . By the union bound again the probability that any of the triples have multiple distinct polynomials is . By restricting the challenge space for accordingly we obtain for integers and thus .
We can now conclude an equality of polynomials, and thus of coefficients. Consider the coefficients of the constant terms first, which gives us the equalities
No value of is zero, is never chosen to be and each is chosen so that is nonzero, so we can then conclude
An identical process can be followed with respect to the coefficients of the term in the equalities to establish contingent on being nonzero, which it always is. Substituting these in our equalities yields us something simpler
Now we will consider the coefficients in , which yield the equalities
which for similar reasoning as before yields the equalities
Finally we will consider the coefficients in which yield the equalities
which by substitution gives us
Notice that by the definition of we can rewrite this as
which is precisely in the form we set out to demonstrate.
We now proceed by induction from the case (which we know holds) to reach , which gives us
and because and , we obtain , which completes the proof.
Having established that , and given that and are fixed in advance of the choice of , we have that at most one value of (which is nonzero) exists such that and yet . By restricting accordingly we obtain and therefore that the polynomial defined by has a root at .
By construction , giving us that the polynomial defined by evaluates to at the point . We have that are fixed prior to the choice of , and so either the polynomial defined by has a root at (which implies the polynomial defined by evaluates to at the point ) or else is the single solution in for which evaluates to at the point while itself does not. We avoid the latter case by restricting accordingly and can thus conclude that the polynomial defined by evaluates to at .
The remaining work deals strictly with the representations of group elements sent previously by the prover and their relationship with as well as the challenges chosen in each round of the protocol. We will simplify things first by using to represent the polynomial defined by , as it is the case that this corresponds exactly with the like-named polynomial in the protocol itself. We will make similar substitutions for the other group elements (and their corresponding polynomials) to aid in exposition, as the remainder of this proof is mainly tedious application of the Schwartz-Zippel lemma to upper bound the bad challenge space size for each of the remaining challenges in the protocol.
Recall that , and so by substitution we have . Recall also that
We have already established that . Notice that the coefficients in the above expressions for and are fixed prior to the choice of . By the Schwartz-Zippel lemma we have that only at most possible choices of exist such that these expressions are satisfied and yet for any or
By restricting we can conclude that all of the aforementioned inequalities are untrue. Now we can substitute with for all to obtain
Suppose that (which is the polynomial defined by , and is of degree at most ) does not take the form
and yet agrees with this expression at as we've established above. By the Schwartz-Zippel lemma this can only happen for at most choices of and so by restricting we obtain that
Next we will extract the coefficients of this polynomial in (which are themselves polynomials in formal indeterminate ) by again applying the Schwartz-Zippel lemma with respect to ; again, this leads to the restriction and we obtain the following polynomials of degree at most for all
Having established that these are each non-rational polynomials of degree at most we can then say (by the factor theorem) that for each and we have that has a root at . Note that we can interpret each as the restriction of a bivariate polynomial at the point whose degree with respect to is at most and whose coefficients consist of various polynomials (from the representation ) as well as (from the representation ) and (from the representation ). By similarly applying the Schwartz-Zippel lemma and restricting the challenge space with we obtain (by construction of each and in steps 12 and 13 of the protocol) that the prover's claimed value of in step 9 is equal to ; that the value computed by the verifier in step 13 is equal to ; and that for all the prover's claimed values for all .
By construction of (from the representation ) in step 7 we know that where by we refer to the polynomial of degree at most whose coefficients correspond to the concatenated representations of each . As before, suppose that does not take the form . Then because is determined prior to the choice of then by the Schwartz-Zippel lemma we know that it would only agree with at points at most if the polynomials were not equal. By restricting again we obtain and because is a non-rational polynomial by the factor theorem we obtain that vanishes over the domain .
We now have that vanishes over but wish to show that vanishes over at all points to complete the proof. This just involves a sequence of applying the same technique to each of the challenges; since the polynomial has degree at most in any indeterminate by definition, and because each polynomial is determined prior to the choice of concrete challenge by similarly bounding we ensure that vanishes over , completing the proof.
Implementation
Halo 2 proofs
Proofs as opaque byte streams
In proving system implementations like `bellman`, there is a concrete `Proof` struct that encapsulates the proof data, is returned by a prover, and can be passed to a verifier.

`halo2` does not contain any proof-like structures, for several reasons:
- The Proof structures would contain vectors of (vectors of) curve points and scalars. This complicates serialization/deserialization of proofs because the lengths of these vectors depend on the configuration of the circuit. However, we didn't want to encode the lengths of vectors inside of proofs, because at runtime the circuit is fixed, and thus so are the proof sizes.
- It's easy to accidentally put stuff into a Proof structure that isn't also placed in the transcript, which is a hazard when developing and implementing a proving system.
- We needed to be able to create multiple PLONK proofs at the same time; these proofs share many different substructures when they are for the same circuit.
Instead, `halo2` treats proof objects as opaque byte streams. Creation and consumption of these byte streams happens via the transcript:

- The `TranscriptWrite` trait represents something that we can write proof components to (at proving time).
- The `TranscriptRead` trait represents something that we can read proof components from (at verifying time).
Crucially, implementations of `TranscriptWrite` are responsible for writing to some `std::io::Write` buffer at the same time that they hash things into the transcript, and similarly for `TranscriptRead`/`std::io::Read`.
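To illustrate the idea, here is a minimal sketch of that invariant. This is not the real halo2 trait definitions; `absorb` is a stand-in for hashing into the Fiat-Shamir transcript state.

```rust
use std::io::{self, Write};

// A transcript writer appends bytes to the proof stream and absorbs the very
// same bytes into a running hash, so nothing can reach the proof without also
// being committed to in the transcript.
struct SketchTranscriptWriter<W: Write, H: FnMut(&[u8])> {
    writer: W,
    absorb: H,
}

impl<W: Write, H: FnMut(&[u8])> SketchTranscriptWriter<W, H> {
    fn write_proof_bytes(&mut self, bytes: &[u8]) -> io::Result<()> {
        // Hash first, then write the identical bytes to the proof stream.
        (self.absorb)(bytes);
        self.writer.write_all(bytes)
    }
}
```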
As a bonus, treating proofs as opaque byte streams ensures that verification accounts for the cost of deserialization, which isn't negligible due to point compression.
Proof encoding
A Halo 2 proof, constructed over a curve , is encoded as a stream of:
- Points (for commitments to polynomials), and
- Scalars (for evaluations of polynomials, and blinding values).
For the Pallas and Vesta curves, both points and scalars have 32-byte encodings, meaning that proofs are always a multiple of 32 bytes.
The `halo2` crate supports proving multiple instances of a circuit simultaneously, in order to share common proof components and protocol logic.
In the encoding description below, we will use the following circuit-specific constants:
- - the size parameter of the circuit (which has rows).
- - the number of advice columns.
- - the number of fixed columns.
- - the number of instance columns.
- - the number of lookup arguments.
- - the number of permutation arguments.
- - the number of columns involved in permutation argument .
- - the maximum degree for the quotient polynomial.
- - the number of advice column queries.
- - the number of fixed column queries.
- - the number of instance column queries.
- - the number of instances of the circuit that are being proven simultaneously.
As the proof encoding directly follows the transcript, we can break the encoding into sections matching the Halo 2 protocol:
-
PLONK commitments:
- points (repeated times).
- points (repeated times).
- points (repeated times).
- points (repeated times).
-
Vanishing argument:
- points.
- scalars (repeated times).
- scalars (repeated times).
- scalars.
- scalars.
-
PLONK evaluations:
- scalars (repeated times).
- scalars (repeated times).
-
Multiopening argument:
- 1 point.
- 1 scalar per set of points in the multiopening argument.
-
Polynomial commitment scheme:
- points.
- scalars.
Fields
The Pasta curves that we use in `halo2` are designed to be highly 2-adic, meaning that a large multiplicative subgroup exists in each field. That is, we can write with odd. For both Pallas and Vesta, ; this helps to simplify the field implementations.
Sarkar square-root algorithm (table-based variant)
We use a technique from Sarkar2020 to compute square roots in `halo2`. The intuition behind the algorithm is that we can split the task into computing square roots in each multiplicative subgroup.
Suppose we want to find the square root of modulo one of the Pasta primes , where is a non-zero square in . We define a root of unity where is a non-square in , and precompute the following tables:
Let . We can then define as an element of the multiplicative subgroup.
Let
i = 0, 1
Using , we look up such that
Define
i = 2
Look up such that
Define
i = 3
Look up such that
Define
Final result
Look up such that
Let .
We can now write
Squaring the RHS, we observe that Therefore, the square root of is ; the first part we computed earlier, and the second part can be computed with three multiplications using lookups in .
Selector combining
Heavy use of custom gates can lead to a circuit defining many binary selectors, which would increase proof size and verification time.
This section describes an optimization, applied automatically by halo2, that combines binary selector columns into fewer fixed columns.
The basic idea is that if we have binary selectors labelled that are enabled on disjoint sets of rows, then under some additional conditions we can combine them into a single fixed column, say , such that:
However, the devil is in the detail.
The halo2 API allows defining some selectors to be "simple selectors", subject to the following condition:
Every polynomial constraint involving a simple selector must be of the form where is a polynomial involving no simple selectors.
Suppose that has label in some set of simple selectors that are combined into as above. Then this condition ensures that replacing by will not change the meaning of any constraints.
It would be possible to relax this condition by ensuring that every use of a binary selector is substituted by a precise interpolation of its value from the corresponding combined selector. However,
- the restriction simplifies the implementation, developer tooling, and human understanding and debugging of the resulting constraint system;
- the scope to apply the optimization is not impeded very much by this restriction for typical circuits.
Note that replacing by will increase the degree of constraints selected by by , and so we must choose the selectors that are combined in such a way that the maximum degree bound is not exceeded.
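To make the combined column concrete, here is a minimal sketch (not the halo2 implementation), assuming labels are assigned 1, 2, ... within a combination and a value of 0 means that no selector in the combination is enabled on that row:

```rust
// Build the combined fixed column for one combination of simple selectors.
// The selectors in a combination are enabled on disjoint sets of rows, so no
// cell is written twice.
fn combined_column(selectors: &[Vec<bool>], rows: usize) -> Vec<u64> {
    let mut column = vec![0u64; rows];
    for (index, selector) in selectors.iter().enumerate() {
        let label = (index + 1) as u64; // labels are 1-based; 0 means "none enabled"
        for (row, &enabled) in selector.iter().enumerate() {
            if enabled {
                column[row] = label;
            }
        }
    }
    column
}
```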
Identifying selectors that can be combined
We need a partition of the overall set of selectors into subsets (called "combinations"), such that no two selectors in a combination are enabled on the same row.
Labels must be unique within a combination, but they are not unique across combinations. Do not confuse a selector's index with its label.
Suppose that we are given , the degree bound of the circuit.
We use the following algorithm:
- Leave nonsimple selectors unoptimized, i.e. map each of them to a separate fixed column.
- Check (or ensure by construction) that all polynomial constraints involving each simple selector are of the form where do not involve any simple selectors. For each , record the maximum degree of any as .
- Compute a binary "exclusion matrix" such that is whenever
and and are enabled on the same row; and otherwise.
Since is symmetric and is zero on the diagonal, we can represent it by either its upper or lower triangular entries. The rest of the algorithm is guaranteed to access only the entries where .
- Initialize a boolean array to all .
will record whether has been included in any combination.
- Iterate over the that have not yet been added to any combination:
- a. Add to a fresh combination , and set .
- b. Let mut .
is used to keep track of the largest degree, excluding the selector expression, of any gate involved in the combination so far.
- c. Iterate over all the selectors for that can potentially join ,
i.e. for which is false:
- i. (Optimization) If , break to the outer loop, since no more selectors can be added to .
- ii. Let .
- iii. If is for any in , or if
, break to the outer
loop.
is the maximum degree, including the selector expression, of any constraint that would result from adding to the combination .
- iv. Set .
- v. Add to and set .
- d. Allocate a fixed column , initialized to all-zeroes.
- e. For each selector :
- i. Label with a distinct index where .
- ii. Record that should be substituted with in all gate constraints.
- iii. For each row such that is enabled at , assign the value to at row .
The above algorithm is implemented in `halo2_proofs/src/plonk/circuit/compress_selectors.rs`. This is used by the `compress_selectors` function of `halo2_proofs/src/plonk/circuit.rs`, which does the actual substitutions.
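For illustration, here is a minimal sketch (not the implementation linked above) of the exclusion-matrix step, with selectors given as boolean activation vectors over the rows:

```rust
// Entry [j][k] of the exclusion matrix is true iff selectors j and k are both
// enabled on some row. Only the lower-triangular entries (k < j) are filled
// in, matching the observation that the matrix is symmetric with a zero
// diagonal.
fn exclusion_matrix(selectors: &[Vec<bool>]) -> Vec<Vec<bool>> {
    let m = selectors.len();
    let mut excl = vec![vec![false; m]; m];
    for j in 0..m {
        for k in 0..j {
            excl[j][k] = selectors[j]
                .iter()
                .zip(selectors[k].iter())
                .any(|(&a, &b)| a && b);
        }
    }
    excl
}
```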
Writing circuits to take best advantage of selector combining
For this optimization it is beneficial for a circuit to use simple selectors as far as possible, rather than fixed columns. It is usually not beneficial to do manual combining of selectors, because the resulting fixed columns cannot take part in the automatic combining. That means that to get comparable results you would need to do a global optimization manually, which would interfere with writing composable gadgets.
Whether two selectors are enabled on the same row (and so are inhibited from being combined) depends on how regions are laid out by the floor planner. The currently implemented floor planners do not attempt to take this into account. We suggest not worrying about it too much — the gains that can be obtained by cajoling a floor planner to shuffle around regions in order to improve combining are likely to be relatively small.
Gadgets
In this section we document the gadgets and chip designs provided in the `halo2_gadgets` crate.
Neither these gadgets nor their implementations have been reviewed, and they should not be used in production.
Elliptic Curves
EccChip
`halo2_gadgets` provides a chip that implements `EccInstructions` using 10 advice columns. The chip is currently restricted to the Pallas curve, but will be extended to support the Vesta curve in the near future.
Chip assumptions
A non-exhaustive list of assumptions made by `EccChip`:
- is not an -coordinate of a valid point on the curve.
- Holds for Pallas because is not square in .
- is not a -coordinate of a valid point on the curve.
- Holds for Pallas because is not a cube in .
Layout
The following table shows how columns are used by the gates for various chip sub-areas:
- - witnessing points.
- - incomplete point addition.
- - complete point addition.
- - fixed-base scalar multiplication.
- - variable-base scalar multiplication, incomplete rounds.
- - variable-base scalar multiplication, complete rounds.
- - variable-base scalar multiplication, overflow check.
Witnessing points
We represent elliptic curve points in the circuit in their affine representation . The identity is represented as the pseudo-coordinate , which we assume is not a valid point on the curve.
Non-identity points
To constrain a coordinate pair as representing a valid point on the curve, we directly check the curve equation. For Pallas and Vesta, this is:
Points including the identity
To allow to represent either a valid point on the curve, or the pseudo-coordinate , we define a separate gate that enforces the curve equation check unless both and are zero.
We will use formulae for curve arithmetic using affine coordinates on short Weierstrass curves, derived from section 4.1 of Hüseyin Hışıl's thesis.
Incomplete addition
- Inputs:
- Output:
The formulae from Hışıl's thesis are:
Rename to , to , and to , giving
which is equivalent to
Assuming , we have
So we get the constraints:
-
- Note that this constraint is unsatisfiable for (when ), and so cannot be used with arbitrary inputs.
Constraints
Complete addition
Suppose that we represent as . ( is not an -coordinate of a valid point because we would need , and is not square in . Also is not a -coordinate of a valid point because is not a cube in .)
For the doubling case, Hışıl's thesis tells us that has to instead be computed as .
Define
Witness where:
Constraints
Max degree: 6
Analysis of constraints
Propositions:
Cases:
Note that we rely on the fact that is not a valid -coordinate or -coordinate of a point on the Pallas curve other than .
-
-
Completeness:
-
Soundness: is the only solution to
-
-
for
-
Completeness:
-
Soundness: is the only solution to
-
-
for
-
Completeness:
-
Soundness: is the only solution to
-
-
for
-
Completeness:
-
Soundness: is computed correctly, and is the only solution.
-
-
for
-
Completeness:
-
Soundness: is the only solution to
-
-
for and and
-
Completeness:
-
Soundness: is computed correctly, and is the only solution.
-
Fixed-base scalar multiplication
There are fixed bases in the Orchard protocol:
- , used in deriving the nullifier;
- , used in spend authorization;
- base for ;
- and bases for ; and
- base for .
Decompose scalar
We support fixed-base scalar multiplication with three types of scalars:
Full-width scalar
A -bit scalar from . We decompose a full-width scalar into -bit windows:
The scalar multiplication will be computed correctly for representing any integer in the range - that is, the scalar is allowed to be non-canonical.
We range-constrain each -bit word of the scalar decomposition using a polynomial range-check constraint: where
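For intuition, here is a minimal sketch of the window decomposition, taking the scalar as little-endian bits (a stand-in for the in-circuit representation) and using the three-bit window width used in this section:

```rust
// Decompose a scalar, given as little-endian bits, into 3-bit windows
// k_0, ..., k_{W-1} such that k = sum_i k_i * 8^i. In the circuit each window
// is additionally range-constrained to [0, 8).
fn decompose_windows(bits: &[bool]) -> Vec<u8> {
    bits.chunks(3)
        .map(|window| {
            window
                .iter()
                .rev()
                .fold(0u8, |acc, &bit| (acc << 1) | bit as u8)
        })
        .collect()
}
```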
Base field element
We support using a base field element as the scalar in fixed-base multiplication. This occurs, for example, in the scalar multiplication for the nullifier computation of the Action circuit : here, the scalar is the result of a base field addition.
Decompose the base field element into three-bit windows, and range-constrain each window, using the short range decomposition gadget in strict mode, with
If is witnessed directly then no issue of canonicity arises. However, because the scalar is given as a base field element here, care must be taken to ensure a canonical representation, since . That is, we must check that , where is the Pallas base field modulus. Note that
To do this, we decompose into three pieces:
We check the correctness of this decomposition by: If the MSB is not set, then However, in the case where , we must check:
- :
- ,
To check that we make use of the three-bit running sum decomposition:
- Firstly, we constrain to be a -bit value by enforcing its high bits to be all-zero. We can get from the decomposition:
- Then, we constrain bits of to be zeroes; in other words, we constrain the three-bit word We make use of the running sum decomposition to obtain
Define . To check that we use 13 ten-bit lookups, where we constrain the running sum output of the lookup to be if
Short signed scalar
A short signed scalar is witnessed as a magnitude and sign such that
This is used for . We want to compute , where
and are each already constrained to bits (by their use as inputs to ).
Decompose the magnitude into three-bit windows, and range-constrain each window, using the short range decomposition gadget in strict mode, with
We have two additional constraints: where .
Load fixed base
Then, we precompute multiples of the fixed base for each window. This takes the form of a window table: such that:
- for the first (W-1) rows :
- in the last row :
The additional term lets us avoid adding the point at infinity in the case . We offset these accumulated terms by subtracting them in the final window, i.e. we subtract .
Note: Although an offset of would naively suffice, it introduces an edge case when . In this case, the window table entries evaluate to the same point:
In fixed-base scalar multiplication, we sum the multiples of at each window (except the last) using incomplete addition. Since the point doubling case is not handled by incomplete addition, we avoid it by using an offset of
For each window of fixed-base multiples :
- Define a Lagrange interpolation polynomial that maps to the -coordinate of the multiple , i.e.
- Find a value such that is a square in the field, but the wrong-sign -coordinate does not produce a square.
Repeating this for all windows, we end up with:
- an table storing coefficients interpolating the coordinate for each window. Each -coordinate interpolation polynomial will be of the form where and 's are the coefficients for each power of ; and
- a length- array of 's.
We load these precomputed values into fixed columns whenever we do fixed-base scalar multiplication in the circuit.
Fixed-base scalar multiplication
Given a decomposed scalar and a fixed base , we compute as follows:
- For each in the scalar decomposition, witness the - and -coordinates
- Check that is on the curve: .
- Witness such that .
- For all windows but the last, use incomplete addition to sum the 's, resulting in .
- For the last window, use complete addition and return the final result.
Note: complete addition is required in the final step to correctly map to a representation of the point at infinity, ; and also to handle a corner case for which the last step is a doubling.
where (from the Pallas curve equation).
Signed short exponent
Recall that the signed short exponent is witnessed as a bit magnitude , and a sign Using the above algorithm, we compute . Then, to get the final result we conditionally negate using .
Layout
Note: this doesn't include the last row that uses complete addition. In the implementation this is allocated in a different region.
Variable-base scalar multiplication
In the Orchard circuit we need to check where and the scalar field is with .
We have and , for .
Witness scalar
We're trying to compute for . Set and . Then we can compute
provided that , i.e. which covers the whole range we need because in fact .
Thus, given a scalar , we witness the boolean decomposition of (We use big-endian bit order for convenient input into the variable-base scalar multiplication algorithm.)
Variable-base scalar multiplication
We use an optimized double-and-add algorithm, copied from "Faster variable-base scalar multiplication in zk-SNARK circuits" with some variable name changes:
Acc := [2] T
for i from n-1 down to 0 {
P := k_{i+1} ? T : −T
Acc := (Acc + P) + Acc
}
return (k_0 = 0) ? (Acc - T) : Acc
It remains to check that the x-coordinates of each pair of points to be added are distinct.
When adding points in a prime-order group, we can rely on Theorem 3 from Appendix C of the Halo paper, which says that if we have two such points with nonzero indices wrt a given odd-prime order base, where the indices taken in the range are distinct disregarding sign, then they have different x-coordinates. This is helpful, because it is easier to reason about the indices of points occurring in the scalar multiplication algorithm than it is to reason about their x-coordinates directly.
So, the required check is equivalent to saying that the following "indexed version" of the above algorithm never asserts:
acc := 2
for i from n-1 down to 0 {
p = k_{i+1} ? 1 : −1
assert acc ≠ ± p
assert (acc + p) ≠ acc // X
acc := (acc + p) + acc
assert 0 < acc ≤ (q-1)/2
}
if k_0 = 0 {
assert acc ≠ 1
acc := acc - 1
}
The maximum value of acc is:
<--- n 1s --->
1011111...111111
= 1100000...000000 - 1
=
The assertion labelled X obviously cannot fail because . It is possible to see that acc is monotonically increasing except in the last conditional. It reaches its largest value when is maximal, i.e. .
So to entirely avoid exceptional cases, we would need . But we can use a larger if the last iterations use complete addition.
The first for which the algorithm using only incomplete addition fails is going to be , since . We need to make the wraparound technique above work.
sage: q = 0x40000000000000000000000000000000224698fc0994a8dd8c46eb2100000001
sage: 2^253 + 2^252 - 1 < (q-1)//2
False
sage: 2^252 + 2^251 - 1 < (q-1)//2
True
So the last three iterations of the loop () need to use complete addition, as does the conditional subtraction at the end. Writing this out using ⸭ for incomplete addition (as we do in the spec), we have:
Acc := [2] T
for i from 253 down to 3 {
P := k_{i+1} ? T : −T
Acc := (Acc ⸭ P) ⸭ Acc
}
for i from 2 down to 0 {
P := k_{i+1} ? T : −T
Acc := (Acc + P) + Acc // complete addition
}
return (k_0 = 0) ? (Acc + (-T)) : Acc // complete addition
Constraint program for optimized double-and-add
Define a running sum , where and:
The helper . After substitution of , and , this becomes:
Here, is assigned to a cell. This is unlike previous 's, which were implicitly derived from , but never actually assigned.
The bits are used in three further steps, using complete addition:
If the least significant bit we set otherwise we set . Then we return using complete addition.
Let
Output .
(Note that represents .)
Incomplete addition
We need six advice columns to witness . However, since are the same, we can perform two incomplete additions in a single row, reusing the same . We split the scalar bits used in incomplete addition into and halves and process them in parallel. This means that we effectively have two for loops:
- the first, covering the half for from down to , with a special case at ; and
- the second, covering the half for the remaining from down to , with a special case at .
For each and half, we have three sets of gates. Note that is going from ; is NOT indexing the rows.
This gate is only used on the first row (before the for loop). We check that are initialized to values consistent with the initial where
This gate is used on all rows corresponding to the for loop except the last.
where
This gate is used on the final iteration of the for loop, handling the special case where we check that the output has been witnessed correctly. where
Complete addition
We reuse the complete addition constraints to implement the final rounds of double-and-add. This requires two rows per round because we need 9 advice columns for each complete addition. In the 10th advice column we stash the other cells that we need to correctly implement the double-and-add:
- The base coordinate, so we can conditionally negate it as input to one of the complete additions.
- The running sum, which we constrain over two rows instead of sequentially.
Layout
Constraints
In addition to the complete addition constraints, we define the following gate:
where .
LSB
Layout
Constraints
where .
Overflow check
cannot overflow for any , because it is a weighted sum of bits only up to , which is smaller than (and also ).
However, can overflow .
Note: for full-width scalar mul, it may not be possible to represent in the base field (e.g. when the base field is Pasta's and ). In that case, we need to special-case the row that would mention so that it is correct for whatever representation we use for a full-width scalar. Our representation for will be the pair . We'll use in place of for , constraining to 254 bits so that it fits in an element. Then we just have to generalize the argument below to work for (because the maximum value of is ).
Since overflow can only occur in the final step that constrains , we have . It is then sufficient to also check that (so that ) and that . These conditions together imply that as an integer, and so as required.
Note: the bits do not represent a value reduced modulo , but rather a representation of the unreduced .
Optimized check for
Since (also true if and are swapped), we have
We may assume that .
(This is true for the use of variable-base scalar mul in Orchard, where we know that . It is also true if we swap and so that we have . It is not true for a full-width scalar when .)
Therefore,
Given , we prove equivalence of and as follows:
- shift the range by to give ;
- observe that is guaranteed to be in and therefore cannot overflow or underflow modulo ;
- using the fact that , observe that .
(We can see in a different way that this is correct by observing that it checks whether , so the upper bound is aligned as we would expect.)
Now, we can continue optimizing from :
Constraining to be all- or not-all- can be implemented almost "for free", as follows.
Recall that , so we have:
So are all exactly when .
Finally, we can merge the -bit decompositions for the and cases by checking that .
Overflow check constraints
Let . The constraints for the overflow check are:
Define
Witness , and decompose as .
Then the needed gates are:
where can be computed by another running sum. Note that the factor of has no effect on the constraint, since the RHS is zero.
Running sum range check
We make use of a -bit lookup range check in the circuit to subtract the low bits of . The range check subtracts the first bits of and right-shifts the result to give
Overflow check (general)
Recall that we defined , where .
cannot overflow for any , because it is a weighted sum of bits only up to and including . When and this sum can be at most , which is smaller than (and also ).
However, for full-width scalar mul, it may not be possible to represent in the base field (e.g. when the base field is Pasta's and ). In that case can overflow .
So, we need to special-case the row that would mention so that it is correct for whatever representation we use for a full-width scalar.
Our representation for will be the pair . We'll use in place of for , constraining to 254 bits so that it fits in an element.
Then we just have to generalize the overflow check used for variable-base scalar mul in the Orchard circuit to work for (because the maximum value of is ).
Note: the bits do not represent a value reduced modulo , but rather a representation of the unreduced .
Overflow can only occur in the final step that constrains , and only if has the bit with weight set (i.e. if ). If we instead set , now cannot overflow and should be equal to .
It is then sufficient to also check that as an integer where .
Represent as where we constrain and and to boolean. For this to be a canonical representation we also need .
Let .
If :
- constrain and . This cannot overflow because in this case and so .
If :
- we should have iff , i.e. witness as boolean and then
- If then constrain .
- This can be done by constraining either or . ( cannot overflow.)
- If then constrain .
- This can be done by constraining and .
- If then constrain .
Overflow check constraints (general)
Represent as as above.
The constraints for the overflow check are:
Note that the four 130-bit constraints marked are in two pairs that occur in disjoint cases. We can therefore combine them into two 130-bit constraints using a new witness variable ; the other constraint always being on :
( is unconstrained and can be witnessed as in the case .)
Cost
- 25 10-bit and one 3-bit range check, to constrain to 253 bits;
- 25 10-bit and one 3-bit range check, to constrain to 253 bits when ;
- two times 13 10-bit range checks.
Sinsemilla
Overview
Sinsemilla is a collision-resistant hash function and commitment scheme designed to be efficient in algebraic circuit models that support lookups, such as PLONK or Halo 2.
The security properties of Sinsemilla are similar to Pedersen hashes; it is not designed to be used where a random oracle, PRF, or preimage-resistant hash is required. The only claimed security property of the hash function is collision-resistance for fixed-length inputs.
Sinsemilla is roughly 4 times less efficient than the algebraic hashes Rescue and Poseidon inside a circuit, but around 19 times more efficient than Rescue outside a circuit. Unlike either of these hashes, the collision resistance property of Sinsemilla can be proven based on cryptographic assumptions that have been well-established for at least 20 years. Sinsemilla can also be used as a computationally binding and perfectly hiding commitment scheme.
The general approach is to split the message into -bit pieces, and for each piece, select from a table of bases in our cryptographic group. We combine the selected bases using a double-and-add algorithm. This ends up being provably as secure as a vector Pedersen hash, and makes advantageous use of the lookup facility supported by Halo 2.
Description
This section is an outline of how Sinsemilla works: for the normative specification, refer to §5.4.1.9 Sinsemilla Hash Function in the protocol spec. The incomplete point addition operator, ⸭, that we use below is also defined there.
Let be a cryptographic group of prime order . We write additively, with identity , and using for scalar multiplication of by .
Let be an integer chosen based on efficiency considerations (the table size will be ). Let be an integer, fixed for each instantiation, such that messages are bits, where . We use zero-padding to the next multiple of bits if necessary.
: Choose and as independent, verifiably random generators of , using a suitable hash into , such that none of or are .
In Orchard, we define to be dependent on a domain separator . The protocol specification uses in place of and in place of .
:
- Split into groups of bits. Interpret each group as a -bit little-endian integer .
- let
- for from up to :
- let
- return
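To make the shape of this loop concrete, here is a minimal runnable sketch of the accumulation pattern, using the additive group of integers modulo a prime as a stand-in for the Pallas group (so the incomplete point addition becomes ordinary modular addition). The constant `q` and the table `s` are arbitrary placeholders; in the real scheme they are the verifiably random generators described above.

```rust
// Toy stand-in for the group: integers mod a prime, written additively.
const P: u64 = 0xffff_ffff_0000_0001; // an example 64-bit prime modulus

fn group_add(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % P as u128) as u64
}

/// Acc := Q; then for each K-bit chunk m: Acc := (Acc ⸭ S[m]) ⸭ Acc.
fn sinsemilla_accumulate(q: u64, s: &[u64], chunks: &[usize]) -> u64 {
    let mut acc = q;
    for &m in chunks {
        acc = group_add(group_add(acc, s[m]), acc);
    }
    acc
}

fn main() {
    const K: usize = 10;
    let q: u64 = 12345; // placeholder for the domain-dependent generator Q
    // Placeholder table of 2^K "generators"; the real S[m] come from hashing to the curve.
    let s: Vec<u64> = (0..(1u64 << K)).map(|i| i * 7919 + 1).collect();
    let chunks = [3usize, 1023, 0, 512]; // a toy message, already split into 10-bit words
    println!("accumulator (toy group): {}", sinsemilla_accumulate(q, &s, &chunks));
}
```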
Let be the -coordinate of . (This assumes that is a prime-order elliptic curve in short Weierstrass form, as is the case for Pallas and Vesta.)
It is slightly more efficient to express a double-and-add as . We also use incomplete additions: it is shown in the Sinsemilla security argument that in the case where is a prime-order short Weierstrass elliptic curve, an exceptional case for addition would lead to finding a discrete logarithm, which can be assumed to occur with negligible probability even for adversarial input.
Use as a commitment scheme
Choose another generator independently of and .
The randomness for a commitment is chosen uniformly on .
Let .
Let be the of . (This again assumes that is a prime-order elliptic curve in short Weierstrass form.)
Note that unlike a simple Pedersen commitment, this commitment scheme ( or ) is not additively homomorphic.
Efficient implementation
The aim of the design is to optimize the number of bits that can be processed for each step of the algorithm (which requires a doubling and addition in ) for a given table size. Using a single table of size group elements, we can process bits at a time.
Incomplete addition
In each step of Sinsemilla we want to compute . Let be the intermediate result such that . Recalling the incomplete addition formulae:
Let . Substituting the coordinates for each of the incomplete additions in turn, and rearranging, we get
and
Constraint program
Let .
Input: . (The message words are 1-indexed here, as in the protocol spec, but we start the loop from so that corresponds to in the protocol spec.)
Output: .
- for from up to :
PLONK / Halo 2 constraints
Message decomposition
We have an -bit message . (Note that the message words are 1-indexed as in the protocol spec.)
Initialise the running sum and define . We will end up with
Rearranging gives us an expression for each word of the original message , which we can look up in the table. We position and in adjacent rows of the same column, so we can sequentially apply the constraint across the entire message.
In other words, .
For a little-endian decomposition as used here, the running sum is initialized to the scalar and ends at 0. For a big-endian decomposition as used in variable-base scalar multiplication, the running sum would start at 0 and end with recovering the original scalar.
Efficient packing
The running sum only applies to message words within a single field element. That means if then we will need several disjoint running sums. A longer message can be constructed by splitting the message words across several field elements, and then running several instances of the constraints below.
The expression for above requires rows in the column, leaving a one-row gap in adjacent columns and making trickier to accumulate. In order to support chaining multiple field elements without a gap, we use a slightly more complicated expression for that includes a selector:
This effectively forces to zero for the last step of each element, which allows the cell that would have been to be used to reinitialize the running sum for the next element.
With this sorted out, the incomplete addition accumulator can eliminate almost entirely, by substituting for and values in the current and next rows. The two exceptions are at the start of Sinsemilla (where we need to constrain the accumulator to have initial value ), and the end (where we need to witness for use outside of Sinsemilla).
Selectors
We need a total of four logical selectors to:
- Control the Sinsemilla gate and lookup.
- Distinguish between the last message word in a running sum and its earlier words.
- Mark the start of Sinsemilla.
- Mark the end of Sinsemilla.
We use regular selector columns for the Sinsemilla gate selector and the Sinsemilla start selector. The other two selectors are synthesized from a single fixed column as follows:
We set to on most Sinsemilla rows, and for the last step of each element, except for the last element where it is set to . We can then use to toggle between constraining the substituted on adjacent rows, and the witnessed at the end of Sinsemilla:
Generator lookup table
The Sinsemilla circuit makes use of pre-computed random generators. These are loaded into a lookup table:
Layout
, , , etc. are copied in using equality constraints.
Optimized Sinsemilla gate
The Halo 2 circuit API can automatically substitute , , , and , so we don't need to do that manually.
- for from up to :
Note that each term of the last constraint is multiplied by relative to the constraint program given earlier. This is a small optimization that avoids divisions by .
By gating the lookup expression on , we avoid the need to fill in unused cells with dummy values to pass the lookup argument. The optimized lookup value (using a default index of ) is:
This increases the degree of the lookup argument to .
MerkleCRH
Message decomposition
is used in the hash function. The input to is:
where:
- ,
- ,
- ,
where and are allowed to be non-canonical -bit encodings of and .
Sinsemilla operates on multiples of 10 bits, so we start by decomposing the message into chunks:
Then we recompose the chunks into `MessagePiece`s:
Each message piece is constrained by to its stated length. Additionally, and are witnessed as field elements, so we know that they are canonical. However, we need additional constraints to enforce that the chunks are the correct bit lengths (or else they could overlap in the decompositions and allow the prover to witness an arbitrary message).
Some of these constraints can be implemented with reusable circuit gadgets. We define a custom gate controlled by the selector to hold the remaining constraints.
Bit length constraints
Chunk is directly constrained by Sinsemilla. We constrain the remaining chunks with the following constraints:
, the index-1 running sum output of , is copied into the gate. has been constrained by to be bits, and is precisely . We recover chunk using
, the index-1 running sum output of , is copied into the gate. has been constrained by to be bits. We witness the subpieces outside this gate, and constrain them each to be bits. Inside the gate, we check that We also recover the subpiece using :
Constraints
where is a short lookup range check.
Decomposition constraints
We have now derived or witnessed every subpiece, and range-constrained every subpiece:
- ( bits), derived as ;
- ( bits), equal to ;
- ( bits), derived as ;
- ( bits) is witnessed and constrained outside the gate;
- ( bits) is witnessed and constrained outside the gate;
- ( bits) is witnessed and constrained outside the gate.
- is constrained to equal .
We can now use them to reconstruct the original field element inputs:
Region layout
Circuit components
The Orchard circuit spans advice columns while the chip only uses advice columns. We distribute the path hashing evenly across two chips to make better use of the available circuit area. Since the output from the previous layer hash is copied into the next layer hash, we maintain continuity even when moving from one chip to the other.
Decomposition
Given a field element , these gadgets decompose it into -bit windows, where each window is a -bit value.
This is done using a running sum. We initialize the running sum and compute subsequent terms. This gives us:
Strict mode
Strict mode constrains the running sum output to be zero, thus range-constraining the field element to be within bits.
In strict mode, we are also assured that gives us the last window in the decomposition.
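To make the running-sum arithmetic concrete, here is a small runnable sketch over plain integers (the gadget performs the same arithmetic in the base field); the names and example values are illustrative.

```rust
/// Decompose `alpha` into `num_windows` windows of `window_bits` bits each, returning the
/// windows and the running-sum values z_i, where z_0 = alpha and z_{i+1} = (z_i - k_i) / 2^K.
/// In strict mode the final running-sum value must be zero.
fn running_sum_decompose(alpha: u128, window_bits: u32, num_windows: usize) -> (Vec<u128>, Vec<u128>) {
    let mut z = vec![alpha];
    let mut windows = Vec::new();
    for i in 0..num_windows {
        let k_i = z[i] & ((1u128 << window_bits) - 1); // low K bits of the current running sum
        windows.push(k_i);
        z.push((z[i] - k_i) >> window_bits);           // z_{i+1} = (z_i - k_i) / 2^K
    }
    (windows, z)
}

fn main() {
    let (windows, z) = running_sum_decompose(0b_101_110_001, 3, 3);
    assert_eq!(windows, vec![0b001u128, 0b110, 0b101]); // little-endian 3-bit windows
    assert_eq!(*z.last().unwrap(), 0);                  // strict mode: nothing left over
    println!("windows = {:?}, running sums = {:?}", windows, z);
}
```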
Lookup decomposition
This gadget makes use of a -bit lookup table to decompose a field element into -bit words. Each -bit word is range-constrained by a lookup in the -bit table.
The region layout for the lookup decomposition uses a single advice column , and two selectors and
Short range check
Using two -bit lookups, we can range-constrain a field element to be bits, where . To do this:
- Constrain to be within bits using a -bit lookup.
- Constrain to be within bits using a -bit lookup.
The short variant of the lookup decomposition introduces a selector. The same advice column has here been renamed to for clarity:
where Note that is assigned to a fixed column at keygen, and copied in at proving time. This is used in the gate enabled by the selector to check that was shifted correctly:
Combined lookup expression
Since the lookup decomposition and its short variant both make use of the same lookup table, we combine their lookup input expressions into a single one:
where and are the same cell (but distinguished here for clarity of usage).
Short range decomposition
For a short range (for instance, where ), we can range-constrain each word using a degree- polynomial constraint instead of a lookup:
SHA-256
Specification
SHA-256 is specified in NIST FIPS PUB 180-4.
Unlike the specification, we use for addition modulo , and for field addition. is used for XOR.
Gadget interface
SHA-256 maintains state in eight 32-bit variables. It processes input as 512-bit blocks, but internally splits these blocks into 32-bit chunks. We therefore designed the SHA-256 gadget to consume input in 32-bit chunks.
Chip instructions
The SHA-256 gadget requires a chip with the following instructions:
```rust
#![allow(unused)]
fn main() {
extern crate halo2_proofs;
use halo2_proofs::plonk::Error;
use std::fmt;

trait Chip: Sized {}
trait Layouter<C: Chip> {}
const BLOCK_SIZE: usize = 16;
const DIGEST_SIZE: usize = 8;

pub trait Sha256Instructions: Chip {
    /// Variable representing the SHA-256 internal state.
    type State: Clone + fmt::Debug;
    /// Variable representing a 32-bit word of the input block to the SHA-256 compression
    /// function.
    type BlockWord: Copy + fmt::Debug;

    /// Places the SHA-256 IV in the circuit, returning the initial state variable.
    fn initialization_vector(layouter: &mut impl Layouter<Self>) -> Result<Self::State, Error>;

    /// Starting from the given initial state, processes a block of input and returns the
    /// final state.
    fn compress(
        layouter: &mut impl Layouter<Self>,
        initial_state: &Self::State,
        input: [Self::BlockWord; BLOCK_SIZE],
    ) -> Result<Self::State, Error>;

    /// Converts the given state into a message digest.
    fn digest(
        layouter: &mut impl Layouter<Self>,
        state: &Self::State,
    ) -> Result<[Self::BlockWord; DIGEST_SIZE], Error>;
}
}
```
TODO: Add instruction for computing padding.
This set of instructions was chosen to strike a balance between the reusability of the instructions, and the scope for chips to internally optimise them. In particular, we considered splitting the compression function into its constituent parts (Ch, Maj etc), and providing a compression function gadget that implemented the round logic. However, this would prevent chips from using relative references between the various parts of a compression round. Having an instruction that implements all compression rounds is also similar to the Intel SHA extensions, which provide an instruction that performs multiple compression rounds.
16-bit table chip for SHA-256
This chip implementation is based around a single 16-bit lookup table. It requires a minimum of circuit rows, and is therefore suitable for use in larger circuits.
We target a maximum constraint degree of . That will allow us to handle constraining carries and "small pieces" to a range of up to in one row.
Compression round
There are compression rounds. Each round takes 32-bit values as input, and performs the following operations:
where must handle a carry .
Define as a table mapping a -bit input to an output interleaved with zero bits. We do not require a separate table for range checks because can be used.
Modular addition
To implement addition modulo , we note that this is equivalent to adding the operands using field addition, and then masking away all but the lowest 32 bits of the result. For example, if we have two operands and :
we decompose each operand (along with the result) into 16-bit chunks:
and then reformulate the constraint using field addition:
More generally, any bit-decomposition of the output can be used, not just a decomposition into 16-bit chunks. Note that this correctly handles the carry from .
This constraint requires that each chunk is correctly range-checked (or else an assignment could overflow the field).
- The operand and result chunks can be constrained using , by looking up each chunk in the "dense" column within a subset of the table. This way we additionally get the "spread" form of the output for free; in particular this is true for the output of the bottom-right which becomes , and the output of the leftmost which becomes . We will use this below to optimize and .
- must be constrained to the precise range of allowed carry values for the number of operands. We do this with a small range constraint.
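The following runnable sketch walks through the same check with ordinary Rust integers standing in for field elements (all names are illustrative): the operands are split into 16-bit chunks, the chunks are added without wrapping, and the result is checked to decompose as a carry times 2^32 plus the 32-bit output.

```rust
fn main() {
    let (a, b): (u32, u32) = (0xdead_beef, 0xcafe_f00d);

    // Decompose operands into 16-bit chunks (in-circuit, these would be range-checked by lookups).
    let (a_lo, a_hi) = (a & 0xffff, a >> 16);
    let (b_lo, b_hi) = (b & 0xffff, b >> 16);

    // "Field-style" addition of the chunked operands: no wrapping.
    let sum: u64 = (a_lo + b_lo) as u64 + (((a_hi + b_hi) as u64) << 16);

    // Witness the result chunks and the carry, then check the constraint
    // a_lo + b_lo + 2^16 (a_hi + b_hi) = carry * 2^32 + c_hi * 2^16 + c_lo.
    let c = a.wrapping_add(b);
    let (c_lo, c_hi) = ((c & 0xffff) as u64, (c >> 16) as u64);
    let carry = sum >> 32; // for two operands, carry ∈ {0, 1}
    assert!(carry <= 1);
    assert_eq!(sum, (carry << 32) + (c_hi << 16) + c_lo);
    println!("{a:#x} ⊞ {b:#x} = {c:#x}, carry = {carry}");
}
```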
Maj function
can be done in lookups: chunks
- As mentioned above, after the first round we already have in spread form . Similarly, and are equal to the and respectively of the previous round, and therefore in the steady state we already have them in spread form and . In fact we can also assume we have them in spread form in the first round, either from the fixed IV or from the use of to reduce the output of the feedforward in the previous block.
- Add the spread forms in the field: ;
- We can add them as -bit words or in pieces; it's equivalent
- Witness the compressed even bits and the compressed odd bits for ;
- Constrain , where is the function output.
Note: by "even" bits we mean the bits of weight an even-power of , i.e. of weight . Similarly by "odd" bits we mean the bits of weight an odd-power of .
Ch function
TODO: can probably be optimized to or lookups using an additional table.
can be done in lookups: chunks
- As mentioned above, after the first round we already have in spread form . Similarly, and are equal to the and respectively of the previous round, and therefore in the steady state we already have them in spread form and . In fact we can also assume we have them in spread form in the first round, either from the fixed IV or from the use of to reduce the output of the feedforward in the previous block.
- Calculate and , where .
- We can add them as -bit words or in pieces; it's equivalent.
- works to compute the spread of even though negation and do not commute in general. It works because each spread bit in is subtracted from , so there are no borrows.
- Witness such that , and similarly for .
- is the function output.
Σ_0 function
can be done in lookups.
To achieve this we first split into pieces , of lengths bits respectively counting from the little end. At the same time we obtain the spread forms of these pieces. This can all be done in two PLONK rows, because the and -bit pieces can be handled using lookups, and the -bit piece can be split into -bit subpieces. The latter and the remaining -bit piece can be range-checked by polynomial constraints in parallel with the two lookups, two small pieces in each row. The spread forms of these small pieces are found by interpolation.
Note that the splitting into pieces can be combined with the reduction of , i.e. no extra lookups are needed for the latter. In the last round we reduce after adding the feedforward (requiring a carry of up to which is fine).
is equivalent to :
Then, using more lookups we obtain the result as the even bits of a linear combination of the pieces:
That is, we witness the compressed even bits and the compressed odd bits , and constrain where is the function output.
Σ_1 function
can be done in lookups.
To achieve this we first split into pieces , of lengths bits respectively counting from the little end. At the same time we obtain the spread forms of these pieces. This can all be done in two PLONK rows, because the and -bit pieces can be handled using lookups, the -bit piece can be split into and -bit subpieces, and the -bit piece can be split into -bit subpieces. The four small pieces can be range-checked by polynomial constraints in parallel with the two lookups, two small pieces in each row. The spread forms of these small pieces are found by interpolation.
Note that the splitting into pieces can be combined with the reduction of , i.e. no extra lookups are needed for the latter. In the last round we reduce after adding the feedforward (requiring a carry of up to which is fine).
is equivalent to .
Then, using more lookups we obtain the result as the even bits of a linear combination of the pieces, in the same way we did for :
That is, we witness the compressed even bits and the compressed odd bits , and constrain where is the function output.
Block decomposition
For each block of the padded message, words of bits each are constructed as follows:
- The first are obtained by splitting into -bit blocks
- The remaining words are constructed using the formula: for .
Note: -based numbering is used for the word indices.
Note: is a right-shift, not a rotation.
σ_0 function
is equivalent to .
As above but with pieces of lengths counting from the little end. Split into two -bit subpieces.
σ_1 function
is equivalent to .
TODO: this diagram doesn't match the expression on the right. This is just for consistency with the other diagrams.
As above but with pieces of lengths counting from the little end. Split into -bit subpieces.
Message scheduling
We apply to , and to . In order to avoid redundant applications of , we can merge the splitting into pieces for and in the case of . Merging the piece lengths and gives pieces of lengths .
If we can do the merged split in rows (as opposed to a total of rows when splitting for and separately), we save rows.
These might even be doable in rows; not sure. —Daira
We can merge the reduction mod of into their splitting when they are used to compute subsequent words, similarly to what we did for and in the round function.
We will still need to reduce since they are not split. (Technically we could leave them unreduced since they will be reduced later when they are used to compute and -- but that would require handling a carry of up to rather than , so it's not worth the complexity.)
The resulting message schedule cost is:
- rows to constrain to bits
- This is technically optional, but let's do it for robustness, since the rest of the input is constrained for free.
- rows to split into -bit pieces
- rows to split into -bit pieces (merged with a reduction for )
- rows to split into -bit pieces (merged with a reduction)
- rows to extract the results of for
- rows to extract the results of for
- rows to reduce
- rows.
Overall cost
For each round:
- rows for
- rows for
- rows for
- rows for
- and are always free
- per round
This gives rows for all of "step 3", to which we need to add:
- rows for message scheduling
- rows for reductions mod in "step 4"
giving a total of rows.
Tables
We only require one table , with rows and columns. We need a tag column to allow selecting -bit subsets of the table for and .
`spread` table
row | tag | table (16b) | spread (32b)
----|-----|-------------|-------------
0 | 0 | 0000000000000000 | 00000000000000000000000000000000
1 | 0 | 0000000000000001 | 00000000000000000000000000000001
2 | 0 | 0000000000000010 | 00000000000000000000000000000100
3 | 0 | 0000000000000011 | 00000000000000000000000000000101
... | 0 | ... | ...
127 | 0 | 0000000001111111 | 00000000000000000001010101010101
128 | 1 | 0000000010000000 | 00000000000000000100000000000000
... | 1 | ... | ...
1023 | 1 | 0000001111111111 | 00000000000001010101010101010101
... | 2 | ... | ...
2047 | 2 | 0000011111111111 | 00000000000101010101010101010101
... | 3 | ... | ...
8191 | 3 | 0001111111111111 | 00000001010101010101010101010101
... | 4 | ... | ...
16383 | 4 | 0011111111111111 | 00000101010101010101010101010101
... | 5 | ... | ...
65535 | 5 | 1111111111111111 | 01010101010101010101010101010101
For example, to do an -bit lookup, we polynomial-constrain the tag to be in . For the most common case of a -bit lookup, we don't need to constrain the tag. Note that we can fill any unused rows beyond with a duplicate entry, e.g. all-zeroes.
Gates
Choice gate
Input from previous operations:
- 64-bit spread forms of 32-bit words , assumed to be constrained by previous operations
- in practice, we'll have the spread forms of after they've been decomposed into 16-bit subpieces
- is defined as
E ∧ F
s_ch | |||||
---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | ||||
1 | {0,1,2,3,4,5} | ||||
0 | {0,1,2,3,4,5} | ||||
0 | {0,1,2,3,4,5} |
¬E ∧ G
s_ch_neg | ||||||
---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | |||||
1 | {0,1,2,3,4,5} | |||||
0 | {0,1,2,3,4,5} | |||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_ch` (choice):
- `s_ch_neg` (negation): `s_ch` with an extra negation check
- lookup on
- permutation between
Output:
Majority gate
Input from previous operations:
- 64-bit spread forms of 32-bit words , assumed to be constrained by previous operations
- in practice, we'll have the spread forms of after they've been decomposed into -bit subpieces
s_maj | ||||||
---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | |||||
1 | {0,1,2,3,4,5} | |||||
0 | {0,1,2,3,4,5} | |||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_maj` (majority):
- lookup on
- permutation between
Output:
Σ_0 gate
is a 32-bit word split into -bit chunks, starting from the little end. We refer to these chunks as respectively, and further split into three 3-bit chunks . We witness the spread versions of the small chunks.
s_upp_sigma_0 | |||||||
---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | ||||||
1 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_upp_sigma_0` ( constraint):
- lookup on
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on (see section Helper gates)
Output:
Σ_1 gate
is a 32-bit word split into -bit chunks, starting from the little end. We refer to these chunks as respectively, and further split into two 3-bit chunks and into (2,3)-bit chunks . We witness the spread versions of the small chunks.
s_upp_sigma_1 | ||||||||
---|---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | |||||||
1 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_upp_sigma_1` ( constraint):
- lookup on
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on (see section Helper gates)
Output:
σ_0 gate
v1
v1 of the gate takes in a word that's split into -bit chunks (already constrained by message scheduling). We refer to these chunks respectively as is further split into two 2-bit chunks We witness the spread versions of the small chunks. We already have and from the message scheduling.
is equivalent to .
s_low_sigma_0 | |||||||
---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | ||||||
1 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_low_sigma_0` ( v1 constraint):
- check that `b` was properly split into subsections for 4-bit pieces.
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on
v2
v2 of the gate takes in a word that's split into -bit chunks (already constrained by message scheduling). We refer to these chunks respectively as We already have from the message scheduling. The 1-bit remain unchanged by the spread operation and can be used directly. We further split into two 2-bit chunks We witness the spread versions of the small chunks.
is equivalent to .
s_low_sigma_0_v2 | ||||||||
---|---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | |||||||
1 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_low_sigma_0_v2` ( v2 constraint):
- check that `b` was properly split into subsections for 4-bit pieces.
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on
σ_1 gate
v1
v1 of the gate takes in a word that's split into -bit chunks (already constrained by message scheduling). We refer to these chunks respectively as is further split into -bit chunks We witness the spread versions of the small chunks. We already have and from the message scheduling.
is equivalent to .
s_low_sigma_1 | |||||||
---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | ||||||
1 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} | ||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_low_sigma_1` ( v1 constraint):
- check that `b` was properly split into subsections for 7-bit pieces.
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on
v2
v2 of the gate takes in a word that's split into -bit chunks (already constrained by message scheduling). We refer to these chunks respectively as We already have from the message scheduling. The 1-bit remain unchanged by the spread operation and can be used directly. We further split into two 2-bit chunks We witness the spread versions of the small chunks.
is equivalent to .
s_low_sigma_1_v2 | ||||||||
---|---|---|---|---|---|---|---|---|
0 | {0,1,2,3,4,5} | |||||||
1 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} | |||||||
0 | {0,1,2,3,4,5} |
Constraints:
- `s_low_sigma_1_v2` ( v2 constraint):
- check that `b` was properly split into subsections for 4-bit pieces.
- 2-bit range check and 2-bit spread check on
- 3-bit range check and 3-bit spread check on
Helper gates
Small range constraints
Let . Constraining this expression to equal zero enforces that is in
2-bit range check
sr2 | |
---|---|
1 | a |
2-bit spread
ss2 | ||
---|---|---|
1 | a | a' |
with interpolation polynomials:
- ()
- ()
- ()
- ()
3-bit range check
sr3 | |
---|---|
1 | a |
3-bit spread
ss3 | ||
---|---|---|
1 | a | a' |
with interpolation polynomials:
- ()
- ()
- ()
- ()
- ()
- ()
- ()
- ()
reduce_6 gate
Addition of 6 elements
Input:
Check:
Assume inputs are constrained to 16 bits.
- Addition gate (sa):
- Carry gate (sc):
sa | sc | ||||||||
---|---|---|---|---|---|---|---|---|---|
1 | 0 | ||||||||
1 | 1 |
Assume inputs are constrained to 16 bits.
- Addition gate (sa):
- Carry gate (sc):
sa | sc | ||||
---|---|---|---|---|---|
0 | 0 | ||||
1 | 1 | ||||
0 | 0 | ||||
1 | 0 |
reduce_7 gate
Addition of 7 elements
Input:
Check:
Assume inputs are constrained to 16 bits.
- Addition gate (sa):
- Carry gate (sc):
sa | sc | |||||||||
---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | |||||||||
1 | 1 |
Message scheduling region
For each block of the padded message, words of bits each are constructed as follows:
- the first are obtained by splitting into -bit blocks
- the remaining words are constructed using the formula: for .
sw | sd0 | sd1 | sd2 | sd3 | ss0 | ss0_v2 | ss1 | ss1_v2 | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4} | |||||||||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
.. | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | |
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3} | |||||||||
0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | |||||||||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
.. | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | |
0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | {0,1,2,3} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
.. | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | |
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} |
Constraints:
- `sw`: construct word using
- `sd0`: decomposition gate for
- `sd1`: decomposition gate for (split into -bit pieces)
- `sd2`: decomposition gate for (split into -bit pieces)
- `sd3`: decomposition gate for (split into -bit pieces)
Compression region
+----------------------------------------------------------+
| |
| decompose E, |
| Σ_1(E) |
| |
| +---------------------------------------+
| | |
| | reduce_5() to get H' |
| | |
+----------------------------------------------------------+
| decompose F, decompose G |
| |
| Ch(E,F,G) |
| |
+----------------------------------------------------------+
| |
| decompose A, |
| Σ_0(A) |
| |
| |
| +---------------------------------------+
| | |
| | reduce_7() to get A_new, |
| | using H' |
| | |
+------------------+---------------------------------------+
| decompose B, decompose C |
| |
| Maj(A,B,C) |
| |
| +---------------------------------------+
| | reduce_6() to get E_new, |
| | using H' |
+------------------+---------------------------------------+
Initial round:
s_digest | sd_abcd | sd_efgh | ss0 | ss1 | s_maj | s_ch_neg | s_ch | s_a_new | s_e_new | s_h_prime | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} |
Steady-state:
s_digest | sd_abcd | sd_efgh | ss0 | ss1 | s_maj | s_ch_neg | s_ch | s_a_new | s_e_new | s_h_prime | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} | |||||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | {0,1,2,3,4,5} |
Final digest:
s_digest | sd_abcd | sd_efgh | ss0 | ss1 | s_maj | s_ch_neg | s_ch | s_a_new | s_e_new | s_h_prime | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |||||||
1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |||||||
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Background Material
This section covers the background material required to understand the Halo 2 proving system. It is targeted at an ELI15 (Explain It Like I'm 15) level; if you think anything could do with additional explanation, let us know!
Fields
A fundamental component of many cryptographic protocols is the algebraic structure known as a field. Fields are sets of objects (usually numbers) with two associated binary operators and such that various field axioms hold. The real numbers are an example of a field with uncountably many elements.
Halo makes use of finite fields which have a finite number of elements. Finite fields are fully classified as follows:
- if is a finite field, it contains elements for some integer and some prime ;
- any two finite fields with the same number of elements are isomorphic. In particular, all of the arithmetic in a prime field is isomorphic to addition and multiplication of integers modulo , i.e. in . This is why we often refer to as the modulus.
We'll write a field as where . The prime is called its characteristic. In the cases where the field is a -degree extension of the field . (By analogy, the complex numbers are an extension of the real numbers.) However, in Halo we do not use extension fields. Whenever we write we are referring to what we call a prime field which has a prime number of elements, i.e. .
Important notes:
- There are two special elements in any field: , the additive identity, and , the multiplicative identity.
- The least significant bit of a field element, when represented as an integer in binary format, can be interpreted as its "sign" to help distinguish it from its additive inverse (negation). This is because for some nonzero element which has a least significant bit we have that has a least significant bit , and vice versa. We could also use whether or not an element is larger than to give it a "sign."
Finite fields will be useful later for constructing polynomials and elliptic curves. Elliptic curves are examples of groups, which we discuss next.
Groups
Groups are simpler and more limited than fields; they have only one binary operator and fewer axioms. They also have an identity, which we'll denote as .
Any non-zero element in a group has an inverse , which is the unique element such that .
For example, the set of nonzero elements of forms a group, where the group operation is given by multiplication on the field.
(aside) Additive vs multiplicative notation
If is written as or omitted (i.e. written as ), the identity as , and inversion as , as we did above, then we say that the group is "written multiplicatively". If is written as , the identity as or , and inversion as , then we say it is "written additively".
It's conventional to use additive notation for elliptic curve groups, and multiplicative notation when the elements come from a finite field.
When additive notation is used, we also write
for nonnegative and call this "scalar multiplication"; we also often use uppercase letters for variables denoting group elements. When multiplicative notation is used, we also write
and call this "exponentiation". In either case we call the scalar such that or the "discrete logarithm" of to base . We can extend scalars to negative integers by inversion, i.e. or .
The order of an element of a finite group is defined as the smallest positive integer such that (in multiplicative notation) or (in additive notation). The order of the group is the number of elements.
Groups always have a generating set, which is a set of elements such that we can produce any element of the group as (in multiplicative terminology) a product of powers of those elements. So if the generating set is , we can produce any element of the group as . There can be many different generating sets for a given group.
A group is called cyclic if it has a (not necessarily unique) generating set with only a single element — call it . In that case we can say that generates the group, and that the order of is the order of the group.
Any finite cyclic group of order is isomorphic to the integers modulo (denoted ), such that:
- the operation in corresponds to addition modulo ;
- the identity in corresponds to ;
- some generator corresponds to .
Given a generator , the isomorphism is always easy to compute in the direction; it is just (or in additive notation, ). It may be difficult in general to compute in the direction; we'll discuss this further when we come to elliptic curves.
If the order of a finite group is prime, then the group is cyclic, and every non-identity element is a generator.
The multiplicative group of a finite field
We use the notation for the multiplicative group (i.e. the group operation is multiplication in ) over the set .
A quick way of obtaining the inverse in is . The reason for this stems from Fermat's little theorem, which states that for any integer . If is nonzero, we can divide by twice to get
Let's assume that is a generator of , so it has order (equal to the number of elements in ). Therefore, for any element in there is a unique integer such that .
Notice that where can really be interpreted as where and . Indeed, it holds that for all . As a result the multiplication of nonzero field elements can be interpreted as addition modulo with respect to some fixed generator . The addition just happens "in the exponent."
This is another way to look at where comes from for computing inverses in the field:
so .
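A minimal sketch of this inversion method, using square-and-multiply exponentiation over a small example prime (not one of the Pasta primes):

```rust
/// Invert a nonzero element of F_p via Fermat's little theorem: a^(p-2) = a^(-1).
fn pow_mod(mut base: u64, mut exp: u64, p: u64) -> u64 {
    let mut acc = 1u64;
    base %= p;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = (acc as u128 * base as u128 % p as u128) as u64;
        }
        base = (base as u128 * base as u128 % p as u128) as u64;
        exp >>= 1;
    }
    acc
}

fn main() {
    let p = 0xffff_ffff_0000_0001u64; // an example 64-bit prime modulus
    let a = 123_456_789u64;
    let a_inv = pow_mod(a, p - 2, p);
    assert_eq!((a as u128 * a_inv as u128 % p as u128) as u64, 1);
    println!("a^(p-2) mod p = {a_inv}");
}
```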
Montgomery's Trick
Montgomery's trick, named after Peter Montgomery (RIP), is a way to compute many group inversions at the same time. It is commonly used to compute inversions in , which are quite computationally expensive compared to multiplication.
Imagine we need to compute the inverses of three nonzero elements . Instead, we'll compute the products and , and compute the inversion
We can now multiply by to obtain and multiply by to obtain , which we can then multiply by to obtain their respective inverses.
This technique generalizes to arbitrary numbers of group elements with just a single inversion necessary.
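Here is a runnable sketch of the trick over an example prime field: a forward pass accumulates prefix products, a single exponentiation-based inversion inverts the total product, and a backward pass peels off one inverse per element. All names are illustrative.

```rust
const P: u64 = 0xffff_ffff_0000_0001; // example prime modulus

fn mul(a: u64, b: u64) -> u64 { (a as u128 * b as u128 % P as u128) as u64 }

fn pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    while e > 0 { if e & 1 == 1 { acc = mul(acc, b); } b = mul(b, b); e >>= 1; }
    acc
}

fn invert(a: u64) -> u64 { pow(a, P - 2) } // the single (expensive) inversion

/// Batch-invert the nonzero elements of `xs` in place using one call to `invert`.
fn batch_invert(xs: &mut [u64]) {
    // Forward pass: prefix products p_i = x_0 * x_1 * ... * x_i.
    let mut prefix = Vec::with_capacity(xs.len());
    let mut acc = 1u64;
    for &x in xs.iter() { acc = mul(acc, x); prefix.push(acc); }
    // One inversion of the total product.
    let mut inv = invert(acc);
    // Backward pass: peel off one element at a time.
    for i in (0..xs.len()).rev() {
        let inv_i = if i == 0 { inv } else { mul(inv, prefix[i - 1]) };
        inv = mul(inv, xs[i]); // now inv = (x_0 * ... * x_{i-1})^{-1}
        xs[i] = inv_i;
    }
}

fn main() {
    let mut xs = [3u64, 7, 11, 123_456_789];
    let original = xs;
    batch_invert(&mut xs);
    for (x, x_inv) in original.iter().zip(xs.iter()) {
        assert_eq!(mul(*x, *x_inv), 1);
    }
    println!("batch inverses: {xs:?}");
}
```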
Multiplicative subgroups
A subgroup of a group with operation , is a subset of elements of that also form a group under .
In the previous section we said that is a generator of the -order multiplicative group . This group has composite order, and so by the Chinese remainder theorem it has strict subgroups. As an example let's imagine that , and so factors into . Thus, there is a generator of the -order subgroup and a generator of the -order subgroup. All elements in , therefore, can be written uniquely as for some (modulo ) and some (modulo ).
If we have notice what happens when we compute
we have effectively "killed" the -order subgroup component, producing a value in the -order subgroup.
Lagrange's theorem (group theory) states that the order of any subgroup of a finite group divides the order of . Therefore, the order of any subgroup of must divide
PLONK-based proving systems like Halo 2 are more convenient to use with fields that have a large number of multiplicative subgroups with a "smooth" distribution (which makes the performance cliffs smaller and more granular as circuit sizes increase). The Pallas and Vesta curves specifically have primes of the form
with and odd (i.e. has 32 lower zero-bits). This means they have multiplicative subgroups of order for all . These 2-adic subgroups are nice for efficient FFTs, as well as enabling a wide variety of circuit sizes.
Square roots
In a field exactly half of all nonzero elements are squares; the remainder are non-squares or "quadratic non-residues". In order to see why, consider an that generates the -order multiplicative subgroup of (this exists because is divisible by since is a prime greater than ) and that generates the -order multiplicative subgroup of where . Then every element can be written uniquely as with and . Half of all elements will have and the other half will have .
Let's consider the simple case where and so is odd (if is even, then would be divisible by , which contradicts being ). If is a square, then there must exist such that . But this means that
In other words, all squares in this particular field do not generate the -order multiplicative subgroup, and so since half of the elements generate the -order subgroup then at most half of the elements are square. In fact exactly half of the elements are square (since squaring each nonsquare element gives a unique square). This means we can assume all squares can be written as for some , and therefore finding the square root is a matter of exponentiating by .
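For the simple case just described, taking a square root is a single exponentiation. A small sketch over an example prime with p ≡ 3 (mod 4):

```rust
/// Square-and-multiply exponentiation mod p.
fn pow_mod(mut base: u64, mut exp: u64, p: u64) -> u64 {
    let mut acc = 1u64;
    base %= p;
    while exp > 0 {
        if exp & 1 == 1 { acc = (acc as u128 * base as u128 % p as u128) as u64; }
        base = (base as u128 * base as u128 % p as u128) as u64;
        exp >>= 1;
    }
    acc
}

fn main() {
    let p: u64 = 1_000_003; // an example prime with p ≡ 3 (mod 4)
    let x = 123_456u64;
    let a = x * x % p;                  // a is a square by construction
    let r = pow_mod(a, (p + 1) / 4, p); // candidate square root
    assert!(r == x || r == p - x);      // square roots come in ± pairs
    assert_eq!(r * r % p, a);
    println!("sqrt({a}) mod {p} = {r}");
}
```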
In the event that then things get more complicated because does not exist. Let's write as with odd. The case is impossible, and the case is what we already described, so consider . generates a -order multiplicative subgroup and generates the odd -order multiplicative subgroup. Then every element can be written as for and . If the element is a square, then there exists some which can be written for and . This means that , therefore we have , and . would have to be even in this case because otherwise it would be impossible to have for any . In the case that is not a square, then is odd, and so half of all elements are squares.
In order to compute the square root, we can first raise the element to the power to "kill" the -order component, giving
and then raise this result to the power to undo the effect of the original exponentiation on the -order component:
(since is relatively prime to ). This leaves bare the value which we can trivially handle. We can similarly kill the -order component to obtain , and put the values together to obtain the square root.
It turns out that in the cases there are simpler algorithms that merge several of these exponentiations together for efficiency. For other values of , the only known way is to manually extract by squaring until you obtain the identity for every single bit of . This is the essence of the Tonelli-Shanks square root algorithm and describes the general strategy. (There is another square root algorithm that uses quadratic extension fields, but it doesn't pay off in efficiency until the prime becomes quite large.)
Roots of unity
In the previous sections we wrote $p - 1 = 2^k \cdot t$ with $t$ odd, and stated that an element $\alpha \in \mathbb{F}_p^\times$ generated the $2^k$-order subgroup. For convenience, let's denote $n := 2^k$. The elements $\{1, \alpha, \alpha^2, \ldots, \alpha^{n-1}\}$ are known as the $n$th roots of unity.
The primitive root of unity $\omega$ is an $n$th root of unity such that $\omega^i \neq 1$ except when $i \equiv 0 \pmod{n}$.
Important notes:
- If $\omega$ is an $n$th root of unity, $\omega$ satisfies $\omega^n = 1$. If $\omega^i = 1$, then $\omega^{i \bmod n} = 1$.
- Equivalently, the roots of unity are solutions to the equation $$X^n - 1 = 0.$$
- ("Negation lemma"): $\omega^{n/2} = -1$. Proof: Since the order of $\omega$ is $n$, we have $\omega^{n/2} \neq 1$, yet $(\omega^{n/2})^2 = \omega^n = 1$. Therefore, $\omega^{n/2}$ must be the other square root of $1$, namely $-1$.
- ("Halving lemma"): $(\omega^{i + n/2})^2 = (\omega^i)^2$. Proof: $(\omega^{i + n/2})^2 = \omega^{2i + n} = \omega^{2i} \cdot \omega^n = \omega^{2i} = (\omega^i)^2$. In other words, if we square each element in the $n$th roots of unity, we would get back only half the elements, $\{\omega^{2i}\}$ (i.e. the $\frac{n}{2}$th roots of unity). There is a two-to-one mapping between the elements and their squares.
Polynomials
Let $A(X)$ be a polynomial over $\mathbb{F}_p$ with formal indeterminate $X$. As an example,
$$A(X) = a_0 + a_1 X + a_2 X^2 + a_3 X^3$$
defines a degree-$3$ polynomial. $a_0$ is referred to as the constant term. Polynomials of degree $n - 1$ have $n$ coefficients. We will often want to compute the result of replacing the formal indeterminate $X$ with some concrete value $x$, which we denote by $A(x)$.
In mathematics this is commonly referred to as "evaluating $A(X)$ at a point $x$". The word "point" here stems from the geometrical usage of polynomials in the form $y = A(x)$, where $(x, y)$ is the coordinate of a point in two-dimensional space. However, the polynomials we deal with are almost always constrained to equal zero, and $x$ will be an element of some field. This should not be confused with points on an elliptic curve, which we also make use of, but never in the context of polynomial evaluation.
Important notes:
- Multiplication of polynomials produces a product polynomial whose degree is the sum of the degrees of its factors. Polynomial division subtracts the divisor's degree from the dividend's degree.
- Given a polynomial of degree $n - 1$, if we obtain $n$ evaluations of the polynomial at distinct points then these evaluations perfectly define the polynomial. In other words, given these evaluations we can obtain a unique polynomial of degree $n - 1$ via polynomial interpolation.
- $[a_0, a_1, \ldots, a_{n-1}]$ is the coefficient representation of the polynomial $A(X)$. Equivalently, we could use its evaluation representation $[(x_0, A(x_0)), (x_1, A(x_1)), \ldots, (x_{n-1}, A(x_{n-1}))]$ at $n$ distinct points. Either representation uniquely specifies the same polynomial.
(aside) Horner's rule
Horner's rule allows for efficient evaluation of a polynomial of degree $n - 1$, using only $n - 1$ multiplications and $n - 1$ additions. It is the following identity:
$$a_0 + a_1 X + a_2 X^2 + \cdots + a_{n-1} X^{n-1} = a_0 + X \bigg( a_1 + X \Big( a_2 + \cdots + X \big( a_{n-2} + X \cdot a_{n-1} \big) \Big) \bigg).$$
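A minimal sketch of Horner's rule in Rust over a toy prime field, checked against naive evaluation (the prime and sample polynomial are illustrative assumptions):

```rust
const P: u64 = 97; // toy prime, for illustration only

fn horner(coeffs: &[u64], x: u64) -> u64 {
    // Evaluate a_0 + x*(a_1 + x*(a_2 + ...)) from the highest coefficient down,
    // using one multiplication and one addition per coefficient.
    coeffs.iter().rev().fold(0u64, |acc, &a| (acc * x + a) % P)
}

fn main() {
    // A(X) = 1 + 2X + 3X^2 + 4X^3
    let a = [1, 2, 3, 4];
    let x = 5;
    // Naive evaluation for comparison: sum of a_i * x^i.
    let naive: u64 = a
        .iter()
        .enumerate()
        .map(|(i, &ai)| ai * x.pow(i as u32) % P)
        .sum::<u64>()
        % P;
    assert_eq!(horner(&a, x), naive);
    println!("A({}) = {}", x, horner(&a, x));
}
```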
Fast Fourier Transform (FFT)
The FFT is an efficient way of converting between the coefficient and evaluation representations of a polynomial. It evaluates the polynomial at the $n$th roots of unity $\{\omega^0, \omega^1, \ldots, \omega^{n-1}\},$ where $\omega$ is a primitive $n$th root of unity. By exploiting symmetries in the roots of unity, each round of the FFT reduces the evaluation into a problem only half the size. Most commonly we use polynomials of length some power of two, $n = 2^k$, and apply the halving reduction recursively.
Motivation: Fast polynomial multiplication
In the coefficient representation, it takes $O(n^2)$ operations to multiply two polynomials $A(X) \cdot B(X) = C(X),$
where each of the $n$ terms in the first polynomial has to be multiplied by the $n$ terms of the second polynomial.
In the evaluation representation, however, polynomial multiplication only requires $O(n)$ operations:
$$A(x_i) \cdot B(x_i) = C(x_i),$$
where each evaluation is multiplied pointwise.
This suggests the following strategy for fast polynomial multiplication:
- Evaluate both polynomials at a common set of points, at least one more than the degree of the product;
- Perform fast pointwise multiplication in the evaluation representation ($O(n)$);
- Convert back to the coefficient representation.
The challenge now is how to evaluate and interpolate the polynomials efficiently. Naively, evaluating a polynomial $A(X)$ at $n$ points would require $O(n^2)$ operations (using Horner's rule at each point):
$$\begin{bmatrix} A(\omega^0) \\ A(\omega^1) \\ \vdots \\ A(\omega^{n-1}) \end{bmatrix} = \begin{bmatrix} 1 & \omega^0 & (\omega^0)^2 & \cdots & (\omega^0)^{n-1} \\ 1 & \omega^1 & (\omega^1)^2 & \cdots & (\omega^1)^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{n-1} & (\omega^{n-1})^2 & \cdots & (\omega^{n-1})^{n-1} \end{bmatrix} \cdot \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix}.$$
For convenience, we will denote the matrices above as:
$$\hat{\mathbf{A}} = \mathbf{V}_\omega \cdot \mathbf{A}.$$
($\hat{\mathbf{A}}$ is known as the Discrete Fourier Transform of $\mathbf{A}$; $\mathbf{V}_\omega$ is also called the Vandermonde matrix.)
The (radix-2) Cooley-Tukey algorithm
Our strategy is to divide a DFT of size $n$ into two interleaved DFTs of size $n/2$. Given the polynomial $A(X) = a_0 + a_1 X + a_2 X^2 + \cdots + a_{n-1} X^{n-1},$ we split it up into even and odd terms:
$$A_{\text{even}}(X) = a_0 + a_2 X + \cdots + a_{n-2} X^{\frac{n}{2} - 1},$$
$$A_{\text{odd}}(X) = a_1 + a_3 X + \cdots + a_{n-1} X^{\frac{n}{2} - 1}.$$
To recover the original polynomial, we do
$$A(X) = A_{\text{even}}(X^2) + X \cdot A_{\text{odd}}(X^2).$$
Trying this out on points $\omega^i$ and $\omega^{i + \frac{n}{2}},$ we start to notice some symmetries:
$$A(\omega^i) = A_{\text{even}}((\omega^i)^2) + \omega^i \cdot A_{\text{odd}}((\omega^i)^2),$$
$$A(\omega^{i + \frac{n}{2}}) = A_{\text{even}}((\omega^i)^2) - \omega^i \cdot A_{\text{odd}}((\omega^i)^2),$$
where the second line follows from the negation and halving lemmas.
Notice that we are only evaluating $A_{\text{even}}(X)$ and $A_{\text{odd}}(X)$ over half the domain $\{(\omega^0)^2, (\omega^1)^2, \ldots, (\omega^{\frac{n}{2}-1})^2\}$ (halving lemma). This gives us all the terms we need to reconstruct $A(X)$ over the full domain $\{\omega^0, \omega^1, \ldots, \omega^{n-1}\}$: which means we have transformed a length-$n$ DFT into two length-$\frac{n}{2}$ DFTs.
We choose $n$ to be a power of two (by zero-padding if needed), and apply this divide-and-conquer strategy recursively. By the Master Theorem, this gives us an evaluation algorithm with $O(n \log_2 n)$ operations, also known as the Fast Fourier Transform (FFT).
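A minimal recursive radix-2 Cooley-Tukey FFT over a toy prime field follows; the prime $p = 97$, the generator $5$, and the sample coefficients are assumptions for illustration (halo2 itself uses optimized FFTs over 255-bit fields).

```rust
const P: u64 = 97;

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// Evaluates the polynomial with coefficients `a` (length a power of two)
/// at the powers 1, omega, omega^2, ..., where omega has order a.len().
fn fft(a: &[u64], omega: u64) -> Vec<u64> {
    let n = a.len();
    if n == 1 {
        return a.to_vec();
    }
    // Split into even- and odd-indexed coefficients and recurse on each half,
    // using omega^2 (a primitive (n/2)-th root of unity) as the new root.
    let even: Vec<u64> = a.iter().copied().step_by(2).collect();
    let odd: Vec<u64> = a.iter().copied().skip(1).step_by(2).collect();
    let even_evals = fft(&even, omega * omega % P);
    let odd_evals = fft(&odd, omega * omega % P);

    let mut out = vec![0u64; n];
    let mut w = 1u64; // omega^i
    for i in 0..n / 2 {
        let t = w * odd_evals[i] % P;
        out[i] = (even_evals[i] + t) % P;              // A(omega^i)
        out[i + n / 2] = (even_evals[i] + P - t) % P;  // A(omega^(i + n/2))
        w = w * omega % P;
    }
    out
}

fn main() {
    let n = 8u64;
    let omega = pow_mod(5, (P - 1) / n); // primitive 8th root of unity
    let a = [3, 1, 4, 1, 5, 9, 2, 6];    // coefficients of A(X)
    let evals = fft(&a, omega);
    // Cross-check against naive (Horner) evaluation at each omega^i.
    for i in 0..n as usize {
        let x = pow_mod(omega, i as u64);
        let naive = a.iter().rev().fold(0u64, |acc, &c| (acc * x + c) % P);
        assert_eq!(evals[i], naive);
    }
    println!("evaluations: {:?}", evals);
}
```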
Inverse FFT
So we've evaluated our polynomials and multiplied them pointwise. What remains is to convert the product from the evaluation representation back to coefficient representation. To do this, we simply call the FFT on the evaluation representation. However, this time we also:
- replace $\omega$ by $\omega^{-1}$ in the Vandermonde matrix, and
- multiply our final result by a factor of $\frac{1}{n}$.
In other words:
$$\mathbf{A} = \frac{1}{n} \cdot \mathbf{V}_{\omega^{-1}} \cdot \hat{\mathbf{A}}.$$
(To understand why the inverse FFT has a similar form to the FFT, refer to Slide 13-1 of the referenced lecture notes.)
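Continuing the FFT sketch above (and reusing its `P`, `pow_mod`, and `fft` definitions), the inverse transform is just another FFT call with $\omega^{-1}$ followed by a scaling by $n^{-1}$:

```rust
// Inverse FFT: run the same FFT with omega^{-1}, then scale by n^{-1} (mod P).
// Assumes `P`, `pow_mod`, and `fft` from the previous sketch are in scope.
fn ifft(evals: &[u64], omega: u64) -> Vec<u64> {
    let n = evals.len() as u64;
    let omega_inv = pow_mod(omega, P - 2); // Fermat inverse of omega
    let n_inv = pow_mod(n % P, P - 2);     // Fermat inverse of n
    fft(evals, omega_inv)
        .into_iter()
        .map(|c| c * n_inv % P)
        .collect()
}
```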
The Schwartz-Zippel lemma
The Schwartz-Zippel lemma informally states that "different polynomials are different at most points." Formally, it can be written as follows:
Let $P(x_1, x_2, \ldots, x_n)$ be a nonzero polynomial of $n$ variables with degree $d$. Let $S$ be a finite set of numbers with at least $d$ elements in it. If we choose random $\alpha_1, \alpha_2, \ldots, \alpha_n$ from $S$,
$$\Pr[P(\alpha_1, \alpha_2, \ldots, \alpha_n) = 0] \leq \frac{d}{|S|}.$$
In the familiar univariate case $n = 1$, this reduces to saying that a nonzero polynomial of degree $d$ has at most $d$ roots.
The Schwartz-Zippel lemma is used in polynomial equality testing. Given two multivariate polynomials $P_1$ and $P_2$ of degrees $d_1, d_2$ respectively, we can test if $P_1(\alpha_1, \ldots, \alpha_n) - P_2(\alpha_1, \ldots, \alpha_n) = 0$ for random $\alpha_1, \ldots, \alpha_n \leftarrow S,$ where the size of $S$ is at least $|S| \geq d_1 + d_2.$ If the two polynomials are identical, this will always be true, whereas if the two polynomials are different then the equality holds with probability at most $\frac{\max(d_1, d_2)}{|S|}$.
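A hedged sketch of this equality test in the univariate case, over the 64-bit "Goldilocks" prime (chosen here only so the failure probability is visibly tiny; the fixed challenge value is an assumption for illustration, whereas a real protocol samples it from a verifier or transcript):

```rust
const P: u64 = 0xffff_ffff_0000_0001; // the 64-bit "Goldilocks" prime

fn eval(coeffs: &[u64], x: u64) -> u64 {
    // Horner evaluation mod P, using u128 for the intermediate products.
    coeffs.iter().rev().fold(0u64, |acc, &a| {
        ((acc as u128 * x as u128 + a as u128) % P as u128) as u64
    })
}

fn probably_equal(p1: &[u64], p2: &[u64], challenge: u64) -> bool {
    // If p1 != p2, this returns true with probability at most
    // max(deg p1, deg p2) / P over a uniformly random challenge.
    eval(p1, challenge) == eval(p2, challenge)
}

fn main() {
    let p1 = [1, 2, 3, 4, 5];
    let p2 = [1, 2, 3, 4, 5];
    let p3 = [1, 2, 3, 4, 6];
    let challenge = 0x1234_5678_9abc_def0; // illustrative fixed "random" point
    assert!(probably_equal(&p1, &p2, challenge));
    assert!(!probably_equal(&p1, &p3, challenge));
    println!("equality test behaves as expected at the sampled challenge");
}
```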
Vanishing polynomial
Consider the order-$n$ multiplicative subgroup $\mathcal{H}$ with primitive root of unity $\omega$. For all $\omega^i \in \mathcal{H},$ we have $(\omega^i)^n = (\omega^n)^i = (\omega^0)^i = 1.$ In other words, every element of $\mathcal{H}$ fulfils the equation
$$Z_H(X) = X^n - 1 = 0,$$
meaning every element is a root of $Z_H(X).$ We call $Z_H(X)$ the vanishing polynomial over $\mathcal{H},$ because it evaluates to zero on all elements of $\mathcal{H}.$
This comes in particularly handy when checking polynomial constraints. For instance, to check that $a(X) + b(X) = c(X)$ over $\mathcal{H},$ we simply have to check that $a(X) + b(X) - c(X)$ is some multiple of $Z_H(X)$. In other words, if dividing our constraint by the vanishing polynomial still yields some polynomial $\frac{a(X) + b(X) - c(X)}{Z_H(X)} = H(X),$ we are satisfied that $a(X) + b(X) - c(X) = 0$ over $\mathcal{H}.$
Lagrange basis functions
TODO: explain what a basis is in general (briefly).
Polynomials are commonly written in the monomial basis (e.g. $1, X, X^2, \ldots, X^{n-1}$). However, when working over a multiplicative subgroup of order $n$, we find a more natural expression in the Lagrange basis.
Consider the order-$n$ multiplicative subgroup $\mathcal{H}$ with primitive root of unity $\omega$. The Lagrange basis corresponding to this subgroup is a set of functions $\{\mathcal{L}_i\}_{i=0}^{n-1}$, where
$$\mathcal{L}_i(\omega^j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$
We can write this more compactly as $\mathcal{L}_i(\omega^j) = \delta_{ij},$ where $\delta$ is the Kronecker delta function.
Now, we can write our polynomial as a linear combination of Lagrange basis functions,
$$A(X) = \sum_{i=0}^{n-1} a_i \mathcal{L}_i(X),$$
which is equivalent to saying that $A(X)$ evaluates to $a_0$ at $\omega^0$, to $a_1$ at $\omega^1$, to $a_2$ at $\omega^2,$ and so on.
When working over a multiplicative subgroup, the Lagrange basis function has a convenient sparse representation of the form
$$\mathcal{L}_i(X) = \frac{c_i \cdot (X^n - 1)}{X - \omega^i},$$
where $c_i$ is the barycentric weight. (This form follows from the standard formula for barycentric Lagrange interpolation.) For $i = 0,$ we have $c_0 = \frac{1}{n}$.
Suppose we are given a set of evaluation points $\{x_0, x_1, \ldots, x_{n-1}\}$. Since we cannot assume that the $x_i$'s form a multiplicative subgroup, we consider also the Lagrange polynomials $\mathcal{L}_i$'s in the general case. Then we can construct:
$$\mathcal{L}_i(X) = \prod_{j \neq i} \frac{X - x_j}{x_i - x_j}.$$
Here, every $X = x_j \neq x_i$ will produce a zero numerator term causing the whole product to evaluate to zero. On the other hand, $X = x_i$ will evaluate to $\frac{x_i - x_j}{x_i - x_j}$ at every term, resulting in an overall product of one. This gives the desired Kronecker delta behaviour $\mathcal{L}_i(x_j) = \delta_{ij}$ on the set $\{x_0, x_1, \ldots, x_{n-1}\}$.
Lagrange interpolation
Given a polynomial in its evaluation representation
$$A: \{(x_0, A(x_0)), (x_1, A(x_1)), \ldots, (x_{n-1}, A(x_{n-1}))\},$$
we can reconstruct its coefficient form in the Lagrange basis:
$$A(X) = \sum_{i=0}^{n-1} A(x_i) \mathcal{L}_i(X),$$
where
$$\mathcal{L}_i(X) = \prod_{j \neq i} \frac{X - x_j}{x_i - x_j}.$$
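A naive sketch of this interpolation over a toy prime field: given $n$ distinct points, the unique degree-$(n-1)$ interpolating polynomial is evaluated at an arbitrary point using the $\mathcal{L}_i(X)$ product formula directly (the prime and sample polynomial are assumptions for illustration).

```rust
const P: u64 = 97;

fn inv(a: u64) -> u64 {
    // Fermat inverse: a^(P-2) mod P.
    let (mut b, mut e, mut acc) = (a % P, P - 2, 1u64);
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// Evaluates the interpolating polynomial through `points` at `x`.
fn lagrange_eval(points: &[(u64, u64)], x: u64) -> u64 {
    let mut total = 0u64;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        // L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        let mut li = 1u64;
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                let num = (x + P - xj) % P;
                let den = (xi + P - xj) % P;
                li = li * num % P * inv(den) % P;
            }
        }
        total = (total + yi * li) % P;
    }
    total
}

fn main() {
    // Points on A(X) = 3 + 2X + X^2 (mod 97).
    let a = |x: u64| (3 + 2 * x + x * x) % P;
    let points: Vec<(u64, u64)> = [1u64, 5, 9].iter().map(|&x| (x, a(x))).collect();
    for x in 0..20 {
        assert_eq!(lagrange_eval(&points, x), a(x));
    }
    println!("interpolation through 3 points reproduces the degree-2 polynomial");
}
```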
Cryptographic groups
In the section Inverses and groups we introduced the concept of groups. A group has an identity and a group operation. In this section we will write groups additively, i.e. the identity is $\mathcal{O}$ and the group operation is $+$.
Some groups can be used as cryptographic groups. At the risk of oversimplifying, this means that the problem of finding a discrete logarithm of a group element $P$ to a given base $G$, i.e. finding $x$ such that $P = [x] G$, is hard in general.
Pedersen commitment
The Pedersen commitment [P99] is a way to commit to a secret message in a verifiable way. It uses two random public generators $G, H \in \mathbb{G},$ where $\mathbb{G}$ is a cryptographic group of order $p$. A random secret $r$ is chosen in $\mathbb{F}_p$, and the message to commit to, $m$, is from any subset of $\mathbb{F}_p$. The commitment is
$$c = \mathrm{Commit}(m, r) = [m] G + [r] H.$$
To open the commitment, the committer reveals $m$ and $r,$ thus allowing anyone to verify that $c$ is indeed a commitment to $m.$
Notice that the Pedersen commitment scheme is homomorphic:
$$\mathrm{Commit}(m_1, r_1) + \mathrm{Commit}(m_2, r_2) = [m_1 + m_2] G + [r_1 + r_2] H = \mathrm{Commit}(m_1 + m_2, r_1 + r_2).$$
Assuming the discrete log assumption holds, Pedersen commitments are also perfectly hiding and computationally binding:
- hiding: the adversary chooses messages $m_0, m_1.$ The committer commits to one of these messages, $c = \mathrm{Commit}(m_b, r)$ for $b \in \{0, 1\}.$ Given $c,$ the probability of the adversary guessing the correct $b$ is no more than $\frac{1}{2}$.
- binding: the adversary cannot pick two different messages $m_0 \neq m_1$ and randomness $r_0, r_1$ such that $\mathrm{Commit}(m_0, r_0) = \mathrm{Commit}(m_1, r_1).$
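The commitment and its homomorphic property can be checked concretely with a toy sketch. For readability the "group" below is the multiplicative group of a small prime field (written multiplicatively, so the commitment becomes $g^m \cdot h^r$), and the modulus and generators are illustrative assumptions only; a real instantiation uses an elliptic curve group of large prime order, written additively as $[m]G + [r]H$, and toy parameters offer no hiding or binding security.

```rust
const P: u64 = 1_000_000_007; // a well-known 30-bit prime (toy modulus)

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = ((acc as u128 * b as u128) % P as u128) as u64; }
        b = ((b as u128 * b as u128) % P as u128) as u64;
        e >>= 1;
    }
    acc
}

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P as u128) as u64
}

/// commit(m, r) = g^m * h^r  (the multiplicative analogue of [m]G + [r]H)
fn commit(g: u64, h: u64, m: u64, r: u64) -> u64 {
    mul(pow_mod(g, m), pow_mod(h, r))
}

fn main() {
    let (g, h) = (5, 7); // "random" public generators (toy values)
    let (m1, r1) = (42, 123_456);
    let (m2, r2) = (58, 654_321);

    let c1 = commit(g, h, m1, r1);
    let c2 = commit(g, h, m2, r2);

    // Opening: anyone given (m, r) can recompute and check c.
    assert_eq!(c1, commit(g, h, m1, r1));

    // Homomorphism: Commit(m1, r1) * Commit(m2, r2) = Commit(m1 + m2, r1 + r2).
    assert_eq!(mul(c1, c2), commit(g, h, m1 + m2, r1 + r2));
    println!("commitments open and combine homomorphically");
}
```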
Vector Pedersen commitment
We can use a variant of the Pedersen commitment scheme to commit to multiple messages at once, $\mathbf{m} = (m_0, m_1, \ldots, m_{n-1})$. This time, we'll have to sample a corresponding number of random public generators $\mathbf{G} = (G_0, G_1, \ldots, G_{n-1}),$ along with a single random generator $H$ as before (for use in hiding). Then, our commitment scheme is:
$$c = \mathrm{Commit}(\mathbf{m}; r) = [r] H + \sum_{i=0}^{n-1} [m_i] G_i.$$
TODO: is this positionally binding?
Diffie–Hellman
An example of a protocol that uses cryptographic groups is Diffie–Hellman key agreement [DH1976]. The Diffie–Hellman protocol is a method for two users, Alice and Bob, to generate a shared private key. It proceeds as follows:
- Alice and Bob publicly agree on two prime numbers, $p$ and $g,$ where $p$ is large and $g$ is a primitive root $\pmod p$. (Note that $g$ is a generator of the group $\mathbb{F}_p^\times.$)
- Alice chooses a large random number $a$ as her private key. She computes her public key $A = g^a \pmod p$ and sends $A$ to Bob.
- Similarly, Bob chooses a large random number $b$ as his private key. He computes his public key $B = g^b \pmod p$ and sends $B$ to Alice.
- Now both Alice and Bob compute their shared key $K = g^{ab} \pmod p,$ which Alice computes as $K = B^a \pmod p$ and Bob computes as $K = A^b \pmod p.$
A potential eavesdropper would need to derive $K = g^{ab} \pmod p$ knowing only $A$ and $B$: in other words, they would need to either get the discrete logarithm $a$ from $A = g^a \pmod p$ or $b$ from $B = g^b \pmod p,$ which we assume to be computationally infeasible in $\mathbb{F}_p^\times.$
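A toy walkthrough of the exchange follows; the prime, base, and "random" private keys below are illustrative assumptions (real parameters are far larger, or an elliptic curve group is used instead).

```rust
const P: u64 = 1_000_000_007; // toy prime modulus
const G: u64 = 5;             // toy public base

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = ((acc as u128 * b as u128) % P as u128) as u64; }
        b = ((b as u128 * b as u128) % P as u128) as u64;
        e >>= 1;
    }
    acc
}

fn main() {
    let a = 123_456_789;           // Alice's private key
    let b = 987_654_321;           // Bob's private key

    let alice_pub = pow_mod(G, a); // A = g^a, sent to Bob
    let bob_pub = pow_mod(G, b);   // B = g^b, sent to Alice

    let alice_shared = pow_mod(bob_pub, a); // K = B^a = g^(ab)
    let bob_shared = pow_mod(alice_pub, b); // K = A^b = g^(ab)
    assert_eq!(alice_shared, bob_shared);
    println!("shared key: {}", alice_shared);
}
```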
More generally, protocols that use similar ideas to Diffie–Hellman are used throughout cryptography. One way of instantiating a cryptographic group is as an elliptic curve. Before we go into detail on elliptic curves, we'll describe some algorithms that can be used for any group.
Multiscalar multiplication
TODO: Pippenger's algorithm
Reference: https://jbootle.github.io/Misc/pippenger.pdf
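The TODO above refers to Pippenger's bucket method. The following is a hedged sketch of that bucketing structure; to stay self-contained and checkable it uses the additive group of integers modulo a prime as a stand-in group (group operation = addition mod M, and $[s]G = s \cdot G \bmod M$), but the window/bucket logic is the same when the bases are elliptic curve points.

```rust
const M: u64 = 1_000_000_007; // toy group order

fn add(a: u64, b: u64) -> u64 { (a + b) % M }

/// Computes sum_i s_i * g_i using c-bit windows and buckets.
fn pippenger_msm(scalars: &[u64], bases: &[u64], c: u32) -> u64 {
    let windows = (64 + c - 1) / c; // number of c-bit windows in a 64-bit scalar
    let mut acc = 0u64;
    // Process windows from the most significant to the least significant.
    for w in (0..windows).rev() {
        // "Double" the accumulator c times (here: multiply by 2^c mod M).
        for _ in 0..c {
            acc = add(acc, acc);
        }
        // One bucket per non-zero window value.
        let mut buckets = vec![0u64; (1usize << c) - 1];
        for (&s, &g) in scalars.iter().zip(bases) {
            let digit = ((s >> (w * c)) & ((1u64 << c) - 1)) as usize;
            if digit != 0 {
                buckets[digit - 1] = add(buckets[digit - 1], g);
            }
        }
        // Sum the buckets so that bucket j contributes (j + 1) times, using a
        // running suffix sum: sum_j (j+1)*B_j = sum of the suffix sums.
        let mut running = 0u64;
        let mut window_sum = 0u64;
        for &b in buckets.iter().rev() {
            running = add(running, b);
            window_sum = add(window_sum, running);
        }
        acc = add(acc, window_sum);
    }
    acc
}

fn main() {
    let scalars = [123_456_789_u64, 987_654_321, 42, 7_777_777];
    let bases = [11_u64, 22, 33, 44];
    let naive = scalars
        .iter()
        .zip(&bases)
        .fold(0u64, |acc, (&s, &g)| add(acc, ((s as u128 * g as u128) % M as u128) as u64));
    assert_eq!(pippenger_msm(&scalars, &bases, 4), naive);
    println!("bucketed MSM matches the naive sum");
}
```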
Elliptic curves
Elliptic curves constructed over finite fields are another important cryptographic tool.
We use elliptic curves because they provide a cryptographic group, i.e. a group in which the discrete logarithm problem (discussed below) is hard.
There are several ways to define the curve equation, but for our purposes, let $\mathbb{F}_p$ be a large (255-bit) field, and then let the set of solutions $(x, y)$ to $y^2 = x^3 + b$ for some constant $b$ define the $\mathbb{F}_p$-rational points on an elliptic curve $E(\mathbb{F}_p)$. These $(x, y)$ coordinates are called "affine coordinates". Each of the $\mathbb{F}_p$-rational points, together with a "point at infinity" $\mathcal{O}$ that serves as the group identity, can be interpreted as an element of a group. By convention, elliptic curve groups are written additively.
"Three points on a line sum to zero, which is the point at infinity."
The group addition law is simple: to add two points together, find the line that intersects both points, obtain the third point where that line crosses the curve, and then negate its $y$-coordinate. The case that a point is being added to itself, called point doubling, requires special handling: we find the line tangent to the point, find the single other point that intersects this line, and then negate. Otherwise, in the event that a point is being "added" to its negation, the result is the point at infinity.
The ability to add and double points naturally gives us a way to scale them by integers, called scalars. The number of points on the curve is the group order. If this number is a prime $q$, then the scalars can be considered as elements of a scalar field, $\mathbb{F}_q$.
Elliptic curves, when properly designed, have an important security property. Given two random elements $G, H \in E(\mathbb{F}_p),$ finding $a$ such that $[a] G = H$, otherwise known as the discrete log of $H$ with respect to $G$, is considered computationally infeasible with classical computers. This is called the elliptic curve discrete log assumption.
If an elliptic curve group has prime order (like the ones used in Halo 2), then it is a finite cyclic group. Recall from the section on groups that this implies it is isomorphic to $\mathbb{Z}_q$, or equivalently, to the scalar field $\mathbb{F}_q$. Each possible generator $G$ fixes the isomorphism; then an element on the scalar side is precisely the discrete log of the corresponding group element with respect to $G$. In the case of a cryptographically secure elliptic curve, the isomorphism is hard to compute in the $E(\mathbb{F}_p) \rightarrow \mathbb{F}_q$ direction because the elliptic curve discrete log problem is hard.
It is sometimes helpful to make use of this isomorphism by thinking of group-based cryptographic protocols and algorithms in terms of the scalars instead of in terms of the group elements. This can make proofs and notation simpler.
For instance, it has become common in papers on proof systems to use the notation $[x]$ to denote a group element with discrete log $x$, where the generator is implicit.
We also used this idea in the "distinct-x theorem", in order to prove correctness of optimizations for elliptic curve scalar multiplication in Sapling, and an endomorphism-based optimization in Appendix C of the original Halo paper.
Curve arithmetic
Point doubling
The simplest situation is doubling a point $(x_0, y_0)$. Continuing with our example $y^2 = x^3 + b$, this is done first by computing the derivative
$$\lambda = \frac{\mathrm{d}y}{\mathrm{d}x} = \frac{3x_0^2}{2y_0}.$$
To obtain expressions for $(x_1, y_1) = (x_0, y_0) + (x_0, y_0),$ we consider
$$\frac{-y_1 - y_0}{x_1 - x_0} = \lambda \implies -y_1 = \lambda(x_1 - x_0) + y_0.$$
To get the expression for $x_1,$ we substitute $y = \lambda(x - x_0) + y_0$ into the elliptic curve equation:
$$(\lambda(x - x_0) + y_0)^2 = x^3 + b \implies x^3 - \lambda^2 x^2 + \cdots = 0.$$
Comparing coefficients for the $x^2$ term gives us $\lambda^2 = x_0 + x_0 + x_1$, i.e. $x_1 = \lambda^2 - 2x_0$ (since $x_0$, $x_0$ and $x_1$ are the three roots of the cubic).
Projective coordinates
This unfortunately requires an expensive inversion of $2y_0$. We can avoid this by arranging our equations to "defer" the computation of the inverse, since we often do not need the actual affine coordinates of the resulting point immediately after an individual curve operation. Let's introduce a third coordinate $Z$ and scale our curve equation by $Z^3$ like so:
$$Z^3 y^2 = Z^3 x^3 + Z^3 b.$$
Our original curve is just this curve at the restriction $Z = 1$. If we allow the affine point $(x, y)$ to be represented by $X = xZ,$ $Y = yZ$ and $Z \neq 0,$ then we have the homogenous projective curve
$$Y^2 Z = X^3 + b Z^3.$$
Obtaining $(x, y)$ from $(X, Y, Z)$ is as simple as computing $(X/Z, Y/Z)$ when $Z \neq 0$. (When $Z = 0,$ we are dealing with the point at infinity $\mathcal{O} := (0, 1, 0)$.) In this form, we now have a convenient way to defer the inversion required by doubling a point. The general strategy is to express $x', y'$ as rational functions using $x = X/Z$ and $y = Y/Z$, rearrange to make their denominators the same, and then take the resulting point $(X', Y', Z')$ to have $Z'$ be the shared denominator and $X' = x'Z',$ $Y' = y'Z'$.
Projective coordinates are often, but not always, more efficient than affine coordinates. There may be exceptions to this when either we have a different way to apply Montgomery's trick, or when we're in the circuit setting where multiplications and inversions are about equally as expensive (at least in terms of circuit size).
The following shows an example of doubling a point $(X, Y, Z)$ without an inversion. Substituting $x = X/Z$ and $y = Y/Z$ into $x' = \lambda^2 - 2x$ (with $\lambda = \frac{3x^2}{2y} = \frac{3X^2}{2YZ}$) gives us
$$x' = \frac{9X^4 - 8XY^2Z}{4Y^2Z^2} = \frac{18X^4YZ - 16XY^3Z^2}{8Y^3Z^3},$$
and $y' = \lambda(x - x') - y$ gives us
$$y' = \frac{36X^3Y^2Z - 27X^6 - 8Y^4Z^2}{8Y^3Z^3}.$$
Notice how the denominators of $x'$ and $y'$ are the same. Thus, instead of computing $(x', y')$ directly, we can compute $Z' = 8Y^3Z^3$ and set $X', Y'$ to the corresponding numerators such that $X'/Z' = x'$ and $Y'/Z' = y'$. This completely avoids the need to perform an inversion when doubling, and something analogous to this can be done when adding two distinct points.
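A sketch of this inversion-free doubling over a toy curve ($y^2 = x^3 + 3$ over $\mathbb{F}_{97}$, an assumption chosen purely for illustration), cross-checked against the affine formula that does require an inversion:

```rust
const P: u64 = 97;
const B: u64 = 3;

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b % P) % P }
fn mul(a: u64, b: u64) -> u64 { a % P * (b % P) % P }
fn inv(a: u64) -> u64 {
    // Fermat inverse a^(P-2) mod P.
    let (mut b, mut e, mut acc) = (a % P, P - 2, 1u64);
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

/// Doubling in projective coordinates: no inversion required.
fn double_projective(x: u64, y: u64, z: u64) -> (u64, u64, u64) {
    let x2 = mul(x, x);
    let y2 = mul(y, y);
    let x4 = mul(x2, x2);
    // X' = 2YZ(9X^4 - 8XY^2Z), Y' = 36X^3Y^2Z - 27X^6 - 8Y^4Z^2, Z' = 8Y^3Z^3
    let t = sub(mul(9, x4), mul(8, mul(x, mul(y2, z))));
    let xp = mul(2, mul(mul(y, z), t));
    let yp = sub(
        mul(36, mul(mul(x2, x), mul(y2, z))),
        add(mul(27, mul(x4, x2)), mul(8, mul(mul(y2, y2), mul(z, z)))),
    );
    let zp = mul(8, mul(mul(y2, y), mul(mul(z, z), z)));
    (xp, yp, zp)
}

/// Doubling in affine coordinates: requires one inversion for the slope.
fn double_affine(x: u64, y: u64) -> (u64, u64) {
    let lambda = mul(mul(3, mul(x, x)), inv(mul(2, y)));
    let xp = sub(mul(lambda, lambda), mul(2, x));
    let yp = sub(mul(lambda, sub(x, xp)), y);
    (xp, yp)
}

fn main() {
    let (x, y) = (1u64, 2u64); // 2^2 = 1^3 + 3 (mod 97), so (1, 2) is on the curve
    assert_eq!(mul(y, y), add(mul(x, mul(x, x)), B));

    let (xa, ya) = double_affine(x, y);
    let (xp, yp, zp) = double_projective(x, y, 1);
    // Convert back to affine by a single deferred inversion of Z'.
    let zi = inv(zp);
    assert_eq!((mul(xp, zi), mul(yp, zi)), (xa, ya));
    println!("projective doubling matches affine doubling: ({}, {})", xa, ya);
}
```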
Point addition
We now add two points with distinct $x$-coordinates, $(x_0, y_0)$ and $(x_1, y_1),$ where $x_0 \neq x_1,$ to obtain $(x_2, y_2) = (x_0, y_0) + (x_1, y_1).$ The line $\overline{(x_0, y_0), (x_1, y_1)}$ has slope
$$\lambda = \frac{y_1 - y_0}{x_1 - x_0}.$$
Using the expression for $\lambda$, we compute the $y$-coordinate $-y_2$ of the third intersection point as:
$$\frac{-y_2 - y_0}{x_2 - x_0} = \lambda \implies -y_2 = \lambda(x_2 - x_0) + y_0.$$
Plugging the expression for $\lambda$ into the curve equation yields
$$(\lambda(x - x_0) + y_0)^2 = x^3 + b \implies x^3 - \lambda^2 x^2 + \cdots = 0.$$
Comparing coefficients for the $x^2$ term gives us $\lambda^2 = x_0 + x_1 + x_2,$ i.e. $x_2 = \lambda^2 - x_0 - x_1$.
Important notes:
- There exist efficient formulae for point addition that do not have edge cases (so-called "complete" formulae) and that unify the addition and doubling cases together. The result of adding a point to its negation using those formulae produces $Z = 0$, which represents the point at infinity.
- In addition, there are other models like the Jacobian representation, where the curve is rescaled by $Z^6$ instead of $Z^3$; this representation has even more efficient arithmetic but no unified/complete formulae.
- We can easily compare two curve points $(X_1, Y_1, Z_1)$ and $(X_2, Y_2, Z_2)$ for equality in the homogenous projective coordinate space by "homogenizing" their Z-coordinates; the checks become $X_1 Z_2 = X_2 Z_1$ and $Y_1 Z_2 = Y_2 Z_1$.
Curve endomorphisms
Imagine that $\mathbb{F}_p$ has a primitive cube root of unity, or in other words that $3 \mid p - 1$, and so an element $\zeta_p$ generates a $3$-order multiplicative subgroup. Notice that a point $(x, y)$ on our example elliptic curve $y^2 = x^3 + b$ has two cousin points: $(\zeta_p x, y)$ and $(\zeta_p^2 x, y)$, because the computation $x^3$ effectively kills the $\zeta$ component of the $x$-coordinate. Applying the map $(x, y) \mapsto (\zeta_p x, y)$ is an application of an endomorphism over the curve. The exact mechanics involved are complicated, but when the curve has a prime number of points (and thus a prime "order") the effect of the endomorphism is to multiply the point by a scalar in $\mathbb{F}_q$ which is also a primitive cube root $\zeta_q$ in the scalar field.
Curve point compression
Given a point on the curve $P = (x, y)$, we know that its negation $-P = (x, -y)$ is also on the curve. To uniquely specify a point, we need only encode its $x$-coordinate along with the sign of its $y$-coordinate.
Serialization
As mentioned in the Fields section, we can interpret the least significant bit of a field element as its "sign", since its additive inverse will always have the opposite LSB. So we record the LSB of the $y$-coordinate as sign.
Pallas and Vesta are defined over the $\mathbb{F}_p$ and $\mathbb{F}_q$ fields, whose elements can be expressed in 255 bits. This conveniently leaves one unused bit in a 32-byte representation. We pack the $y$-coordinate sign bit into the highest bit in the representation of the $x$-coordinate:
<----------------------------------- x --------------------------------->
Enc(P) = [_ _ _ _ _ _ _ _] [_ _ _ _ _ _ _ _] ... [_ _ _ _ _ _ _ _] [_ _ _ _ _ _ _ sign]
^ <-------------------------------------> ^
LSB 30 bytes MSB
The "point at infinity" that serves as the group identity, does not have an affine representation. However, it turns out that there are no points on either the Pallas or Vesta curve with or . We therefore use the "fake" affine coordinates to encode , which results in the all-zeroes 32-byte array.
Deserialization
When deserializing a compressed curve point, we first read the most significant bit as ysign, the sign of the $y$-coordinate. Then, we set this bit to zero to recover the original $x$-coordinate.
If $x = 0,$ we return the "point at infinity" $\mathcal{O}$. Otherwise, we proceed to compute $y = \sqrt{x^3 + b}.$ Here, we read the least significant bit of $y$ as sign.
If sign == ysign, we already have the correct sign and simply return the curve point $(x, y)$. Otherwise, we negate $y$ and return $(x, -y)$.
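A toy sketch of this compression / decompression round-trip follows. The curve ($y^2 = x^3 + 1$ over $\mathbb{F}_{103}$, where $103 \equiv 3 \pmod 4$ so square roots are a single exponentiation) is an assumption for illustration; the 32-byte layout mimics the Pallas/Vesta encoding, and the all-zeroes encoding of the point at infinity is omitted here.

```rust
const P: u64 = 103;
const B: u64 = 1;

fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn compress(x: u64, y: u64) -> [u8; 32] {
    let mut enc = [0u8; 32];
    enc[..8].copy_from_slice(&x.to_le_bytes()); // x in little-endian order
    enc[31] |= ((y & 1) as u8) << 7;            // sign of y in the top bit
    enc
}

fn decompress(mut enc: [u8; 32]) -> (u64, u64) {
    let ysign = (enc[31] >> 7) & 1;
    enc[31] &= 0x7f; // clear the sign bit to recover x
    let mut xb = [0u8; 8];
    xb.copy_from_slice(&enc[..8]);
    let x = u64::from_le_bytes(xb);
    // y = sqrt(x^3 + B); valid because P = 3 (mod 4).
    let y = pow_mod((pow_mod(x, 3) + B) % P, (P + 1) / 4);
    // Pick the root whose least significant bit matches the recorded sign.
    if y & 1 == ysign as u64 { (x, y) } else { (x, P - y) }
}

fn main() {
    let (x, y) = (2u64, 3u64); // 3^2 = 2^3 + 1 (mod 103)
    assert_eq!(y * y % P, (pow_mod(x, 3) + B) % P);
    assert_eq!(decompress(compress(x, y)), (x, y));
    assert_eq!(decompress(compress(x, P - y)), (x, P - y));
    println!("compression round-trips for both (x, y) and (x, -y)");
}
```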
Cycles of curves
Let $E_p$ be an elliptic curve over a finite field $\mathbb{F}_p,$ where $p$ is a prime. We denote this by $E_p/\mathbb{F}_p,$ and we denote the group of points of $E_p$ over $\mathbb{F}_p,$ with order $q,$ by $E_p(\mathbb{F}_p).$ For this curve, we call $\mathbb{F}_p$ the "base field" and $\mathbb{F}_q$ the "scalar field".
We instantiate our proof system over the elliptic curve $E_p/\mathbb{F}_p$. This allows us to prove statements about $\mathbb{F}_q$-arithmetic circuit satisfiability.
(aside) If our curve $E_p$ is over $\mathbb{F}_p,$ why is the arithmetic circuit instead in $\mathbb{F}_q$? The proof system is basically working on encodings of the scalars in the circuit (or more precisely, commitments to polynomials whose coefficients are scalars). The scalars are in $\mathbb{F}_q$ when their encodings/commitments are elliptic curve points in $E_p/\mathbb{F}_p$.
However, most of the verifier's arithmetic computations are over the base field $\mathbb{F}_p,$ and are thus efficiently expressed as an $\mathbb{F}_p$-arithmetic circuit.
(aside) Why are the verifier's computations (mainly) over $\mathbb{F}_p$? The Halo 2 verifier actually has to perform group operations using information output by the circuit. Group operations like point doubling and addition use arithmetic in $\mathbb{F}_p$, because the coordinates of points are in $\mathbb{F}_p.$
This motivates us to construct another curve with scalar field $\mathbb{F}_p$, which has an $\mathbb{F}_p$-arithmetic circuit that can efficiently verify proofs from the first curve. As a bonus, if this second curve had base field $\mathbb{F}_q,$ it would generate proofs that could be efficiently verified in the first curve's $\mathbb{F}_q$-arithmetic circuit. In other words, we instantiate a second proof system over $E_q/\mathbb{F}_q,$ forming a 2-cycle with the first:
$$E_p/\mathbb{F}_p \rightleftarrows E_q/\mathbb{F}_q.$$
TODO: Pallas-Vesta curves
Reference: https://github.com/zcash/pasta
Hashing to curves
Sometimes it is useful to be able to produce a random point on an elliptic curve corresponding to some input, in such a way that no-one will know its discrete logarithm (to any other base).
This is described in detail in the Internet draft on Hashing to Elliptic Curves. Several algorithms can be used depending on efficiency and security requirements. The framework used in the Internet Draft makes use of several functions:
- hash_to_field: takes a byte sequence input and maps it to an element in the base field $\mathbb{F}_p$.
- map_to_curve: takes an $\mathbb{F}_p$ element and maps it to a point on $E(\mathbb{F}_p)$.
TODO: Simplified SWU
Reference: https://eprint.iacr.org/2019/403.pdf
Polynomial commitment using inner product argument
We want to commit to some polynomial $p(X) \in \mathbb{F}_p[X]$, and be able to provably evaluate the committed polynomial at arbitrary points. The naive solution would be for the prover to simply send the polynomial's coefficients to the verifier: however, this requires $O(n)$ communication. Our polynomial commitment scheme gets the job done using $O(\log n)$ communication.
Setup
Given a parameter $d = 2^k,$ we generate the common reference string $\sigma = (\mathbb{G}, \mathbf{G}, H, \mathbb{F}_p)$ defining certain constants for this scheme:
- $\mathbb{G}$ is a group of prime order $p$;
- $\mathbf{G} \in \mathbb{G}^d$ is a vector of $d$ random group elements;
- $H \in \mathbb{G}$ is a random group element; and
- $\mathbb{F}_p$ is the finite field of order $p$.
Commit
The Pedersen vector commitment is defined as
$$\mathrm{Commit}(\sigma, p(X); r) = \langle \mathbf{a}, \mathbf{G} \rangle + [r] H$$
for some polynomial $p(X) \in \mathbb{F}_p[X]$ and some blinding factor $r \in \mathbb{F}_p.$ Here, each element of the vector $\mathbf{a} = (a_0, a_1, \ldots, a_{d-1}) \in \mathbb{F}_p^d$ is the coefficient for the $i$th degree term of $p(X),$ and $p(X)$ is of maximal degree $d - 1.$
Open (prover) and OpenVerify (verifier)
The modified inner product argument is an argument of knowledge for the relation
$$\left\{ \left( (P, x, v); (\mathbf{a}, r) \right) : P = \langle \mathbf{a}, \mathbf{G} \rangle + [r] H, \; v = \langle \mathbf{a}, \mathbf{b} \rangle \right\},$$
where $\mathbf{b} = (1, x, x^2, \ldots, x^{d-1})$ is composed of increasing powers of the evaluation point $x.$ This allows a prover to demonstrate to a verifier that the polynomial contained "inside" the commitment $P$ evaluates to $v$ at $x,$ and moreover, that the committed polynomial has maximum degree $d - 1.$
The inner product argument proceeds in $k = \log_2 d$ rounds. For our purposes, it is sufficient to know about its final outputs, while merely providing intuition about the intermediate rounds. (Refer to Section 3 in the Halo paper for a full explanation.)
Before beginning the argument, the verifier selects a random group element $U$ and sends it to the prover. We initialize the argument at round $k$ with the vectors $\mathbf{a}^{(k)} := \mathbf{a},$ $\mathbf{G}^{(k)} := \mathbf{G}$ and $\mathbf{b}^{(k)} := \mathbf{b}.$ In each round $j = k, k - 1, \ldots, 1$:
- the prover computes two values $L_j$ and $R_j$ by taking some inner product of $\mathbf{a}^{(j)}$ with $\mathbf{G}^{(j)}$ and $\mathbf{b}^{(j)}$. Note that $L_j$ and $R_j$ are in some sense "cross-terms": the lower half of $\mathbf{a}^{(j)}$ is used with the higher half of $\mathbf{G}^{(j)}$ and $\mathbf{b}^{(j)},$ and vice versa;
- the verifier issues a random challenge $u_j$;
- the prover uses $u_j$ to compress the lower and higher halves of $\mathbf{a}^{(j)},$ thus producing a new vector $\mathbf{a}^{(j-1)}$ of half the original length. The vectors $\mathbf{G}^{(j)}$ and $\mathbf{b}^{(j)}$ are similarly compressed to give $\mathbf{G}^{(j-1)}$ and $\mathbf{b}^{(j-1)}$ (using $u_j^{-1}$ instead of $u_j$);
- $\mathbf{a}^{(j-1)}, \mathbf{G}^{(j-1)}$ and $\mathbf{b}^{(j-1)}$ are input to the next round $j - 1.$
Note that at the end of the last round $j = 1,$ we are left with $a := \mathbf{a}^{(0)}, G := \mathbf{G}^{(0)}, b := \mathbf{b}^{(0)},$ each of length 1. The intuition is that these final scalars, together with the challenges $u_j$ and "cross-terms" $L_j, R_j$ from each round, encode the compression in each round. Since the prover did not know the challenges in advance, they would have been unable to manipulate the round compressions. Thus, checking a constraint on these final terms should enforce that the compression had been performed correctly, and that the original $\mathbf{a}$ satisfied the relation before undergoing compression.
Note that $G$ and $b$ are simply rearrangements of the publicly known $\mathbf{G}$ and $\mathbf{b},$ with the round challenges $u_j$ mixed in: this means the verifier can compute $G$ and $b$ independently and verify that the prover had provided those same values.
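The role of the cross-terms can be checked numerically with scalars only. The folding convention used below ($\mathbf{a}' = u \cdot \mathbf{a}_{lo} + u^{-1} \cdot \mathbf{a}_{hi}$ and $\mathbf{b}' = u^{-1} \cdot \mathbf{b}_{lo} + u \cdot \mathbf{b}_{hi}$) is an assumption for illustration rather than the exact convention of the protocol, but it demonstrates why $L$ and $R$ must be sent: after one round of folding, $\langle \mathbf{a}', \mathbf{b}' \rangle = \langle \mathbf{a}, \mathbf{b} \rangle + u^2 \langle \mathbf{a}_{lo}, \mathbf{b}_{hi} \rangle + u^{-2} \langle \mathbf{a}_{hi}, \mathbf{b}_{lo} \rangle$.

```rust
const P: u64 = 1_000_000_007;

fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }
fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn pow_mod(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = mul(acc, b); }
        b = mul(b, b);
        e >>= 1;
    }
    acc
}

fn inner(a: &[u64], b: &[u64]) -> u64 {
    a.iter().zip(b).fold(0, |acc, (&x, &y)| add(acc, mul(x, y)))
}

/// Folds a vector in half: lo_mul * v_lo[i] + hi_mul * v_hi[i].
fn fold(v: &[u64], lo_mul: u64, hi_mul: u64) -> Vec<u64> {
    let half = v.len() / 2;
    (0..half)
        .map(|i| add(mul(lo_mul, v[i]), mul(hi_mul, v[half + i])))
        .collect()
}

fn main() {
    let a = [3u64, 1, 4, 1];
    let b = [5u64, 9, 2, 6];
    let u = 12_345u64;
    let u_inv = pow_mod(u, P - 2);

    let (a_lo, a_hi) = a.split_at(2);
    let (b_lo, b_hi) = b.split_at(2);
    let l = inner(a_lo, b_hi); // "cross-term" toward L
    let r = inner(a_hi, b_lo); // "cross-term" toward R

    let a_folded = fold(&a, u, u_inv);
    let b_folded = fold(&b, u_inv, u);

    let expected = add(
        inner(&a, &b),
        add(mul(pow_mod(u, 2), l), mul(pow_mod(u_inv, 2), r)),
    );
    assert_eq!(inner(&a_folded, &b_folded), expected);
    println!("folded inner product matches <a,b> plus the cross-terms");
}
```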
Recursion
Alternative terms: Induction; Accumulation scheme; Proof-carrying data
However, the computation of $G$ requires a length-$d$ multiexponentiation $G = \langle \mathbf{s}, \mathbf{G} \rangle,$ where $\mathbf{s}$ is composed of the round challenges $u_1, \ldots, u_k$ arranged in a binary counting structure. This is the linear-time computation that we want to amortise across a batch of proof instances. Instead of computing $G$ directly, notice that we can express $G$ as a commitment to a polynomial
$$G = \mathrm{Commit}(\sigma, g(X, u_1, \ldots, u_k)),$$
where $g(X, u_1, \ldots, u_k)$ is a polynomial with degree $d - 1.$
Since $G$ is a commitment, it can be checked in an inner product argument. The verifier circuit witnesses $G$ and brings $G, u_1, \ldots, u_k$ out as public inputs to the proof $\pi$. The next verifier instance checks $\pi$ using the inner product argument; this includes checking that $G$ evaluates at some random point to the expected value for the given challenges $u_1, \ldots, u_k.$ Recall from the previous section that this check only requires $O(\log d)$ work. At the end of checking $\pi$ and $G,$ the circuit is left with a new $G',$ along with the challenges $u_1', \ldots, u_k'$ sampled for the check. To fully accept $\pi$ as valid, we should perform a linear-time computation of $G'$. Once again, we delay this computation by witnessing $G'$ and bringing $G', u_1', \ldots, u_k'$ out as public inputs to the proof $\pi'.$ This goes on from one proof instance to the next, until we are satisfied with the size of our batch of proofs. We finally perform a single linear-time computation, thus deciding the validity of the whole batch.
We recall from the section Cycles of curves that we can instantiate this protocol over a two-cycle, where a proof produced by one curve is efficiently verified in the circuit of the other curve. However, some of these verifier checks can actually be efficiently performed in the native circuit; these are "deferred" to the next native circuit instead of being immediately passed over to the other curve.