# Polynomial commitment using inner product argument

We want to commit to some polynomial $p(X) \in \mathbb{F}_p[X]$, and be able to provably evaluate the committed polynomial at arbitrary points. The naive solution would be for the prover to simply send the polynomial's coefficients to the verifier: however, this requires $O(d)$ communication. Our polynomial commitment scheme gets the job done using $O(\log d)$ communication.

`Setup`

Given a parameter $d = 2^k,$ we generate the common reference string $\sigma = (\mathbb{G}, \mathbf{G}, H, \mathbb{F}_p)$ defining certain constants for this scheme:

- $\mathbb{G}$ is a group of prime order $p;$
- $\mathbf{G} \in \mathbb{G}^d$ is a vector of $d$ random group elements;
- $H \in \mathbb{G}$ is a random group element; and
- $\mathbb{F}_p$ is the finite field of order $p.$

`Commit`

The Pedersen vector commitment $\text{Commit}$ is defined as

$\text{Commit}(\sigma, p(X); r) = \langle \mathbf{a}, \mathbf{G} \rangle + [r]H,$

for some polynomial $p(X) \in \mathbb{F}_p[X]$ and some blinding factor $r \in \mathbb{F}_p.$ Here, each element of the vector $\mathbf{a}$, $a_i \in \mathbb{F}_p,$ is the coefficient for the $i$th degree term of $p(X),$ and $p(X)$ is of maximal degree $d - 1.$

`Open` (prover) and `OpenVerify` (verifier)

The modified inner product argument is an argument of knowledge for the relation

$\left\{ \left( (P, x, v); (\mathbf{a}, r) \right) : P = \langle \mathbf{a}, \mathbf{G} \rangle + [r]H, \ v = \langle \mathbf{a}, \mathbf{b} \rangle \right\},$

where $\mathbf{b} = (1, x, x^2, \cdots, x^{d-1})$ is composed of increasing powers of the evaluation point $x.$ This allows a prover to demonstrate to a verifier that the polynomial contained “inside” the commitment $P$ evaluates to $v$ at $x,$ and moreover, that the committed polynomial has maximum degree $d - 1.$
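The claim $v = \langle \mathbf{a}, \mathbf{b} \rangle$ is polynomial evaluation in disguise: since $b_i = x^i$, the inner product of the coefficient vector with $\mathbf{b}$ is exactly $p(x)$. A tiny sketch (field arithmetic mod an assumed prime `P`):

```python
# Check that <a, b> with b = (1, x, x^2, ..., x^{d-1}) equals p(x).
P = 2**61 - 1  # stand-in for the field order p (assumption)

def inner_product_eval(a, x):
    """v = <a, b>, where b holds increasing powers of the point x."""
    b = [pow(x, i, P) for i in range(len(a))]
    return sum(ai * bi for ai, bi in zip(a, b)) % P

def horner_eval(a, x):
    """Ordinary polynomial evaluation, for comparison."""
    acc = 0
    for coeff in reversed(a):
        acc = (acc * x + coeff) % P
    return acc
```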

The inner product argument proceeds in $k = \log_2 d$ rounds. For our purposes, it is sufficient to know about its final outputs, while merely providing intuition about the intermediate rounds. (Refer to Section 3 in the Halo paper for a full explanation.)

Before beginning the argument, the verifier selects a random group element $U$ and sends it to the prover. We initialize the argument at round $k,$ with the vectors $\mathbf{a}^{(k)} := \mathbf{a},$ $\mathbf{G}^{(k)} := \mathbf{G}$ and $\mathbf{b}^{(k)} := \mathbf{b}.$ In each round $j = k, k-1, \cdots, 1$:

- the prover computes two values $L_j$ and $R_j$ by taking some inner product of $\mathbf{a}^{(j)}$ with $\mathbf{G}^{(j)}$ and $\mathbf{b}^{(j)}$. Note that these are in some sense "cross-terms": the lower half of $\mathbf{a}$ is used with the higher half of $\mathbf{G}$ and $\mathbf{b}$, and vice versa:

$$
\begin{aligned}
L_j &= \langle \mathbf{a}_{lo}, \mathbf{G}_{hi} \rangle + [l_j]H + [\langle \mathbf{a}_{lo}, \mathbf{b}_{hi} \rangle] U, \\
R_j &= \langle \mathbf{a}_{hi}, \mathbf{G}_{lo} \rangle + [r_j]H + [\langle \mathbf{a}_{hi}, \mathbf{b}_{lo} \rangle] U,
\end{aligned}
$$

where $l_j, r_j$ are random blinding factors;

- the verifier issues a random challenge $u_{j}$;
- the prover uses $u_j$ to compress the lower and higher halves of $\mathbf{a}^{(j)}$, thus producing a new vector of half the original length $\mathbf{a}^{(j-1)} = \mathbf{a}_{hi} \cdot u_j^{-1} + \mathbf{a}_{lo} \cdot u_j.$ The vectors $\mathbf{G}^{(j)}$ and $\mathbf{b}^{(j)}$ are similarly compressed to give $\mathbf{G}^{(j-1)}$ and $\mathbf{b}^{(j-1)}$ (using $u_j$ where $\mathbf{a}^{(j)}$ used $u_j^{-1}$, and vice versa).
- $\mathbf{a}^{(j-1)}$, $\mathbf{G}^{(j-1)}$ and $\mathbf{b}^{(j-1)}$ are input to the next round $j - 1.$
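The rounds above can be sketched end to end. This toy Python version reuses the insecure integers-mod-a-prime stand-in group, omits the blinding factors $l_j, r_j$, and fixes the challenges up front rather than sampling them interactively; it is meant only to show the folding structure, in particular which halves get $u_j$ versus $u_j^{-1}$ (an assumption following the Halo construction).

```python
# Toy sketch of the k = log2(d) folding rounds over an insecure stand-in
# group (integers mod a prime). Blinding factors l_j, r_j are omitted and
# challenges are fixed up front -- structure only, not security.
P = 2**61 - 1  # stand-in for the group/field order p (assumption)

def ip(xs, ys):
    """Inner product mod P."""
    return sum(x * y for x, y in zip(xs, ys)) % P

def fold(v, u_lo, u_hi):
    """Halve v: lower half scaled by u_lo plus upper half scaled by u_hi."""
    h = len(v) // 2
    return [(v[i] * u_lo + v[h + i] * u_hi) % P for i in range(h)]

def prove(a, G, b, U, challenges):
    """Run rounds j = k, ..., 1; return the final scalars and all L_j, R_j."""
    Ls, Rs = [], []
    for u in challenges:
        h = len(a) // 2
        a_lo, a_hi = a[:h], a[h:]
        G_lo, G_hi = G[:h], G[h:]
        b_lo, b_hi = b[:h], b[h:]
        # cross-terms: lo half of a against hi halves of G and b, and vice versa
        Ls.append((ip(a_lo, G_hi) + ip(a_lo, b_hi) * U) % P)
        Rs.append((ip(a_hi, G_lo) + ip(a_hi, b_lo) * U) % P)
        u_inv = pow(u, -1, P)
        a = fold(a, u, u_inv)   # a^(j-1) = a_lo * u_j + a_hi * u_j^{-1}
        G = fold(G, u_inv, u)   # G and b use the opposite powers of u_j,
        b = fold(b, u_inv, u)   # so the inner products telescope
    return a[0], G[0], b[0], Ls, Rs
```

Ignoring blinding, each round satisfies $\langle \mathbf{a}^{(j-1)}, \mathbf{G}^{(j-1)} \rangle + [\langle \mathbf{a}^{(j-1)}, \mathbf{b}^{(j-1)} \rangle] U = P_j + [u_j^2] L_j + [u_j^{-2}] R_j$: the cross-terms absorb everything the compression introduces, which is the telescoping identity the verifier's final check exploits.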

Note that at the end of the last round $j = 1,$ we are left with $a := \mathbf{a}^{(0)}$, $G := \mathbf{G}^{(0)}$, $b := \mathbf{b}^{(0)},$ each of length 1. The intuition is that these final scalars, together with the challenges $\{u_j\}$ and "cross-terms" $\{L_j, R_j\}$ from each round, encode the compression in each round. Since the prover did not know the challenges $U, \{u_j\}$ in advance, they would have been unable to manipulate the round compressions. Thus, checking a constraint on these final terms should enforce that the compression was performed correctly, and that the original $\mathbf{a}$ satisfied the relation before undergoing compression.

Note that the final $G, b$ are simply collapsed versions of the publicly known $\mathbf{G}, \mathbf{b},$ with the round challenges $\{u_j\}$ mixed in: this means the verifier can compute $G, b$ independently and verify that the prover provided those same values.
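For $b$ in particular, the verifier need not fold a length-$d$ vector at all: since $b_i = x^i$, each round multiplies the surviving entries by a single scalar, giving the closed form $b = \prod_{j=1}^{k} \left( u_j^{-1} + u_j \cdot x^{2^{j-1}} \right)$, following the product form in the Halo paper. A sketch, with the same assumed stand-in prime modulus as above:

```python
# Recompute the fully folded b directly from the challenges: because
# b_i = x^i, folding multiplies the remaining vector by one scalar per round.
P = 2**61 - 1  # stand-in for the field order p (assumption)

def final_b(x, d, challenges):
    """Closed form: product over rounds of (u^{-1} + u * x^h), where h is
    the half-length (d/2, d/4, ..., 1) at the round with challenge u."""
    acc, h = 1, d // 2
    for u in challenges:
        acc = acc * (pow(u, -1, P) + u * pow(x, h, P)) % P
        h //= 2
    return acc
```

This takes $O(\log d)$ work, whereas recomputing the final $G$ from $\mathbf{G}$ still costs the verifier a linear-size multiexponentiation.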