# Protocol Description

## Preliminaries

We take $λ$ as our security parameter, and unless explicitly noted all algorithms and adversaries are probabilistic (interactive) Turing machines that run in polynomial time in this security parameter. We use $negl(λ)$ to denote a function that is negligible in $λ$.

### Cryptographic Groups

We let $G$ denote a cyclic group of prime order $p$. The identity of the group is written as $O$. We refer to the scalars of elements in $G$ as elements of a scalar field $F$ of size $p$. Group elements are written in capital letters while scalars are written in lowercase or Greek letters. Vectors of scalars or group elements are written in boldface, e.g. $a∈F^{n}$ and $G∈G^{n}$. Group operations are written additively, and the multiplication of a group element $G$ by a scalar $a$ is written $[a]G$.

We will often use the notation $⟨a,b⟩$ to describe the inner product of two like-length vectors of scalars $a,b∈F_{n}$. We also use this notation to represent the linear combination of group elements such as $⟨a,G⟩$ with $a∈F_{n},G∈G_{n}$, computed in practice by a multiscalar multiplication.

We use $0^{n}$ to describe the vector of length $n$ that contains only zeroes in $F$.
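As a concrete illustration of this notation, here is a toy sketch in Python. The group is modelled additively as integers mod a small prime, which is an insecure stand-in for a real elliptic-curve group; the names and the prime are illustrative only.

```python
# Toy model: scalars in F = Z_p and "group elements" also as integers mod p.
# Insecure (discrete logs are trivial here); for notation only.
p = 101  # toy prime; a real instantiation uses a cryptographic group order

def inner_product(a, b, p=p):
    """⟨a, b⟩ for two like-length vectors of scalars in F."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b)) % p

def msm(a, G, p=p):
    """⟨a, G⟩: a multiscalar multiplication of group elements G by scalars a.
    In the additive toy group, [a]G is simply a*G mod p."""
    assert len(a) == len(G)
    return sum(x * P for x, P in zip(a, G)) % p
```

In practice $⟨a,G⟩$ is computed with an optimized multiscalar multiplication rather than $n$ separate scalar multiplications.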

**Discrete Log Relation Problem.** The advantage metric $Adv_{G,n}(A,λ)=\Pr[G_{G,n}(A,λ)]$ is defined with respect to the following game.

$$
\begin{array}{l}
\textbf{Game } G_{G,n}(A,λ): \\
\quad G \gets G^{n} \\
\quad a \gets A(G) \\
\quad \text{Return } (⟨a,G⟩=O \;∧\; a \neq 0^{n})
\end{array}
$$

Given an $n$-length vector $G∈G^{n}$ of group elements, the
*discrete log relation problem* asks for $g∈F^{n}$ such that
$g \neq 0^{n}$ and yet $⟨g,G⟩=O$, which we refer to as a *non-trivial* discrete log relation. The hardness
of this problem is tightly implied by the hardness of the discrete log problem
in the group as shown in Lemma 3 of [JT20].
Formally, we use the game $G_{G,n}$ defined above to capture this problem.
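To make the notion concrete, here is a toy sketch: an adversary who happens to know the discrete log $d$ relating two basis elements can immediately output the non-trivial relation $(d, -1)$. The group is again modelled (insecurely) as integers mod a small prime, purely for illustration.

```python
p = 101
G0 = 5                    # a toy "group element"
d = 7
G1 = (d * G0) % p         # G1 = [d]G0; knowing d breaks dl-rel for (G0, G1)

# a = (d, -1) is a non-trivial discrete log relation:
# ⟨a, (G0, G1)⟩ = [d]G0 - G1 = O, yet a ≠ 0^2.
a = (d, (-1) % p)
assert (a[0] * G0 + a[1] * G1) % p == 0
assert a != (0, 0)
```

This is exactly why the vector $G$ in the public parameters must be sampled so that no party knows any discrete log relations among its entries.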

### Interactive Proofs

*Interactive proofs* are a triple of algorithms $IP=(Setup,P,V)$. The algorithm $Setup(1^{λ})$ produces as its output some *public
parameters* commonly referred to as $pp$. The prover $P$ and verifier
$V$ are interactive machines (with access to $pp$) and we denote by
$⟨P(x),V(y)⟩$ an algorithm that executes a
two-party protocol between them on inputs $x,y$. The output of this protocol, a
*transcript* of their interaction, contains all of the messages sent between
$P$ and $V$. At the end of the protocol, the verifier outputs a
decision bit.

### Zero-Knowledge Arguments of Knowledge

Proofs of knowledge are interactive proofs where the prover aims to convince the
verifier that they know a witness $w$ such that $(x,w)∈R$ for a
statement $x$ and polynomial-time decidable relation $R$. We will work
with *arguments* of knowledge which assume computationally-bounded provers.

We will analyze arguments of knowledge through the lens of four security notions.

- **Completeness:** If the prover possesses a valid witness, can they *always* convince the verifier? It is useful to understand this property as it can have implications for the other security notions.
- **Soundness:** Can a cheating prover falsely convince the verifier of the correctness of a statement that is not actually correct? We refer to the probability that a cheating prover can falsely convince the verifier as the *soundness error*.
- **Knowledge soundness:** When the verifier is convinced the statement is correct, does the prover actually possess ("know") a valid witness? We refer to the probability that a cheating prover falsely convinces the verifier of this knowledge as the *knowledge error*.
- **Zero knowledge:** Does the verifier learn anything besides that which can be inferred from the correctness of the statement and the prover's knowledge of a valid witness?

First, we will visit the simple definition of completeness.

**Perfect Completeness.** An interactive argument $(Setup,P,V)$ has *perfect completeness* if for all polynomial-time decidable relations $R$ and for all non-uniform polynomial-time adversaries $A$

$$\Pr\left[ (x,w) \notin R \;∨\; ⟨P(pp,x,w),V(pp,x)⟩ \text{ accepts} \;\middle|\; \begin{array}{l} pp \gets Setup(1^{λ}) \\ (x,w) \gets A(pp) \end{array} \right]=1$$

#### Soundness

Complicating our analysis is that although our protocol is described as an
interactive argument, it is realized in practice as a *non-interactive argument*
through the use of the Fiat-Shamir transformation.

**Public coin.** We say that an interactive argument is *public coin* when all of the messages sent by the verifier are each sampled with fresh randomness.

**Fiat-Shamir transformation.** In this transformation, an interactive, public-coin argument is made *non-interactive* in the *random oracle model* by replacing the verifier algorithm with a cryptographically strong hash function that produces sufficiently random-looking output.
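A minimal sketch of how a Fiat-Shamir challenge might be derived, assuming SHA-256 as the random-oracle stand-in. The hash-to-field details (counter, rejection of the forbidden challenge $0$) are illustrative choices, not the concrete protocol's encoding:

```python
import hashlib

def challenge(transcript: bytes, p: int) -> int:
    """Derive a public-coin challenge in F* from the transcript-so-far.

    Re-hashes with an incremented counter until the sample is nonzero,
    since the protocol forbids zero challenges."""
    c = 0
    counter = 0
    while c == 0:
        h = hashlib.sha256(transcript + counter.to_bytes(4, "big")).digest()
        c = int.from_bytes(h, "big") % p
        counter += 1
    return c
```

Because the challenge is a deterministic function of the transcript, a cheating prover can fork the transcript and recompute challenges at will, which is exactly the rewinding power modelled below.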

This transformation means that in the concrete protocol a cheating prover can
easily "rewind" the verifier by forking the transcript and sending new messages
to the verifier. Studying the concrete security of our construction *after*
applying this transformation is important. Fortunately, we are able to follow a
framework of analysis by Ghoshal and Tessaro
([GT20]) that has been applied to
constructions similar to ours.

We will study our protocol through the notion of *state-restoration soundness*.
In this model the (cheating) prover is allowed to rewind the verifier to any
previous state it was in. The prover wins if they are able to produce an
accepting transcript.

**State-Restoration Soundness.** Let $IP$ be an interactive argument with $r=r(λ)$ verifier challenges and let the $i$th challenge be sampled from $Ch_{i}$. The advantage metric $Adv_{IP}(P,λ)=\Pr[SRS_{IP}(P,λ)]$ of a state restoration prover $P$ is defined with respect to the following game.

$$
\begin{array}{l}
\textbf{Game } SRS_{IP}(P,λ): \\
\quad win \gets \text{false};\; tr \gets ε \\
\quad pp \gets IP.Setup(1^{λ}) \\
\quad (x,st_{P}) \gets P_{λ}(pp) \\
\quad \text{Run } P_{λ}^{O_{SRS}}(st_{P}) \\
\quad \text{Return } win \\
\\
\textbf{Oracle } O_{SRS}(τ=(a_{1},c_{1},\ldots,a_{i−1},c_{i−1}),a_{i}): \\
\quad \text{If } τ∈tr \text{ then} \\
\qquad \text{If } i≤r \text{ then } c_{i} \gets Ch_{i};\; tr \gets tr\,\|\,(τ,a_{i},c_{i});\; \text{Return } c_{i} \\
\qquad \text{Else if } i=r+1 \text{ then} \\
\qquad\quad d \gets IP.V(pp,x,(τ,a_{i}));\; tr \gets (τ,a_{i}) \\
\qquad\quad \text{If } d=1 \text{ then } win \gets \text{true} \\
\qquad\quad \text{Return } d \\
\quad \text{Return } ⊥
\end{array}
$$

As shown in [GT20] (Theorem 1) state restoration soundness is tightly related to soundness after applying the Fiat-Shamir transformation.

#### Knowledge Soundness

We will show that our protocol satisfies a strengthened notion of knowledge
soundness known as *witness extended emulation*. Informally, this notion states
that for any successful prover algorithm there exists an efficient *emulator*
that can extract a witness from it by rewinding it and supplying it with fresh
randomness.

However, we must slightly adjust our definition of witness extended emulation to account for the fact that our provers are state restoration provers and can rewind the verifier. Further, to avoid the need for rewinding the state restoration prover during witness extraction we study our protocol in the algebraic group model.

**Algebraic Group Model (AGM).** An adversary $P_{alg}$ is said to be *algebraic* if whenever it outputs a group element $X$ it also outputs a *representation* $x∈F^{n}$ such that $⟨x,G⟩=X$, where $G∈G^{n}$ is the vector of group elements that $P_{alg}$ has seen so far. Notationally, we write $\{X\}$ to describe a group element $X$ enhanced with this representation. We also write $\{X\}_{i}$ to identify the component of the representation of $X$ that corresponds with $G_{i}$. In other words,

$$X=\sum_{i=0}^{n−1} [\{X\}_{i}]G_{i}$$

The algebraic group model allows us to perform so-called "online" extraction for some protocols: the extractor can obtain the witness from the representations themselves for a single (accepting) transcript.
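A toy illustration of representations in the AGM, again modelling the group (insecurely) as integers mod a small prime: whatever group element the algebraic prover outputs, it must also hand over coefficients that the extractor can check directly against the basis the prover has seen.

```python
p = 101
G = [3, 10, 42]          # basis of group elements the prover has seen so far
x = [2, 5, 1]            # representation {X} the algebraic prover outputs

# The prover's group element X must satisfy X = Σ [{X}_i] G_i.
X = sum(xi * Gi for xi, Gi in zip(x, G)) % p

# "Online" extraction: the claimed representation is checked directly,
# with no rewinding of the prover.
assert X == (2 * 3 + 5 * 10 + 1 * 42) % p
```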

**State Restoration Witness Extended Emulation.** Let $IP$ be an interactive argument for relation $R$ with $r=r(λ)$ challenges. For all non-uniform algebraic provers $P_{alg}$, extractors $E$, and computationally unbounded distinguishers $D$, the advantage metric

$$Adv_{IP,R}(P_{alg},D,E,λ)=\Pr[\text{WEE-real}_{IP,R}(λ)]−\Pr[\text{WEE-ideal}_{IP,R}(λ)]$$

is defined with respect to the following games.

$$
\begin{array}{l}
\textbf{Game } \text{WEE-real}_{IP,R}(λ): \\
\quad tr \gets ε \\
\quad pp \gets IP.Setup(1^{λ}) \\
\quad (x,st_{P}) \gets P_{alg}(pp) \\
\quad \text{Run } P_{alg}^{O_{real}}(st_{P}) \\
\quad b \gets D(tr) \\
\quad \text{Return } b=1 \\
\\
\textbf{Game } \text{WEE-ideal}_{IP,R}(λ): \\
\quad tr \gets ε \\
\quad pp \gets IP.Setup(1^{λ}) \\
\quad (x,st_{P}) \gets P_{alg}(pp) \\
\quad st_{E} \gets (1^{λ},pp,x) \\
\quad \text{Run } P_{alg}^{O_{ideal}}(st_{P}) \\
\quad w \gets E(st_{E},⊥) \\
\quad b \gets D(tr) \\
\quad \text{Return } (b=1)∧(\text{Acc}(tr) \implies (x,w)∈R) \\
\\
\textbf{Oracle } O_{real}(τ=(a_{1},c_{1},\ldots,a_{i−1},c_{i−1}),a_{i}): \\
\quad \text{If } τ∈tr \text{ then} \\
\qquad \text{If } i≤r \text{ then } c_{i} \gets Ch_{i};\; tr \gets tr\,\|\,(τ,a_{i},c_{i});\; \text{Return } c_{i} \\
\qquad \text{Else if } i=r+1 \text{ then} \\
\qquad\quad d \gets IP.V(pp,x,(τ,a_{i}));\; tr \gets (τ,a_{i}) \\
\qquad\quad \text{Return } d \\
\quad \text{Return } ⊥ \\
\\
\textbf{Oracle } O_{ideal}(τ,a): \\
\quad \text{If } τ∈tr \text{ then} \\
\qquad (r,st_{E}) \gets E(st_{E},[(τ,a)]) \\
\qquad tr \gets tr\,\|\,(τ,a,r) \\
\qquad \text{Return } r \\
\quad \text{Return } ⊥
\end{array}
$$

#### Zero Knowledge

We say that an argument of knowledge is *zero knowledge* if the verifier also
does not learn anything from their interaction besides that which can be learned
from the existence of a valid $w$. More formally,

**Perfect Special Honest-Verifier Zero Knowledge.** A public coin interactive argument $(Setup,P,V)$ has *perfect special honest-verifier zero knowledge* (PSHVZK) if for all polynomial-time decidable relations $R$, for all $(x,w)∈R$, and for all non-uniform polynomial-time adversaries $A_{1},A_{2}$ there exists a probabilistic polynomial-time simulator $S$ such that

$$
\Pr\left[ A_{1}(σ,x,tr)=1 \;\middle|\; \begin{array}{l} pp \gets Setup(1^{λ}); \\ (x,w,ρ) \gets A_{2}(pp); \\ tr \gets ⟨P(pp,x,w),V(pp,x,ρ)⟩ \end{array} \right]
=
\Pr\left[ A_{1}(σ,x,tr)=1 \;\middle|\; \begin{array}{l} pp \gets Setup(1^{λ}); \\ (x,w,ρ) \gets A_{2}(pp); \\ tr \gets S(pp,x,ρ) \end{array} \right]
$$

where $ρ$ is the internal randomness of the verifier.

In this (common) definition of zero-knowledge the verifier is expected to act "honestly" and send challenges that correspond only with their internal randomness; they cannot adaptively respond to the prover based on the prover's messages. We use a strengthened form of this definition that forces the simulator to output a transcript with the same (adversarially provided) challenges that the verifier algorithm sends to the prover.

## Protocol

Let $ω∈F$ be an $n=2^{k}$ primitive root of unity forming the domain $D=(ω^{0},ω^{1},...,ω^{n−1})$, with $t(X)=X^{n}−1$ the vanishing polynomial over this domain. Let $n_{g},n_{a},n_{e}$ be positive integers with $n_{a},n_{e}<n$ and $n_{g}≥4$. We present an interactive argument $Halo=(Setup,P,V)$ for the relation

$$
R = \left\{
\begin{array}{l}
\left(
\begin{array}{l}
g\big(X,C_{0},...,C_{n_{a}−1},a_{0}(X),...,a_{n_{a}−1}(X,C_{0},...,C_{n_{a}−1},a_{0}(X),...,a_{n_{a}−2}(X))\big); \\
\big(a_{0}(X),\,a_{1}(X,C_{0},a_{0}(X)),\,...,\,a_{n_{a}−1}(X,C_{0},...,C_{n_{a}−1},a_{0}(X),...,a_{n_{a}−2}(X))\big)
\end{array}
\right) : \\
\qquad g(ω^{i},⋯)=0 \;\;∀i∈[0,2^{k})
\end{array}
\right\}
$$

where $a_{0},a_{1},...,a_{n_{a}−1}$ are (multivariate) polynomials of degree $n−1$ in $X$ and $g$ has degree at most $n_{g}(n−1)$ in any of the indeterminates $X,C_{0},C_{1},...,C_{n_{a}−1}$.

$Setup(1^{λ})$ returns $pp=(G,F,G∈G^{n},U,W∈G)$.

For all $i∈[0,n_{a})$:

- Let $p_{i}$ be the exhaustive set of integers $j$ (modulo $n$) such that $a_{i}(ω^{j}X,⋯)$ appears as a term in $g(X,⋯)$.
- Let $q$ be a list of distinct sets of integers containing $p_{i}$ and the set $q_{0}={0}$.
- Let $σ(i)=j$ when $q_{j}=p_{i}$.

Let $n_{q}≤n_{a}$ denote the size of $q$, and let $n_{e}$ denote the size of every $p_{i}$ without loss of generality.
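The construction of $q$ and $σ$ above can be sketched as follows, for hypothetical rotation sets $p_{0}=\{0,1\}$, $p_{1}=\{0,1\}$, $p_{2}=\{0\}$ (these example sets are assumptions for illustration, not taken from the protocol):

```python
# Hypothetical rotation sets: a_0 and a_1 are each queried at rotations
# {0, 1}, while a_2 is queried only at rotation {0}.
p_sets = [{0, 1}, {0, 1}, {0}]

q = [{0}]                     # q_0 = {0} is always present
for p_i in p_sets:
    if p_i not in q:          # keep only *distinct* sets
        q.append(p_i)

# σ(i) = j such that q_j = p_i
sigma = [q.index(p_i) for p_i in p_sets]

assert q == [{0}, {0, 1}]     # n_q = 2 ≤ n_a = 3
assert sigma == [1, 1, 0]
```

Deduplicating the rotation sets this way is what lets the multipoint opening argument batch all polynomials that share the same set of evaluation points.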

In the following protocol, we take it for granted that each polynomial $a_{i}(X,⋯)$ is defined such that $n_{e}+1$ blinding factors are freshly sampled by the prover and are each present as an evaluation of $a_{i}(X,⋯)$ over the domain $D$. In all of the following, the verifier's challenges cannot be zero or an element in $D$, and some additional limitations are placed on specific challenges as well.

- $P$ and $V$ proceed in the following $n_{a}$ rounds of interaction, where in round $j$ (starting at $0$)

- $P$ sets $a_{j}(X)=a_{j}(X,c_{0},c_{1},...,c_{j−1},a_{0}(X,⋯),...,a_{j−1}(X,⋯,c_{j−1}))$
- $P$ sends a hiding commitment $A_{j}=⟨a_{′},G⟩+[⋅]W$ where $a_{′}$ are the coefficients of the univariate polynomial $a_{j}(X)$ and $⋅$ is some random, independently sampled blinding factor elided for exposition. (This elision notation is used throughout this protocol description to simplify exposition.)
- $V$ responds with a challenge $c_{j}$.

- $P$ sets $g_{′}(X)=g(X,c_{0},c_{1},...,c_{n_{a}−1},⋯)$.
- $P$ sends a commitment $R=⟨r,G⟩+[⋅]W$ where $r∈F_{n}$ are the coefficients of a randomly sampled univariate polynomial $r(X)$ of degree $n−1$.
- $P$ computes the univariate polynomial $h(X)=\frac{g'(X)}{t(X)}$ of degree $n_{g}(n−1)−n$.
- $P$ computes polynomials $h_{0}(X),h_{1}(X),...,h_{n_{g}−2}(X)$, each of degree at most $n−1$, such that $h(X)=\sum_{i=0}^{n_{g}−2} X^{ni}h_{i}(X)$.
- $P$ sends commitments $H_{i}=⟨h_{i},G⟩+[⋅]W$ for all $i$ where $h_{i}$ denotes the vector of coefficients for $h_{i}(X)$.
- $V$ responds with challenge $x$ and computes $H'=\sum_{i=0}^{n_{g}−2} [x^{ni}]H_{i}$.
- $P$ sets $h'(X)=\sum_{i=0}^{n_{g}−2} x^{ni}h_{i}(X)$.
- $P$ sends $r=r(x)$ and for all $i∈[0,n_{a})$ sends $a_{i}$ such that $(a_{i})_{j}=a_{i}(ω^{(p_{i})_{j}}x)$ for all $j∈[0,n_{e})$.
- For all $i∈[0,n_{a})$, $P$ and $V$ set $s_{i}(X)$ to be the lowest degree univariate polynomial defined such that $s_{i}(ω^{(p_{i})_{j}}x)=(a_{i})_{j}$ for all $j∈[0,n_{e})$.
- $V$ responds with challenges $x_{1},x_{2}$ and initializes $Q_{0},Q_{1},...,Q_{n_{q}−1}=O$.

- Starting at $i=0$ and ending at $n_{a}−1$ $V$ sets $Q_{σ(i)}:=[x_{1}]Q_{σ(i)}+A_{i}$.
- $V$ finally sets $Q_{0}:=[x_{1}]Q_{0}+[x_{1}]H_{′}+R$.

- $P$ initializes $q_{0}(X),q_{1}(X),...,q_{n_{q}−1}(X)=0$.

- Starting at $i=0$ and ending at $n_{a}−1$, $P$ sets $q_{σ(i)}(X):=x_{1}q_{σ(i)}(X)+a_{i}(X)$.
- $P$ finally sets $q_{0}(X):=x_{1}q_{0}(X)+x_{1}h_{′}(X)+r(X)$.

- $P$ and $V$ initialize $r_{0}(X),r_{1}(X),...,r_{n_{q}−1}(X)=0$.

- Starting at $i=0$ and ending at $n_{a}−1$ $P$ and $V$ set $r_{σ(i)}(X):=x_{1}r_{σ(i)}(X)+s_{i}(X)$.
- Finally $P$ and $V$ set $r_{0}:=x_{1}r_{0}+x_{1}h+r$, where $h$ is computed by $V$ as $\frac{g'(x)}{t(x)}$ using the values $r,a$ provided by $P$.

- $P$ sends $Q'=⟨q',G⟩+[⋅]W$ where $q'$ defines the coefficients of the polynomial

  $$q'(X)=\sum_{i=0}^{n_{q}−1} x_{2}^{i}\left(\frac{q_{i}(X)−r_{i}(X)}{\prod_{j=0}^{n_{e}−1} \left(X−ω^{(q_{i})_{j}}x\right)}\right)$$
- $V$ responds with challenge $x_{3}$.
- $P$ sends $u∈F_{n_{q}}$ such that $u_{i}=q_{i}(x_{3})$ for all $i∈[0,n_{q})$.
- $V$ responds with challenge $x_{4}$.
- $V$ sets $P=Q'+[x_{4}]\sum_{i=0}^{n_{q}−1} [x_{4}^{i}]Q_{i}$ and computes

  $$v=\sum_{i=0}^{n_{q}−1} x_{2}^{i}\left(\frac{u_{i}−r_{i}(x_{3})}{\prod_{j=0}^{n_{e}−1} \left(x_{3}−ω^{(q_{i})_{j}}x\right)}\right)+x_{4}\sum_{i=0}^{n_{q}−1} x_{4}^{i}u_{i}$$

- $P$ sets $p(X)=q'(X)+x_{4}\sum_{i=0}^{n_{q}−1} x_{4}^{i}q_{i}(X)$.
- $P$ samples a random polynomial $s(X)$ of degree $n−1$ with a root at $x_{3}$ and sends a commitment $S=⟨s,G⟩+[⋅]W$ where $s$ defines the coefficients of $s(X)$.
- $V$ responds with challenges $ξ,z$.
- $V$ sets $P_{′}=P−[v]G_{0}+[ξ]S$.
- $P$ sets $p_{′}(X)=p(X)−p(x_{3})+ξs(X)$ (where $p(x_{3})$ should correspond with the verifier's computed value $v$).
- Initialize $p'$ as the coefficients of $p'(X)$, $G'=G$, and $b=(x_{3}^{0},x_{3}^{1},...,x_{3}^{n−1})$. $P$ and $V$ will interact in the following $k$ rounds, where in the $j$th round, starting in round $j=0$ and ending in round $j=k−1$:

- $P$ sends $L_{j}=⟨p_{′}_{hi},G_{′}_{lo}⟩+[z⟨p_{′}_{hi},b_{lo}⟩]U+[⋅]W$ and $R_{j}=⟨p_{′}_{lo},G_{′}_{hi}⟩+[z⟨p_{′}_{lo},b_{hi}⟩]U+[⋅]W$.
- $V$ responds with challenge $u_{j}$ chosen such that $1+u_{k−1−j}x_{3}^{2^{j}}$ is nonzero.
- $P$ and $V$ set $G':=G'_{lo}+[u_{j}]G'_{hi}$ and $b:=b_{lo}+u_{j}b_{hi}$.
- $P$ sets $p':=p'_{lo}+u_{j}^{-1}p'_{hi}$.

- $P$ sends $c=p_{′}_{0}$ and synthetic blinding factor $f$ computed from the elided blinding factors.
- $V$ accepts only if $\sum_{j=0}^{k−1} [u_{j}^{-1}]L_{j}+P'+\sum_{j=0}^{k−1} [u_{j}]R_{j}=[c]G'_{0}+[cb_{0}z]U+[f]W$.
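The algebra of a single folding round can be checked concretely. The sketch below (toy additive group mod a small prime, blinding and $U$ terms elided, all numeric values illustrative) uses the fold $G':=G'_{lo}+[u]G'_{hi}$, $p':=p'_{lo}+u^{-1}p'_{hi}$; the inverse on the $p'$ half is what makes the cross terms match the $L$ and $R$ commitments, so that each round shifts the claimed inner product by exactly $u^{-1}L+uR$:

```python
p = 101

def ip(a, b):
    """Inner product of like-length vectors mod p (toy group)."""
    return sum(x * y for x, y in zip(a, b)) % p

Gvec = [3, 10, 42, 7]        # toy basis G'
pvec = [5, 6, 7, 8]          # toy coefficient vector p'
u = 4
u_inv = pow(u, -1, p)

half = len(pvec) // 2
p_lo, p_hi = pvec[:half], pvec[half:]
G_lo, G_hi = Gvec[:half], Gvec[half:]

L = ip(p_hi, G_lo)           # ⟨p'_hi, G'_lo⟩  (blinding / U terms elided)
R = ip(p_lo, G_hi)           # ⟨p'_lo, G'_hi⟩

# one folding round
G_new = [(g0 + u * g1) % p for g0, g1 in zip(G_lo, G_hi)]
p_new = [(a0 + u_inv * a1) % p for a0, a1 in zip(p_lo, p_hi)]

# the folded inner product equals the old one shifted by u^{-1}L + uR,
# which is what the verifier's final check accumulates over all k rounds
assert ip(p_new, G_new) == (ip(pvec, Gvec) + u_inv * L + u * R) % p
```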

### Zero-knowledge and Completeness

We claim that this protocol is *perfectly complete*. This can be verified by
inspection of the protocol; given a valid witness $a_{i}(X,⋯)∀i$ the
prover succeeds in convincing the verifier with probability $1$.

We claim that this protocol is *perfect special honest-verifier zero knowledge*.
We do this by showing that a simulator $S$ exists which can produce an
accepting transcript that is equally distributed with a valid prover's
interaction with a verifier with the same public coins. The simulator will act
as an honest prover would, with the following exceptions:

- In step $1$ of the protocol, $S$ chooses random degree $n−1$ polynomials (in $X$) $a_{i}(X,⋯)\;∀i$.
- In step $5$ of the protocol, $S$ chooses random degree $n−1$ polynomials $h_{0}(X),h_{1}(X),...,h_{n_{g}−2}(X)$.
- In step $14$ of the protocol, $S$ chooses a random degree $n−1$ polynomial $q'(X)$.
- In step $20$ of the protocol, $S$ uses its foreknowledge of the verifier's choice of $ξ$ to produce a degree $n−1$ polynomial $s(X)$ conditioned only such that $p(X)−v+ξs(X)$ has a root at $x_{3}$.

First, let us consider why this simulator always succeeds in producing an
*accepting* transcript. $S$ lacks a valid witness and simply commits to
random polynomials whenever knowledge of a valid witness would be required by
the honest prover. The verifier places no conditions on the scalar values in the
transcript. $S$ must only guarantee that the check in step $26$ of the
protocol succeeds. It does so by using its knowledge of the challenge $ξ$ to
produce a polynomial which interferes with $p_{′}(X)$ to ensure it has a root at
$x_{3}$. The transcript will thus always be accepting due to perfect completeness.

In order to see why $S$ produces transcripts distributed identically to the
honest prover, we will look at each piece of the transcript and compare the
distributions. First, note that $S$ (just as the honest prover) uses a
freshly random blinding factor for every group element in the transcript, and so
we need only consider the *scalars* in the transcript. $S$ acts just as the
prover does except in the mentioned cases so we will analyze each case:

- $S$ and an honest prover reveal $n_{e}$ openings of each polynomial $a_{i}(X,⋯)$, and at most one additional opening of each $a_{i}(X,⋯)$ in step $16$. However, the honest prover blinds their polynomials $a_{i}(X,⋯)$ (in $X$) with $n_{e}+1$ random evaluations over the domain $D$. Thus, the openings of $a_{i}(X,⋯)$ at the challenge $x$ (which is prohibited from being $0$ or in the domain $D$ by the protocol) are distributed identically between $S$ and an honest prover.
- Neither $S$ nor the honest prover reveal $h(x)$ as it is computed by the verifier. However, the honest prover may reveal $h_{′}(x_{3})$ --- which has a non-trivial relationship with $h(X)$ --- were it not for the fact that the honest prover also commits to a random degree $n−1$ polynomial $r(X)$ in step $3$, producing a commitment $R$ and ensuring that in step $12$ when the prover sets $q_{0}(X):=x_{1}q_{0}(X)+x_{1}h_{′}(X)+r(X)$ the distribution of $q_{0}(x_{3})$ is uniformly random. Thus, $h_{′}(x_{3})$ is never revealed by the honest prover nor by $S$.
- The expected value of $q_{′}(x_{3})$ is computed by the verifier (in step $18$) and so the simulator's actual choice of $q_{′}(X)$ is irrelevant.
- $p(X)−v+ξs(X)$ is conditioned on having a root at $x_{3}$, but otherwise no conditions are placed on $s(X)$ and so the distribution of the degree $n−1$ polynomial $p(X)−v+ξs(X)$ is uniformly random whether or not $s(X)$ has a root at $x_{3}$. Thus, the distribution of $c$ produced in step $25$ is identical between $S$ and an honest prover. The synthetic blinding factor $f$ also revealed in step $25$ is a trivial function of the prover's other blinding factors and so is distributed identically between $S$ and an honest prover.

Notes:

- In an earlier version of our protocol, the prover would open each individual commitment $H_{0},H_{1},...$ at $x$ as part of the multipoint opening argument, and the verifier would confirm that a linear combination of these openings (with powers of $x^{n}$) agreed with the expected value of $h(x)$. This was done because it is more efficient in recursive proofs. However, it was unclear to us what the expected distribution of the openings of these commitments $H_{0},H_{1},...$ was, and so proving that the argument was zero knowledge would be difficult. Instead, we changed the argument so that the *verifier* computes a linear combination of the commitments and that linear combination is opened at $x$. This avoids leaking $h_{i}(x)$.
- As mentioned, in step $3$ the prover commits to a random polynomial as a way of ensuring that $h'(x_{3})$ is not revealed in the multiopen argument. This is done because it is unclear what the distribution of $h'(x_{3})$ would be.
- Technically it is also possible for us to prove zero knowledge with a simulator that uses its foreknowledge of the challenge $x$ to commit to an $h(X)$ which agrees at $x$ with the value it is expected to take. This would obviate the need for the random polynomial $r(X)$ in the protocol. It may, however, make the analysis of zero knowledge for the remainder of the protocol a little trickier, so we did not go this route.
- Group element blinding factors are *technically* not necessary after step $23$, in which the polynomial is completely randomized. However, it is simpler in practice for us to ensure that every group element in the protocol is randomly blinded, to make edge cases involving the point at infinity harder to reach.
- It is crucial that the verifier cannot challenge the prover to open polynomials at points in $D$, as otherwise the transcript of an honest prover could be forced to contain portions of the prover's witness. We therefore restrict the space of challenges to exclude all elements of $D$ and, for simplicity, we also prohibit the challenge $0$.

## Witness-extended Emulation

Let $Halo=Halo[G]$ be the interactive argument described above for relation $R$ and some group $G$ with scalar field $F$. We can always construct an extractor $E$ such that for any non-uniform algebraic prover $P_{alg}$ making at most $q$ queries to its oracle, there exists a non-uniform adversary $H$ with the property that for any computationally unbounded distinguisher $D$

$Adv_{Halo,R}(P_{alg},D,E,λ)≤qϵ+Adv_{G,n+2}(H,λ)$

where $ϵ≤\frac{n_{g}⋅(n−1)}{∣Ch∣}$.

*Proof.* We will prove this by invoking Theorem 1 of [GT20]. First, we note that the challenge space is the same in every round, i.e. $Ch_{i}=Ch\;∀i$. Theorem 1 requires us to define:

- $BadCh(tr')⊆Ch$ for all partial transcripts $tr'=(pp,x,[a_{0}],c_{0},…,[a_{i}])$ such that $∣BadCh(tr')∣/∣Ch∣≤ϵ$.
- an extractor function $e$ that takes as input an accepting extended transcript $tr$ and either returns a valid witness or fails.
- a function $p_{fail}(Halo,P_{alg},e,R)$ returning a probability.

We say that an accepting extended transcript $tr$ contains "bad challenges" if and only if there exists a partial extended transcript $tr_{′}$, a challenge $c_{i}∈BadCh(tr_{′})$, and some sequence of prover messages and challenges $([a_{i+1}],c_{i+1},…[a_{j}])$ such that $tr=tr_{′}∣∣(c_{i},[a_{i+1}],c_{i+1},…[a_{j}])$.

Theorem 1 requires that $e$, when given an accepting extended transcript $tr$ that does not contain "bad challenges", returns a valid witness for that transcript except with probability bounded above by $p_{fail}(Halo,P_{alg},e,R)$.

Our strategy is as follows: we will define $e$, establish an upper bound on $p_{fail}$ with respect to an adversary $H$ that plays the $dl-rel_{G,n+2}$ game, substitute these into Theorem 1, and then walk through the protocol to determine the upper bound of the size of $BadCh(tr_{′})$. The adversary $H$ plays the $dl-rel_{G,n+2}$ game as follows: given the inputs $U,W∈G,G∈G_{n}$, the adversary $H$ simulates the game $sr-wee_{Halo,R}$ to $P_{alg}$ using the inputs from the $dl-rel_{G,n+2}$ game as public parameters. If $P_{alg}$ manages to produce an accepting extended transcript $tr$, $H$ invokes a function $h$ on $tr$ and returns its output. We shall define $h$ in such a way that for an accepting extended transcript $tr$ that does not contain "bad challenges", $e(tr)$ *always* returns a valid witness whenever $h(tr)$ does *not* return a non-trivial discrete log relation. This means that the probability $p_{fail}(Halo,P_{alg},e,R)$ is no greater than $Adv_{G,n+2}(H,λ)$, establishing our claim.

#### Helpful substitutions

We will perform some substitutions to aid in exposition. First, let us define the polynomial

$$κ(X)=\prod_{j=0}^{k−1} \left(1+u_{k−1−j}X^{2^{j}}\right)$$

so that we can write $b_{0}=κ(x_{3})$. The coefficient vector $s$ of $κ(X)$ is defined such that

$$s_{i}=\prod_{j=0}^{k−1} u_{k−1−j}^{f(i,j)}$$

where $f(i,j)$ returns $1$ when the $j$th bit of $i$ is set, and $0$ otherwise. We can also write $G'_{0}=⟨s,G⟩$.
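The relationship between $s$, $κ$, and the challenges can be sanity-checked numerically. The following sketch (toy prime and challenge values, chosen arbitrarily) verifies that $⟨s,b⟩$ with $b=(x_{3}^{0},x_{3}^{1},…,x_{3}^{n−1})$ equals $κ(x_{3})$, i.e. that $s$ really is the coefficient vector of $κ(X)$:

```python
p = 101
k = 3
u = [5, 7, 11]               # toy round challenges u_0 .. u_{k-1}
x3 = 9

def f(i, j):
    """1 when the jth bit of i is set, 0 otherwise."""
    return (i >> j) & 1

n = 1 << k
# s_i = prod_j u_{k-1-j}^{f(i, j)}
s = [1] * n
for i in range(n):
    for j in range(k):
        if f(i, j):
            s[i] = s[i] * u[k - 1 - j] % p

# kappa(x3) = prod_j (1 + u_{k-1-j} * x3^(2^j))
kappa = 1
for j in range(k):
    kappa = kappa * (1 + u[k - 1 - j] * pow(x3, 1 << j, p)) % p

# kappa(x3) = ⟨s, (1, x3, ..., x3^{n-1})⟩ = b_0
assert kappa == sum(s[i] * pow(x3, i, p) for i in range(n)) % p
```

Expanding the product for $κ$ picks, for each $j$, either $1$ or $u_{k−1−j}X^{2^{j}}$; the bits of $i$ record those choices, which is exactly the definition of $s_{i}$.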

### Description of function $h$

Recall that an accepting transcript $tr$ is such that

$$\sum_{j=0}^{k−1} [u_{j}^{-1}]\{L_{j}\}+\{P'\}+\sum_{j=0}^{k−1} [u_{j}]\{R_{j}\}=[c]G'_{0}+[czb_{0}]U+[f]W$$

By inspection of the representations of group elements with respect to $G,U,W$ (recall that $P_{alg}$ is algebraic and so $H$ has them), we obtain the $n$ equalities

$$\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{i}+\{P'\}_{i}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{i}=cs_{i}\;∀i∈[0,n)$$

and the equalities

$$\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{U}+\{P'\}_{U}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{U}=czκ(x_{3})$$

$$\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{W}+\{P'\}_{W}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{W}=f$$

We define the linear-time function $h$ that returns the representation of

$$
\begin{array}{rl}
& \sum_{i=0}^{n−1} \left[\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{i}+\{P'\}_{i}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{i}−cs_{i}\right]G_{i} \\
+ & \left[\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{U}+\{P'\}_{U}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{U}−czκ(x_{3})\right]U \\
+ & \left[\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{W}+\{P'\}_{W}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{W}−f\right]W
\end{array}
$$

which is always a discrete log relation. If any of the equalities above are not satisfied, then this discrete log relation is non-trivial. This is the function invoked by $H$.

#### The extractor function $e$

The extractor function $e$ simply returns $a_{i}(X)$ from the representation $\{A_{i}\}$ for $i∈[0,n_{a})$. Due to the restrictions we will place on the space of bad challenges in each round, we are guaranteed to obtain polynomials such that $g(X,C_{0},C_{1},⋯,a_{0}(X),a_{1}(X),⋯)$ vanishes over $D$ whenever the discrete log relation returned by the adversary's function $h$ is trivial. This immediately gives us that the extractor function $e$ fails with probability bounded above by $p_{fail}$, as required.

#### Defining $BadCh(tr_{′})$

Recall from before that the following $n$ equalities hold:

$$\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{i}+\{P'\}_{i}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{i}=cs_{i}\;∀i∈[0,n)$$

as well as the equality

$$\sum_{j=0}^{k−1} u_{j}^{-1}\{L_{j}\}_{U}+\{P'\}_{U}+\sum_{j=0}^{k−1} u_{j}\{R_{j}\}_{U}=czκ(x_{3})$$

For convenience let us introduce the following notation

$$
\begin{array}{rl}
M_{i}(m) &= \sum_{j=0}^{m−1} u_{j}^{-1}\{L_{j}\}_{i}+\{P'\}_{i}+\sum_{j=0}^{m−1} u_{j}\{R_{j}\}_{i} \\
M(m) &= \sum_{j=0}^{m−1} u_{j}^{-1}\{L_{j}\}_{U}+\{P'\}_{U}+\sum_{j=0}^{m−1} u_{j}\{R_{j}\}_{U}
\end{array}
$$

so that we can rewrite the above (after expanding for $κ(x_{3})$) as

$M_{i}(k)=cs_{i}∀i∈[0,n)$

$$M(k)=cz\prod_{j=0}^{k−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)$$

We can combine these equations: since $s_{i}$ is never zero, the first set of equalities gives $c=M_{i}(k)s_{i}^{-1}$, and substituting for $c$ in the second equation yields the following $n$ equalities:

$$M(k)=M_{i}(k)⋅s_{i}^{-1}z\prod_{j=0}^{k−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,n)$$

**Lemma 1.** If $M(k)=M_{i}(k)⋅s_{i}^{-1}z\prod_{j=0}^{k−1}(1+u_{k−1−j}x_{3}^{2^{j}})\;∀i∈[0,n)$, then it follows that

$$\{P'\}_{U}=z\sum_{i=0}^{2^{k}−1} x_{3}^{i}\{P'\}_{i}$$

for all transcripts that do not contain bad challenges.

*Proof.* It will be useful to introduce yet another abstraction, defined starting with $Z_{k}(m,i)=M_{i}(m)$ and then recursively, for all integers $r$ such that $0<r≤k$, by

$$Z_{k−r}(m,i)=Z_{k−r+1}(m,i)+x_{3}^{2^{k−r}}Z_{k−r+1}(m,i+2^{k−r})$$

This allows us to rewrite our above equalities as

$$M(k)=Z_{k}(k,i)⋅s_{i}^{-1}z\prod_{j=0}^{k−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,n)$$

We will now show, for all integers $r$ such that $0<r≤k$, that whenever the following holds for $r$

$$M(r)=Z_{r}(r,i)⋅s_{i}^{-1}z\prod_{j=0}^{r−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,2^{r})$$

the same *also* holds for $r−1$:

$$M(r−1)=Z_{r−1}(r−1,i)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,2^{r−1})$$

For all integers $r$ such that $0<r≤k$ we have $s_{i+2^{r−1}}=u_{r−1}s_{i}\;∀i∈[0,2^{r−1})$ by the definition of $s$. This gives us $s_{i+2^{r−1}}^{-1}=s_{i}^{-1}u_{r−1}^{-1}\;∀i∈[0,2^{r−1})$, as no value in $s$ nor any challenge $u_{j}$ is zero. We can use this to relate one half of the equalities with the other half like so:

$$
\begin{array}{rl}
M(r) &= Z_{r}(r,i)⋅s_{i}^{-1}z\prod_{j=0}^{r−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right) \\
&= Z_{r}(r,i+2^{r−1})⋅s_{i}^{-1}u_{r−1}^{-1}z\prod_{j=0}^{r−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,2^{r−1})
\end{array}
$$

Notice that $Z_{r}(r,i)$ can be rewritten as $u_{r−1}^{-1}\{L_{r−1}\}_{i}+Z_{r}(r−1,i)+u_{r−1}\{R_{r−1}\}_{i}$ for all $i∈[0,2^{r})$. Thus we can rewrite the above as

$$
\begin{array}{rl}
M(r) &= \left(u_{r−1}^{-1}\{L_{r−1}\}_{i}+Z_{r}(r−1,i)+u_{r−1}\{R_{r−1}\}_{i}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right) \\
&= \left(u_{r−1}^{-1}\{L_{r−1}\}_{i+2^{r−1}}+Z_{r}(r−1,i+2^{r−1})+u_{r−1}\{R_{r−1}\}_{i+2^{r−1}}\right)⋅s_{i}^{-1}u_{r−1}^{-1}z\prod_{j=0}^{r−1} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right) \\
&\quad ∀i∈[0,2^{r−1})
\end{array}
$$

Now let us rewrite these equalities substituting $u_{r−1}$ with formal indeterminate $X$.

$$
\begin{array}{l}
X^{-1}\{L_{r−1}\}_{U}+M(r−1)+X\{R_{r−1}\}_{U} \\
\quad = \left(X^{-1}\{L_{r−1}\}_{i}+Z_{r}(r−1,i)+X\{R_{r−1}\}_{i}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(1+x_{3}^{2^{r−1}}X\right) \\
\quad = \left(X^{-1}\{L_{r−1}\}_{i+2^{r−1}}+Z_{r}(r−1,i+2^{r−1})+X\{R_{r−1}\}_{i+2^{r−1}}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(X^{-1}+x_{3}^{2^{r−1}}\right) \\
\qquad ∀i∈[0,2^{r−1})
\end{array}
$$

Now let us rescale everything by $X^{2}$ to remove negative exponents.

$$
\begin{array}{l}
X\{L_{r−1}\}_{U}+X^{2}M(r−1)+X^{3}\{R_{r−1}\}_{U} \\
\quad = \left(X^{-1}\{L_{r−1}\}_{i}+Z_{r}(r−1,i)+X\{R_{r−1}\}_{i}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(X^{2}+x_{3}^{2^{r−1}}X^{3}\right) \\
\quad = \left(X^{-1}\{L_{r−1}\}_{i+2^{r−1}}+Z_{r}(r−1,i+2^{r−1})+X\{R_{r−1}\}_{i+2^{r−1}}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(X+x_{3}^{2^{r−1}}X^{2}\right) \\
\qquad ∀i∈[0,2^{r−1})
\end{array}
$$

This gives us $2^{r−1}$ triples of polynomials in $X$ of degree at most $4$ that agree at $u_{r−1}$ despite having coefficients determined prior to the choice of $u_{r−1}$. The probability that two of these polynomials would agree at $u_{r−1}$ and yet be distinct is $\frac{4}{∣Ch∣}$ by the Schwartz-Zippel lemma, and so by the union bound the probability that the three polynomials agree and yet any of them is distinct from another is $\frac{8}{∣Ch∣}$. By the union bound again, the probability that any of the $2^{r−1}$ triples contains multiple distinct polynomials is $\frac{2^{r−1}⋅8}{∣Ch∣}$. By restricting the challenge space for $u_{r−1}$ accordingly we obtain $∣BadCh(tr'∣_{u_{r}})∣/∣Ch∣≤\frac{2^{r−1}⋅8}{∣Ch∣}$ for integers $0<r≤k$, and thus $∣BadCh(tr'∣_{u_{k}})∣/∣Ch∣≤\frac{4n}{∣Ch∣}≤ϵ$.

We can now conclude an equality of polynomials, and thus of coefficients. Consider the coefficients of the constant terms first, which gives us the $2^{r−1}$ equalities

$$0=0=s_{i}^{-1}z\left(\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\right)⋅\{L_{r−1}\}_{i+2^{r−1}}\;∀i∈[0,2^{r−1})$$

No value of $s$ is zero, $z$ is never chosen to be $0$, and each $u_{j}$ is chosen so that $1+u_{k−1−j}x_{3}^{2^{j}}$ is nonzero, so we can then conclude

$$0=\{L_{r−1}\}_{i+2^{r−1}}\;∀i∈[0,2^{r−1})$$

An identical process can be followed with respect to the coefficients of the $X^{4}$ term in the equalities to establish $0=\{R_{r−1}\}_{i}\;∀i∈[0,2^{r−1})$, contingent on $x_{3}$ being nonzero, which it always is. Substituting these into our equalities yields something simpler:

$$
\begin{array}{l}
X\{L_{r−1}\}_{U}+X^{2}M(r−1)+X^{3}\{R_{r−1}\}_{U} \\
\quad = \left(X^{-1}\{L_{r−1}\}_{i}+Z_{r}(r−1,i)\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(X^{2}+x_{3}^{2^{r−1}}X^{3}\right) \\
\quad = \left(Z_{r}(r−1,i+2^{r−1})+X\{R_{r−1}\}_{i+2^{r−1}}\right)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\left(X+x_{3}^{2^{r−1}}X^{2}\right) \\
\qquad ∀i∈[0,2^{r−1})
\end{array}
$$

Now we will consider the coefficients in $X$, which yield the equalities

$$
\begin{array}{rl}
\{L_{r−1}\}_{U} &= s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)⋅\{L_{r−1}\}_{i} \\
&= s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)⋅Z_{r}(r−1,i+2^{r−1})\;∀i∈[0,2^{r−1})
\end{array}
$$

which, for similar reasoning as before, yields the equalities

$$\{L_{r−1}\}_{i}=Z_{r}(r−1,i+2^{r−1})\;∀i∈[0,2^{r−1})$$

Finally we will consider the coefficients in $X^{2}$, which yield the equalities

$$M(r−1)=s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)⋅\left(Z_{r}(r−1,i)+\{L_{r−1}\}_{i}x_{3}^{2^{r−1}}\right)\;∀i∈[0,2^{r−1})$$

which by substitution gives us, $∀i∈[0,2^{r−1})$,

$$M(r−1)=s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)⋅\left(Z_{r}(r−1,i)+Z_{r}(r−1,i+2^{r−1})x_{3}^{2^{r−1}}\right)$$

Notice that by the definition of $Z_{r−1}(m,i)$ we can rewrite this as

$$M(r−1)=Z_{r−1}(r−1,i)⋅s_{i}^{-1}z\prod_{j=0}^{r−2} \left(1+u_{k−1−j}x_{3}^{2^{j}}\right)\;∀i∈[0,2^{r−1})$$

which is precisely in the form we set out to demonstrate.

We now proceed by induction from the case $r=k$ (which we know holds) down to $r=0$, which gives us

$$M(0)=Z_{0}(0,0)⋅s_{0}^{-1}z$$

and because $M(0)=\{P'\}_{U}$ and $Z$