Fields
A fundamental component of many cryptographic protocols is the algebraic structure known as a field. Fields are sets of objects (usually numbers) with two associated binary operators $+$ and $×$ such that various field axioms hold. The real numbers $R$ are an example of a field with uncountably many elements.
Halo makes use of finite fields which have a finite number of elements. Finite fields are fully classified as follows:
 if $F$ is a finite field, it contains $|F| = p^k$ elements for some integer $k ≥ 1$ and some prime $p$;
 any two finite fields with the same number of elements are isomorphic. In particular, all of the arithmetic in a prime field $F_{p}$ is isomorphic to addition and multiplication of integers modulo $p$, i.e. in $Z_{p}$. This is why we often refer to $p$ as the modulus.
We'll write a field as $F_q$ where $q = p^k$. The prime $p$ is called its characteristic. In the cases where $k > 1$, the field $F_q$ is a degree-$k$ extension of the field $F_p$. (By analogy, the complex numbers $C = R(i)$ are an extension of the real numbers.) However, in Halo we do not use extension fields. Whenever we write $F_p$ we are referring to what we call a prime field, which has a prime number $p$ of elements, i.e. $k = 1$.
Important notes:
 There are two special elements in any field: $0$, the additive identity, and $1$, the multiplicative identity.
 The least significant bit of a field element, when represented as an integer in binary format, can be interpreted as its "sign" to help distinguish it from its additive inverse (negation). This is because for any nonzero element $a$ with least significant bit $0$, the negation $−a = p − a$ has least significant bit $1$, and vice versa; the two bits must differ since $a + (p − a) = p$ is odd. We could also use whether or not an element is larger than $(p−1)/2$ to give it a "sign."
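This sign convention is easy to sanity-check with a short script; a sketch using a toy prime $p = 11$ (any odd prime works the same way):

```python
p = 11  # toy odd prime; the argument only uses the fact that p is odd

# a + (p - a) = p is odd, so a and its negation always differ
# in their least significant bit.
for a in range(1, p):
    assert (a & 1) != ((p - a) & 1)
```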
Finite fields will be useful later for constructing polynomials and elliptic curves. Elliptic curves are examples of groups, which we discuss next.
Groups
Groups are simpler and more limited than fields; they have only one binary operator $⋅$ and fewer axioms. They also have an identity, which we'll denote as $1$.
Any element $a$ in a group has an inverse $b = a^{-1}$, which is the unique element $b$ such that $a ⋅ b = 1$.
For example, the set of nonzero elements of $F_p$ forms a group, where the group operation is given by multiplication in the field.
(aside) Additive vs multiplicative notation
If $⋅$ is written as $×$ or omitted (i.e. $a ⋅ b$ written as $ab$), the identity as $1$, and inversion as $a^{-1}$, as we did above, then we say that the group is "written multiplicatively". If $⋅$ is written as $+$, the identity as $0$ or $O$, and inversion as $−a$, then we say it is "written additively".
It's conventional to use additive notation for elliptic curve groups, and multiplicative notation when the elements come from a finite field.
When additive notation is used, we also write
$[k] A = \underbrace{A + A + \cdots + A}_{k\text{ times}}$
for nonnegative $k$ and call this "scalar multiplication"; we also often use uppercase letters for variables denoting group elements. When multiplicative notation is used, we also write
$a^k = \underbrace{a × a × \cdots × a}_{k\text{ times}}$
and call this "exponentiation". In either case we call the scalar $k$ such that $[k]g = a$ or $g^k = a$ the "discrete logarithm" of $a$ to base $g$. We can extend scalars to negative integers by inversion, i.e. $[−k]A + [k]A = O$ or $a^{−k} × a^k = 1$.
The order of an element $a$ of a finite group is defined as the smallest positive integer $k$ such that $a^k = 1$ (in multiplicative notation) or $[k]a = O$ (in additive notation). The order of the group is the number of elements.
Groups always have a generating set, which is a set of elements such that we can produce any element of the group as (in multiplicative terminology) a product of powers of those elements. So if the generating set is $g_{1..k}$, we can produce any element of the group as $\prod_{i=1}^{k} g_i^{a_i}$ for some integers $a_i$. There can be many different generating sets for a given group.
A group is called cyclic if it has a (not necessarily unique) generating set with only a single element — call it $g$. In that case we can say that $g$ generates the group, and that the order of $g$ is the order of the group.
Any finite cyclic group $G$ of order $n$ is isomorphic to the integers modulo $n$ (denoted $Z/nZ$), such that:
 the operation $⋅$ in $G$ corresponds to addition modulo $n$;
 the identity in $G$ corresponds to $0$;
 some generator $g∈G$ corresponds to $1$.
Given a generator $g$, the isomorphism is always easy to compute in the $Z/nZ → G$ direction; it is just $a ↦ g^a$ (or in additive notation, $a ↦ [a]g$). It may be difficult in general to compute in the $G → Z/nZ$ direction; we'll discuss this further when we come to elliptic curves.
If the order $n$ of a finite group is prime, then the group is cyclic, and every nonidentity element is a generator.
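The prime-order fact can be checked concretely; a sketch using the additive group of integers modulo $7$ as a stand-in cyclic group:

```python
n = 7  # prime group order

# In a group of prime order, every non-identity element is a generator.
# Here the group is (Z/7Z, +), so "powers" of g are multiples of g.
for g in range(1, n):
    generated = {(g * k) % n for k in range(n)}
    assert generated == set(range(n))
```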
The multiplicative group of a finite field
We use the notation $F_p^\times$ for the multiplicative group (i.e. the group operation is multiplication in $F_p$) over the set $F_p − \{0\}$.
A quick way of obtaining the inverse in $F_p^\times$ is $a^{-1} = a^{p-2}$. The reason for this stems from Fermat's little theorem, which states that $a^p ≡ a \pmod{p}$ for any integer $a$. If $a$ is nonzero, we can divide by $a$ twice to get $a^{p-2} = a^{-1}.$
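A sketch of this inverse computation in Python, using the built-in three-argument modular exponentiation and a small illustrative prime:

```python
p = 101  # small prime for illustration

# Fermat's little theorem: a^(p-1) = 1 for nonzero a, so a^(p-2) = a^(-1).
for a in range(1, p):
    inv = pow(a, p - 2, p)
    assert a * inv % p == 1
```

Recent Python versions also accept `pow(a, -1, p)` directly for the modular inverse.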
Let's assume that $α$ is a generator of $F_p^\times$, so it has order $p−1$ (equal to the number of elements in $F_p^\times$). Therefore, for any element $a ∈ F_p^\times$ there is a unique integer $i ∈ \{0..p−2\}$ such that $a = α^i$.
Notice that $a × b$ where $a, b ∈ F_p^\times$ can really be interpreted as $α^i × α^j$ where $a = α^i$ and $b = α^j$. Indeed, it holds that $α^i × α^j = α^{i+j}$ for all $0 ≤ i, j < p−1$. As a result the multiplication of nonzero field elements can be interpreted as addition modulo $p−1$ with respect to some fixed generator $α$. The addition just happens "in the exponent."
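The "addition in the exponent" view can be checked exhaustively for a small field; a sketch with $p = 11$ and $α = 2$ (which happens to generate $F_{11}^\times$):

```python
p, alpha = 11, 2

# Sanity check: alpha really generates all of the nonzero elements.
assert {pow(alpha, i, p) for i in range(p - 1)} == set(range(1, p))

# Multiplying nonzero field elements adds their discrete logs mod p - 1.
for i in range(p - 1):
    for j in range(p - 1):
        lhs = pow(alpha, i, p) * pow(alpha, j, p) % p
        rhs = pow(alpha, (i + j) % (p - 1), p)
        assert lhs == rhs
```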
This is another way to look at where $a_{p−2}$ comes from for computing inverses in the field:
$p − 2 ≡ −1 \pmod{p − 1},$
so $a^{p−2} = a^{−1}$.
Montgomery's Trick
Montgomery's trick, named after Peter Montgomery (RIP), is a way to compute many group inversions at the same time. It is commonly used to compute inversions in $F_p^\times$, which are quite computationally expensive compared to multiplication.
Imagine we need to compute the inverses of three nonzero elements $a,b,c∈F_{p}$. Instead, we'll compute the products $x=ab$ and $y=xc=abc$, and compute the inversion
$z = y^{p−2} = \frac{1}{abc}.$
We can now multiply $z$ by $x$ to obtain $\frac{1}{c}$ and multiply $z$ by $c$ to obtain $\frac{1}{ab}$, which we can then multiply by $a$ and $b$ respectively to obtain $\frac{1}{b}$ and $\frac{1}{a}$.
This technique generalizes to arbitrary numbers of group elements with just a single inversion necessary.
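A sketch of the generalized technique in Python (the helper name `batch_invert` is ours; the single inversion uses Fermat's little theorem):

```python
def batch_invert(elements, p):
    """Invert a list of nonzero elements of F_p with one field inversion.

    Montgomery's trick: build prefix products, invert the total product
    once, then walk backwards peeling off one inverse at a time.
    """
    # prefix[i] = elements[0] * ... * elements[i] (mod p)
    prefix = []
    acc = 1
    for e in elements:
        acc = acc * e % p
        prefix.append(acc)
    # The single inversion, of the product of all elements.
    inv = pow(acc, p - 2, p)
    inverses = [0] * len(elements)
    for i in range(len(elements) - 1, 0, -1):
        # inv currently equals (elements[0] * ... * elements[i])^(-1).
        inverses[i] = inv * prefix[i - 1] % p
        inv = inv * elements[i] % p
    inverses[0] = inv
    return inverses

p = 101
xs = [3, 7, 42, 99]
for x, ix in zip(xs, batch_invert(xs, p)):
    assert x * ix % p == 1
```

Only one modular exponentiation is performed regardless of how many elements are inverted; everything else is multiplication.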
Multiplicative subgroups
A subgroup of a group $G$ with operation $⋅$ is a subset of elements of $G$ that also forms a group under $⋅$.
In the previous section we said that $α$ is a generator of the $(p−1)$-order multiplicative group $F_p^\times$. This group has composite order, and so by the Chinese remainder theorem^{1} it has strict subgroups. As an example let's imagine that $p = 11$, and so $p − 1$ factors into $5 ⋅ 2$. Thus, there is a generator $β$ of the $5$-order subgroup and a generator $γ$ of the $2$-order subgroup. All elements in $F_p^\times$, therefore, can be written uniquely as $β^i ⋅ γ^j$ for some $i$ (modulo $5$) and some $j$ (modulo $2$).
If we have $a = β^i ⋅ γ^j$, notice what happens when we compute
$a^5 = (β^i ⋅ γ^j)^5 = β^{5i} ⋅ γ^{5j} = β^0 ⋅ γ^{5j} = γ^{5j};$
we have effectively "killed" the $5$-order subgroup component, producing a value in the $2$-order subgroup.
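For $p = 11$ this is easy to check exhaustively; a sketch:

```python
p = 11  # p - 1 = 10 = 5 * 2

# Raising to the 5th power kills the 5-order component, so every result
# lies in the 2-order subgroup, which is {1, 10} (i.e. {1, -1}).
for a in range(1, p):
    assert pow(a, 5, p) in {1, p - 1}
```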
Lagrange's theorem (group theory) states that the order of any subgroup $H$ of a finite group $G$ divides the order of $G$. Therefore, the order of any subgroup of $F_p^\times$ must divide $p − 1$.
PLONK-based proving systems like Halo 2 are more convenient to use with fields that have a large number of multiplicative subgroups with a "smooth" distribution (which makes the performance cliffs smaller and more granular as circuit sizes increase). The Pallas and Vesta curves specifically have primes of the form
$p − 1 = 2^S ⋅ T$
with $S = 32$ and $T$ odd (i.e. $p − 1$ has 32 lower zero bits). This means they have multiplicative subgroups of order $2^k$ for all $k ≤ 32$. These $2$-adic subgroups are nice for efficient FFTs, as well as enabling a wide variety of circuit sizes.
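The Pallas and Vesta moduli are too unwieldy to reproduce here, so here is a sketch of the same structure with a toy prime of the same shape, $p = 97 = 3 ⋅ 2^5 + 1$ (so $S = 5$, $T = 3$):

```python
p = 97  # toy prime of the shape p - 1 = T * 2^S with T odd

# Compute the 2-adicity S and odd part T of p - 1.
S, T = 0, p - 1
while T % 2 == 0:
    T //= 2
    S += 1
assert (S, T) == (5, 3)

# Find a generator omega of the 2^S-order subgroup: x^T has order dividing
# 2^S, and the order is exactly 2^S when its 2^(S-1) power is not 1.
omega = next(
    w for w in (pow(x, T, p) for x in range(2, p))
    if pow(w, 2 ** (S - 1), p) != 1
)

# Squaring omega repeatedly yields a generator of each 2^k-order subgroup,
# so subgroups of order 2^k exist for all k <= S.
for k in range(S, 0, -1):
    assert pow(omega, 2 ** k, p) == 1 and pow(omega, 2 ** (k - 1), p) != 1
    omega = omega * omega % p
```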
Square roots
In a field $F_p$ exactly half of all nonzero elements are squares; the remainder are non-squares or "quadratic non-residues". In order to see why, consider an $α$ that generates the $2$-order multiplicative subgroup of $F_p^\times$ (this exists because $p − 1$ is divisible by $2$, since $p$ is a prime greater than $2$) and a $β$ that generates the $t$-order multiplicative subgroup of $F_p^\times$, where $p − 1 = 2t$. Then every element $a ∈ F_p^\times$ can be written uniquely as $α^i ⋅ β^j$ with $i ∈ Z_2$ and $j ∈ Z_t$. Half of all elements will have $i = 0$ and the other half will have $i = 1$.
Let's consider the simple case where $p ≡ 3 \pmod 4$, and so $t$ is odd (if $t$ were even, then $p − 1$ would be divisible by $4$, which contradicts $p$ being $3 \pmod 4$). If $a ∈ F_p^\times$ is a square, then there must exist $b = α^i ⋅ β^j$ such that $b^2 = a$. But this means that
$a = (α^i ⋅ β^j)^2 = α^{2i} ⋅ β^{2j} = β^{2j}.$
In other words, all squares in this particular field do not generate the $2$-order multiplicative subgroup, and so since half of the elements generate the $2$-order subgroup, at most half of the elements are square. In fact exactly half of the elements are square (since squaring each non-square element gives a unique square). This means we can assume all squares can be written as $β^m$ for some $m$, and therefore finding the square root is a matter of exponentiating by $2^{-1} \pmod{t}$.
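For the $p ≡ 3 \pmod 4$ case, the exponent $2^{-1} \bmod t$ works out to the familiar exponent $(p + 1)/4$; a sketch with $p = 11$ (the helper name is ours):

```python
p = 11  # p % 4 == 3, so t = (p - 1) // 2 = 5 is odd

def sqrt_p3mod4(a, p):
    # (p + 1) // 4 is congruent to 2^(-1) mod t, so for a square a = beta^m
    # this returns beta^(m * 2^(-1) mod t), a square root of a.
    return pow(a, (p + 1) // 4, p)

squares = {a * a % p for a in range(1, p)}
assert len(squares) == (p - 1) // 2  # exactly half the nonzero elements
for a in squares:
    r = sqrt_p3mod4(a, p)
    assert r * r % p == a
```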
In the event that $p ≡ 1 \pmod 4$, things get more complicated because $2^{-1} \pmod{t}$ does not exist. Let's write $p − 1$ as $2^k ⋅ t$ with $t$ odd. The case $k = 0$ is impossible, and the case $k = 1$ is what we already described, so consider $k ≥ 2$. $α$ generates a $2^k$-order multiplicative subgroup and $β$ generates the odd $t$-order multiplicative subgroup. Then every element $a ∈ F_p^\times$ can be written as $α^i ⋅ β^j$ for $i ∈ Z_{2^k}$ and $j ∈ Z_t$. If the element is a square, then there exists some $b = \sqrt{a}$ which can be written $b = α^{i'} ⋅ β^{j'}$ for $i' ∈ Z_{2^k}$ and $j' ∈ Z_t$. This means that $a = b^2 = α^{2i'} ⋅ β^{2j'}$, therefore we have $i ≡ 2i' \pmod{2^k}$ and $j ≡ 2j' \pmod{t}$. $i$ would have to be even in this case, because otherwise it would be impossible to have $i ≡ 2i' \pmod{2^k}$ for any $i'$. In the case that $a$ is not a square, $i$ is odd, and so half of all elements are squares.
In order to compute the square root, we can first raise the element $a = α^i ⋅ β^j$ to the power $t$ to "kill" the $t$-order component, giving
$a^t = α^{it \bmod 2^k} ⋅ β^{jt \bmod t} = α^{it \bmod 2^k}$
and then raise this result to the power $t^{-1} \pmod{2^k}$ to undo the effect of the original exponentiation on the $2^k$-order component:
$(α^{it \bmod 2^k})^{t^{-1} \bmod 2^k} = α^i$
(since $t$ is relatively prime to $2^k$). This leaves bare the $α^i$ value, which we can trivially handle. We can similarly kill the $2^k$-order component to obtain $β^{j ⋅ 2^{-1} \bmod t}$, and put the values together to obtain the square root.
It turns out that in the cases $k = 2, 3$ there are simpler algorithms that merge several of these exponentiations together for efficiency. For other values of $k$, the only known way is to manually extract $i$, by repeated squaring, one bit of $i$ at a time. This is the essence of the Tonelli–Shanks square root algorithm and describes the general strategy. (There is another square root algorithm that uses quadratic extension fields, but it doesn't pay off in efficiency until the prime becomes quite large.)
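The whole strategy can be sketched as a Tonelli–Shanks implementation (variable names are ours; it assumes $a$ is a square modulo an odd prime $p$):

```python
def tonelli_shanks(a, p):
    """Square root of a quadratic residue a modulo an odd prime p (sketch)."""
    # Write p - 1 = 2^k * t with t odd.
    t, k = p - 1, 0
    while t % 2 == 0:
        t //= 2
        k += 1
    # Find a non-residue z via Euler's criterion; z^t then generates
    # the 2^k-order subgroup.
    z = 2
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    c = pow(z, t, p)
    r = pow(a, (t + 1) // 2, p)  # candidate root, correct up to the 2^k part
    b = pow(a, t, p)             # the leftover 2^k-order component to clear
    m = k
    while b != 1:
        # Find the least s with b^(2^s) = 1, squaring bit by bit.
        s, b2 = 0, b
        while b2 != 1:
            b2 = b2 * b2 % p
            s += 1
        # Fold the appropriate power of c into r to clear one more bit.
        g = pow(c, 1 << (m - s - 1), p)
        r = r * g % p
        c = g * g % p
        b = b * c % p
        m = s
    return r

p = 13
r = tonelli_shanks(10, p)  # 10 is a square mod 13
assert r * r % p == 10
```

For $p ≡ 3 \pmod 4$ (i.e. $k = 1$) the loop body never runs and this degenerates to the single $(p+1)/4$ exponentiation described above.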
Roots of unity
In the previous sections we wrote $p − 1 = 2^k ⋅ t$ with $t$ odd, and stated that an element $α ∈ F_p$ generated the $2^k$-order subgroup. For convenience, let's denote $n := 2^k$. The elements $\{1, α, \ldots, α^{n−1}\}$ are known as the $n$th roots of unity.
The primitive root of unity, $ω$, is an $n$th root of unity such that $ω^i ≠ 1$ except when $i ≡ 0 \pmod{n}$.
Important notes:

If $α$ is an $n$th root of unity, $α$ satisfies $α^n − 1 = 0$. If $α ≠ 1$, then $1 + α + α^2 + ⋯ + α^{n−1} = 0$ (divide $α^n − 1 = 0$ by $α − 1$).

Equivalently, the roots of unity are solutions to the equation $X^n − 1 = (X − 1)(X − α)(X − α^2) ⋯ (X − α^{n−1}).$

$ω^{\frac{n}{2}+i} = −ω^i$ ("Negation lemma"). Proof: $ω^n = 1 ⟹ ω^n − 1 = 0 ⟹ (ω^{n/2} + 1)(ω^{n/2} − 1) = 0.$ Since the order of $ω$ is $n$, $ω^{n/2} ≠ 1$. Therefore, $ω^{n/2} = −1.$

$(ω^{\frac{n}{2}+i})^2 = (ω^i)^2$ ("Halving lemma"). Proof: $(ω^{\frac{n}{2}+i})^2 = ω^{n+2i} = ω^n ⋅ ω^{2i} = ω^{2i} = (ω^i)^2.$ In other words, if we square each element in the $n$th roots of unity, we get back only half the elements, $\{(ω^i)^2\} = \{ω^{2i}\}$, i.e. the $\frac{n}{2}$th roots of unity. There is a two-to-one mapping between the elements and their squares.
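Both lemmas can be checked concretely; a sketch in $F_{97}$ with $n = 8$ (any prime $p$ with $n \mid p − 1$ works the same way):

```python
p, n = 97, 8  # n divides p - 1 = 96, so F_97 contains the 8th roots of unity

# Find a primitive n-th root of unity: an element of exact order n.
omega = next(
    w for w in (pow(x, (p - 1) // n, p) for x in range(2, p))
    if pow(w, n // 2, p) != 1
)
roots = {pow(omega, i, p) for i in range(n)}
assert len(roots) == n and pow(omega, n, p) == 1

# Negation lemma: omega^(n/2 + i) == -omega^i.
for i in range(n):
    assert pow(omega, n // 2 + i, p) == (p - pow(omega, i, p)) % p

# Halving lemma: squaring maps the n-th roots two-to-one onto the
# (n/2)-th roots of unity.
squared = {r * r % p for r in roots}
assert squared == {pow(omega, 2 * i, p) for i in range(n // 2)}
assert len(squared) == n // 2
```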