Hello, thank you for watching this talk on polynomial IOPs for linear algebra. This is joint work between Alan Szepieniec and myself, Yuncong Zhang. This work deals with succinct non-interactive arguments of knowledge, or SNARKs. In this setting, there are two parties, the prover and the verifier. The prover is trying to convince the verifier that, given the instance x, the prover knows a witness w such that x and w satisfy a certain relation, typically described by a computation C. This job can be accomplished by a SNARK. A SNARK consists of three algorithms, namely the setup, which often involves preprocessing the computation C; the prover, which takes both the instance and the witness as input, produces a proof string denoted by pi, and sends it to the verifier; and the verifier, which decides whether to accept this proof. A SNARK has the following properties. By succinctness, we require that the proof size is logarithmic in the circuit size. Sometimes this logarithmic restriction also applies to the running time of the verifier. By non-interactivity, the prover generates the proof string without receiving any information from the verifier, so one proof string can be stored and repeatedly used to convince more than one verifier. Finally, by argument of knowledge, the prover shows that the witness w not only exists but is known to the prover. Recently, many SNARKs have been constructed. Most constructions fit into the following pipeline. This pipeline starts from the computation, the C in the previous slide, which is usually described by a program or a circuit. The first step, called arithmetization, transforms this computation into a form that is more friendly to mathematicians. R1CS and HPR are popular candidates for this step. The next step, which is really the most complex step, is to design an information-theoretic protocol for the arithmetized representation. This protocol is designed in the context of an idealized model, for example, PCP, linear PCP, IOP, and so on.
As pointed out by Bünz et al., all these idealized models can be viewed as special cases of the polynomial IOP model. A polynomial IOP is an interactive protocol where all the messages sent from the prover to the verifier are polynomials over a finite field. These polynomials can be huge, and the verifier cannot read them in their entirety, because the verifier is much weaker than the prover. Therefore, in the polynomial IOP model, the verifier only receives an evaluation oracle for each polynomial. The verifier can pick arbitrary points, say y and z, and query this oracle to obtain f(y) and f(z), respectively. Obviously, polynomial oracles do not exist in the real world, so we need to replace them by a cryptographic tool called polynomial commitments. This brings us back to the last step of the construction pipeline, where we finally get a SNARK by compiling the information-theoretic protocol using cryptographic tools, like the KZG or DARK polynomial commitments, and the Fiat-Shamir heuristic, which is frequently used to transform interactive protocols into non-interactive schemes. In this work, we focus on the second step, the information-theoretic compilation. This step transforms an arithmetic representation into a polynomial IOP. The arithmetic representation is typically described by matrices, vectors, and operations in linear algebra. The polynomial IOP, however, provides a different interface: the objects are polynomials, and the operations are evaluations. Therefore, the key questions in designing polynomial IOPs are representing the matrices and vectors by polynomials, and simulating the operations of linear algebra by those of polynomials. Most existing works take the Reed-Solomon code basis representation, where a vector is identified with the evaluations of a polynomial over a domain, which is a subset of the finite field the SNARK is based on.
The problem with this representation is that the domain must have a nice algebraic structure, so the choices of finite fields are limited. A more straightforward approach is to use the monomial basis, which directly embeds the vector entries into the polynomial coefficients. This approach does not put any limits on the choice of finite fields. However, few works take this approach, as the methods for simulating the linear algebra operations in this representation are largely unexplored. The only exception, Sonic, is outperformed by popular works like PLONK or Marlin that are based on the Reed-Solomon code basis. To uncover the potential of the monomial basis representation, we construct a polynomial IOP, which we name Claymore, after a type of Scottish sword. Our work reveals a rich set of linear algebra operations that can be implemented in the monomial basis with competitive efficiency. Claymore is built for the arithmetic representation of circuits called the Hadamard Product Relation, or HPR for short. This is a variant of the circuit representation proposed by the work of Bootle et al. in 2016. This relation is indexed by a matrix M, which is determined by the circuit. The instance of HPR is a sparse vector x, corresponding to the public inputs and outputs of the circuit. The witness w consists of three vectors that respectively correspond to the wires of the left inputs, right inputs, and outputs of the multiplication gates. The instance and witness pair satisfies the HPR if the entrywise multiplication of the left inputs and the right inputs exactly produces the values of the output wires, and the wire values satisfy the linear relations specified by the matrix M. The first step to construct the polynomial IOP is to represent the objects in HPR by polynomials. As mentioned before, we choose the monomial basis representation. So for a vector a, the entries of the vector are taken as the polynomial coefficients. We denote the polynomial by f subscript a.
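To make the representation concrete, here is a minimal sketch of my own (not from the slides; the vector values and the toy prime P are arbitrary) of embedding a vector into the coefficients of a polynomial over a small prime field:

```python
P = 97  # a small prime standing in for the SNARK's finite field

def eval_poly(f, x, p=P):
    """Evaluate the coefficient vector f at the point x, modulo p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(f)) % p

a = [3, 1, 4, 1]     # an example vector a
f_a = list(a)        # f_a(x) = 3 + x + 4x^2 + x^3: coefficients = entries

# no evaluation domain is needed; any field element works as a query point
assert eval_poly(f_a, 1) == sum(a) % P   # f_a(1) is the sum of the entries
assert eval_poly(f_a, 0) == a[0]         # f_a(0) is the first entry
```

Note that nothing here constrains the field: any prime (or prime power) works, which is exactly the flexibility the monomial basis buys over evaluation-domain representations.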
For a matrix, we simply concatenate all the row vectors before treating them as the coefficients. Next, we will introduce how to implement the polynomial IOP for all the necessary operations, each in a sub-protocol. We introduce these sub-protocols from bottom to top, and finally compose them into the Claymore protocol. We start from mod-reduce, which is not a linear algebra operation, but is required in the matrix-vector product protocol. The mod-reduce protocol assumes that the verifier already has two polynomial oracles, f(x) and r(x), and convinces the verifier that r(x) is the remainder of f(x) divided by the public polynomial p(x). The protocol is straightforward. The prover divides f(x) by p(x), and sends the quotient polynomial q(x) to the verifier, and the verifier checks the polynomial identity, deduced from the definition of polynomial division, at a uniformly random point. The equality at this random point implies the equality of the entire polynomials, due to the Schwartz-Zippel lemma. Now the mod-reduce protocol is finished. Next is the inner product protocol. This protocol convinces the verifier that the inner product between two vectors a and b is the public value c, when the verifier has oracles for the polynomials f_a(x) and f_b(x). To achieve this, the prover reverses the coefficients of f_a(x). The reversal is achieved by substituting x by x inverse, then multiplying by x to the power of d, the maximum degree of f_a(x). Then multiply this reversed polynomial with f_b(x) to get h(x), whose coefficient for x to the d is exactly the inner product between a and b. So the job of the prover becomes showing that the d-th coefficient of h(x) equals c. To accomplish this, the prover tries to find such an h-bar(x) and express h(x) in this form. It's easy to check that a polynomial of this form is guaranteed to have coefficient c for the d-th power. With a properly chosen gamma, it's also easy to find such an h-bar(x). So the prover sends the oracle of h-bar(x) to the verifier.
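The coefficient-reversal trick at the heart of the inner product protocol can be checked numerically. The following sketch is my own illustration (arbitrary example vectors, a toy prime, and a naive schoolbook multiplication; it is not the protocol itself): multiplying the reversed f_a by f_b places the inner product of a and b at the coefficient of x^d.

```python
P = 97  # toy prime modulus for illustration

def poly_mul(f, g, p=P):
    """Schoolbook multiplication of two coefficient vectors modulo p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % p
    return h

a = [3, 1, 4, 1]          # coefficients of f_a
b = [2, 7, 1, 8]          # coefficients of f_b
d = len(a) - 1            # degree bound d

rev_a = a[::-1]           # coefficients of x^d * f_a(1/x)
h = poly_mul(rev_a, b)    # h(x) = x^d * f_a(1/x) * f_b(x)

inner = sum(x * y for x, y in zip(a, b)) % P
assert h[d] == inner      # the d-th coefficient of h is exactly <a, b>
```

The reversal lines up a_i against b_i so that every product a_i * b_i lands on the x^d term, which is why the single coefficient check suffices.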
Then the verifier checks the validity of h-bar(x) by its definition, at a uniformly randomly sampled point. This completes the inner product protocol. Based on the inner product protocol, we are now ready to implement the Hadamard product. The verifier has access to three polynomial oracles, f_a, f_b, and f_c, and the prover tries to convince the verifier that a times b is c, where the multiplication is the Hadamard product, which is the fancy name for the entrywise product between vectors. The idea is still to use the Schwartz-Zippel lemma. By this lemma, the vector identity a times b equals c is implied by the equality between f_{a∘b}(alpha) and f_c(alpha) for a uniformly random alpha. So the verifier samples alpha and checks this identity. The right-hand side is simply evaluating f_c(x) at alpha, while the left-hand side is the inner product between the coefficient vectors of f_a(alpha x) and f_b(x). Since the verifier has the oracle for f_a(x), the verifier can simulate the oracle for f_a(alpha x) by multiplying by alpha whatever is queried to this oracle. Therefore, the verifier can check the equality by running the inner product protocol with the prover. Now we are left with the last linear operation, and the most complex one, the matrix-vector product. We introduce two protocols for this job. First, consider the case where the matrix B is dense. We call this version the dense MVP protocol. Assume that the size of B is m times n. Recall that the coefficient vector of f_B(x) is the concatenation of all the row vectors. Now for uniformly random alpha, consider reducing f_B(x) modulo x to the n minus alpha. This effectively replaces all the x to the n with alpha. Now look at the remainder polynomial r(x). The coefficient vector of r(x) is exactly the linear combination of the rows of B by one, alpha, alpha squared, all the way to alpha to the m minus one. This is exactly multiplying the alpha vector to the left of the matrix B. With this in mind, the identity c equals Ba can be verified as follows.
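The Hadamard reduction to an inner product can also be sanity-checked numerically. This is my own sketch (arbitrary vectors, a toy prime, and a challenge alpha fixed by hand rather than sampled): if c is the entrywise product of a and b, then the inner product between the coefficients of f_a(alpha x) and those of f_b(x) equals f_c(alpha).

```python
P = 97  # toy prime modulus for illustration

def eval_poly(f, x, p=P):
    """Evaluate the coefficient vector f at the point x, modulo p."""
    return sum(fi * pow(x, i, p) for i, fi in enumerate(f)) % p

a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
c = [(x * y) % P for x, y in zip(a, b)]   # Hadamard product a o b

alpha = 13  # stands in for the verifier's random challenge

# coefficients of f_a(alpha * x): the i-th entry is a_i * alpha^i
a_scaled = [(ai * pow(alpha, i, P)) % P for i, ai in enumerate(a)]

lhs = sum(x * y for x, y in zip(a_scaled, b)) % P   # <coeffs of f_a(ax), b>
rhs = eval_poly(c, alpha)                            # f_c(alpha)
assert lhs == rhs
```

Both sides unfold to the same sum, a_i * b_i * alpha^i over all i, which is why a single random alpha catches any mismatch with high probability by Schwartz-Zippel.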
We multiply the alpha vector to both sides. By the Schwartz-Zippel lemma, this new identity implies the original identity. The left-hand side is simply f_c(alpha). The right-hand side is the inner product between r and the vector a. Now we formalize this observation into the following protocol. Assume the verifier has polynomial oracles f_c, f_B, and f_a. To prove that c equals Ba, the verifier samples a random alpha as discussed before. The prover sends the remainder polynomial r(x) to the verifier. The verifier first ensures that the inner product of r and a is f_c(alpha) by the inner product protocol which we just introduced. Then the verifier checks that r(x) is correct by the mod-reduce protocol. In this protocol, the running time of the prover is at least linear in the total number of entries in the matrix. This is the case even if most of the entries are zero. This can be inefficient, since in practice the matrix is often large and sparse. Therefore, for sparse matrices, where the number of non-zero entries is much smaller than the total number of entries, we introduce an alternative protocol where the running time of the prover depends only on the number of non-zero entries. To explain the sparse MVP protocol, we look at the identity c equals Ba again. We multiply the alpha vector to both sides as we did before. The left-hand side is still f_c(alpha) and the right-hand side is the inner product between r and a. Different from dense MVP, after the prover sends r(x) to the verifier, instead of using the mod-reduce protocol as in dense MVP, we let the verifier validate r(x) with another approach. Now the original problem has been reduced to checking this new identity. We apply the Schwartz-Zippel lemma again by multiplying another random vector, which we call the beta vector, to both sides of the equation, where beta is sampled independently from alpha. Obviously, the left-hand side is r(beta). What about the right-hand side? This is where the sparsity of the matrix comes into play.
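The dense MVP reduction can be verified with a small numerical sketch of my own (a toy 2x3 matrix, arbitrary vector, toy prime, and a hand-picked alpha): reducing the flattened matrix polynomial modulo x^n - alpha yields a remainder whose coefficient vector is the alpha-combination of the rows, and the inner product of that remainder with a matches f_c(alpha) for c = Ba.

```python
P = 97  # toy prime modulus for illustration
B = [[1, 2, 3],
     [4, 5, 6]]           # an m x n matrix, m = 2, n = 3
m, n = len(B), len(B[0])
alpha = 13                # stands in for the verifier's random challenge

# f_B: the rows of B concatenated as polynomial coefficients
fB = [e for row in B for e in row]

# remainder of f_B modulo x^n - alpha: fold each x^(i*n + j) -> alpha^i * x^j
r = [0] * n
for k, coeff in enumerate(fB):
    i, j = divmod(k, n)
    r[j] = (r[j] + coeff * pow(alpha, i, P)) % P

# the remainder equals the row combination (1, alpha, ..., alpha^(m-1)) * B
alpha_vec = [pow(alpha, i, P) for i in range(m)]
aB = [sum(alpha_vec[i] * B[i][j] for i in range(m)) % P for j in range(n)]
assert r == aB

# and <r, a> equals f_c(alpha) for c = B * a
a = [7, 8, 9]
c = [sum(B[i][j] * a[j] for j in range(n)) % P for i in range(m)]
fc_alpha = sum(c[i] * pow(alpha, i, P) for i in range(m)) % P
inner_ra = sum(r[j] * a[j] for j in range(n)) % P
assert fc_alpha == inner_ra
```

The fold is exactly the "replace x^n by alpha" step from the slides, which is why the remainder computes the left-multiplication by the alpha vector.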
Assume there are k non-zero entries in the matrix, with row indices a_1, a_2, up to a_k, column indices b_1, b_2, up to b_k, and entry values c_1, c_2, up to c_k. Then the right-hand side can be written as a sum of k items. Writing this sum in the form of vectors, we denote the vector of all the c_l by v. The other two multiplicands are similarly collected into vectors of size k, denoted by x and y respectively. Then the sum is the inner product between v and the Hadamard product of x and y. The verifier can then check this identity by a Hadamard product protocol followed by an inner product protocol. The problem is: how can the verifier obtain the polynomial oracles f_v(x), f_x(x), and f_y(x)? For f_v(x), notice that the vector v depends only on the matrix B, which is learned before the protocol starts. Therefore, the verifier may preprocess this vector and generate this oracle offline. However, the vectors x and y depend on the random values alpha and beta, which cannot be predicted in advance. So the verifier can only obtain these oracles on the fly from the prover. For the prover to convince the verifier that these polynomials are correctly computed, the parties run a sub-protocol called sparse monomial vector. We omit the details of this sub-protocol from this talk and refer the interested audience to the paper. Now we summarize the above in the sparse MVP protocol. Instead of f_B(x), which contains the entire matrix, the verifier now has an oracle of f_v that contains only the non-zero entries. The degree of f_v is much smaller than that of f_B when the matrix is sparse. The start of the sparse MVP protocol is the same as dense MVP. The difference between the protocols lies in the validation of r(x). In the dense MVP protocol, the polynomial oracle for r(x) is checked by the simple yet slow mod-reduce protocol. In sparse MVP, mod-reduce is replaced with a more complex procedure, but the running time of the prover is reduced by exploiting the sparsity of the matrix. Now, all the sub-protocols are ready.
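The k-term rewriting of r(beta) can likewise be checked numerically. In this sketch of my own (a toy 3x4 sparse matrix given as triples, toy prime, hand-picked alpha and beta), the full double sum over the dense matrix collapses to a k-term inner product between v and the Hadamard product of x and y:

```python
P = 97  # toy prime modulus for illustration
m, n = 3, 4
# a sparse matrix given as (row, col, value) triples: k = 3 non-zero entries
entries = [(0, 1, 5), (1, 3, 2), (2, 0, 7)]
alpha, beta = 13, 29      # stand in for the two independent random challenges

# dense reference computation: r = alpha-vector * B, then evaluate r at beta
B = [[0] * n for _ in range(m)]
for i, j, val in entries:
    B[i][j] = val
r = [sum(pow(alpha, i, P) * B[i][j] for i in range(m)) % P for j in range(n)]
r_beta = sum(r[j] * pow(beta, j, P) for j in range(n)) % P

# sparse form: <v, x o y> touches only the k non-zero entries
v = [val for _, _, val in entries]                # entry values c_l
x = [pow(alpha, i, P) for i, _, _ in entries]     # alpha^(row index)
y = [pow(beta, j, P) for _, j, _ in entries]      # beta^(column index)
sparse_sum = sum(vl * xl * yl for vl, xl, yl in zip(v, x, y)) % P
assert r_beta == sparse_sum
```

Every zero entry of B contributes nothing to r(beta), so the prover only ever needs the k triples, which is the source of the claimed running-time saving.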
The Claymore protocol is a straightforward combination of the matrix-vector product and the Hadamard product protocols. Here we only present the dense version of Claymore, as the sparse version is very similar. The prover is trying to convince the verifier of its knowledge of three vectors, w_L, w_R, and w_O, that satisfy the linear relations specified by the matrix M and the public vector x, and that w_L times w_R is w_O. In the protocol, we use a small trick. Instead of sending the three vectors in three polynomials individually, we let the prover concatenate w_L and w_R into a new vector w_I, thus eliminating one online polynomial oracle. To obtain the Hadamard product between w_L and w_R, the verifier right-shifts w_I and multiplies it with itself. The vector shifts are carried out by multiplying powers of x with the polynomials. Finally, the linear relation is validated by the MVP protocol. Now we have finished the description of Claymore. Next, I'll briefly discuss how to make it zero-knowledge. In the polynomial IOP, all the information the verifier receives from the prover is obtained by querying the polynomial oracles. In Claymore, there are only three polynomials that contain information not already publicly known, namely f_{w_I}, f_{w_O}, and h-bar(x). For the protocol to be zero-knowledge, we want the query replies from these polynomials to be simulatable without knowledge of their content. Observing the entire protocol, we find that each of these three polynomials is queried at two different evaluation points. So, if we append two uniformly random coefficients to each polynomial, the query results will also be uniformly random. We first insert all-zero columns into the matrix M, corresponding to the positions of the random coefficients. Then, during the protocol, the prover inserts random elements into the witness vectors before sending the polynomial oracles to the verifier. The random elements are sampled in a way that w_L times w_R is still w_O.
When we multiply M with the witness vector, the random elements are multiplied with the all-zero columns, so the randomization does not affect the satisfaction of HPR. As for h-bar(x), this polynomial is computed and randomized by the prover during the inner product protocol. We will not dive into the details here. Finally, we analyze the performance of Claymore compared to the state of the art. The metrics we consider include the number of polynomials involved in the protocol, either sent by the prover online or preprocessed offline, the number of evaluation queries, the number of distinct evaluation points, and the maximum polynomial degree. All these metrics affect the performance of the compiled SNARK. Here are the results. The F here is the maximum fan-in of the addition gates. Typically, it could be two or three. In the protocol design, we focused on optimizing the number of polynomial oracles. We have partially succeeded in this respect. Dense Claymore has the advantage in the number of polynomials compared to the rest, at the sacrifice of the maximum degree, while sparse Claymore reduces the maximum degree of dense Claymore at the cost of more polynomials and evaluation queries. In conclusion, this work shows the possibility of constructing polynomial IOPs for linear algebra operations in the monomial basis, including inner product, Hadamard product, and matrix-vector product. Composing them together, we get Claymore, a polynomial IOP that can be compiled into a SNARK for verifying circuit computations. Compared to SNARKs in the Reed-Solomon code basis, our SNARK has competitive efficiency and no longer requires the finite field to have a nicely structured subset of proper size. Thank you for watching this talk. For more details, please read our paper, which is available on ePrint.