Hello and welcome to my talk about the paper "Practical Exact Proofs from Lattices: New Techniques to Exploit Fully Splitting Rings". I'm Gregor Seiler and this is joint work with Ngoc Khanh Nguyen and Muhammed Esgin.

Our paper is about a lattice-based zero-knowledge proof system, so let me quickly revisit the structure of general proof systems. Usually there are two tasks that one needs to solve. First, there needs to be a commitment scheme that allows committing to some secret vector s, and then one has to have means to prove what we call linear and product relations between the coefficients of this vector. On this slide there are two equations. The first equation is a linear relation, because it is linear in the secret coefficients s1 and s2. The second is a product relation, so called because it involves the product of two secret coefficients; in this case it is simply the square of s1.

In this paper the goal is to focus on the first part and construct a more practical linear proof for a particular lattice-based commitment scheme, namely the BDLOP commitment scheme. This can then be combined with the efficient product proofs that already exist to obtain a full-fledged proof system. As soon as one has such a proof system, it in principle allows proving that the vector s is a preimage under an arbitrary circuit, which is a very powerful tool. I say "in principle" because when the vector s becomes long and the circuit large, the performance of the proof system depends on how it scales with these parameters, for example with the length of s. Unfortunately, we don't yet know how to construct a sub-linear proof system based on lattice hardness assumptions, where sub-linear means, for example, that the proof size scales sub-linearly in the length of s. But this is not the end of the world at all, because there is actually an interesting regime of smaller statements where the scaling is not really important, since there the performance of the proof system is completely determined by the constants. This regime is what this line of research is aiming at.

One of the reasons for the linear scaling of our proof system is that we use the BDLOP commitment scheme, which scales linearly in the length of the message. Since the commitment has to be part of the output, the proofs are also linear. There are other lattice-based commitment schemes that scale sub-linearly, but the reason we still use BDLOP is twofold. First, it has some very nice and powerful homomorphic properties, for example the linear property on the slide. Secondly, on top of the linear proof that we construct in this paper, there already exist very efficient opening and product proofs for this commitment scheme. By an opening proof I mean the basic building block of being able to prove that one knows an opening of a commitment, which is used internally in both the product and the linear proofs. So this is the reason for BDLOP. To better explain what the linear homomorphic property on the previous slide means, I need to introduce the algebra that we are using.
As is usually the case in lattice cryptography, we work over a cyclotomic polynomial ring; more precisely, over the ring R_q = Z_q[X]/(X^128 + 1) of polynomials over Z_q modulo the power-of-two cyclotomic X^128 + 1. This ring has a very nice property that we call the NTT basis. What this means is that, depending on the prime q, the Chinese remainder theorem says that our ring R_q can actually be identified with the vector space Z_q^128. This identification is an isomorphism from R_q to this vector space; we call it the NTT isomorphism and write the image of a polynomial f as f-hat. So essentially f-hat is the NTT vector associated to the polynomial f. The reason we call this the NTT basis is that if you really want to compute this map, as you have to in implementations of many schemes, you usually do so via the number-theoretic transform, and the number-theoretic transform essentially just computes this Chinese remainder isomorphism; so we call the map the NTT map.

In the beginning I said that we need to be able to commit to vectors, but the BDLOP commitment scheme is usually defined as a commitment scheme that takes polynomials in R_q as messages, so we need to say how we go from there to vectors. The straightforward approach would be to use coefficient vectors, that is, to commit to a vector by taking the polynomial whose coefficients are exactly the entries of the vector. But we do something different: we use the NTT basis from the previous slide. Concretely, we define a new commitment scheme, which I now call Com', that takes a vector as message. Writing this vector as s-hat, that is, as the NTT of some polynomial s, we commit to it by simply computing the commitment Com(s) to the polynomial s. This new commitment scheme of course inherits homomorphic properties from Com; more precisely, it also has a linear homomorphic property where the linear combination now involves pointwise products. The reason is that the polynomial products we had before in the linear relation translate to pointwise products under the NTT. What this means is that we can manipulate vector commitments in the following way: given two commitments to s1 and s2 (I will always drop the hat when pronouncing a vector, just for simplicity), we can multiply these commitments by two other vectors a1 and a2, in some way I'm not going to go into now, and we get a new commitment to the linear combination, with pointwise products, of s1 and s2. The reason these pointwise products are very useful for zero-knowledge proofs is that they are exactly what the product proofs need: in a standard product proof, one has to multiply a secret coefficient with another secret coefficient, which is very close to what a pointwise product achieves. A polynomial product, in contrast, mixes up all the coefficients, so it is not really useful for product proofs.
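To make the NTT basis concrete, here is a minimal toy sketch in Python. It is my own illustration, not code from the paper, and q = 257 is a toy modulus chosen only because it is congruent to 1 mod 256, so that X^128 + 1 splits into linear factors over Z_q; the paper uses a much larger q.

```python
# Toy sketch of the NTT basis of R_q = Z_q[X]/(X^128 + 1).
import random

q, n = 257, 128  # toy prime with q = 1 mod 256, so X^n + 1 fully splits

# The roots of X^128 + 1 mod q are the odd powers of any r with r^128 = -1.
r = next(x for x in range(2, q) if pow(x, n, q) == q - 1)
roots = [pow(r, 2 * i + 1, q) for i in range(n)]

def ntt(f):
    """NTT map: evaluate f (coefficient list, low degree first) at all roots."""
    return [sum(c * pow(rt, j, q) for j, c in enumerate(f)) % q for rt in roots]

def polymul(f, g):
    """Multiply two polynomials in R_q, reducing via X^n = -1."""
    res = [0] * (2 * n)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            res[i + j] += a * b
    return [(res[i] - res[i + n]) % q for i in range(n)]

# The NTT is a ring isomorphism: polynomial products in R_q become
# pointwise products of the NTT vectors.
f = [random.randrange(q) for _ in range(n)]
g = [random.randrange(q) for _ in range(n)]
assert ntt(polymul(f, g)) == [(a * b) % q for a, b in zip(ntt(f), ntt(g))]
```

A real implementation would compute the map with a fast O(n log n) number-theoretic transform rather than this quadratic evaluation; the sketch only demonstrates the isomorphism itself.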
So much for the commitment scheme. I will now explain, in some detail but still at a high level, how our linear proof for this commitment scheme works; it proceeds in a couple of steps.

The first step is based on the fairly standard observation that if we take the vector As, for some public matrix A, and compute its scalar product with a uniformly random challenge vector phi, then the scalar product being zero actually shows that As is zero with soundness error 1/q. The reason is very simple: if As is non-zero, then the scalar product is completely uniformly random, and hence zero only with probability 1/q.

In a second step, we can rewrite the scalar product by pulling the matrix A over to the other side, so we now write it as the scalar product of the secret vector s with A^T phi. The reason we do this is that both the matrix A, which defines the linear equation we are interested in, and the challenge vector phi are public, so it makes sense to group the public things on one side of the scalar product and the secret things on the other. With this new scalar product, which I now again just write as the scalar product of s with some vector phi (I drop the A since after all it just clutters notation; the phi from now on is what A^T phi was before), think about what the scalar product really does: I can decompose it into two parts. First, I take the pointwise product of the two vectors s and phi, and second, I sum over all the coefficients of this pointwise product. This is the first equation on the slide. In the second equation I have used the homomorphic property of the NTT map: remember that the vectors s and phi are actually the NTTs of two polynomials, so their pointwise product can be written as the NTT of the polynomial product of these two polynomials. The reason this is interesting is that if we define the polynomial f to be this product, then by a simple property of the NTT the constant coefficient of f is, at least up to scaling, just the sum over the NTT coefficients. So the constant coefficient f_0 is, up to scaling, the sum over the coefficients f-hat_i, and by the construction of f this is the sum over the pointwise products of the two vectors s and phi, which, as we saw before, is the scalar product we want to compute. What all of this means is that if the verifier is somehow able to compute a commitment to f out of the commitment to s that is given as part of the proof, then we only need an efficient proof that the constant coefficient of f is zero in order to prove that the scalar product is zero.
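Continuing the toy Python sketch from before (it reuses q, n, ntt and polymul defined above; s and phi here are hypothetical stand-ins for the secret polynomial and the folded challenge A^T phi), this checks the central identity: the scalar product of the two NTT vectors equals, up to the scaling factor n, the constant coefficient of the polynomial product.

```python
# Continuation of the toy sketch; assumes q, n, ntt, polymul from above.
s   = [random.randrange(q) for _ in range(n)]  # secret polynomial
phi = [random.randrange(q) for _ in range(n)]  # challenge polynomial

# The scalar product of the NTT vectors s-hat and phi-hat.
scalar_product = sum(a * b for a, b in zip(ntt(s), ntt(phi))) % q

# f = s * phi in R_q: its NTT is the pointwise product of s-hat and
# phi-hat, and the sum of all NTT coefficients equals n times f_0.
f = polymul(s, phi)
assert scalar_product == (n * f[0]) % q
```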
How we do that is what I explain now. The problem here is that the BDLOP commitment scheme does not make it completely straightforward to prove statements about single coefficients. What one can prove, for example, is that the whole polynomial is zero, but in our case it would not be zero-knowledge to reveal the full polynomial f, so we need to somehow mask all the other coefficients except the constant coefficient.

The way we achieve this is based on the following observation. As before, if our linear term As is non-zero, then the constant coefficient of f is uniformly random over the challenge. So if the prover now sends some polynomial h which is the sum of f and some masking polynomial g, where we make sure that g is independent of phi, then h having constant coefficient zero actually shows the linear equation As = 0, again with soundness error 1/q. The reason, again, is very simple: as I said, if As is non-zero, then the constant coefficient of f is uniformly random, and since we demand that g be independent of f, and hence of the challenge, the constant coefficient of h stays uniformly random over the challenge phi. So if it is zero, then f must have constant coefficient zero, except with probability 1/q. To make the proof actually correct, this is performed in the following way: the prover chooses a masking polynomial g where all the coefficients are uniformly random except the constant coefficient, which is zero. In this way, the polynomial h always has constant coefficient zero in an honest execution, which the verifier can check. And if the relation the prover claims is not true, then he cannot cheat, because we force him to make the polynomial g independent of f and phi by forcing him to commit to g before he sees the challenge phi. This is how the proof of the scalar product works, and with this I am finished with the high-level overview of our main technique.
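Again continuing the same toy sketch (reusing q, n, f and scalar_product from above), here is roughly how the masking step could look; this is only an illustration of the mechanism, not the actual protocol with its commitments.

```python
# Continuation of the toy sketch. The prover fixes g, with constant
# coefficient forced to zero, before seeing the challenge phi, and then
# reveals only h = f + g.
g = [0] + [random.randrange(q) for _ in range(n - 1)]
h = [(a + b) % q for a, b in zip(f, g)]

# Verifier's check: the constant coefficient of h is zero. In an honest
# execution with scalar product zero, f_0 = 0 and thus h_0 = f_0 + g_0 = 0.
# If the scalar product is non-zero, f_0 is uniformly random over the
# challenge and the fixed g_0 = 0 cannot cancel it, so the check fails
# except with probability 1/q. All other coefficients of f stay hidden by g.
print("verifier accepts:", h[0] == 0)
```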
There now remains one central problem, namely that so far we have only managed to prove the linear relation with soundness error 1/q, which is not negligible, because in lattice cryptography we usually have very small q. For example, think about the lattice-based NIST finalists: there is the encryption scheme Kyber, and there are the signature schemes Falcon, which has a q of about twelve thousand, and Dilithium, which is maybe closest to our protocols but still only has a q in the order of 2^23. So such a small q is certainly not enough, and we have to give a way to boost the soundness. As a side note, you could ask: why not just instantiate everything with a larger q, say in the order of 2^128? There are two problems with this approach. First, our proof system would not be as efficient as we would like it to be. Secondly, this also raises security concerns: such large q are certainly much less studied than the q used in actually well-studied schemes like the NIST schemes, and there also seem to be recent results pointing in the direction that large q are not as secure as we might hope. So, as a takeaway, we have to live with pretty small q and boost the soundness.

In the paper there are two approaches for this, which we call mapping down and going up, and in the talk I am just focusing on the first one, the mapping-down approach. It works in the following way. In principle, we now want to prove several scalar products. What this means is that we are given several polynomials f_i and want to prove that all these polynomials have vanishing constant coefficient; by this we prove that several scalar products with different independent challenges are zero. A straightforward approach would be to just repeat the previous protocol several times, once for each f_i, but this approach has costs, for example sending several masked polynomials h and committing to the different masking polynomials g, and this increase in the constant costs is precisely what we want to avoid in order to construct really efficient, practical proof systems.

So in the paper we give a better approach, and this works as follows. The first observation is that there is a subring of our ring R_q, which we call S_q, and in this example I let S_q be the cyclotomic ring of degree 32. Concretely, as a subring, it consists of all the polynomials where only every fourth coefficient is non-zero: the constant coefficient can be non-zero, the next three coefficients are zero, then the fifth coefficient can again be non-zero, and so on. On top of this, there is a nice map from the large ring R_q to this small subring S_q, called the trace map. The trace map can be written as a sum over certain automorphisms, more precisely over the powers of some automorphism sigma. And not just this: the map also has a property that is very important for us, namely that it essentially leaves the constant coefficient invariant. The constant coefficient is only scaled by some factor, but since we want to prove that it is zero, we don't care about such scaling. So why is this interesting for us? The reason is that we can now use this map to construct the following polynomial f, which is just a linear combination of the images of the f_i under the trace map, with factors that shift the polynomials by multiplying them with simple powers of X. If you look at this expression and at the properties of the trace map I have just given, you see that f is now a polynomial whose first four coefficients are the four scalar products we are interested in. By proving that all these four coefficients are zero, we prove that four scalar products are zero at the same time, and this implies that As = 0 with soundness error 1/q^4, which is negligible if we assume, for example, that q is about 32 bits long.
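To illustrate the mapping-down trick, here is one last continuation of the toy sketch. The concrete automorphism sigma: X -> X^65 is my own choice of an order-4 automorphism that fixes the subring of polynomials in X^4 (the paper's exact setup may differ); the sketch packs the constant coefficients of four polynomials into the first four coefficients of a single polynomial.

```python
# Continuation of the toy sketch; assumes q, n, polymul from above.
def sigma(f, k=1):
    """Apply the automorphism X -> X^65 (order 4 on R_q) k times."""
    out = [0] * n
    for j, c in enumerate(f):
        e = (pow(65, k, 2 * n) * j) % (2 * n)  # exponent mod 256
        if e >= n:                             # reduce with X^128 = -1
            out[e - n] = (out[e - n] - c) % q
        else:
            out[e] = (out[e] + c) % q
    return out

def trace(f):
    """Tr(f) = f + sigma(f) + sigma^2(f) + sigma^3(f), landing in S_q."""
    out = [0] * n
    for k in range(4):
        out = [(a + b) % q for a, b in zip(out, sigma(f, k))]
    return out

fs = [[random.randrange(q) for _ in range(n)] for _ in range(4)]
tr = trace(fs[0])
assert all(tr[j] == 0 for j in range(n) if j % 4 != 0)  # Tr lands in S_q
assert tr[0] == 4 * fs[0][0] % q  # constant coefficient only scaled by 4

# Pack: the i-th coefficient of F = sum_i X^i * Tr(f_i) is 4 * f_i[0],
# so the first four coefficients of F carry the four constant coefficients.
F = [0] * n
for i, fi in enumerate(fs):
    x_i = [0] * i + [1] + [0] * (n - 1 - i)  # the monomial X^i
    F = [(a + b) % q for a, b in zip(F, polymul(x_i, trace(fi)))]
assert all(F[i] == 4 * fs[i][0] % q for i in range(4))
```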
Now of course the question is how we can arrive at a commitment to this more complicated polynomial f. Fortunately, the opening proof that we use internally already supports applying automorphisms to commitments, so we can apply automorphisms and still be able to prove things about the message. Hence we can actually implement this approach and let the verifier compute a commitment to this new polynomial f using the automorphisms.

This finishes the technical part, and now I come to the results. What we have done in our recent papers is to benchmark new proof systems on a standard problem, which is maybe kind of the drosophila of lattice-based proof systems. This problem is motivated by lattice cryptography and essentially consists of proving that one knows the secret in many LWE samples in dimension 1024. Concretely, as a linear equation, we want to prove that we know a ternary solution to a linear equation with 2048 variables. For this problem, I have given on this slide a table of the proof sizes of several recent constructions. There we first see that there has been a pretty large, almost 100-fold, improvement in proof size in essentially the last two years or so. And secondly, our proof system is really competitive in terms of proof size, even compared to, for example, logarithmic PCP-type proof systems. In terms of runtime, we have also implemented our protocol, and, as is maybe often the case in lattice cryptography, our protocol is very performant: our implementation needs a prover runtime of a little more than 2 milliseconds for this LWE example, and a verifier runtime in the order of 100 microseconds. With this, I thank you for listening and hope to see you at the conference, at least virtually.