All right, so thank you for the introduction. Today I will be talking about lattice-based zero-knowledge proofs, and in particular our new techniques to get shorter and faster constructions, and their applications. This is joint work with my supervisors Ron Steinfeld and Joseph Liu from Monash University, and Dongxi Liu from Data61. OK, so you have already been introduced to zero-knowledge proofs, so I will just skip this particular slide. The type of zero-knowledge proof that we focus on in this work is called a sigma protocol. It has this three-move structure: commitment, challenge, and response. And it comes with the usual security properties. The important thing, as you have also seen in the previous talk, is that we can easily make it non-interactive using the so-called Fiat-Shamir heuristic. In general, completeness is quite easy to satisfy. For the zero-knowledge property in the lattice-based crypto setting, we have standard techniques like rejection sampling, so we can use those. The particular property that we focus on is soundness, so let's define what we mean by this. It will be parametrized by some variable k, and we will call it (k+1)-special soundness. This means that we give some algorithm k+1 accepting protocol transcripts (C, x_i, z_i), where the commitment C is common to all of these transcripts and the challenges x_i are pairwise distinct. And we want to efficiently extract a witness w that satisfies the relation the prover is trying to prove. If we can do this, we say that the protocol is (k+1)-special sound. So this means that we want to build this so-called extractor. And it turns out that if the prover can guess the challenge x in advance, then he can actually prepare the commitment in a way that the verifier always accepts, whether the statement is true or not, whether he knows the witness or not.
So this means that the larger the challenge set you choose, the better soundness guarantee you will get. And ideally, you would want a challenge set of size something like 2 to the power 128 or 256. We also use commitment schemes. I will not define them very formally, but the important thing here is that a lattice-based commitment is just an additively homomorphic function where the inputs must be short. That is the requirement for lattice-based commitment schemes: we can just assume that we have an additively homomorphic commitment scheme where whatever goes into the commitment brackets must be short. And there are, in general, two types of zero-knowledge proofs in lattice-based crypto. The first type is the so-called combinatorial proofs, which are based on Stern's protocol. The challenge set for these protocols is very small; there are only three elements in general. The advantage here is that you can prove very complex relations. But the disadvantage is that since the challenge set is small, you need to repeat the protocol many, many times, something like hundreds of times, to get a negligible soundness error, which means that you get very slow and very long proofs. The second type is called algebraic proofs. These are based on Schnorr's protocol, and the challenge set in this case can be basically as large as you want. The advantage here is that you do not need to repeat the protocol, so the proof that you get will be very short and efficient in terms of computation. You get so-called one-shot proofs, whereas in the former case you get so-called multi-shot proofs because of the protocol repetitions. The disadvantage in the algebraic setting is that the types of proofs we can achieve so far are more limited, and additionally the relation that you prove may be relaxed or approximate; I will tell you more about this. So our focus in this talk is on the latter type.
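The multi-shot vs. one-shot distinction can be checked with quick arithmetic. Here is an illustrative helper (my own, not from the paper), assuming a per-run cheating probability of 1/|C| for a challenge set C:

```python
import math

def repetitions(challenge_set_size: int, target_bits: int = 128) -> int:
    """Parallel repetitions needed so the soundness error
    (1/|C|)^t drops below 2^(-target_bits)."""
    return math.ceil(target_bits / math.log2(challenge_set_size))

# Stern-style combinatorial proof: |C| = 3, so many repetitions (multi-shot).
print(repetitions(3))       # 81
# Schnorr-style algebraic proof: |C| = 2^128, so a single run (one-shot).
print(repetitions(2**128))  # 1
```

So even under this optimistic assumption, a three-element challenge set costs roughly 81 parallel repetitions, which is where the "hundreds of times" blow-up in proof size and time comes from.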
So if you look at the general structure of a standard proof of knowledge, which is often called Fiat-Shamir with aborts in the lattice-based crypto setting because the prover aborts with some probability (that's for the zero-knowledge property and not important for our purposes), what happens is the following. The prover has a commitment C1, which is an input, and this input is also shared by the verifier; in this representation, I'm just sending it publicly. So we have C0 and C1 as the first commitments, we get a challenge, and we have a response. Then the verifier checks this linear relation: C0 plus x·C1 is equal to some commitment. And also, since the inputs of the commitment must be short, the verifier makes sure that they are upper-bounded by some real numbers. In this setting, how we prove special soundness is as follows. Assume you have two accepting protocol transcripts where the same initial move, C0 and C1, is common, but the challenges are different; so we have x0 and x1 here, and we have accepting responses with respect to x0 and x1. So you get these two equations satisfied, and if you subtract the latter one from the former one, you get this relation. So far, so good. We know that this thing is small, because we checked that f and z are short vectors. At this point, with this relation in hand, you can take two approaches. The first one is the multi-shot approach. You want to get rid of this extra challenge difference here, so you need to multiply both sides by the inverse of this challenge difference. And you need to make sure that this inverse is small, because it will go into the commitment brackets. In that case, you will not be able to choose a very large challenge set, so you will need to repeat the protocol, and you will get an inefficient protocol.
But what you end up with is an exact relation, or a very close to exact relation. In the second approach, you are just happy with what you got: you have a relaxed relation here, but you try to handle this relaxation. The relaxation factor that we have here is the challenge difference, x1 minus x0. This has already been used in the line of work initiated by Lyubashevsky with his signature schemes. In this case, you will not have any protocol repetitions, so this will be very efficient, but the relation that you prove will be relaxed or approximate. So this is what we mean when we call x1 minus x0 the relaxation factor. All right. Going back to this picture, we can see that the relation checked by the verifier is linear in x; it is just a degree-one relation. And if you want to prove more complex relations, that's where our contribution comes in. So let me first list our contributions. Our first contribution is one-shot proof techniques for non-linear relations of degree k, which may be bigger than one. This is like a generalization of the Fiat-Shamir with aborts technique. Then we also have two speed-up techniques: one is a CRT packing technique supporting inter-slot operations, and the other is the so-called NTT-friendly tools that allow you to do fast computation. And then, of course, we have applications of these tools to a bunch of protocols, like binary proofs, one-out-of-many proofs, and range proofs. We have an application of these protocols to a ring signature, and we also have a preliminary application to anonymous credentials. In this talk, I will focus on the first two techniques, and I will refer you to the paper for the other ones. Just to give you an idea of how effective these techniques are, this is a comparison between post-quantum ring signatures. At this point, you can just assume that a ring signature is a signature where you have a bunch of users and one user is trying to sign on behalf of all these users.
And somehow the signature length is related to this group size, which is called the ring size in the case of ring signatures. So these are the sublinear-size ones. You can see that two is the smallest possible ring size, and we go up to 2 to the power 21, and you can see that they do not scale linearly. We have a dramatic improvement over these existing schemes. These two, DRS and KKW, are symmetric-key-based schemes, and the other ones are lattice-based schemes. And if you compare our work to linear-size works that are based on standard lattice assumptions, you can see that for the smallest possible ring size of 2, we are at the same efficiency level as the linear-size proposals, which are tailored to be efficient for small ring sizes. OK, so let's get more technical and try to get an intuition behind this one-shot proof technique. In this case, we will be generalizing this proof of knowledge. Instead of sending two commitments, the prover will now send k+1 commitments, and the rest will be the same. The verifier will now check a degree-k relation. That's why we have this parameter k: it is the degree of the relation being checked by the verifier. And again, we have these bound checks, but the important part is this degree-k relation. First of all, why do we care about this relation or this structure? This is because in the discrete logarithm setting, the very advanced and efficient protocols introduced recently, most notably Bulletproofs, involve these degree-k relations and this overall structure. Of course, they do not have this bound check, and it's much easier to deal with many of the problems there. But there is no solution in the lattice setting yet, and we are providing the solution. So what we need to do is prove a degree-k relation, and also we want to extract a valid opening of this commitment C_k. And we want to do this for a homomorphic lattice-based commitment scheme.
And we want to do this in one shot. So we will try to build this witness extraction procedure; we will have an extractor. This is the overview of the protocol. In this case, we have k+1 commitments as the first move. Then we give the extractor k+1 distinct challenges and their accepting responses. So we know that this equation is satisfied for i from 0 up to k, and we want to get an opening of C_k. We can write this system of equations as a matrix times a vector: we have a matrix constructed from the challenges, a vector of commitments, and the corresponding commitment computations, and this is over some ring R. And actually, this matrix is the so-called Vandermonde matrix; we know its structure, and we know a few things about Vandermonde matrices. Just to recall, our goal is to get an opening of this C_k. So basically, we want to get rid of this matrix here and somehow move it to the other side. One obvious, naive approach is to just multiply both sides by the inverse of this Vandermonde matrix. The inverse looks something like this: we have challenge differences in the denominator. This is actually the generalization of the problem that I mentioned before. If you want to compute this V inverse, you need to make sure that challenge differences have short inverses. One option for that is to use binary challenges, but in that case you will only have two challenges. The other option is using so-called monomial challenges, and this was done at ACNS this year. But even if you use monomial challenges, it will not be one-shot, because the challenge set will not be exponentially large, so you will need to repeat the protocol. So this is not the answer to our problem. Going back to this relation, we again want to relax our goal, as in the case of the one-shot proofs before.
Instead of finding an opening of C_k, we want to add a relaxation factor in front of this commitment C. So we need to find y, m, and r, where y is some scalar in the ring, and we want to have a decommitment of y times C instead of just C. And we need to make sure that this y, and also this m and r, are short. So we want to somehow eliminate this Vandermonde matrix V here, but we do not want to see any inverse term. Since we want a scalar, we want to find a matrix V′ such that when I multiply V′ and V, I get a scalar multiple of the identity. And thankfully, from linear algebra, we have this beautiful fact: if you multiply a square matrix by the adjugate of that matrix, you get the determinant of the matrix times the identity matrix. Using this fact, we can multiply both sides of this relation by the adjugate of V. So we get this: the adjugate of V times V gives you the determinant of V, and on the right-hand side you get the adjugate of V. We also have the identity, but when you multiply it with a vector, you just get the vector back. And this is the relaxation factor that you end up with: the determinant of the Vandermonde matrix. It has this structure; it's just a product of challenge differences, and it is a scalar in the ring R, because all these x_i's and x_j's are just scalars in the ring. And if you actually put in k equal to 1, which is the linear case, you end up with x1 minus x0 as the relaxation factor, which is exactly what was obtained in the previous works. So we obtain the previous works as a special case of ours. We also need to look at the quality of the witness, and by quality we mean how short the witness is. In general, we can support challenge sets of this form: with degree at most d minus 1, some certain Hamming weight, and some certain infinity norm.
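The adjugate fact and the determinant structure can be verified numerically. A small sketch with three integer "challenges" (the real construction works over a polynomial ring; the `det`/`adjugate` helpers below are my own illustrative implementations):

```python
from itertools import permutations
from math import prod

def det(M):
    """Leibniz-formula determinant of a small integer matrix."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation via inversion counting
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        total += sign * prod(M[i][perm[i]] for i in range(n))
    return total

def adjugate(M):
    """adj(M)[i][j] = (-1)^(i+j) * det(M with row j and column i removed)."""
    n = len(M)
    def minor(r, c):
        return [[M[i][j] for j in range(n) if j != c] for i in range(n) if i != r]
    return [[(-1) ** (i + j) * det(minor(j, i)) for j in range(n)] for i in range(n)]

xs = [2, 5, 11]                               # three distinct "challenges" (k = 2)
V = [[x ** j for j in range(3)] for x in xs]  # Vandermonde matrix

d = det(V)
A = adjugate(V)

# The linear-algebra fact from the talk: adj(V) * V = det(V) * I.
AV = [[sum(A[i][k] * V[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert AV == [[d if i == j else 0 for j in range(3)] for i in range(3)]

# And det(V) is exactly the product of challenge differences,
# i.e. the relaxation factor: prod of (x_j - x_i) for i < j.
assert d == (xs[1] - xs[0]) * (xs[2] - xs[0]) * (xs[2] - xs[1])
```

Note that `adj(V) * V = det(V) * I` holds over any commutative ring with no invertibility requirement, which is exactly why no condition on the challenge differences is needed.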
In particular, you can make the challenge set size 2 to the power 256, for example, by setting these parameters, which means that the proof will be one-shot. And we have some generic results on how good or how short a witness you are going to get, depending on what parameters you choose, on these (d, w, p) values and the relation degree k. The good thing is that we do not need any invertibility condition on the challenges. This means that V, this Vandermonde matrix, can even be singular, and you can use a so-called NTT-friendly modulus q, because you do not need to restrict your modulus to be of a special form that makes the challenge differences invertible. All right, so this is the first part. The second part is the CRT packing technique, and I'll just try to give you an intuition behind it. Going back to these proofs of knowledge: they actually make use of what we call an encoding of some message. We define the encoding of a message m under a challenge x as x times m plus some random masking value rho. Let's suppose that we are working over the cyclotomic ring R_q = Z_q[X]/(X^d + 1), and f is an element of this R_q. If you choose q appropriately, you can make this X^d + 1 split into s factors, each of them of degree d/s, where s is a power of 2 and d is also a power of 2. In this case, using the Chinese remainder theorem, R_q will be isomorphic to a product of s rings or fields, depending on whether the factors p_i are irreducible or not. And we can actually use this CRT mapping to encode multiple messages in different slots. This has already been used in the context of fully homomorphic encryption, and we are providing an application of it in the context of zero-knowledge proofs. Everything here is mod q, so I will just skip saying mod q. In this case, the CRT of m is equal to m reduced modulo each of these polynomial factors p_i.
So we can try to encode multiple messages in a single f of this form using the CRT mapping. Let's do a first trial, and let's denote CRT inverse by these brackets. We can set m to be the CRT inverse of these m_i's, m_1 up to m_s, and we just multiply it by a challenge x and add some random masking value. So this is an encoding of each of these messages in different CRT slots, and so far so good. But how do we do inter-slot operations? For example, if I want to add an encoding of m_1 and an encoding of m_2, how do I do this? If you compute f mod p_i for each i, which you can do because the p_i's are public, you get x_i times m_i plus rho_i. So this is an encoding of m_i under x_i, where these rho_i's and x_i's are the CRT slots of the random masking value and the challenge. But now you have lost the homomorphic properties, because m_i is encoded under x_i and m_j is encoded under x_j, so you may not be able to do additions or multiplications unless x_i is equal to x_j. So if you want to do inter-slot operations, our idea is to choose the challenge x so that it has the same polynomial in all the CRT slots: it's just x′ in every slot. This means that x mod p_i is equal to x′ for all i, and the degree of the challenge must be smaller than d/s. Now the extracted encodings, which we denote by f_i here, will be f mod p_i (and mod q as well), and each of them will be an encoding under the same x′. You can do these inter-slot operations the way you wish, and this is computable by anyone that has access to f and the p_i's, which are all public. OK, and we have an application of this to a range proof. The general idea of a range proof is to prove knowledge of, for example if you are doing this on 64 bits, 64 bits b_i, and to show that these bits that you know make up some integer value.
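Before the range-proof application, the packing idea above can be checked with a small integer analogue of the polynomial CRT. This is only a sketch: the slots here are residues modulo coprime integers rather than modulo the factors of X^d + 1, and all the concrete numbers are made up:

```python
from math import prod

# Integer analogue of the polynomial CRT packing: slots are residues modulo
# pairwise-coprime integers p_i, standing in for the factors of X^d + 1.
ps = [97, 101, 103]
Q = prod(ps)

def crt_inverse(residues):
    """Combine one residue per slot into a single element mod Q."""
    x = 0
    for r, p in zip(residues, ps):
        n = Q // p
        x += r * n * pow(n, -1, p)
    return x % Q

msgs = [3, 7, 11]    # one message per CRT slot
rhos = [20, 30, 40]  # per-slot masking values (small, to avoid wrap-around below)

# Key idea: the challenge is the SAME value x in every slot, so every
# extracted encoding is an encoding under the same challenge.
x = 5
f = crt_inverse([(x * m + rho) % p for m, rho, p in zip(msgs, rhos, ps)])

# Anyone holding f and the public p_i can extract the per-slot encodings.
for m, rho, p in zip(msgs, rhos, ps):
    assert f % p == (x * m + rho) % p

# Because the challenge matches across slots, inter-slot operations work:
# adding the slot-0 and slot-1 encodings yields an encoding of m_0 + m_1
# under x (values kept small so no modular reduction obscures this).
f0, f1 = f % ps[0], f % ps[1]
assert f0 + f1 == x * (msgs[0] + msgs[1]) + (rhos[0] + rhos[1])
```

With per-slot challenges x_i instead of a common x, that last addition would mix x_0·m_0 and x_1·m_1 and the homomorphic structure would be lost, which is exactly the failure mode described above.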
OK, and you use the homomorphic properties to do this. So in this case, we need to send these encodings, 64 encodings, one for each b_i. And since each f_i is in this R_q, each of them will cost something like d times log q bits. So from a shortness perspective, you want to choose a small d, because each encoding's size has a factor of d. But from a computation perspective, you want to choose d large, so that you can use the ring structure to do efficient computation. So our idea is: first, you find the degree d′ that optimizes the proof length; then you find the degree d required for security; and then you use this CRT packing technique with s slots, where s is equal to d over d′. This means that you work in this degree-d polynomial ring, so you can do fast computation, and you use challenges of degree d′, and you can pack s bits into a single encoding. Each encoding costs d log q bits, and if you divide by s, because we have s slots, you effectively get d′ times log q as the cost for each encoding of a bit. So this is optimal in terms of the storage cost, and we get overall faster computation because we are working in a larger-degree ring. And this is a comparison between the range proofs. The CRT-packed one is our result. If you compare it to a hypothetical scheme which is optimized for proof length, you can see that the proof lengths are very close; the gap is only because the ring dimension in our case needs to be a multiple of 512, and that's why we have a small gap between the proof sizes. But if you look at the degree d and also at the number of FFT levels supported, we have much faster computation overall.
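The bit-decomposition idea behind the range proof can be sketched with the same kind of toy additively homomorphic commitment as before (key a, b and all values made up; shortness ignored). It shows how the verifier can homomorphically recombine the 64 per-bit commitments into a commitment to the claimed value:

```python
import random

q = 2**31 - 1                # toy prime modulus
a, b = 123456789, 987654321  # public commitment key (made up for the sketch)

def com(m, r):
    """Toy additively homomorphic commitment: Com(m, r) = a*m + b*r mod q.
    (The lattice shortness requirement is ignored in this sketch.)"""
    return (a * m + b * r) % q

v = 1_234_567                             # value to range-prove, 0 <= v < 2^64
bits = [(v >> i) & 1 for i in range(64)]  # the bits b_i
assert sum(bi << i for i, bi in enumerate(bits)) == v

# Commit to each bit separately; these play the role of the 64 encodings.
rs = [random.randrange(q) for _ in range(64)]
bit_coms = [com(bi, ri) for bi, ri in zip(bits, rs)]

# Additive homomorphy: sum_i 2^i * Com(b_i, r_i) is itself a commitment
# to v = sum_i 2^i * b_i, with aggregated randomness sum_i 2^i * r_i.
agg = sum(c << i for i, c in enumerate(bit_coms)) % q
r_agg = sum(ri << i for i, ri in enumerate(rs)) % q
assert agg == com(v, r_agg)
```

What the sketch does not show is the binary proof that each committed b_i really is a bit, which is where the one-shot degree-k techniques from the first part come in.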
And some open questions are, of course: applications of these new techniques to other zero-knowledge proofs, their application to higher-level constructions, and also other ways of exploiting this CRT technique. We also have more technical ones in the paper. So I will stop here, and just mention that we also have an application of these advanced zero-knowledge proofs to a post-quantum confidential transactions protocol, which will hopefully be coming up soon. Thank you. If anybody has any question, please come to the microphone now. There are no questions from the audience, so I will ask my own question. Zero knowledge has been attracting a lot of attention lately and is being considered for standardization, and these protocols you presented seem to be potentially very efficient. Is the zero-knowledge standardization process something that you've been looking into, to see if your work may be of interest in that context? Yeah, I think it may definitely be of interest, yeah. And one more question: in the talk you were mostly focusing on power-of-two cyclotomics. Is there anything specific that you were using, or is it something that adapts easily to arbitrary cyclotomics? I think it works for any d, but you need to optimize it depending on the protocol that you are using. I think we use something like 512 for the actual parameters. So let's thank the speaker again.