Hello, I'm going to present our paper "Practical Product Proofs for Lattice Commitments". This is joint work with Thomas Attema and Vadim Lyubashevsky, and my name is Gregor Seiler. Okay, so let me start with a simple example that shows that product proofs are useful, and this example is about range proofs. In a range proof, the goal is to be able to commit to some vector and then prove that the vector is binary, so all the coefficients are either 0 or 1, and also that the integer encoded by the vector lies in a certain interval. Concretely, this means that only the first k coefficients may be 0 or 1, while the remaining n minus k coefficients are all 0, which implies that the encoded integer lies in the interval between 0 and 2 to the k; therefore this is called a range proof. This problem can be solved with a product proof, where one is able to prove product relations on the individual coefficients of the vector m. Precisely, we want to be able to prove that the first k coefficients are 0 or 1, so we prove that they fulfill the relation mi times (1 minus mi) equals 0, and we want to prove that the remaining coefficients are 0, which we show by proving the relation mi squared equals 0. Now, one of the main results of our paper is an improved product proof for a particular lattice-based commitment scheme, and these product proofs in turn allow us to construct a range proof which, for a 1024-bit range, has a proof size of 31 kilobytes. But apart from this result, we also give new technical contributions that are interesting outside of product or range proofs. Okay, with this introduction, let me start with the algebraic setup that we will be using. As is usually the case in efficient lattice-based constructions, we will be working over some polynomial ring.
Now, it has emerged in the last years that it's usually best to choose the smallest ring that is sufficient for a task, and therefore, in all our protocols, we will use the standard power-of-2 cyclotomic ring of rank 128. The rank 128 is because we aim for 128 bits of security, and the modulus Q we are using will be a prime number, but I will say more about this in a second. Okay, so I said our product proof proves product relations for a particular lattice-based commitment scheme, so let me introduce this commitment scheme now. We will be using the BDLOP commitment scheme, which was presented at SCN 2018. In this scheme, there are the following public parameters: there is a matrix B0 over the ring RQ, and then there are potentially many row vectors b1, b2, and so on. With this public information, one can commit to arbitrary polynomials in the ring RQ, let's call them m1, m2, and so on, by sampling a short vector r and computing the following expressions. One computes B0 times r, and then for every message polynomial mi, bi r plus mi. All these polynomials ti together with the vector t0 give the full commitment to all the polynomials mi. This is really a proper commitment scheme, so it is hiding and binding, and it's very easy to see this, so let me quickly recall the argument. The scheme is hiding because every polynomial in t0, but also each polynomial ti, contains an additive term that is an independent Module-LWE sample, therefore all the polynomials look uniformly random under the Module-LWE assumption. The scheme is binding because if one were able to change one of the messages mi without changing the corresponding commitment ti, then one would also need to change the randomness vector r, but the first equation t0 equals B0 r serves to authenticate this vector r.
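The commitment just described can be sketched in a few lines of toy Python. This is a minimal sketch with tiny illustrative parameters (real instantiations use degree 128 and a modulus around 2 to the 32), uniformly random public matrices, and no attempt at security; all function names here are my own, not from the paper:

```python
import random

# Toy parameters (hypothetical; the real scheme uses d = 128, q ~ 2^32).
Q = 97          # modulus
D = 8           # ring degree: R_q = Z_q[x]/(x^D + 1)
KAPPA = 3       # length of the randomness vector r

def ring_mul(a, b):
    """Multiply two polynomials in Z_q[x]/(x^D + 1) (negacyclic convolution)."""
    res = [0] * D
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < D:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - D] = (res[k - D] - ai * bj) % Q  # x^D = -1
    return res

def ring_add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def rand_poly():
    """A uniformly random ring element."""
    return [random.randrange(Q) for _ in range(D)]

def short_poly():
    """A 'short' ring element with ternary coefficients."""
    return [random.choice([-1, 0, 1]) for _ in range(D)]

def inner(row, vec):
    """Inner product of a row of ring elements with a vector of ring elements."""
    acc = [0] * D
    for a, b in zip(row, vec):
        acc = ring_add(acc, ring_mul(a, b))
    return acc

def commit(B0, b_rows, messages, r):
    """BDLOP-style commitment: t0 = B0 * r, and ti = <bi, r> + mi for each message."""
    t0 = [inner(row, r) for row in B0]
    ts = [ring_add(inner(bi, r), mi) for bi, mi in zip(b_rows, messages)]
    return t0, ts
```

Given the randomness r, an opening is checked simply by recomputing t0 and the ti; subtracting bi r from ti recovers the message mi.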
So if one were able to give a second randomness vector that still yields the same t0, then one would have found a Module-SIS solution, which we assume is not possible. Okay, so now essentially all the efficient zero-knowledge proofs about this commitment scheme have an underlying building block that is a so-called approximate proof for the first equation. We call this building block the opening proof, because this proof of the first equation on its own essentially shows that one knows an opening for the commitment. Starting from this building block, zero-knowledge proof systems about this commitment scheme usually work by adding additional features to this opening proof, so basically extending it, and our product proof is no exception. But since we also give a new analysis of this opening proof that is very crucial for our product proof, I will now start with the opening proof. As I said, it's an approximate proof of the first equation, which is essentially modeled after Schnorr proofs in the discrete-log world, and it works in the following way. The prover samples a short so-called masking vector y from some narrow distribution, then computes w, which is B0 y, and sends this to the verifier. The verifier samples a challenge c, which is a very short polynomial that usually consists of ternary coefficients in {-1, 0, 1}, and sends this polynomial to the prover, who computes what we call a masked opening. We call this masked opening z, and it is of the following form: it is the masking vector y plus c times the randomness vector r. Then, in the lattice world, there is a technical complication: the prover cannot just send this z, because otherwise the protocol would not be zero-knowledge, as z would reveal secret information. But there is by now a standard technique to avoid this, by basically aborting if z would reveal secret information.
If the prover is able to send z, then the verifier checks that it is short and that it satisfies a verification equation. So this is the approximate proof for the first equation, which shows that one knows an opening. Now I give an example of how this can be extended to prove additional statements. The simplest extension, for example, is if the prover just wants to prove that one of the message polynomials, say m1, is zero. In this case, the prover would also compute b1 y and send this to the verifier, who then checks an additional verification equation. This is essentially a general pattern: one uses this building block, adds new polynomials that are sent and new verification equations for the verifier, and then one gets more advanced protocols. If one then analyzes this proof of a commitment to zero that I've presented on the previous slide, one finds that all one can extract is the following: one message m1 and a so-called challenge difference c bar, which is the difference of the two challenges in two accepting transcripts, such that c bar times m1 equals zero. Now, this is of course only sufficient to prove that m1 is zero if c bar is invertible. For this technical reason, all previous papers about zero-knowledge proofs for the BDLOP commitment scheme have restricted themselves to the case where c bar is always known to be invertible. This is possible by either choosing the ring in a suitable way or by restricting the challenge set, so that one knows that the difference of any two challenge polynomials is always invertible. With this I can explain our first improvement for the opening proof: we drop this assumption that c bar is invertible and show how to work with non-invertible challenges. I explain this in the following way.
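The Schnorr-like structure of the opening proof can be sketched as follows. To show only the algebra of the three messages, this toy replaces ring elements by integers mod q and omits everything lattice-specific (shortness checks, rejection sampling); all variable names are illustrative:

```python
import random

# A minimal sketch of the opening proof, with ring elements replaced by
# integers mod q so that only the Schnorr-like algebra is visible.
Q = 2**31 - 1  # a prime modulus (toy choice)
random.seed(1)

# Commitment under the first equation: t0 = b0 * r.
b0 = random.randrange(Q)
r = random.choice([-1, 0, 1])  # "short" randomness
t0 = (b0 * r) % Q

# Prover: sample a masking value y, send w = b0 * y.
y = random.randrange(Q)
w = (b0 * y) % Q

# Verifier: sample a challenge c (in the real protocol, a sparse ternary
# polynomial; the prover would abort here if z below leaked information).
c = random.randrange(Q)

# Prover: the masked opening z = y + c * r.
z = (y + c * r) % Q

# Verifier: check the verification equation b0 * z == w + c * t0.
assert (b0 * z) % Q == (w + c * t0) % Q
```

The verification equation holds because b0 z = b0 y + c b0 r = w + c t0; the shortness check on z, which is essential in the lattice setting, has no analogue in this scalar toy.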
First we need a better characterization of what it means for an element in our ring to be invertible, and the best characterization works via the Chinese remainder theorem. Essentially, depending on how much Q splits in the cyclotomic ring, our ring RQ is the product of smaller fields. This is nothing else than saying that the polynomial x to the 128 plus 1 factors modulo Q into smaller polynomials, and then by the Chinese remainder theorem the ring is a product of the rings ZQ[x] modulo these factor polynomials. For simplicity, we usually call the fields that emerge the CRT slots, CRT for Chinese remainder theorem. Now, for the talk I decided to focus on a particularly simple case for our protocol, where the ring only splits into CRT slots of degree 4, which means that all the factors of x to the 128 plus 1 are of the form x to the 4 minus r. This removes a lot of the complications in our paper and simplifies the talk. So with this splitting of our ring, we see that an element, for example c bar, is invertible if and only if it is non-zero modulo all the factors x to the 4 minus rj. And if you remember the equation c bar times m1 equals zero, then we see it is not useless in the case where c bar is non-invertible, because it still proves that m1 is zero modulo all the x to the 4 minus rj where c bar is non-zero. It's just that there might be a couple of CRT slots where c bar is zero, and in these CRT slots we don't learn anything about m1.
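The CRT splitting can be illustrated with a small toy example. Assuming the toy ring Z_13[x]/(x^8 + 1), where x^8 + 1 factors as (x^4 - 5)(x^4 - 8) mod 13 (since 5 squared is -1 mod 13), reducing mod a factor just means substituting x^4 = r, and this reduction is a ring homomorphism into each degree-4 slot. The parameters and function names are mine, chosen only for illustration:

```python
Q = 13
R = 5   # r with r^2 = -1 (mod 13), so x^8 + 1 = (x^4 - 5)(x^4 - 8) mod 13

def reduce_mod_slot(poly, r, q=Q):
    """Reduce a degree-<8 polynomial mod (x^4 - r) by substituting x^4 = r."""
    lo, hi = poly[:4], poly[4:]
    hi = hi + [0] * (4 - len(hi))
    return [(a + r * b) % q for a, b in zip(lo, hi)]

def ring_mul8(a, b, q=Q):
    """Multiply in the full ring Z_q[x]/(x^8 + 1)."""
    res = [0] * 8
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < 8:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - 8] = (res[k - 8] - ai * bj) % q  # x^8 = -1
    return res

def slot_mul(a, b, r, q=Q):
    """Multiply in the CRT slot Z_q[x]/(x^4 - r)."""
    res = [0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < 4:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - 4] = (res[k - 4] + r * ai * bj) % q  # x^4 = r
    return res
```

Multiplying in the full ring and then reducing gives the same result as reducing first and multiplying in the slot, which is exactly the homomorphism property that makes the slot-by-slot view of invertibility work: an element is invertible precisely when its image in every slot is non-zero.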
The idea for our improved protocol is that we don't try, in the extraction, to get a challenge difference c bar that is really invertible, so non-zero everywhere. Instead we want to set up the scheme in such a way that the extractor can obtain many different c bar j's with the property that for every factor x to the 4 minus rj there is one c bar j that is non-zero there. If we then also have all the equations c bar j times m1 equals zero, we can piece them together and deduce that m1 is really zero everywhere. Okay, so how do we implement this idea? Essentially, all we have to do is bound the probability that c bar is zero modulo the factors x to the 4 minus r, because if this probability is very small, then intuitively the prover has a small cheating probability. Basically, all the prover can do to get away with proving that m1 is zero, when it is not zero modulo one of the x to the 4 minus r, is to hope that c bar is zero modulo this factor, and if this probability is small, then his success probability is small. So how do we compute this probability?
If we write down a challenge polynomial c with coefficients ci that are sampled from the set {-1, 0, 1}, then we can slightly rewrite this polynomial by grouping together coefficients whose index has the same remainder mod 4; I've done this at the bottom of the slide. In this representation of c, we see that the reduction of c mod x to the 4 minus r is a polynomial with four coefficients, and all the coefficients are evaluations of completely independent polynomials at r. Therefore, the coefficients of c mod x to the 4 minus r are independent. In the paper we compute, not really the full distribution, but the maximum probability of this distribution over Zq, and find that for suitable parameters this maximum probability is not much larger than one over q. From this we can deduce that the probability that c bar mod x to the 4 minus r is zero, which means that all four coefficients are zero, is essentially only q to the minus 4, and if q is in the order of 2 to the 32, then this q to the minus 4 is negligibly small, around 2 to the minus 128. Of course, I've focused on the first factor on this slide, and the reductions modulo the other factors are not independent of it, but this is also not needed, because we are not trying to get a c bar that is non-zero everywhere. It is enough that for each factor x to the 4 minus rj this cheating probability is small individually, because then the extractor, by successively rewinding, can get a c bar that is non-zero modulo each of the factors. Okay, so this concludes our first improvement of the opening proof; now I come to the second improvement, which is directly applied in our product proof.
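The near-uniformity claim can be checked empirically on toy parameters. This sketch enumerates all polynomials with a few independent uniform ternary coefficients, computes the distribution of the evaluation at r mod a small prime q, and reports the maximum probability; the parameters (q = 17, 8 coefficients) are my own toy choices, not the paper's (which uses q around 2 to the 32 and a specific challenge distribution):

```python
from itertools import product
from collections import Counter

Q = 17   # toy prime modulus
R = 3    # toy evaluation point
K = 8    # number of ternary coefficients per residue class

# Exhaustively compute the distribution of f(R) mod Q for f with
# independent uniform ternary coefficients.
counts = Counter()
for coeffs in product((-1, 0, 1), repeat=K):
    val = sum(c * pow(R, i, Q) for i, c in enumerate(coeffs)) % Q
    counts[val] += 1

total = 3 ** K
max_prob = max(counts.values()) / total
zero_prob = counts[0] / total
print(f"max probability: {max_prob:.4f}  (1/q = {1 / Q:.4f})")
# All four independent coefficients of c mod (x^4 - r) must vanish at once:
print(f"Pr[all 4 coefficients zero] ~ {zero_prob ** 4:.2e}  (q^-4 = {Q ** -4:.2e})")
```

Even at this tiny scale, the maximum probability comes out close to 1/q, and raising the zero-probability to the fourth power shows why four independent coefficients drive the cheating probability down toward q to the minus 4.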
The second improvement is that we are able to show that the reply z, so the masked opening sent by the prover in the opening proof, is always precisely of the form it has in an honest execution, at least when the prover has sufficiently high success probability. By this I mean that in the extraction we are able to extract vectors y and r such that z can be written as y plus c r, and these y and r are fixed in the sense that if we rewind the prover and send a new challenge c, then we of course get a new reply z, but it will still be of the form y plus c r with the same y and r as before. So the prover really is committed to y and r. And not only this, but these vectors y and r fulfill the equations that we expect; in particular, this means that the r vector is a valid randomness vector for the commitment scheme, so the commitment polynomials ti can be written as bi r plus mi. I should note that it is not necessarily true anymore that y and r are short, but this is also not needed here. Okay, this new analysis of the opening proof now makes it much easier to work with more complicated verification equations than we knew how to handle before, and in particular verification equations that are non-linear in the commitments. The first example of this is our product proof, which works in the following way. As I said before, we know that the masked opening z can be written as y plus c r with fixed y and r that are independent of c, and that t1 is b1 r plus m1. This allows us to do the following: we can let the verifier compute the polynomial f, which is defined by b1 z minus c t1. If you look at this expression and just substitute the two equations from above, then you find that f can be written as b1 y minus c m1. If you now look at this last equation, then you see that this is really just a masked opening of m1, where the masking polynomial is b1 y and the secret is the message m1.
So this is something which in previous product proofs would have been sent by the prover: in previous product proofs, the prover would have sent a masked opening of m1, and now we see that we can let the verifier compute it from data he has received in the opening proof, without further communication cost. Moreover, in previous protocols where the prover sent such a masked opening, it was also necessary that the prover proved that f is really well-formed, so really a correct masked opening of m1, and in our case this is not necessary anymore, because the verifier computed f in a controlled way and is therefore convinced that f is correctly formed. These two facts together decrease the communication cost of our product proof. Then, after we have established that the verifier can get hold of a masked opening, the protocol proceeds in the standard way. We construct the following quadratic expression in f, which is f squared plus c f, and if we evaluate this and group together coefficients that involve the same power of c, so that we get a quadratic polynomial in c, then we get constant and linear terms that are not important, which we call the garbage terms, and a leading quadratic term whose coefficient is m1 squared minus m1. Then, also in the standard way, if one proves that this leading coefficient vanishes, this proves the relation we want to prove, and this works by essentially committing to the garbage terms, subtracting them, and then proving that the resulting polynomial is the zero polynomial. Now, I said before that in the talk I focus on the case where our ring splits into CRT slots of degree 4, and in this case the proof I've just explained on a high level already has negligible soundness error, because the soundness error of this approach is essentially determined by the size of the CRT slots.
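The algebra of this step can be checked in a scalar toy model. With the sign convention f = b1 z minus c t1, which equals b1 y minus c m1, the combination f squared plus c f has c-squared coefficient m1 squared minus m1, vanishing exactly when m1 is 0 or 1. This sketch replaces ring elements by integers mod q (all names illustrative), so only the cancellation and the garbage-term structure are visible:

```python
import random

Q = 2**31 - 1
random.seed(2)

b1 = random.randrange(Q)
r = random.randrange(Q)   # commitment randomness (shortness ignored in this toy)
y = random.randrange(Q)   # masking value from the opening proof
c = random.randrange(Q)   # challenge

for m1 in (0, 1, 5):                      # binary and non-binary messages
    t1 = (b1 * r + m1) % Q                # commitment component t1 = b1*r + m1
    z = (y + c * r) % Q                   # masked opening z = y + c*r
    f = (b1 * z - c * t1) % Q             # computed by the verifier alone
    assert f == (b1 * y - c * m1) % Q     # the randomness r cancels out

    # f^2 + c*f as a polynomial in c: the c^2-coefficient is m1^2 - m1,
    # and the constant/linear coefficients are the "garbage terms".
    lead = (m1 * m1 - m1) % Q
    lin = (b1 * y - 2 * m1 * b1 * y) % Q
    const = (b1 * y) ** 2 % Q
    assert (f * f + c * f) % Q == (lead * c * c + lin * c + const) % Q
```

For m1 in {0, 1} the leading coefficient is 0, so after subtracting commitments to the two garbage terms the remaining polynomial in c is identically zero, which is what the prover demonstrates.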
In many applications it is advantageous to let the ring split further, for example fully split into linear factors, and then this approach would only have a soundness error of basically one over q, which is non-negligible. In this case the question arises how one can boost the soundness, and whether there is maybe a better way than just repeating the protocol several times. This is another technical contribution of our paper, which can also be used in other protocols, not just in our product proof, but since it is quite technical, I'm not going into much detail here and just give a very high-level idea of how it works. The idea is essentially that we set up the opening proof in such a way that the verifier is not just able to compute one masked opening of the message m1, but several masked openings where the message is rotated under some automorphism sigma, and, this is also very important, all the masked openings still involve the same non-rotated challenge c. As soon as we have this, we can construct the same quadratic relations on f as before, but now for several f, and then linearly combine all of them with uniformly random challenge polynomials alpha. If we do this, then we arrive at a polynomial that is still only quadratic in c, but with a leading term that is a random linear combination of the rotations of m1 squared minus m1, and then by doing the same as before, proving that this term vanishes, we actually prove the relation with negligible soundness error. The advantage of this approach over just repeating the basic protocol several times is that we still only have a quadratic polynomial in c at the end, which means there are still only two garbage terms, whereas if we had repeated the protocol several times, we would have two garbage terms per repetition, which involves costs in the form of commitments to these garbage terms.
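The amplification step rests on a simple fact about random linear combinations, which this toy demonstrates with scalars mod a small prime standing in for the rotated leading terms (the parameters and names are illustrative): if at least one term vi is non-zero, then a uniformly random combination sum of alpha_i times vi is zero only with probability 1/q, so a single random combination catches a cheating prover in any slot without repeating the whole protocol:

```python
import random

Q = 251            # small prime so the 1/q behaviour is visible empirically
v = [0, 7, 0, 19]  # "leading terms"; at least one non-zero => relation is false

random.seed(3)
trials = 200_000
zero_count = 0
for _ in range(trials):
    alphas = [random.randrange(Q) for _ in v]
    if sum(a * x for a, x in zip(alphas, v)) % Q == 0:
        zero_count += 1

print(f"empirical Pr[combination = 0] = {zero_count / trials:.5f}  "
      f"(1/q = {1 / Q:.5f})")
```

Over a field of size q around 2 to the 32 this probability is negligible after a small number of parallel combinations, which is the high-level reason the random-linear-combination approach beats naive repetition in proof size.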
Okay, with this I finish my presentation, and thank you very much for listening.