Okay, thanks very much for the introduction. So today I'm going to tell you about our new lattice-based protocol. I'll just start quickly by talking about the lattice assumption that we base our protocol on. Our zero-knowledge argument is based on the short integer solution (SIS) problem. In that problem, the input is a random matrix A over some finite field Z_q. It's a wide matrix, and you solve the problem by finding some short vector s which is in the kernel of A.

We're going to make use of the SIS problem to create a hash function or commitment scheme, where we just apply the public SIS matrix to a vector s in order to hash it or commit to it. If the SIS problem is hard to solve, then this gives us a binding commitment scheme, or a collision-resistant hash. It also gives us a hiding commitment scheme, by the leftover hash lemma, if we choose the last part of the message that we're committing to randomly from some distribution with enough entropy. The commitment or hashing operation is just matrix multiplication, so this is a homomorphic commitment scheme. And lastly, something that's important for our zero-knowledge proof is that this is a compressing commitment scheme: we can make the matrix as wide as we like, and we can make the message that we're committing to as long as we like, and we'll still get really short, succinct commitments.

Okay, so now on to the zero-knowledge part of the title. In zero-knowledge proofs we have two parties, a prover and a verifier, and the prover would like to convince the verifier that some statement is true without revealing any extra information. In particular, the prover might have a secret witness that they don't want the verifier to learn about. So the prover and verifier will interact somehow, and then the verifier is going to accept or reject depending on whether they were convinced that the statement is true or not. And this has a ton of applications, like mix nets, e-voting, anonymous
identification, and verifiable computing.

So all zero-knowledge proofs should satisfy three basic properties. We've got completeness: if the statement is true, the verifier should always accept the proof. We've got soundness: a dishonest prover trying to prove a false statement should never convince the verifier. If this is only a computational security guarantee, we use the word "argument", and we get a zero-knowledge argument, just like the title. We can strengthen this a bit to knowledge soundness, meaning that the prover actually has to know a witness in order to convince the verifier; then we get a proof or argument of knowledge. And the last property is zero-knowledge: the verifier, or anybody else who sees the proof, can't learn anything about the prover's witness. They just learn that the statement was true.

Okay, so the last part of the title: arithmetic circuits. An arithmetic circuit is a generalization of a Boolean circuit that uses gates to compute statements over a finite field, let's say Z_p. In our zero-knowledge proofs we'll be targeting arithmetic circuit satisfiability as the statement. So the statement is going to be an arithmetic circuit and some output values for the circuit, which are fixed field values. The prover's witness is going to be some input values to the circuit, some field elements which give the correct output to the circuit. And this is an attractive target because deciding whether or not this witness exists is an NP-complete problem.
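Before moving on, here is a toy Python sketch of the SIS-based commitment scheme from the start of the talk. The modulus and dimensions are illustrative choices of mine, not parameters from the talk; it just shows the hashing operation A·s mod q, the homomorphic property, and the compression.

```python
import random

random.seed(0)
q = 7681                 # toy modulus (illustrative, not from the talk)
n_rows, m_cols = 4, 16   # wide matrix: m_cols >> n_rows gives compression

# Public SIS matrix A in Z_q^{n x m}
A = [[random.randrange(q) for _ in range(m_cols)] for _ in range(n_rows)]

def commit(s):
    """Hash/commit: A*s mod q. Binding if SIS is hard; s must be short."""
    return [sum(a * x for a, x in zip(row, s)) % q for row in A]

# Short messages, with entries in {-1, 0, 1}
s1 = [random.choice([-1, 0, 1]) for _ in range(m_cols)]
s2 = [random.choice([-1, 0, 1]) for _ in range(m_cols)]

# Homomorphic: commit(s1) + commit(s2) == commit(s1 + s2)  (mod q)
lhs = [(a + b) % q for a, b in zip(commit(s1), commit(s2))]
rhs = commit([a + b for a, b in zip(s1, s2)])
assert lhs == rhs

# Compressing: 16 message entries hash down to 4 field elements
assert len(commit(s1)) == n_rows < m_cols
```

Finding two short messages with the same commitment would yield a short kernel vector of A, i.e. a SIS solution, which is why the scheme is binding.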
So if we come up with zero-knowledge proofs for arithmetic circuit satisfiability, then we can target all sorts of interesting statements. And there are other practical reasons why circuits are a good choice, namely these compilers from other formats into arithmetic circuits.

As part of this work, we're largely focused on the communication cost of the protocol. If we're dealing with an arithmetic circuit with n gates, then we want the size of the zero-knowledge proof to be sublinear in the number of gates in the circuit. We also care about the cryptographic assumption we're using: we explicitly chose the SIS assumption, a lattice assumption, for our zero-knowledge proof because we wanted a protocol which was post-quantum secure. Lastly, there are lots of other lattice-based zero-knowledge proofs that don't target arithmetic circuits but more restricted statements. We wanted something that could deal with the full generality of arithmetic circuits, but we do also get efficient proofs of computation and verifiable computation in our protocols.

So here's a quick summary of our results. If there's an arithmetic circuit with n gates, then the prover can prove to the verifier that the circuit is satisfiable using roughly square root of n bits, so that's a square root cost relative to the size of the circuit. And the prover complexity and the verifier complexity are just quasilinear in the size of the circuit. This beats previous works, which all had linear communication complexity.

Okay, so how does our argument work? Well, there's a typical strategy for this sort of thing, and that's to take the arithmetic circuit, turn it into a collection of matrix equations, then some polynomial equations, commit to various coefficients of these polynomials, and this gives rise to the zero-knowledge protocol in the end. Now, the first part of this process is not new at all.
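To make the first step of that strategy, turning a circuit into matrix equations, a bit more concrete, here is a toy sketch in Python. The circuit and values are mine for illustration (the talk's slide example may differ); the talk returns to this encoding in more detail later.

```python
# Toy circuit over Z_p with two multiplication gates feeding one
# addition gate: out = (a*b) + (c*d).  (Illustrative example only.)
p = 101
a, b, c, d = 3, 5, 4, 6

# One entry per multiplication gate, split by wire role:
# left inputs, right inputs, and outputs.
L = [a, c]
R = [b, d]
O = [(a * b) % p, (c * d) % p]

# Multiplication gates: entry-wise product L o R == O  (mod p)
assert all((l * r) % p == o for l, r, o in zip(L, R, O))

# The addition gate's inputs are copies of the multiplication outputs,
# which is a (linear) wiring-consistency constraint...
add_left, add_right = O[0], O[1]
assert add_left == O[0] and add_right == O[1]

# ...and the gate itself is a linear relation mod p.
out = (add_left + add_right) % p
assert out == (a * b + c * d) % p
```

So multiplication gates become one entry-wise product equation over the wire matrices, while addition gates and wiring become linear consistency constraints.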
This is featured in lots of prior works, lots of discrete-logarithm-based protocols and different information-theoretic proofs. But when we tried to take the existing discrete-logarithm-based protocols and just translate them into the lattice setting, we ran into various problems. So we had to add some new stuff: we had to do some work with finite field extensions, and we had to come up with a new lattice-based proof of knowledge. These bits are really the interesting and novel parts of our work.

Okay, so first I'll talk about our new proof of knowledge and how it improves on existing lattice-based zero-knowledge proofs of knowledge. Here we have a SIS hash function: we've got this public matrix A and a hash T, and those are going to be the public parts, the statement. The prover is going to demonstrate to the verifier that they know a SIS pre-image without leaking any information about the pre-image. Actually, we won't do this for just one SIS pre-image at once; we'll do this for lots of SIS pre-images at the same time. So you can see this as a proof of knowledge for lots of SIS pre-images. Alternatively, the way we'll view it as a component of our arithmetic circuit argument later on is that we'll take roughly square root of n pre-images, and each pre-image will be a vector of length square root of n. This will prove that the prover knows n small hashed integers that are related to the arithmetic circuit. So this is going to be better than previous works, because previous works often needed O of lambda squared pre-images in order to get good asymptotic efficiency for their protocols.
They needed to wait some time before amortization benefits kicked in, but our protocols are actually efficient as long as you have at least lambda pre-images that you're proving, where lambda is just the security parameter.

Some other advantages that we have: typically, lattice-based zero-knowledge proofs of knowledge for pre-images have some gap between the completeness properties and the soundness properties. So first of all, if the prover knows some pre-image s where all the entries of s are less than some bound beta, the soundness guarantee of the protocol only proves that the prover knows some vector where the entries are less than k times beta. We call this k the soundness slack. Like other protocols, we do get some soundness slack, but it's not too much, just polynomial in the security parameter. Some other protocols actually fail to extract pre-images of the original hash; instead you get something like a pre-image of twice the original hash. We don't have anything like this: we manage to extract exact pre-images for the SIS hashes. But if we do allow a multiple like this, then we can get more efficient protocols.

Last of all, previous zero-knowledge proofs in this setting, if you wanted to prove pre-images of m hashes, required O of m sized proofs. Our proofs of knowledge actually scale very differently, more like O of lambda, the security parameter.
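As background for the protocol described next, here is a toy sketch of one round of the kind of amortized statement being proved: knowledge of m short pre-images, answered with a single masked combination selected by challenge bits. Parameters, bounds, and dimensions are illustrative assumptions of mine; the real protocol repeats this about lambda times, uses rejection sampling, and checks size bounds on the response.

```python
import random

random.seed(1)
q = 7681
n_rows, m_cols = 4, 16
m_pre = 5   # number of pre-images proved at once (toy value)

A = [[random.randrange(q) for _ in range(m_cols)] for _ in range(n_rows)]

def hash_A(v):
    """SIS hash: A*v mod q."""
    return [sum(a * x for a, x in zip(row, v)) % q for row in A]

# Prover's m_pre secret short pre-images and their public hashes T_i
S = [[random.choice([-1, 0, 1]) for _ in range(m_cols)] for _ in range(m_pre)]
T = [hash_A(s) for s in S]

# Round 1: prover picks a random blinding vector y, sends W = A*y
y = [random.randrange(-50, 51) for _ in range(m_cols)]   # toy bound
W = hash_A(y)

# Round 2: verifier sends one challenge bit per pre-image
c = [random.randrange(2) for _ in range(m_pre)]

# Round 3: prover folds in every pre-image selected by its bit:
# z = y + sum_i c_i * s_i
z = list(y)
for ci, s in zip(c, S):
    if ci:
        z = [zj + sj for zj, sj in zip(z, s)]

# Verifier checks A*z == W + sum_i c_i * T_i  (mod q),
# plus, in the real protocol, a size bound on z.
rhs = list(W)
for ci, t in zip(c, T):
    rhs = [(r + ci * ti) % q for r, ti in zip(rhs, t)]
assert hash_A(z) == rhs
```

The check passes because hashing is linear: A(y + sum c_i s_i) = Ay + sum c_i (A s_i) mod q.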
So this can be a lot less.

To show you how our protocol works, I'm going to start by showing you a very simplistic protocol, and then show you how it can be transformed into ours by some simple extensions. The prover is going to choose a random blinding vector y, hash that, and send it over to the verifier. The verifier is going to respond with a bit, and then, depending on whether the bit was zero or one, the prover is going to add the secret SIS pre-image onto the blinding value y, or not, and send this value z to the verifier. The verifier is going to hash z and check the value of z against the hashes it already knew, T and W. So that's a very simplistic zero-knowledge proof of knowledge of a SIS pre-image.

In our protocol, we actually want to prove that we know lots and lots of pre-images at the same time. So what should the prover do? Well, the prover does the obvious thing. Instead of just having one challenge c, which determines whether the secret is included in z or not, we use lots of challenge bits, which determine whether a particular SIS pre-image should be included in the sum.

I won't talk about completeness and the zero-knowledge property, because they're not where the main difficulties in the proof lie. But you can sort of see intuitively that this protocol is going to have knowledge soundness. If the prover could send a good response to the verifier which included s1, with some bits c2 to cm for the other challenges, and the prover could also send some z prime which didn't include s1 but had the same values for all the other random challenges, then as part of the security proof we could subtract one of these values from the other and recover s1. And this is basically enough to show that the protocol is knowledge sound, if we apply the same idea to all of the different pre-images. As part of the security proof, we can guarantee that we're going to be able to get responses like this from the prover using some kind of
probabilistic averaging argument.

Okay, but that approach doesn't lead to very good soundness; in the end we get a terrible soundness error. The simple way around this is just to repeat the protocol about a security parameter number of times. This means we use random challenge vectors of length about lambda instead of random challenge bits. If we measure the communication cost of the protocol we get in the end, against the number of pre-images that we're proving, the communication cost of the proof actually scales logarithmically in the number of pre-images rather than linearly, which is a big advantage over previous protocols.

Now, when we use this as a small component of our arithmetic circuit argument, we want to minimize the total size of all the commitments or hashes plus the total size of the proof. When we do this, the entire zero-knowledge proof of knowledge is going to cost about O of square root of n as a component of our circuit protocol. So here's a quick comparison with previous works: for us, the communication cost of our proof of knowledge scales linearly in lambda and logarithmically in m, the number of pre-images. So particularly when you have a large number of pre-images, this can be much better than the previous proofs.

Okay, so now I'll move on and talk a little bit about how our arithmetic circuit argument works. I'll start by giving a few details on these matrix equations and polynomials, and how the circuit is actually encoded as part of the argument. The high-level structure of the argument takes an arithmetic circuit, like the toy example on the right, looks at all the wire values for the arithmetic circuit, and splits everything up depending on whether a wire value is a left input, a right input, or an output of a particular gate. So you can see the three columns there, corresponding to left, right, and output. Then, to verify all the multiplication gates, we can check that the entry-wise
product of the two matrices at the top is equal to the third one. To check all of the addition gates, we can check that the sum of the two matrices at the bottom is equal to the third one. Now of course, you always have some output wires from some gates feeding into the inputs of other gates, so we also need to check that various values across these matrices are equal to one another. This whole thing gives rise to a way of checking the circuit, where we check some multiplication relations for all of the multiplication gates, and for the consistency checks across the matrices and the addition checks at the bottom, we have some linear consistency constraints. For a larger circuit, we just do much the same thing with larger matrices and similar consistency checks. And to get the best efficiency in the end, it turns out you want to choose matrices which are roughly square root n by square root n.

So the approach to giving a zero-knowledge proof for arithmetic circuit satisfiability is the same as in lots of previous arguments: the prover commits to some vectors, receives a random challenge x from the verifier, and then computes various different linear combinations of the vectors using the challenge x. Once the verifier receives those from the prover, the verifier essentially conducts a polynomial identity test, which has arithmetic circuit satisfiability embedded into the coefficients of the polynomial. So that's a specially designed polynomial.

With that in mind, here's a quick overview of what our protocol looks like. In the first step of the protocol, the prover is going to commit to all of the wire values from the matrices we saw earlier and send all the committed values to the verifier. After receiving a random challenge from the verifier, the prover is going to commit to the coefficients of some polynomial used in the verifier's polynomial identity test. Then we have another step where the prover commits to some mod p correction factors; I'll get into that in a
moment. Lastly, the prover computes some linear combinations of their committed vectors, does some rejection sampling on the result, runs a proof of knowledge for all the commitments in the protocol, and then sends the results to the verifier.

Okay, so these mod p correction factors: what exactly are they used for? Well, let's say we're computing a zero-knowledge proof. We're working in a ring Z_q, in which we have a SIS instance that we're using for hashing and committing, but we might be doing arithmetic circuit satisfiability modulo p, for a much smaller p. When you commit to stuff using a SIS-based commitment scheme, essentially everything you're committing to is really small, so you can treat all the calculations you do on those values as calculations over the integers. So if you're trying to prove something like arithmetic circuit satisfiability mod p, then the prover will need to commit to some extra mod p correction factors to turn this integer-like computation into a computation mod p, in order to check some kind of condition mod p.

At the end of the protocol, the verifier is going to check some size bounds on the information they receive, and check that all the linear combinations they received from the prover were correctly made up in terms of all of the prover's committed values. So in terms of the efficiency of the protocol and where the hard work is: the prover commits to about square root n vectors, each containing about square root n wire values, as part of the first step of the protocol, then about the same number of polynomial coefficients in the second step. There are just a constant number of vectors which make up these mod p correction factors, and finally the prover is going to send a constant number of vectors to the verifier, these linear combinations of commitment openings.

So this diagram just gives some intuition about how we choose parameters to make sure our protocol is secure. At the bottom we have p.
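As a tiny numeric illustration of these mod p correction factors (toy numbers of my own choosing): the committed values are multiplied as integers, so to assert a relation mod p, the prover commits to the multiple of p that bridges the integer result and the mod p result.

```python
# Toy mod-p correction factor: the SIS commitment scheme works over
# small integers (inside Z_q), but the circuit relation is mod p.
p, q = 101, 2**20        # illustrative moduli, p much smaller than q
x, y = 57, 93            # two committed wire values
z = (x * y) % p          # the claimed product mod p
k = (x * y - z) // p     # correction factor: x*y = z + k*p over the integers

# The relation the prover can now show "over the integers":
assert x * y == z + k * p

# As long as everything stays well below q, no wraparound occurs in Z_q,
# so the integer relation really does certify the relation mod p.
assert k * p < q
```

The parameter discussion that follows is exactly about keeping this chain of size bounds (values, correction factors, soundness slack) below the SIS modulus q.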
We might be trying to verify an arithmetic circuit modulo p. Since the values mod p are much smaller than the binding space of the commitment scheme, once the prover does some calculations on these values, the maximum size of the values that the prover needs to commit to as part of the protocol is a bit bigger. Then, due to the soundness slack appearing on an earlier slide, the maximum size of the openings that you can guarantee through knowledge soundness is a bit bigger than that. In order for our protocol to be secure, we need the binding space for the commitment scheme to be a little larger still, and so the modulus q for the SIS instance has to be even bigger than that. But luckily, there's just a polynomial-size gap between these two. Some back-of-the-envelope calculations revealed that maybe q has to be about p to the power 5 or p to the power 6, something like that.

Okay, there are some additional issues to take care of too. This kind of protocol in the discrete logarithm setting uses polynomial identity testing, and since the prime that you're using in a discrete logarithm setting is very large, the probability that something will go wrong with the polynomial identity test is very small. But when we work in a SIS-based setting, the primes involved are much smaller, so this isn't the case. To get around this, we adapt some finite field extension techniques from previous work, and we also have some clever ways of embedding base field operations into operations on field extension elements. This is basically a trick to get much better soundness as part of the polynomial identity test.

So this protocol achieves square root communication complexity in the end, in the number of gates in the arithmetic circuit. But it's a translation of discrete-logarithm-based protocols, and the best discrete-logarithm-based protocol achieves logarithmic communication cost. So this is a good result, I think, but we still have some work to do. Thanks very much.