Okay. Great. All right, so this is joint work on zero-knowledge IOPs with a linear-time prover and polylogarithmic-time verifier, joint work with Alessandro Chiesa at EPFL and Siqi Liu from UC Berkeley. Go to the next slide. Not using the pointer. Okay.

So this talk is about zero-knowledge proofs and arguments. In the setting of zero-knowledge proofs, of course, we have a prover and a verifier, and the prover wants to convince the verifier that a statement is true. In this case, we're going to consider NP statements: arithmetic circuits over a finite field, where the prover wants to convince the verifier that the circuit is satisfiable. So the prover knows a secret witness which feeds into this circuit and gives the stated output. The prover and the verifier are going to interact, exchanging messages in a conversation, and at the end of the conversation the verifier is going to accept if they were convinced that the prover really knows a witness and the circuit is satisfiable, and if not, they're going to reject.

All zero-knowledge proofs should have three important properties. First, completeness, which asks that if the prover is telling the truth and really knows a witness for the circuit, then the verifier should accept. Second, soundness: if the prover is cheating and the circuit is not satisfiable, then the verifier should reject. And finally, zero knowledge: throughout the course of this conversation, the verifier shouldn't learn anything more than the fact that the circuit is satisfiable, and in particular, nothing about the witness.

So we can ask: what's the ideal zero-knowledge proof system in terms of efficiency properties? What's the best that we could possibly do? We'd want a zero-knowledge proof which works for circuits over any finite field F, and we'd want the proving costs to be roughly the same as the cost of computing your way through the circuit. It costs n field operations to evaluate this arithmetic circuit, so ideally the prover should cost O(n) field operations as well. And if proving is as fast as computing, we want verification and communication costs to be minimal, so we can set targets of polylog(n) bits of communication between the prover and the verifier, and polylog(n) field operations for the verifier. Of course, you could try to do better, maybe with constant-size proofs, but those necessarily require really strong assumptions, so polylog(n) is a good target under standard assumptions.

Now, there's one subtlety here: the verifier normally has to read the statement being verified in full. So to achieve this polylogarithmic verification time, we also add an indexer, which preprocesses the entire circuit once for the verifier so that the verifier doesn't have to read the whole thing. Once this preprocessing has been done, and we allow linear-time preprocessing, the verifier can run in, say, polylogarithmic time. So this is our target.

Now, people have done lots of work on reducing the proof size and the verification time for zero-knowledge proofs, and the really difficult thing has been to get O(n) field operations for the prover. This has been difficult because lots of popular proof systems use tools like fast Fourier transforms: large FFTs and polynomial multiplications for large polynomials of degree O(n), where n is the number of wires in the circuit.
So proof systems like FRI use big Reed-Solomon encodings, and other proof systems like Groth16 use large polynomial multiplications, which means O(n log n) field operations from fast Fourier transform algorithms. On the other hand, other proof systems like Bulletproofs use algebraic commitments, like big Pedersen commitments, where you do a group exponentiation for each wire value in the circuit, and O(n) group exponentiations cost O(λn) field operations. So that's a lot more than the O(n) field operations that we're aiming for.

There are some prior works which do achieve this O(n) prover time. There are a few of them, and I'm just going to show the one with the best verifier complexity and proof size here. So far, before this work, we could achieve O(n) field operations of prover complexity and, for any epsilon, n^ε verifier complexity and proof size, so you could have any sublinear verifier complexity and proof size that you wanted. But this is still only sublinear; it falls short of our target, and this proof system didn't achieve zero knowledge either.

Notably, there's only one strategy so far for constructing linear-time zero-knowledge proofs of this kind. This argument system came from an information-theoretic proof system, an IOP with very similar complexity parameters, which was then converted into an argument by applying a hashing transformation using special hash functions that only incur constant computational overhead, so you can hash n things using O(n) field operations. So far, this is the only known strategy for constructing linear-time zero-knowledge proofs: go via this information-theoretic construction plus hashing.

So then the question for us becomes: can we look at the information-theoretic IOP construction at the bottom, improve its verifier complexity and query complexity, and add zero knowledge, so that we can apply this hashing transformation and get an argument of the type that we would like? That was our main challenge in this work: reducing the query complexity and verifier complexity and adding zero knowledge.

So, here are our results. Of course, I'm talking to you now, so we managed it. We successfully constructed a zero-knowledge IOP with a linear-time prover; we reduced the verifier complexity to polylogarithmic and the number of queries to logarithmic. And via the same hashing transformation, this gave us an argument with similar complexity parameters. Subsequent to our work, people have optimized a little more. Our work is very focused on achieving these goals asymptotically, and subsequently people have come up with some more concretely efficient proof systems with the same asymptotics, something that you would be happy to actually run on your computer.

So, let me talk a little bit about how we achieved our result. First, I'll tell you what interactive oracle proofs are. These are very similar to normal zero-knowledge proofs or interactive proofs, except that instead of the verifier reading the prover's messages in their entirety, the prover sends proof oracles to the verifier, which you can think of as committed data. The verifier then makes queries to those proof messages instead of reading them. Other than that, the prover and verifier interact exactly as before.
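To make the idea of a proof oracle and the hashing transformation a bit more concrete, here is a minimal Python sketch (my own illustration, not the construction from the talk): the prover commits to a proof message with a Merkle tree, and point queries are answered by openings with authentication paths. The names `ProofOracle` and `verify_opening` are hypothetical, and the actual linear-time compilers use special constant-overhead hash functions rather than SHA-256.

```python
import hashlib

def H(data: bytes) -> bytes:
    # Placeholder hash; the linear-time arguments in the talk rely on special
    # constant-overhead hash functions rather than SHA-256.
    return hashlib.sha256(data).digest()

class ProofOracle:
    """Commit to a proof message so that the verifier can make point queries."""
    def __init__(self, message):
        self.message = list(message)
        layer = [H(str(x).encode()) for x in self.message]
        # Pad the leaf layer to a power of two, then build the Merkle tree.
        while len(layer) & (len(layer) - 1):
            layer.append(H(b"padding"))
        self.layers = [layer]
        while len(layer) > 1:
            layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
            self.layers.append(layer)
        self.root = layer[0]  # the verifier only needs to hold this short root

    def open(self, i):
        """Answer a point query: the value at position i plus an authentication path."""
        path, idx = [], i
        for layer in self.layers[:-1]:
            path.append(layer[idx ^ 1])  # sibling hash at this level
            idx //= 2
        return self.message[i], path

def verify_opening(root, i, value, path):
    """Check a claimed point-query answer against the committed root."""
    node, idx = H(str(value).encode()), i
    for sibling in path:
        node = H(node + sibling) if idx % 2 == 0 else H(sibling + node)
        idx //= 2
    return node == root

# Example: commit to a short proof message and open position 5.
oracle = ProofOracle([3, 1, 4, 1, 5, 9, 2, 6])
value, path = oracle.open(5)
assert verify_opening(oracle.root, 5, value, path)
```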
Depending on what type of IOP you'd like to construct, we can allow different types of queries on the prover's messages. Our main result is a point-query IOP, in which the verifier reads individual positions of the prover's messages. But as a stepping stone in the construction, we also use tensor-query IOPs, in which the verifier can request structured linear combinations of the prover's messages; these are a special case of linear-query IOPs, where the verifier can request any linear combination.

Okay. So prior work used an approach where there was a tensor-query IOP with a linear-time prover and a somewhat good sublinear verifier; that's the thing you can see in green on the left-hand side of the slide. Then this had to be converted into a point-query IOP with sublinear complexity. That transformation from a tensor-query IOP to a point-query IOP makes use of a linear error-correcting code and a special consistency test which checks that nothing goes wrong when the tensor-query IOP is converted into a point-query IOP. And once we have the point-query IOP, we can apply the hashing transformation that I mentioned before to get an efficient zero-knowledge argument.

Now, the tensor-query IOP was pretty good; that wasn't the bottleneck in the final result. But unfortunately, the sublinear verifier complexity of the consistency test in this compiler led to the sublinear verification costs in the resulting point-query IOP. Everything else was fine: the prover time was good enough, and there are good enough error-correcting codes to make this transformation efficient enough for the prover. But we had to improve the verifier complexity and the query complexity of this point-query IOP. We did that by applying some proof composition techniques to improve the efficiency of this consistency test, and once we had plugged this new, improved consistency test into the compiler, we were able to get the verifier complexity that we wanted.

We also had to add zero knowledge to the ingredients in our construction in order to make the final result zero knowledge. So we had to make the original tensor-query IOP zero knowledge, and for the codes that feed into this compiler procedure, we had to construct some special zero-knowledge linear error-correcting codes with the right efficiency properties. This was enough to make our final construction zero knowledge.

For the rest of the talk, I'm not going to go into much detail about the underlying ingredients from the previous work, only as much as necessary. But I am going to describe how we added zero knowledge to the tensor-query IOP and the error-correcting code, and a little about how we applied these proof composition techniques to improve efficiency.

So first I'll talk about the tensor IOPs. In this prior work, we constructed tensor IOPs for circuit satisfiability over a field via the R1CS problem. This is an NP-complete problem that's used to argue about circuit satisfiability in lots of practical zero-knowledge proofs. It involves three matrices A, B and C, which encode the wiring information of the arithmetic circuit, and a witness vector z, which basically encodes the wire values of the arithmetic circuit. The circuit satisfiability problem is transformed into a set of linear-algebraic conditions on z and some other vectors z_A, z_B and z_C.
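Just to pin down the R1CS condition in symbols, here is a minimal Python sketch (my own, assuming dense matrices over a toy prime field rather than the representation used in the paper): satisfiability means that with z_A = Az, z_B = Bz and z_C = Cz, the entrywise product of z_A and z_B equals z_C.

```python
P = 2**61 - 1  # a prime modulus standing in for the finite field F

def mat_vec(M, z):
    """Matrix-vector product over the field."""
    return [sum(a * b for a, b in zip(row, z)) % P for row in M]

def r1cs_satisfied(A, B, C, z):
    """Check the R1CS condition (A z) o (B z) = (C z), where o is the entrywise
    (Hadamard) product; A, B, C encode the circuit wiring and z the wire values."""
    zA, zB, zC = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
    return all((a * b - c) % P == 0 for a, b, c in zip(zA, zB, zC))

# Example: one constraint expressing x * y = w over z = (1, x, y, w).
A = [[0, 1, 0, 0]]   # selects x
B = [[0, 0, 1, 0]]   # selects y
C = [[0, 0, 0, 1]]   # selects w
assert r1cs_satisfied(A, B, C, [1, 3, 5, 15])
```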
In the tensor-query IOP from prior work, the most important thing about the construction is that the prover starts by sending all the witness values from the R1CS instance to the verifier, who gets tensor-query access to them. So the first step is pretty basic, and afterwards there's some other stuff going on that we don't really care about for the purposes of this talk.

Now, if you look at what a tensor query to any of these witness elements is, it's a linear combination of elements of the witness. Obviously, a linear combination of witness elements gives you some information about the witness, so the tensor-query IOP as it was in this previous work just trivially leaked all kinds of information about the witness. The way we solved this was by making all of the tensor query answers look uniformly random. Our first step was to pad the R1CS instance and witness with randomness, and we did this in a careful way so that after padding we still had something which satisfied a new, augmented R1CS instance. After that, we could just run the same protocol as before with minimal changes, because we'd maintained this R1CS, or circuit satisfiability, structure.

To pad carefully, we introduced some R1CS gadgets: tiny, very simple R1CS instances for which we could sample random solutions by sampling a and b uniformly at random. Then you repeat this mini R1CS instance s times, according to the number of queries that will be made to the R1CS witnesses, and pad the original R1CS instance and the original witnesses with vectors that come from these R1CS gadgets. Now, the random padding concatenated onto the witness and onto z_A, which you can see on the slide, means that there's a random component in the R1CS witness, so every tensor query that you make looks uniformly random. If we choose s large enough, there are enough random parts of this augmented R1CS witness to make all the tensor query answers independently and uniformly random. So that was zero knowledge for the tensor-query IOP.

Next I'll talk about the modifications that we had to make to the error-correcting codes in our construction. Prior work involves a code-based compiler where you take the tensor-query IOP that we had before and convert it into a point-query IOP. We do this by encoding every single message in the tensor-query IOP using a suitable code. After we've converted to a point-query IOP, the verifier is no longer allowed to make the tensor queries that they made before; they can only make point queries. So to simulate the entire tensor-query IOP protocol, the verifier addresses the tensor queries directly to the prover, who just responds with the answers to these queries.

Now we have two new problems. Firstly, we have to trust that the prover encoded all the tensor IOP proof messages correctly. And secondly, we have to trust that the prover actually provided the correct tensor query answers, now that they're sending them by themselves. In prior work, this is done via a special consistency check, like another point-query IOP, and this was the problematic point-query IOP whose query complexity and verifier complexity were too large. But before going on to that, we have to look carefully at these encodings. So, what choice of encoding did we use in prior work? We chose a tensor code.
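To see why unprotected tensor queries leak information, here is a small Python sketch (again my own illustration, with hypothetical function names): a tensor query specified by vectors q0 and q1 is answered by the inner product of q0 ⊗ q1 with the witness, i.e. a plain linear combination of witness entries, and the fix sketched here simply appends random field elements as padding. In the actual construction that padding comes from the R1CS gadgets, so an augmented instance is still satisfied; the sketch only models the randomness itself.

```python
import secrets

P = 2**61 - 1  # prime modulus standing in for the field F

def tensor_query(q0, q1, w):
    """Answer the tensor query <q0 (x) q1, w>, viewing w as a len(q0)-by-len(q1)
    grid. The answer is a structured linear combination of witness entries, so
    without padding it reveals information about w."""
    k0, k1 = len(q0), len(q1)
    assert len(w) == k0 * k1
    return sum(q0[i] * q1[j] * w[i * k1 + j]
               for i in range(k0) for j in range(k1)) % P

def pad_witness(w, s):
    """Append s uniformly random field elements. In the construction from the
    talk these come from small random R1CS gadget instances, so the padded
    vector still satisfies an augmented R1CS instance; here we only add the
    randomness that makes query answers look uniform."""
    return list(w) + [secrets.randbelow(P) for _ in range(s)]
```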
You construct a tensor encoding by taking a linear code, and then, if the original tensor IOP proof message is arranged in a hypercube, you encode the message one dimension at a time, using a base error-correcting code C in each dimension. Then we get a collection of tensor codewords; on the slide, you can see a collection of tensor codewords of rank two, which have been encoded in two different directions. As part of the consistency check, the verifier queries this new encoded cube along a stripe, and the efficiency properties of the protocol are directly inherited from the side length of whatever hypercube we chose to begin with. So with side length n^(1/3), as on the slide, we would get O(n^(1/3)) query complexity and verifier complexity.

Now, if the original tensor-query IOP ran in linear time, we want this simulated point-query IOP, where everything is encoded, to run in linear time as well, so we need the tensor code to be encodable in linear time. We also want zero knowledge, so queries to this encoding shouldn't leak any information. The first requirement is pretty easy to satisfy: if we have a linear-time encodable code, for which we know constructions by Spielman or Druk-Ishai, then for constant t the t-fold tensor code is also encodable in linear time. But the zero-knowledge property, that queries to the encoding don't leak any information, wasn't so obvious.

We fixed this by constructing some linear-time zero-knowledge tensor codes, starting with a zero-knowledge base code: if we encode a message with some appended randomness, then up to b queries to the encoded base codeword just look uniformly random. Starting with a linear-time zero-knowledge base code, we showed that in a tensor code, where the message is arranged in some kind of square or cube, if you arrange the randomness in the right way, then zero knowledge of the code is actually preserved under tensor products. So you can make the same number of queries to the tensor codeword that you would have made to the base codeword, and they still look uniformly random. We proved this by investigating some characterizations of what it means for a code to be zero knowledge, and as a result of studying these characterizations, we also came up with a new construction of linear-time encodable zero-knowledge base codes based on the Druk-Ishai code.

Okay, last, I'm going to talk about how we reduced the query complexity of the consistency test from prior work to get a more efficient verifier and smaller proof size. We've already seen that if you arrange the tensor IOP messages in a hypercube, then the verifier complexity and communication complexity scale with the side length of this cube. And we can only choose a constant dimension for this cube, otherwise we run into problems with the prover time and other issues inside the proof. So if we choose the dimension to be constant, we can get any sublinear verifier time, but this is still too much; we want logarithmic or polylogarithmic. So what do we do? Proof composition. We use another proof system to show that the verifier would have accepted in the original consistency test, and this is going to be more efficient, use fewer queries, and have a more efficient verifier. So in particular, we run the BCG20 consistency check, minus this problematic sublinear number of point queries.
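Returning for a moment to the tensor-code encoding described above, here is a sketch of the structure (my own toy example: a trivial linear "prefix-sum" base code purely to show the shape, not the linear-time or zero-knowledge codes the construction actually needs): arrange the message in a square, encode every row with the base code, then encode every column of the result, giving a rank-two tensor codeword.

```python
P = 2**61 - 1  # prime modulus standing in for the field F

def base_encode(msg):
    """Toy linear base code: the message followed by its prefix sums. The real
    construction needs a linear-time encodable code (e.g. Spielman or
    Druk-Ishai), with appended randomness for the zero-knowledge variant."""
    out, acc = list(msg), 0
    for x in msg:
        acc = (acc + x) % P
        out.append(acc)
    return out

def tensor_encode(message, k):
    """Rank-two tensor encoding: arrange a length k*k message in a k-by-k grid,
    encode every row with the base code, then encode every column of the
    result, yielding (for this rate-1/2 toy code) a 2k-by-2k codeword grid."""
    assert len(message) == k * k
    rows = [base_encode(message[i * k:(i + 1) * k]) for i in range(k)]
    columns = [list(col) for col in zip(*rows)]          # transpose
    encoded_columns = [base_encode(col) for col in columns]
    return [list(row) for row in zip(*encoded_columns)]  # transpose back

# Example: encode a 2-by-2 message; the result is a 4-by-4 tensor codeword.
codeword = tensor_encode([1, 2, 3, 4], 2)
```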
The new instance is the verification equations from this consistency check, and the new witness is the answers to the queries that the verifier would have made. We have a new prover and a new verifier, and they run a very powerful construction of a point-query IOP, or even a PCP, from 2009, which only makes a constant number of point queries. We have to be a little bit careful here, because we know that the original protocols we were dealing with, the original consistency check and tensor-query IOP, run in linear time; but for the new constant-query IOP that we're using, we only know that the prover runs in polynomial time, not necessarily linear time. So we had to be a little careful and note that we can choose the size of the new instance to be any constant root of the number of wires in the circuit, and we could choose this small enough that it counteracts any polynomial blow-up caused by invoking this new proof system, which runs in polynomial time. In that way we could maintain a linear-time prover. We also showed some extra results that we needed, like the fact that the zero-knowledge property is preserved under proof composition: basic facts that might be useful in future works that try to use proof composition of IOPs.

So let me summarize the contributions of our work. We reduced the query complexity and verifier complexity of linear-time-prover IOPs from any sublinear value to logarithmic and polylogarithmic, respectively, and added zero knowledge. This gave us a succinct zero-knowledge argument with very similar properties via the hashing transformation. We also developed some new tools which should be useful in future works aiming to construct zero-knowledge IOPs, such as these R1CS gadgets and the fact that zero-knowledge codes are preserved under tensor products, which is going to be useful because at the moment some kind of tensor code is present in all constructions of linear-time IOPs. And finally, we investigated the properties of zero knowledge under proof composition.

So that's the end of my talk, and I'll be happy to take any questions. Thank you very much.

Awesome. Could you come up to the mic, so the people on Zoom can hear you? Right over there. Does it work? Oh, perfect.

So, one of the properties you claimed as the holy grail at the beginning was that you would like to be able to do proofs over any field. But in this result, for instance, you showed that your field size needs to be of the same order as the circuit size. Can you indicate where the bottleneck for that is in the proof? Is it in the original IOP that you use? And how could you potentially try to get around it?

Sure. The bottleneck there actually comes from the very beginning, in the tensor-query IOP, before most of the compiler stuff. The conversion from tensor-query to point-query IOP, I think, works just fine over small fields. But in the tensor-query IOP, we actually use some high-degree polynomials, of degree N, and the Schwartz-Zippel lemma as part of the proof, and this gives us soundness errors like N over the size of the field. That's the reason. So the way to improve that is to try to use fewer high-degree polynomials in the proof system.

Okay, thank you very much.

Awesome. So I think we're running a bit behind schedule, so maybe we'll take the rest of the questions offline and thank Jonathan again. And I believe our next talk is via Zoom slash video, so it might take a minute to get set up.
So I'll say the title while they're working on it. It's Non-Interactive Zero-Knowledge Proofs with Fine-Grained Security by Yuyu Wang and Jiaxin Pan.