Thank you for the introduction. The key aim in this work was to find a better method than trusted setup for generating the public parameters for zero-knowledge SNARKs. Zero-knowledge SNARKs are a particular type of zero-knowledge proof that has been receiving a lot of interest lately, both from academia and from industry, due to their application to scalable systems. However, they come with one big flaw: they have a trapdoor that can be used to forge proofs, even for statements that are false. Previously this has been tackled at the implementation layer, with multi-party computations designed to work around the problem. We feel it would be better to address the issue by building a better trust model at the theory layer, and that is exactly what we try to do in this paper.

What are zero-knowledge SNARKs? They are zero-knowledge succinct non-interactive arguments of knowledge, and their two key properties are very small proofs and very fast verification. On the downside, they require a trusted setup — this is what we mean by there being a trapdoor — and they rely on knowledge-of-exponent assumptions. We managed to build a zero-knowledge SNARK in our updatable and universal setting, and we managed to keep the efficiency of previous schemes; however, we are still using knowledge-of-exponent assumptions.

When would you use SNARKs? They work best when you are proving the same kind of statement over and over again, so in the amortised setting. They also fit anywhere you need small proofs and fast verifiers — exactly their two key properties — and this is great for blockchains. On a blockchain, all information has to be stored forever, so you need things to be small; and while proofs are generated by one party, they have to be verified by every single party in the network.
So if you have large verification costs, that is really going to be a problem. Where SNARKs are not so good for blockchains is, of course, this trapdoor. The trapdoor cannot be used to break zero-knowledge, but it can be used to break soundness. When SNARKs were originally introduced they were designed more for verifiable computation, so this was less of a problem, because you can rely on the verifier to run the setup and discard the trapdoor. But in a distributed system, where the entire philosophy is that there should not be a trusted third party anywhere in the system, it is less ideal.

So, our contributions. We introduce the updatable trust model, which I would say is something of a compromise between trusted setup and untrusted setup. We show that it is very feasible by building an efficient SNARK in it. As a byproduct, I would say, we also achieve universality, and that might be an even bigger argument for using our scheme. And in order to build our SNARK, we introduce a null space argument.

I am not actually going to explain what zero-knowledge is, but the key thing here is that there is a trapdoor embedded in a common reference string which is shared by both the prover and the verifier. Unlike in other zero-knowledge proof systems, it is very hard to argue for SNARKs why this trapdoor would not be leaked to some party. A couple of years ago there was a paper showing that it is impossible to have a non-interactive zero-knowledge system that is both subversion sound and zero-knowledge. But you can have something that is subversion zero-knowledge, i.e. even a verifier who does have the trapdoor cannot learn anything about the witness. This is possible, we want it, and we do achieve it for our scheme. However, they showed it is not possible to be subversion sound at the same time, so there is always going to be a trapdoor that a prover could use.
However, what we try to do is argue why it might be difficult to actually get hold of this trapdoor. There are NIZK systems out there where setup can be done in the random oracle model, and it is not difficult — and the reason it is not difficult is that their reference strings have no structure. But for SNARKs, pretty much all of the efficiency is gained by doing a lot of pre-computation and sticking the result into the common reference string. If you were to get rid of all of that pre-computation — well, we simply do not know how to build these systems without it.

As an example of what a trusted setup looks like: there was one run a couple of years ago, and a more recent one, by Zcash. They ran a multi-party computation to generate the public parameters used in the Zcash system so that they can use SNARKs. Provided there was a single honest participant among the parties running the multi-party computation, the system should be secure. However, there is no way of knowing whether the setup was compromised or not. Moreover, this is something they have now run twice, and the second run was probably a good thing, because they were able to include more parties. But if you think about it, if you have to run a new trusted setup every single time you want to make a security improvement — or if you are a less well-known application where it is very hard to get a high level of participation in the ceremony — this becomes more and more of a problem. So what we want is a single setup, an updatable one in our case, that works for every single application; we do not want to be doing one per application.

So now I am going to explain the notion of updatability. In the theory world, there is a perfectly trusted party who generates your common reference string, and it is secure because you trust them.
You get around this at the implementation layer by running a multi-party computation, which is secure provided one of the participants is honest. But this process has to have an endpoint: at some point you decide to go live with your system, and these are the public parameters you use. In our system, we do not need an endpoint. For example, I could generate the first lot of public parameters and broadcast them. Then someone else could come along, update my parameters, and provide a proof of update; provided either I or the updater was honest, the common reference string is secure. Someone else can then update that, and provided either I, the first updater, or the second updater was honest, the system is secure. And you can just keep doing this, even after the system goes live — although of course you probably want a good few updates before going live. So in our eyes this is no longer really a setup, because any party actually using the scheme can take part in it if they want.

Just to highlight the two key differences: in a trusted setup, the setup has to be completed before the system goes live, and it is secure provided there was a single honest participant. With an updatable CRS, the parameters can be updated at any point in time, and the CRS is secure at any time after a single honest participant has contributed.

How do we do this? I was saying before that SNARK common reference strings have structure. Essentially, that structure comes from there being secrets in the exponent — secret evaluations of known polynomials. When the polynomials in question happen to be monomials, updating is very easy. Say your monomial is the simplest possible one, g^x. The first person would just compute g^{x_1} and provide a proof of knowledge of x_1.
The second person would then update that: they compute g^{x_1 x_2}, which they can do because they have g^{x_1}, and provide a proof of knowledge of x_2 — and you can keep doing this. And if you also had g^{x^2}, say, you would be able to check consistency between the powers, because we are working in pairing-based groups. This is great, because there are schemes in the literature that use only monomials, namely the first two SNARKs that were introduced, one by Groth and the other by Lipmaa. The trouble is that those works have quadratic prover time. We have improved SNARKs a lot since then, mostly due to a breakthrough work by Gennaro et al., which put circuit-dependent hidden polynomials into the common reference string. What this did is let a lot of the information related to a particular circuit be pre-computed, taking the prover time down from quadratic to quasi-linear — which is why we are actually able to use these things in practice now, where we were not before.

And updating hidden polynomial evaluations is really hard, because you have terms which are correlated, and if you want to take them apart and update them, you do not have the original secrets to do so — that is the whole point of updatability. By hard, we do not just mean that we cannot think of a way to do it; we mean it is provably hard. For example, suppose you have g^{f(x)·δ}. Any adversary that is able to update this is able to extract the monomials g^δ, g^{xδ}, g^{x²δ}, et cetera — every single monomial that was used to compute f(x). This is something we prove in the paper. So the moral of the story here is that we cannot rely on hidden polynomial evaluations in our common reference string when we are building an updatable zero-knowledge SNARK.
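The monomial update chain described above can be sketched in code. This is a minimal illustration in a toy multiplicative group modulo a prime — the function names (`initial_crs`, `update_crs`) are mine, not from the paper, and a real scheme works in a pairing-friendly elliptic-curve group with each updater also publishing a proof of knowledge of their secret exponent:

```python
# Toy updatable CRS from monomials. Illustrative only: no pairings,
# no proofs of knowledge, and Z_P^* is not a secure choice of group.
P = 2**127 - 1   # a Mersenne prime; the group is Z_P^* under multiplication
G = 3            # a fixed group element used as the generator

def initial_crs(x, d):
    """First participant picks secret x and publishes g^x, g^{x^2}, ..., g^{x^d}."""
    # Exponents are reduced mod the group order P-1 (Fermat's little theorem).
    return [pow(G, pow(x, i, P - 1), P) for i in range(1, d + 1)]

def update_crs(crs, y):
    """An updater with secret y maps g^{x^i} -> (g^{x^i})^{y^i} = g^{(xy)^i},
    using only the published CRS -- never the original secret x."""
    return [pow(h, pow(y, i, P - 1), P) for i, h in enumerate(crs, start=1)]
```

After the update the effective secret is the product x·y, so the CRS stays secure as long as at least one contributor keeps their exponent hidden. The consistency between the powers — that element i really is the i-th power of the same secret — is what the pairing checks enforce in the real construction.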
And this is highly related to why we get a universal setup: for the updatable setup, we cannot rely on circuit-dependent things in the common reference string, and for a universal setup — which means you have the same setup for every single circuit — you also cannot rely on circuit-dependent things in the common reference string. So essentially, in solving one problem, we have also solved the other.

Roughly, how do we do this? We start with a global common reference string that contains just monomials. It is completely independent of the circuit, and it is not used by either the prover or the verifier — it is only used for updating. We then have a publicly run derive algorithm. All it needs is the global common reference string; it needs no secrets, so anyone can run it, and it outputs a circuit-dependent derived common reference string. These derived strings we are not able to update, but we can generate proofs and verify proofs using only the derived common reference string. Essentially, each derived common reference string is equivalent to the output of one of the trusted setups in the previous schemes.

Roughly, how much does this cost? The global common reference string is quadratic in size, and I will explain why. One caveat: you do not need to store the whole chain of updates, only the most recent two. The update proofs are very small; however, you do need to store every single one of them, and they have to be stored sequentially — if you swap the order, things will no longer verify. The derive algorithm costs a cubic number of multiplications due to Gaussian elimination, but you can run multiple updates between each derivation; you do not have to run it all the time. And the derived common reference string is linear in size, which is all the prover and the verifier need to store.
So now we have a derived common reference string that we can use for proving and verifying, but how do we get our security proof to go through when the prover has access to all of the monomials in the global common reference string? The answer is that we use a null space argument. We start with a common reference string that contains monomials in two variables — actually three, but here I will assume two. The prover needs to show that some component they have computed is in the span of known polynomials, evaluated at a hidden point; they want to keep the exact weights secret.

We now use the rank–nullity theorem from linear algebra, together with the fact that the row span of a matrix is orthogonal to its null space. That means that if A is in the correct span, then the dot product of A with anything in the null space equals zero; and if A dot-products to zero with every single element of a null space basis, then it is in the correct span. This is something the verifier can check, because it has pairings. However, it does have to keep the different null space vectors separated, because the product needs to be zero at every one of them — and this is why we need many different values of z. In the actual scheme you have more than one polynomial, but that is fine; it does not affect things too much. What does affect things is that our null space is actually quite big: it is linear in size. The reason is that when you take our polynomial constraints and lay them out in a matrix, the matrix is wider than it is tall. The width is roughly three times the number of multiplication gates, and the height is the number of wires, which, because we are working with fan-in-two gates, is roughly two times the number of gates.
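The span-membership test via the null space can be sketched in the clear over a small prime field — the verifier in the talk does the same check "in the exponent" using pairings, which this toy version omits; the helper names here are mine:

```python
# Sketch: test membership in the row span of a matrix over F_p by
# checking orthogonality against a null space basis (rank-nullity).
p = 101

def nullspace(M):
    """Basis of {z : M z = 0 (mod p)}, via Gaussian elimination to RREF."""
    rows = [[v % p for v in r] for r in M]
    ncols = len(M[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue  # no pivot in this column: c is a free column
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], -1, p)           # modular inverse of the pivot
        rows[r] = [(v * inv) % p for v in rows[r]]
        for i in range(len(rows)):             # clear column c everywhere else
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(vi - f * vr) % p for vi, vr in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    # One basis vector per free column: set it to 1, back-substitute pivots.
    basis = []
    for fc in (c for c in range(ncols) if c not in pivots):
        z = [0] * ncols
        z[fc] = 1
        for ri, pc in enumerate(pivots):
            z[pc] = (-rows[ri][fc]) % p
        basis.append(z)
    return basis

def in_row_span(M, a):
    """a lies in the row span of M iff a . z = 0 for every null space basis z."""
    return all(sum(ai * zi for ai, zi in zip(a, z)) % p == 0
               for z in nullspace(M))
```

The size issue from the talk is visible here too: the number of checks equals the number of null space basis vectors, which for a wide, short matrix is large.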
By the rank–nullity theorem, the dimension of the null space equals the width of the matrix minus its rank, and the rank is bounded by whichever is smaller, the width or the height — in our case the height. So with width roughly 3n and height roughly 2n for n gates, the null space has dimension roughly 3n − 2n = n: linear in the number of gates. So, an open problem for you: if you can make this matrix more square, then we might be able to do something about that quadratic cost.

I am just going to finish up by saying what our SNARK looks like. First, we check the null space argument, which we do by checking a pairing. Second, we need a proof of knowledge, which we get by replicating the element that is in the correct span on the other side of the pairing group and checking that with a pairing. Third, we need to check that the circuit has actually been evaluated correctly; I have not described how this works, but the details are all in the paper, so feel free to check that out. Overall, our proofs consist of just three group elements, exactly matching the state of the art. We also require a linear number of group exponentiations, again matching the state of the art — perhaps not in constants, but certainly asymptotically. And our scheme requires five pairings for the verifier, versus four for the state of the art, so we are really very close.

So overall, our key contribution is that we have introduced this notion of updatability, and we have shown that it is possible to build an updatable and universal common reference string from which you can build a zero-knowledge SNARK matching the state of the art. This last slide I have mostly included because I like the picture. Thank you very much.