PLONK is a new piece of technology that we developed in collaboration with Protocol Labs: a new universal SNARK. So, as a bit of an outline then, at a high level, I'd like to give a bit of an overview about why SNARKs are useful, why they're important, and what we're trying to do with them, and then go into some low-level depth about how PLONK, this new universal SNARK, is constructed, and try, I guess, to dispel some of the mysticism behind SNARKs and actually provide some insights and intuitions behind how they work. So, what's the fuss about SNARKs? I imagine everyone here is relatively familiar with the high-level stuff: you can use SNARKs to effectively make computation free on Ethereum by just serving a proof of a computation, instead of actually requiring every node to perform the computation themselves. In addition to that, you can also add privacy, because with a SNARK you can hide the inputs and outputs to your proof. The apotheosis of this is not just hiding the inputs and outputs to your proof, but also hiding the identities of the people who are sending transactions, and hiding the actual smart contract code that is being executed. And that can all be achieved with varying degrees of practicality. So, how does PLONK fit into this landscape? It's been a rather snarky summer, what can you say? There have been a lot of new technologies put out, and yeah, PLONK is a small part of this landscape of tech.
And so, I just wanted to talk about, yeah, the intuitions behind it, what it does, and why we think it's useful. So, it's a universal SNARK that was designed from the ground up by us to be viable on Ethereum: ensuring that the verification runtime was fast enough to be gas-efficient on Ethereum, and that proofs can be constructed for relatively complicated circuits on average consumer-grade laptops, basically. And yes, it was inspired by its predecessors, like Sonic, the first practical universal SNARK, which didn't require a quadratic-sized reference string, and it builds on that previous work. So, we published PLONK about a month and a half ago, and we've been very busy building an implementation of the PLONK prover and verifier algorithms. This is still very much proof-of-concept stuff, but it's in an advanced enough state that we can publish some benchmarks. So, this is over the BN254 curve, and the relevant graph is the top left. These are the proof construction times for a selection of circuits with varying numbers of gates, and I guess the takeaway is that for over a million gates, proof construction on my laptop is under 23 seconds, and when you get down to circuits of about 2^17 gates, proof construction is in three seconds. This means it's practical for day-to-day use on Ethereum. We estimate that a typical private transaction will run you about 128,000 to 512,000 PLONK gates. This is the repository for the library that we've been building. It's open source; contributions are very welcome. Feel free to tinker around with it. So, that's the high-level stuff. So, why don't we get into the guts of how SNARKs work, and what's going on with them. Sorry, it's a bit loud. One second. So, we use polynomial commitment schemes at the core. We have this in common with Sonic and Marlin, which is a contemporary proving system.
The reason why we use polynomial commitments is because it's a very efficient way of achieving the succinctness of your SNARK system. Effectively, we know how to succinctly commit to polynomials, so you can take a polynomial of very large degree, which is an enormous amount of information, and represent it within a single elliptic curve point. If you can represent your SNARK circuit via a polynomial identity involving these polynomial commitments, then a verification algorithm can test this polynomial identity by testing it at a random evaluation point, and have confidence that, with overwhelming probability, if your polynomial identity holds at a random point, it holds at every point. This also, interestingly, divorces PLONK from the underlying cryptographic primitives used for the polynomial commitment scheme. You can run PLONK not just over elliptic curves, as we published it, but you could also instantiate the commitment scheme with other primitives, such as inner-product arguments or hash-based schemes, judging by the latest research. If you want to make a wish list of what you need to make a SNARK: obviously, we want a way of turning a program into a proof of knowledge, and this is typically done with addition and multiplication gates that are stitched together into a circuit. This is a relatively natural way of representing problem statements when you're actually testing these proofs of knowledge using cryptosystems that operate over field arithmetic. We want the verifier to run in effectively constant time, and we want the system to be universal: we want there to be a single trusted setup to bootstrap the system, and then you can encode arbitrary programs as SNARK circuits without running additional trusted setups.
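To give a feel for why checking a polynomial identity at one random point is convincing, here is a minimal sketch, not the PLONK protocol itself: over a field of size p, two distinct polynomials of degree less than d can agree on at most d points, so a random evaluation catches a mismatch with overwhelming probability. The tiny field size and the polynomials below are illustrative assumptions.

```python
# Why random evaluation works: two distinct polynomials of degree <= d over
# a field of size p agree on at most d points, so they almost never agree
# at a random point. Toy field for illustration; real systems use ~254-bit fields.
import random

P = 101  # a small prime field, purely illustrative

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial given low-to-high coefficients, mod p (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# The prover claims f == g as polynomials; the verifier checks one random point.
f = [5, 0, 3, 7]          # 5 + 3x^2 + 7x^3
g = [5, 0, 3, 7]          # the identical polynomial
h = [5, 1, 3, 7]          # a cheat: differs in one coefficient

z = random.randrange(P)
assert poly_eval(f, z) == poly_eval(g, z)   # identical polynomials always agree

# h can only agree with f where f - h = -x vanishes, i.e. at most deg = 3 of
# the 101 field points, so a random check catches the cheat almost surely.
bad_points = [x for x in range(P) if poly_eval(f, x) == poly_eval(h, x)]
assert len(bad_points) <= 3
```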
In a perfect world, you wouldn't have any trusted setups at all, but then you pay a small cost for this with growing proof sizes, which is a little bit of a problem for Ethereum. Can I just ask, who here is not familiar with the concept of a gate in the context of a SNARK? Awesome. Basically, a gate has input and output wires; gates are the primitive arithmetic operations that you want to compose into your program. In the same way that you can take any computer program and decompose it into a sequence of NAND gates, because that's what's inside your CPU chip, you can also take a computer program and decompose it into a sequence of additions and multiplications. That's what we're doing when we create SNARK circuits: we're effectively taking these primitive gates and stitching them together to create a representation of a more complicated program. We have addition and multiplication gates, and they all have two input wires and one output wire. And you can make a circuit by stitching these gates together, where the ordering and sequencing of where these wires connect will be completely non-linear and rather chaotic, and probably created by a compiler and not by a human. You then want to prove that you have a satisfying assignment to your circuit: you want to prove that all the wire values are what they ought to be if you're acting honestly. So, with PLONK, when you're stitching together all of these gates, whenever you make a connection between your gates, you're effectively stitching two wires together. You can also stitch multiple wires together, because an output wire can feed multiple input wires. And then, when you serve up a proof of knowledge, you've got to validate that you've actually run the program correctly: that you've evaluated the circuit, and all of the wires have the correct values that they should have if you were acting honestly.
So, fundamentally, if you want to represent a program as one of these proofs of knowledge, you really have to do two things. You have to perform a local arithmetic check: you need to check that the arithmetic relationship for every single gate has been satisfied, that the two input wires, either summed or multiplied together, equal the output wire. And then you need to perform a global consistency check: you need to validate that the wires are correctly assigned between all of the gates, so that if one output wire feeds into several input wires of other gates, that's actually being done correctly. The first check is reasonably easy to do. The second check has been the bane of universal SNARK systems for quite some time, but this is somewhere where we think we've made a rather huge leap, a real cracker. So, I'll start with the local check, because that's the relatively easy part. One second. I like to think about SNARK circuits using vectors instead of polynomials, because polynomials are abstract and complicated, a little bit unintuitive. Vectors are much simpler to work with, and then we can map between vectors and polynomials later on. So, in PLONK, we want one arithmetic expression for all of our gates, whether they be multiplication or addition gates, because it simplifies the verification runtime. And you can do this with the concept of selector vectors. So, if wL, wR, and wO are vectors containing the left, right, and output wire values for all of the gates in your circuit, then by taking dot products of these vectors with selector vectors that are defined when you compile your circuit, so they're created by the circuit programmer and not the proof constructor, you can turn on or off a multiplication gate or an addition gate at each of these indices.
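The selector idea can be sketched in a few lines. This is a toy version (illustrative field and selector convention, not the exact PLONK formulation): one expression, qL·a + qR·b + qM·a·b + qO·c + qC = 0, covers every gate, and the selector values choose which kind of gate each row is.

```python
# A toy sketch of a single gate equation driven by selector values:
# qL*a + qR*b + qM*a*b + qO*c + qC = 0 over a small prime field.
P = 101  # illustrative field size

def gate_holds(qL, qR, qM, qO, qC, a, b, c, p=P):
    """Check one gate: a, b are the input wires, c the output wire."""
    return (qL * a + qR * b + qM * a * b + qO * c + qC) % p == 0

# Multiplication gate: qM on, qL and qR off, qO = -1 forces c = a * b.
assert gate_holds(qL=0, qR=0, qM=1, qO=-1, qC=0, a=3, b=4, c=12)

# Addition gate: qL = qR = 1, qM off, qO = -1 forces c = a + b.
assert gate_holds(qL=1, qR=1, qM=0, qO=-1, qC=0, a=3, b=4, c=7)

# Constant gate: qC pins the output wire to a fixed value, e.g. c = 5.
assert gate_holds(qL=0, qR=0, qM=0, qO=-1, qC=5, a=0, b=0, c=5)
```

The selectors also let you scale wires (set qL or qR to something other than 0 or 1) or add constants via qC, which is what makes one equation serve for every gate type.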
So, to clarify: if you wanted one of these rows to be a multiplication gate, you'd make sure that the qL value and the qR value of the selector vectors are set to zero and the multiplication selector is turned on; if instead of a multiplication you wanted an addition, you would set the multiplication selector to zero. You can also use these selector vectors to scale up or down the values of the wires feeding into these gates if necessary, and you can also add constant coefficients into your gates. So, this is an example toy circuit that Vitalik put down in his blog article, and I thought it would be helpful to go through it, to elucidate a little bit what it looks like to actually represent a program as a circuit. This is a small example where you have a circuit with some input value x, and you want to check that the output wire is x cubed plus x plus five. So, the first gate is a multiplication gate: the left and right wires both feed x into the gate, so the output should be x squared. In the second gate, you multiply x squared by x to get x cubed, then you add that together with x, and then finally you add five to the output. Just one small caveat: you don't actually need that last addition gate, because you can feed the five in through the constant selector of the previous gate, but this is just for illustration. The output should be 35, and the verifier then just has to check that you have actually supplied the correct wire values: the values that all of these wires should take if x is three.
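The toy circuit above can be written out concretely as selector vectors plus a witness. This is a sketch under the same illustrative convention as before (qL·a + qR·b + qM·a·b + qO·c + qC = 0), not Vitalik's exact encoding, and it keeps the redundant final addition gate for clarity.

```python
# The x^3 + x + 5 = 35 toy circuit as selector vectors plus a witness,
# using the toy gate equation qL*a + qR*b + qM*a*b + qO*c + qC = 0.
P = 101
x = 3

# Gate 1: x * x        = x^2          (multiplication)
# Gate 2: x^2 * x      = x^3          (multiplication)
# Gate 3: x^3 + x      = x^3 + x      (addition)
# Gate 4: (x^3+x) + 5  = 35           (addition of a constant via qC)
qL = [0, 0, 1, 1]
qR = [0, 0, 1, 0]
qM = [1, 1, 0, 0]
qO = [-1, -1, -1, -1]
qC = [0, 0, 0, 5]
a  = [x,    x**2, x**3,     x**3 + x]      # left input wires
b  = [x,    x,    x,        0]             # right input wires
c  = [x**2, x**3, x**3 + x, x**3 + x + 5]  # output wires

# The local check: every gate equation must hold.
for i in range(4):
    assert (qL[i]*a[i] + qR[i]*b[i] + qM[i]*a[i]*b[i]
            + qO[i]*c[i] + qC[i]) % P == 0

assert c[3] == 35   # the public output the verifier expects
```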
So, if you have a vector of 10 elements, you pick 10 points on your x-axis. We use the roots of unity, for a range of reasons, but they don't have to be that, for example. And a Lagrange polynomial is a polynomial where, at one of those special points, it evaluates to one, and at the rest it evaluates to zero. It's effectively a delta function, and you can use these to turn your vectors into polynomials, because you map each vector index to one of the special x points that you're using to define the Lagrange polynomials. Then you take a linear sum: you multiply each vector element by its associated Lagrange polynomial, and then you sum them together, and you will have a polynomial where, at those special x-coordinates that you've fixed in advance, the y-values will be the values of your vector elements. This is a relatively convenient way of representing vectors as polynomials, because it preserves vector arithmetic, and that's the really important thing. So, this is the formula for a constructed Lagrange polynomial, where omega here is a root of unity, and if you run the arithmetic, the i-th Lagrange polynomial Li(x) will be equal to one at the i-th root of unity, and at the j-th root, for any other root of unity, it will be equal to zero. So we can now take our wire vectors and turn them into polynomials. Taking those linear sums I mentioned earlier, we multiply each vector element by its Lagrange polynomial and sum. You do this for the selector vectors, and that uniquely defines the circuit of your SNARK, and then you do it for the wire values, which defines a satisfying assignment to your SNARK.
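Here is a small sketch of that encoding over a toy prime field (F_17 with a 4th root of unity; real systems use much larger parameters): each Lagrange basis polynomial is one at its own root of unity and zero at the others, so the linear sum interpolates the vector.

```python
# Encoding a vector as a polynomial via a Lagrange basis over roots of unity,
# in the toy prime field F_17.
P = 17
OMEGA = 4                                        # primitive 4th root of unity: 4**4 % 17 == 1
DOMAIN = [pow(OMEGA, i, P) for i in range(4)]    # [1, 4, 16, 13]

def lagrange_eval(i, x):
    """Evaluate the i-th Lagrange basis polynomial at x: 1 at DOMAIN[i], 0 elsewhere."""
    num, den = 1, 1
    for j, wj in enumerate(DOMAIN):
        if j != i:
            num = num * (x - wj) % P
            den = den * (DOMAIN[i] - wj) % P
    return num * pow(den, P - 2, P) % P          # field division via Fermat inverse

def encode(vec, x):
    """Evaluate, at x, the polynomial that interpolates vec over DOMAIN."""
    return sum(v * lagrange_eval(i, x) for i, v in enumerate(vec)) % P

vec = [7, 2, 0, 5]
for i, w in enumerate(DOMAIN):
    assert encode(vec, w) == vec[i]              # the polynomial "stores" the vector
```

Because each basis polynomial acts as a delta function on the domain, adding or scaling vectors corresponds exactly to adding or scaling their polynomials, which is the arithmetic-preserving property the talk relies on.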
The value of this is that you can take all the vector arithmetic that you used to describe your gate equations and directly map it to polynomial arithmetic, because if you have a vector identity which is equal to zero everywhere, then the resulting polynomial identity will be divisible by this thing called the vanishing polynomial. To give some intuition about why this is: consider one of the special points that we used to encode these polynomials, say the second element of these vectors. At the second root of unity, your polynomials will evaluate to the values in your vectors, which means that if you take a vector identity which is equal to zero, then the resulting polynomial identity will also be zero there. So if the vector identity holds, then the resulting polynomial will be zero at all of the roots of unity, the special x points that you picked in advance. And the vanishing polynomial is the lowest-degree polynomial that is also equal to zero at all of the roots of unity. So you know that if you have a satisfying assignment to your circuit, then this polynomial identity will always be divisible by that vanishing polynomial, and you can test for that to check that you've correctly assigned your wire values to your circuit in the proper way. Well, that's how we do the local check. By the way, feel free to shout out questions, heckle, or anything, if you have any questions. So how do you actually do the global check now? We've talked about gates: you can check that every gate in your circuit is being correctly evaluated, but how do you check that you've actually stitched all the wires together?
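The divisibility claim can be demonstrated end to end in the same toy field. This is a sketch, not the PLONK prover: we take multiplication gates a_i · b_i = c_i, interpolate the three wire vectors, and check that A(x)·B(x) − C(x) divides cleanly by the vanishing polynomial Z_H(x) = x⁴ − 1.

```python
# A gate identity that holds at every root of unity yields a polynomial
# divisible by the vanishing polynomial Z_H(x) = x^n - 1 (toy field F_17, n = 4).
P, N, OMEGA = 17, 4, 4                          # 4 is a primitive 4th root of unity mod 17
DOMAIN = [pow(OMEGA, i, P) for i in range(N)]

def poly_mul(f, g):
    """Multiply two coefficient lists (low to high) mod P."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % P
    return out

def interpolate(vec):
    """Coefficients of the degree < n polynomial with vec[i] at DOMAIN[i]."""
    total = [0] * N
    for i, v in enumerate(vec):
        li, den = [1], 1
        for j, wj in enumerate(DOMAIN):
            if j != i:
                li = poly_mul(li, [(-wj) % P, 1])        # multiply by (x - w_j)
                den = den * (DOMAIN[i] - wj) % P
        scale = v * pow(den, P - 2, P) % P
        for k, ck in enumerate(li):
            total[k] = (total[k] + scale * ck) % P
    return total

def poly_divmod(f, g):
    """Naive polynomial long division mod P; returns (quotient, remainder)."""
    f, inv = f[:], pow(g[-1], P - 2, P)
    q = [0] * (len(f) - len(g) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = f[k + len(g) - 1] * inv % P
        for j, gj in enumerate(g):
            f[k + j] = (f[k + j] - q[k] * gj) % P
    return q, f

# Multiplication gates: a_i * b_i = c_i holds at every gate i...
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
c = [(x * y) % P for x, y in zip(a, b)]
A, B, C = interpolate(a), interpolate(b), interpolate(c)
AB = poly_mul(A, B)
ident = [(AB[k] - (C[k] if k < N else 0)) % P for k in range(len(AB))]

# ...so A(x)*B(x) - C(x) vanishes on the whole domain and divides by x^4 - 1.
ZH = [(-1) % P, 0, 0, 0, 1]
_, rem = poly_divmod(ident, ZH)
assert all(r == 0 for r in rem)
```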
Well, that's a little bit tricky, because when an output wire from one gate feeds into multiple input wires in a bunch of other gates, what you need to do, effectively, is check that that wire value has been copied correctly. So that, for example, the left wire of the first gate is equal to the output wire of the fourth gate, something like that, something a bit odd and non-linear. And the way that we achieve this in PLONK is with a permutation check. Permutation is a simple word that hides a rather terrifying mathematical mess, but let me break down the intuition behind why we use permutations. Effectively, what we're doing is this: you take all of the wire values in your circuit and map them into a kind of 2D coordinate space. You take the value of each wire and pair it with a unique index on the other axis. The result is that you have a set of points, one for each of your wire values. Then you perform the same mapping, but you pair your wire values with different indices, in such a way that, for example, if you have three wire values that need to be the same because they're copies of each other, then those three wires get mapped into the same set of coordinates, but in a different order. What I mean by this is: imagine you have three wires in your circuit, and they need to have the same value. Then you can assign indices to these wires, so you can say, okay, fine, I'm going to map the first wire to one, the second wire to two, and the third wire to three.
Then you also do a permutation, so you're saying: okay, now I'm going to map my first wire to coordinate three, my second wire to coordinate two, and my third wire to coordinate one. You now have two collections of coordinates, and if the wires are genuine copies of each other, then those coordinates will be the same, just in a different order, and that's why we call it a permutation. The reason we do this rather convoluted check of the copy relationships is because we know how to check a permutation succinctly and efficiently; that was one of the innovations in PLONK. Effectively, if you want to show that two vectors are a permutation of each other, what you do is inject some randomness into your vector elements. So take a random beta, for example, and multiply it by the index of each vector element: you take a1 and add beta to it, you add two beta to a2, and three beta to a3. You then take your b's and, following the permutation, add three beta to b1, beta to b2, and two beta to b3. What this is trying to capture is that those beta multiples appear exactly once in both of these sets, but in different orders.
So, for example, a1 plus beta and b2 plus beta need to be the same, and that will only be the case if a1 equals b2, and you can chase this around the whole permutation: the set of a-coordinates can only equal the set of b-coordinates if all of the wires that are supposed to be copies carry the same value. If b is not a genuine copy, then with almost certainty the set of a-coordinates, once you've added those random beta terms, will not be the same as the set of b-coordinates. There's one final step. With just one random element added into your coordinate tuples, you can prove that you have two sets of coordinates with the same values in a different order, but you've not yet proven that those coordinate tuples line up in exactly the way you need to genuinely prove the copy relationships. So we have a second random element, gamma, to validate this. Effectively, we take each one of those coordinate tuples that you created, we add the random gamma to it, and then we take the grand product over all of them. The point of this is that we've mapped our wire values into two different coordinate spaces, where the two sets of coordinates need to be the same but in a different order, and the way we check that is simply to take the products and compare them. If they match, that means that your wire values respect the copy constraints: you've correctly stitched all the wires together.
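The whole randomized product check can be sketched in a few lines. This is a toy model of the argument just described, with hypothetical wiring and field size, not the actual PLONK grand-product polynomial: the copy constraints are a permutation sigma on wire indices, and honest wires make the two products equal, while a broken copy fails with overwhelming probability.

```python
# A sketch of the randomized permutation (copy-constraint) check: compare
# grand products of (wire + beta * index + gamma) under the identity indexing
# and under the permutation sigma.
import random

P = 2**61 - 1          # a large prime, so random collisions are negligible

def grand_product(wires, indices, beta, gamma):
    prod = 1
    for w, i in zip(wires, indices):
        prod = prod * (w + beta * i + gamma) % P
    return prod

def permutation_check(wires, sigma):
    """Pass iff the wire values respect the copy constraints encoded by sigma (w.h.p.)."""
    beta, gamma = random.randrange(P), random.randrange(P)
    ids = list(range(len(wires)))
    return grand_product(wires, ids, beta, gamma) == \
           grand_product(wires, [sigma[i] for i in ids], beta, gamma)

# Wires 0, 2, 3 must all carry the same value: sigma cycles them (0->2->3->0).
sigma = [2, 1, 3, 0]
assert permutation_check([9, 5, 9, 9], sigma)        # honest copies: passes

# A wire that breaks a copy constraint fails with overwhelming probability.
assert not permutation_check([9, 5, 9, 8], sigma)
```

Note that for honest wires the two products are equal as polynomials in beta and gamma, so the check passes deterministically; only a cheating witness depends on an unlucky random draw, which happens with probability on the order of n/P.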
And the ceremony is what we use to instantiate these succinct proving systems: it produces the reference string. So this is just a dashboard of what it would look like. You have a front end that you can tune into to see who is performing the ceremony at a given time, and you can see your position in the queue if you sign on and sign up. That is it. Thank you very much.