So a snark is a way of proving that you have run some protocol correctly. Precisely what that protocol is doesn't really matter: it might be a way of saying that you have done your tax return correctly, it might be a way of saying that you have encrypted a message correctly. What's important is that there is a protocol and you have run it exactly as it is specified. Then what a verifier will do is simply take your proof and say yes, I agree, or no, I don't agree. The other nice thing about them is that they allow the user to keep some of their inputs private: in the encrypted-message example, you could do that without revealing what the message actually was. The example we've just seen, the roll-up example, is the classic setting where a snark is good, because the verification is cheaper than the actual computation itself. Another setting is private transactions on a blockchain, say sending somebody 10 apples without revealing what it is you are sending. The thing is, you now need to additionally prove that you actually do have 10 apples to send, because otherwise this would be something which would completely break the integrity of the blockchain, and that would be a problem. So you run a snark: the snark hides precisely what merchandise you're sending, and all you can see is that yes, I do actually have the merchandise that I'm sending in this transaction.
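To make the "keep your inputs hidden" idea a bit more concrete, here is a minimal sketch (Python, with toy parameters that are not secure) of just the commitment half of such a transaction: the private balance is published only as a hiding Pedersen commitment, and a snark would then prove statements about the committed value without opening it. The group parameters and the second generator h below are illustrative assumptions, not taken from any real system.

import secrets

# Toy group: safe prime p = 2q + 1 with an order-q subgroup generated by g.
p, q = 2903, 1451
g = 4
h = pow(g, 77, p)   # second generator; in a real system nobody may know log_g(h)

def commit(value, blinding):
    # Pedersen commitment C = g^value * h^blinding: C alone reveals nothing about value.
    return pow(g, value, p) * pow(h, blinding, p) % p

balance = 25                           # private: how much merchandise I really hold
C = commit(balance, secrets.randbelow(q))
# C is what would be published; a snark would then prove, say,
# "the value committed in C is at least 10" without revealing the balance.
print(C)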
I can just take that as a given. Can you speak up a bit more? Oh, sorry. So just to summarize what these things are: they are an interaction between a prover and a verifier, where the prover is showing that they followed the protocol and the verifier is saying whether or not they believe them. They need to be correct: an honest prover can always convince an honest verifier. They need to be zero knowledge: the verifier learns nothing from the prover apart from the fact that the statement is true. And most importantly, and actually most difficult to achieve, they need to be sound. Zero knowledge is quite easy; soundness is not easy, and it has taken something like 30 years of work to get soundness efficiently. This is the idea that only a prover who has actually followed the protocol can convince an honest verifier. Before I run through pairing-based snarks, which is sort of what I'm mostly focusing on, I'm going to talk about some situations where you would perhaps not want to use a pairing-based snark, where there would be better solutions in the literature that you could use. The first one being if you're only intending to run your computation once. The thing with snarks, kind of the way they work, kind of the way they get their efficiency, is that they have this big preprocessing phase right at the start, where the verifier is going to do a ton of work just to get a string of information which it can later refer to in order to massively speed up its verification. And this means that if you're later verifying 100 proofs or 1000 proofs, it's really good. But if you're only going to use it once, then that is a total waste of time and you'd be better off using something else. In particular, the things that are good for this tend to be Ligero, STARKs, Aurora. These are designed for one-off computations, and another nice benefit of them is that they are typically quantum secure. They also have very fast provers. Another situation where you would not use snarks is when the program that you're trying to prove is very, very small. A typical example of this would be range proofs. If you're trying to prove that something is between zero and 2^64, we can represent that program very efficiently, it costs 120 gates or something like that. So in this situation, a snark is actually going to give you some concrete overhead, which means that the great asymptotics you would benefit from on a large computation won't show up. You're better off using something like Bulletproofs which, while asymptotically perhaps worse, are good for very small statements because the actual concrete overhead is so small that the asymptotics never come into effect. And one last situation is when the thing that you're proving is a very specific computation for which we have a very specific solution. So snarks are quite powerful. They cover NP. You can do general-purpose computations with them. But sometimes you don't need that. Sometimes, for example with Schnorr proofs, if you're trying to say, I know a secret key, you could run a Schnorr proof and that would be far better than running a snark. The time when you do want to use a snark is when you are proving the same program, the same application, the same set of constraints many, many, many, many times.
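As a concrete instance of that "very specific computation, very specific solution" point, here is a minimal non-interactive Schnorr proof of knowledge of a secret key, made non-interactive with the Fiat-Shamir heuristic. The group parameters are toy values chosen only so the sketch runs; they are not secure.

import hashlib, secrets

# Toy group: safe prime p = 2q + 1, generator g of the order-q subgroup.
p, q = 2903, 1451
g = 4

def keygen():
    sk = secrets.randbelow(q)
    pk = pow(g, sk, p)
    return sk, pk

def prove(sk, pk):
    r = secrets.randbelow(q)                       # ephemeral nonce
    commitment = pow(g, r, p)                      # t = g^r
    challenge = int.from_bytes(
        hashlib.sha256(f"{g}{p}{pk}{commitment}".encode()).digest(), "big") % q
    response = (r + challenge * sk) % q            # s = r + c*sk
    return commitment, response

def verify(pk, proof):
    commitment, response = proof
    challenge = int.from_bytes(
        hashlib.sha256(f"{g}{p}{pk}{commitment}".encode()).digest(), "big") % q
    # g^s == t * pk^c holds exactly when the prover knew sk.
    return pow(g, response, p) == commitment * pow(pk, challenge, p) % p

sk, pk = keygen()
assert verify(pk, prove(sk, pk))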
Because this is really the situation where that preprocessing stage you did at the start is going to benefit you, is going to reap the rewards. So the benefits: they tend to have very small proof sizes, and by small I mean a couple of hundred bytes. Likewise, verification tends to be really fast. But they do often need a trusted setup, which is a downside. They also rely on some funky assumptions, and by funky assumptions I do not mean wrong assumptions, we can't break them; what I mean is that we don't understand them very well, and that makes us uncomfortable. And finally, and this is also a pretty important point, the provers are not cheap. The provers can be a big barrier to actually using these things in practice. So now to run through a few schemes. The first one, if all you need is speed: Groth16 is the fastest snark in the literature. It has the smallest proof size, the fastest verification time, the best prover time. It's generally great. It wasn't just sort of made up out of thin air, it's actually the result of a huge line of works, starting from Gennaro and others who found a really nice way to encode the programs that we're trying to prove, which people later found a way to turn into rank-one constraint systems, the general standard that people tend to use now for encoding constraints. And then there was a line of papers that were each optimizing, getting rid of parts of the verification process that weren't needed, finishing with Groth's scheme, which is in the generic group model. The downside here is that there is a trusted setup. And I'm going to do a very, very quick explanation of what we mean by that, but if you don't understand me then don't worry, because it will be quick, I promise. So before the prover and the verifier can run, I was saying that there's this expensive trusted setup process, sorry, this expensive preprocessing step, but in pairing-based snarks it's actually worse than that, because some of the inputs to this preprocessing we do at the start have to be secret in order for you to have a secure scheme at the end. And worse than that, the secrets are very structured, and we actually don't know how to generate them without somebody knowing what they are. We can get around this, but largely the way we get around it is we have many, many participants all having a little bit of the secret and working together to output the parameters that the actual prover and verifier use. And this is kind of nice in the sense that if all of the parties collude, then sure, you don't get any guarantees whatsoever, but if a single one of those parties is honest, then there could still be other points of failure, but you're not going to have the point of failure being that the trusted setup had colluding parties. And sometimes this is enough. For example, if you have a closed set of participants, where you don't have people joining and leaving at any point in time but you just have, say, 100 people that are always the same, and they run the trusted setup process themselves, then because they have been part of the process they know that it's okay, they know that they themselves are honest, and that's great, and then they can use Groth16, which is the fastest scheme in the literature.
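A toy sketch of the multi-party setup idea just described, assuming a powers-of-tau style structured reference string (g, g^s, g^(s^2), ...): each participant folds their own randomness into the published powers, so the final secret s is the product of everyone's contributions and is never held by any single party. The group is a toy one; real ceremonies run over pairing-friendly elliptic curves.

import secrets

p, q = 2903, 1451   # toy safe prime p = 2q + 1
g = 4               # generator of the order-q subgroup

def initial_srs(degree):
    # Start from the trivial SRS with s = 1, so every power is just g.
    return [g] * (degree + 1)

def contribute(srs):
    # One participant's update: a private r maps g^(s^i) to g^((s*r)^i),
    # then r is thrown away.
    r = secrets.randbelow(q - 1) + 1
    return [pow(elem, pow(r, i, q), p) for i, elem in enumerate(srs)]

srs = initial_srs(degree=8)
for _ in range(5):          # five independent participants
    srs = contribute(srs)
# As long as at least one participant deleted their r, nobody knows the final s.
print(srs)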
Likewise, if you have a way to check after the fact whether a computation that has been "proven" is actually wrong, then at least you would know that something bad has happened. And this can be the case: for example, there are things called "sharks", where you have your proof being something like a bulletproof, which doesn't have a trusted setup, but then you also run a snark on top of it in order to get the verification costs down, and you still have the original proof, so you can still at least check that the original proof holds. Also, a very rare situation for decentralized technologies, but if you do have a central trusted party, then no problem. Another nice thing about snarks is that they can be parallelized: there's a work called DIZK which explains exactly how to do this, not just for the group exponentiations but also for the fast Fourier transforms, so if you're looking to parallelize snarks then I definitely recommend checking that one out. Another really cool thing you can do with snarks is recursion: you can have a snark of a snark of a snark of a snark that a snark verifies, and every single time you run another step in the recursion, run another snark, you get something which is smaller, meaning that the full system will be really small and really fast to verify. This isn't as simple as it sounds; in particular, the security assumptions which you're basing your snarks on can blow up in size if you do this wrong, so there have been works which have looked into precisely how you should lay out the format of your recursion such that your security assumptions still hold. Also important: in the case of snarks, where you have this specific way of representing your program, you need to be able to represent your snark verifier, and this is what Ben-Sasson and others did when they introduced something called MNT curves, which are basically a specific type of pairing that you are able to represent inside a snark, which means that if you want to do layers of recursion, you can.
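On the parallelisation point mentioned a moment ago: the dominant prover cost in these schemes is a large multi-exponentiation (plus FFTs), and that splits cleanly across workers, which is the flavour of what DIZK distributes. A small illustrative sketch with toy parameters and hypothetical sizes, not a real prover:

from concurrent.futures import ProcessPoolExecutor
from functools import reduce
import secrets

p = 2903  # toy prime modulus standing in for a real elliptic-curve group
bases = [secrets.randbelow(p - 2) + 2 for _ in range(4096)]
scalars = [secrets.randbelow(p - 1) for _ in range(4096)]

def partial_msm(chunk):
    # One worker's share of prod_i bases[i]^scalars[i] mod p.
    bs, es = chunk
    acc = 1
    for b, e in zip(bs, es):
        acc = acc * pow(b, e, p) % p
    return acc

def msm(bases, scalars, workers=4):
    # Split the multi-exponentiation into independent chunks, then merge.
    chunks = [(bases[i::workers], scalars[i::workers]) for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(partial_msm, chunks)
    return reduce(lambda a, b: a * b % p, parts, 1)

if __name__ == "__main__":
    assert msm(bases, scalars) == partial_msm((bases, scalars))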
Another situation you might come across is if you cannot do a setup. I mean, you're still going to need to have some kind of setup process, to be honest, but you can have a situation where you have an updatable setup, where at any point in time a new person can come along, add some new randomness to the system, and then be part of that setup process, so it doesn't need to be a fixed thing with a fixed cut-off point. And this basically means that it's much easier to actually run a trusted setup process which is secure, much easier to audit, much easier to have people taking part, much easier to just generally manage. But probably more importantly, you can have a universal setup. So typically, when you're doing snark setups, you would have one setup per application: if you're doing a range proof you need a setup, if you're doing Zcash you need a setup, and if you do your setup and then happen to learn that something you did in your protocol is actually wrong, maybe you missed a minus sign, then you have to do the whole process again. Whereas with an updatable setup you don't need to do that: you have just a single setup, full stop. And this is quite nice because it means that you can coordinate it, you can make sure that there are lots of companies, lots of participants, lots of governments, lots of individual people taking part in that one setup, and that would be the only setup that you would need to audit. So this is what Sonic proofs do: they design zero-knowledge proofs which are very nice if you want a universal setup. The catch with these is that they are actually only efficient if you're in a position where you can aggregate. So if you have an aggregation party who can put all the proofs together and help out the verifiers, then they're really efficient and they're great, and these have also been improved in follow-up work. If you cannot aggregate, then there have been a couple of works recently, one called Plonk, the other called Marlin, which do not need any aggregation. They do have the downside that the proof sizes are a bit larger, we're now looking at something like a kilobyte rather than a couple of hundred bytes, but they don't need an aggregator, if you're willing to have your proofs be a bit bigger. There's also quite a nice line of works, which is also universal and updatable, called Libra, not the Facebook currency. The nice thing about this, I mean, what they were focusing on is getting down the prover computation, but what I think is really cool about it is that it does not need very large fast Fourier transforms, it can get away with just very small ones, and this means that when you're actually trying to represent the program you can use a field which doesn't need to have a large power of two dividing its order, and this gives us a lot more flexibility when we're actually using it. So to summarise: pairing-based snarks can be small and fast to verify, they can be parallelized, they can be recursive, universal and updatable, and they can be made quick to prove but slower to verify, if that's what you want. Thank you very much.
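Returning to the Libra remark about FFT-friendly fields: a radix-2 FFT over a prime field F_p needs p - 1 to be divisible by a large power of two, which is exactly the constraint Libra relaxes. A quick check of that two-adicity, assuming the standard BLS12-381 scalar field modulus as the snark-friendly example and an arbitrary small prime for contrast:

def two_adicity(p):
    # Largest k such that 2^k divides p - 1 (this bounds the radix-2 FFT size).
    n, k = p - 1, 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# Assumed value of the BLS12-381 scalar field modulus, a common snark field.
bls12_381_scalar = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
print(two_adicity(bls12_381_scalar))   # 32: supports radix-2 FFTs up to size 2^32
print(two_adicity(2903))               # 1: an arbitrary prime, no room for FFTs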
Very good. So this one has a smaller prover computation but slightly larger proofs? You mentioned shuffle arguments; Jens Groth and the accumulators, who wrote that paper? There have been quite a lot of shuffle arguments, and it's been revisited by quite a lot of recent research; I can't answer that. So one recent one was: well, if you use a bulletproof inner product argument, then you get a polynomial commitment scheme, and this is actually what Halo, that work released by Zcash, does. They use a bulletproof inner product argument, which has the downside that you have linear verification if you just run it as a one-off, but if you're aggregating then you have a situation where you can have the linear verification be a one-off cost and then just have a logarithmic factor on top of that for each proof, which means that you can get something that's quite fast in that situation. But those are the two that I think are really used.
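To make that aggregation trade-off concrete, here is a purely illustrative cost model (all constants hypothetical): verifying a single inner-product-argument proof costs on the order of the circuit size N, but across a batch the linear part is paid once and each extra proof only adds a log N term.

import math

def naive_cost(num_proofs, circuit_size):
    # Every proof verified from scratch: linear work per proof.
    return num_proofs * circuit_size

def aggregated_cost(num_proofs, circuit_size):
    # Linear work paid once, plus a logarithmic term per proof.
    return circuit_size + num_proofs * math.ceil(math.log2(circuit_size))

for n in (1, 10, 1000):
    print(n, naive_cost(n, 1 << 20), aggregated_cost(n, 1 << 20))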