[Unintelligible passage in the recording, mentioning 51% attacks, Bitcoin, Ethereum 2.0, and Polkadot, before the speaker turns to how a parachain block gets onto the relay chain.] A collator produces the block, and there is some random subset of validators — we take the whole set of validators and divide them between the parachains — and this random subset checks the block, says it is valid and available, and signs its header. The header then goes onto the relay chain, and once it is in the relay chain we count it as having happened — in particular once the relay chain is finalised, which we do with a finality gadget that gives you Byzantine agreement. So, what does Ethereum 2.0 do?
[Unintelligible passage mentioning shards, crosslinks, the Casper FFG finality gadget, Ethereum 2.0's validators per shard, and Polkadot.] So, how do we guarantee this? Well, this is the reason why we wanted 1,000 validators per shard. What Eth 2.0 plans is to have from 16,000 initially up to a million validators, each of which has put up 32 ether. And if we say three quarters of those are honest and we select, say, 100, then with overwhelmingly high probability one third of those will be honest. So if two thirds of them sign to say the block is valid and available, some honest validator will keep it valid and available — probably a whole load of them — and then it will carry on being available and valid. But it's still a small fraction of the entire system that's giving us this guarantee. And the question is: will we be able to get a million validators? I don't know.
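The committee-sampling claim above can be checked numerically. This is a sketch, not part of any protocol: it uses a binomial approximation (valid when the validator set is much larger than the committee) and the talk's figures — a committee of 100 drawn from a set that is three quarters honest.

```python
from math import comb

def tail_prob_few_honest(n_committee: int, p_honest: float, min_honest: int) -> float:
    """P(fewer than min_honest honest members) in a committee of
    n_committee, each member honest independently with prob. p_honest
    (binomial approximation to sampling from a large validator set)."""
    return sum(
        comb(n_committee, k) * p_honest**k * (1 - p_honest)**(n_committee - k)
        for k in range(min_honest)
    )

# With 3/4 of all validators honest, a 100-member committee fails to
# contain at least one third (34) honest members only with negligible
# probability -- the "overwhelmingly high probability" in the talk.
p_fail = tail_prob_few_honest(100, 0.75, 34)
print(f"{p_fail:.3e}")
```

A Chernoff bound puts this tail below exp(-38), so the committee is safe to a far higher standard than any practical attack budget.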
Well, maybe — but only if some people run a lot of validators, right? If I have a thousand ether, then I should be running about 30 validators. But one of our design goals was to make these runnable by someone on a laptop in their bedroom, and that doesn't really work so well if I have a thousand ether, you see? I'm not sure how all this is going to scale. There are network and computing limitations, and the networking is probably the problem. How many validators can I run on one machine? It kind of depends on how many peer-to-peer networks I can connect to. Can I connect to 30 networks? And to be robust, maybe I need 30 connections on each — a thousand connections. I can do that with a server in a data centre, probably, if the code is good. But probably not in my bedroom — it would crash my router. This is why we're kind of worried in Polkadot that we won't get this many validators, so we're trying to design a system with fewer. But I'd like to point out that because Eth 2.0 has a very similar architecture, almost all parts of our solution would just apply, so you could use any of them. So what does Polkadot do? As I said before, we only have 10 validators per parachain, which means we only need five to seven signatures on a block. And the basic idea for validity was to rely on fishermen. Every collator will be a fisherman, so there will be no shortage of fishermen. Every full node can just put up some stake — they don't have to lock anything beforehand. They can just run a block, and when they discover that it is invalid, they report it. Then people check it, and if we discover someone was lying — one of the five to seven signers of an invalid block — we'll slash them, we'll take away their stake. And we can allow fishermen to stake themselves: if fishermen lie, they get slashed too.
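The networking limit above is just multiplication, but it is worth making explicit. The figures (30 validators on one machine, roughly 30 peer connections each) are the talk's illustrative numbers, not protocol constants.

```python
def total_connections(validators_per_machine: int, peers_per_validator: int) -> int:
    # Each validator joins its own peer-to-peer network and, to be
    # robust, keeps several connections open on that network; the
    # machine's socket count is the product.
    return validators_per_machine * peers_per_validator

# ~900 simultaneous connections: feasible on a data-centre server,
# likely fatal for a home router.
print(total_connections(30, 30))
```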
It's like, if you're a fisherman, you want to convince all your friends to join in and say this is true, and that will ensure we check it enough. And if you're correct, you get a reward. But the problem is you can't validate a block you never see. And validators don't know that the full nodes never saw it. Even if the full nodes say they didn't see it, that's sort of subjective — there's no way of verifying it. All we can do is ask for the data ourselves, at which point, if everyone has the data, we're not scalable anymore. And maybe when the validator is asked for it, the data shows up, and it's correct — but then we can't slash anyone. The reason this wasn't a problem with Ethereum 1.0, or with Bitcoin, is that the people responsible for the consensus were the same people who kept the data available: all the miners keep the data available, as do all other full nodes. So what we would really like is to do that again. And we can, kind of — there is a solution. It's an old solution for robustly storing data: erasure coding. You add some redundancy to your data and then divide it into little pieces, and even if a large number of those pieces go missing, you can still reconstruct the data from the remainder. So we can use this to make the whole set of validators responsible for the availability of every block's data, without overloading any of them. The way we do this is that the parachain validators send pieces of the data to every other validator, and the pieces held by any one third of the validators are enough to reconstruct it. Then we make voting in the consensus — in particular in the finality gadget, the Byzantine agreement algorithm where we need two thirds of people to sign — conditional on having all the pieces you're supposed to have.
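The erasure-coding idea can be sketched with a toy Reed-Solomon-style code over a small prime field. This is illustrative only — production systems use larger fields and fast encoders — and the parameters (9 pieces, any 3 sufficient) are chosen to mirror the "one third can reconstruct" property.

```python
P = 257  # prime field large enough to hold one byte per symbol

def _lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Systematic encoding: the k data symbols are the polynomial's
    values at x = 0..k-1; the n pieces are its values at x = 0..n-1."""
    base = list(enumerate(data))
    return [(x, _lagrange_eval(base, x)) for x in range(n)]

def reconstruct(pieces, k):
    """Rebuild the original data from ANY k of the n pieces."""
    pts = pieces[:k]
    return [_lagrange_eval(pts, x) for x in range(k)]

data = [104, 105, 33]                          # "hi!"
pieces = encode(data, 9)                       # one piece per validator
survivors = [pieces[4], pieces[7], pieces[8]]  # two thirds went missing
assert reconstruct(survivors, 3) == data
```

Any 3 of the 9 pieces determine the degree-2 polynomial, so the data survives as long as one third of validators keep their pieces — which is exactly what conditioning the two-thirds finality vote on piece possession guarantees.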
What this means is that if we finalise a block and we are two-thirds honest, then the block is available in principle — we can reconstruct it. So if we are back to two-thirds honest — and we'd really like that assumption to be merely rational, but we can handle that as well — then it's available. But there is a problem here: if you launch an attack, we only catch you in the end. If these 10 validators sign an invalid block, we're going to catch them and slash them — but their stake might only be a hundredth of the stake in the entire system, and that isn't good enough. What can happen is that these guys sign an invalid block containing a transaction to a bitcoin bridge that holds a lot of external assets — bitcoin — telling it to pay out the bitcoin. If that's worth more than a hundredth of the stake in the system, they make a profit, and we can't revert it on our chain, because we're not Bitcoin. But we can get round this if we have a protocol to catch people quickly. If 99% of the time we catch this before the bridge pays out the bitcoin, the attack succeeds only 1% of the time, so the expected cost of an attack — losing the stake 99% of the time — is on the order of all the stake in the entire system. It's cheaper to attack the main chain and the entire system than it is any one component. But we need a protocol for this. So what happens is that we take reports of invalidity — or, this time, unavailability — and we get extra people to check it. We choose the extra people at random, and this is important. We actually use a verifiable random function, a VRF. The people validating the shard were chosen at random, but everyone knows who they are, so the bad guys can just wait until they control enough of one committee, attack, and they're fine. But if random people check it, the bad guys don't know who those will be. On the other hand, with a VRF, everyone knows after the fact who was responsible: we know who should have been checking, we know which reports to pay attention to, and we can even pay them.
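The back-of-the-envelope economics above can be made explicit. This is a sketch under hypothetical numbers — the stake figures are invented for illustration, not taken from any deployed system.

```python
def expected_cost_per_success(slashable_stake: float, p_caught: float) -> float:
    """Expected stake burned per successful attack: the attacker loses
    the slashable stake whenever caught, and succeeds only with
    probability 1 - p_caught."""
    return slashable_stake * p_caught / (1.0 - p_caught)

total_stake = 1_000_000.0            # hypothetical total stake in the system
parachain_stake = total_stake / 100  # the 10 signers hold ~1/100 of it

# If invalid blocks are caught 99% of the time before the bridge pays
# out, a successful attack burns ~99x the parachain's stake -- on the
# order of the whole system's stake.
print(expected_cost_per_success(parachain_stake, 0.99))
```

This is why fast escalation checks matter: they convert a cheap per-parachain attack into one whose expected cost matches attacking the entire system.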
Then, if anyone claims it's invalid, everyone downloads it and checks. Now, there are other things you could do. You could do random checks without reports. And there's this great idea of doing fraud proofs — there's a paper by Mustafa Al-Bassam, Vitalik Buterin and Alberto Sonnino, which I saw Mustafa talking about at DevCon last year — where we do some erasure coding, but we erasure-code sub-blocks, so we can check only a sub-block and it gives us a small proof. If we have a small proof, we don't need this escalation. Or we could use succinct proofs to convince everyone of the validity, and then the only remaining problem is availability. Any of these could be used in Eth 2.0, if you want to do the same thing. One of the nice things about finality gadgets is that you don't have to finalise things straight away. So what we do is delay voting on finality until we have all the erasure-coded pieces, and until we have left some time for reports — and if reports do show up, then we wait for the checks. Then we never finalise an invalid or unavailable block, except with small probability, as long as we get enough reports or do enough random checks. And that doesn't slow block production. There's a bit of an issue with networking — we can't gossip all these pieces — but never mind that, we can solve it. The result is that we can secure Polkadot with only 10 validators per parachain and still have it be secure. And I think Ethereum could use a similar thing to reduce the number of validators. Thank you.