To answer that, let's first look at what you actually need for a full node, at the very least: a Raspberry Pi, eight gigabytes of RAM, two terabytes of storage, and one terabyte a month of internet. A light client is basically anything that doesn't support those specs: a browser wallet like MetaMask on the left, the Status mobile app in the center, or a hardware wallet on the right.

So let's look at what those light clients can actually do. Let's assume you own this MetaMask wallet with 4.75 ETH, and you are doing great work at this conference, work so great that you're approached by investors. Someone approaches you who wants to donate 0.5 ETH to you. Sounds good, 0.5 ETH is a lot of money, so you give them the QR code, they send the money, and they show you the transaction. Seems good, but when you check it you see: ah, they accidentally sent 5 ETH, so we need to refund 4.5 of it. Luckily we brought our hardware wallet, but as you know the Wi-Fi is not too great here, so let's just ask to use their laptop. They agree, we connect the Ledger to their laptop, and we see the 9.75 ETH, so the plus 5, and we sign the transaction for 4.5. The Ledger protects our keys so they cannot be extracted. We see that the amount is correct and the address is correct, we confirm, and we thank them for the donation.

So light clients are very powerful, but there is a problem: when you go back to the hotel, you realize that the transaction of the 5 ETH is missing. So what went wrong?
We confirmed that everything was correct, but one key piece was missing: the actual balance was 4.75 ETH, so they lied to us with the 9.75. This is actually the same problem as with traditional banking: when I create a transaction to an unknown destination, they only ask me whether the amount and the address are correct, but they don't verify that the intent behind the transaction, the reason why I am sending it, is real.

So where is MetaMask getting this data from? The answer is that there is a setting in there, and by default it just points to a server, namely Infura. In this case the scammer simply put a custom URL there that lies about the balance. So how is that possible? Isn't this more secure? Actually, no: the call MetaMask makes is eth_getBalance. You pass it the address and the block number, and in the response you just get a balance. No security whatsoever, just "trust me, bro". In this case the attacker simply added 5 ETH whenever it was our address. That is the complete attack code, and it's something I tried against a real MetaMask.

So now the question is: how can we make this more secure? To understand that, we need to see what is actually stored in Ethereum. Any one of you has probably interacted with at least one of these: NFT ownership, game characters, tokenized assets, exchange rates, or even just the plain ETH balance that we have seen before. All of that is just data stored as part of the Ethereum state. And because it's data, it's just a bunch of bytes, and we can arrange it however we want. For example, we can order it, and then we can compute a hash function over pairs of entries. A hash function is just a one-way function.
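Stepping back to the attack for a moment: the lying server is only a few lines of code. This is a sketch under assumptions, with a hypothetical victim address and made-up balances; a real attack would presumably forward all other calls to an honest node so everything else keeps working.

```python
# Sketch of the attack: a malicious eth_getBalance handler that inflates
# the reported balance for one target address.

VICTIM = "0x1111111111111111111111111111111111111111"  # hypothetical address

REAL_BALANCES_WEI = {
    VICTIM: 4_750_000_000_000_000_000,  # the true 4.75 ETH
}
FAKE_EXTRA_WEI = 5_000_000_000_000_000_000  # the 5 ETH that was never sent

def handle_rpc(request: dict) -> dict:
    """Answer JSON-RPC like a node would, but lie about the victim's balance."""
    if request["method"] != "eth_getBalance":
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    address = request["params"][0].lower()
    balance = REAL_BALANCES_WEI.get(address, 0)
    if address == VICTIM:
        balance += FAKE_EXTRA_WEI  # report 9.75 ETH instead of 4.75
    # eth_getBalance returns the balance as a hex-encoded wei quantity
    return {"jsonrpc": "2.0", "id": request["id"], "result": hex(balance)}
```

The wallet has no way to tell this response apart from an honest one, because the result field carries no proof of any kind.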
It's a cryptographic checksum. The idea is that if any of these entries change, for example if someone modifies B, then the hash also changes. We can apply this to the entire data: as you can see, we now have only four hashes, then we do it again and have only two, until we end up with a single state root hash.

So how is this useful? If our balance is down here, the 4.75, we can see that the root hash contains information about all of it. Whenever someone changes that, for example to 9.75 to lie to us, everything changes all the way up to the root hash. So if we know the correct root hash, we can tell whether something was tampered with or not.

So how do we actually send a proof that the 4.75 ETH was used as part of this hash? For this we need to walk the entire path to the top. Let's start with this node here: to compute it, we also need this one. Then we go to the next node, and for that one we need the hash covering A and B. From there we go all the way to the top; what is missing is the hash covering E, F, G, and H. So by sending just those three additional values, we can prove that the 4.75 is part of the root hash, and we can cut away all of the rest. Those proofs are really tiny. Of course, in reality it's not just a binary Merkle tree, it's a bit more complex, but the general principle is the same.

So how do we obtain those proofs? The answer is eth_getProof. The interface looks surprisingly similar to eth_getBalance, but instead of just a balance, the response also contains a Merkle proof, and this Merkle proof can be used to verify that the balance we got is indeed part of that root hash. So now the question is: how do we obtain the root hash?
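The walk just described can be sketched with a toy binary Merkle tree. This is only illustrative: sha256 stands in for Ethereum's actual hashing, and real account data lives in a Merkle-Patricia trie rather than a plain binary tree.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash pairs repeatedly until a single root hash remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_proof(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    """Walk from the leaf to the top, combining with the sibling hashes.

    `index` is the leaf's position; its parity tells us whether the
    sibling sits on the right or the left at each level.
    """
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root
```

For eight leaves, the proof is just three sibling hashes: the direct neighbor, the hash covering A and B, and the hash covering E through H, exactly as in the walk above.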
This is where the light client protocol comes in. If we look at the beacon chain, it's a series of blocks, each pointing to its parent block with a hash that's part of the block, so it forms a chain. And since the merge, the root hash is just part of those blocks. So the question now is: how do we obtain that latest block with the correct root hash?

A full node does this by just following all the signatures and verifying them: the proposer of each block and then all the attestations. But that means verifying nearly half a million validator keys, and it takes multiple gigabytes just to determine what the latest block is. That is not very practical for light clients.

But a year ago Altair launched, and it added this notion of a sync committee. A sync committee is a set of just 512 validators that the light client can keep track of, and they also sign each block; they sign whatever is the latest block. If you know those keys, you verify the signature, and if more than two-thirds of them agreed on the same block, you can trust that it is correct.

So how do you get these sync committees? The answer is: from the previous sync committee. Every day the sync committee changes, and the previous sync committee signs a message that passes on this power of signing the latest block to the next sync committee. And the previous sync committee, how do you get that one? From the previous previous sync committee, of course. But at some point you just need to agree on a trusted block root, for example the merge transition block. If you start from there, you can get the initial sync committee with a Merkle proof as a bootstrap object, and from there you can continue one day at a time to the next sync committee: download those public keys, verify the message that signs off the next sync committee, and repeat until you are at the present one. Then you obtain the latest block, check the signature, and you know the root hash. This data is really small.
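The daily handoff loop can be sketched as follows. This is a toy model under stated assumptions: `toy_sign` is a forgeable hash-based stand-in for the aggregated BLS signatures a real light client verifies, and `advance` only models the counting logic; the 512 committee size and the two-thirds threshold are the protocol's real figures.

```python
import hashlib

SYNC_COMMITTEE_SIZE = 512  # fixed by the Altair spec

def toy_sign(key: bytes, message: bytes) -> bytes:
    # Stand-in for a BLS signature; real clients check one aggregate
    # signature against the committee's public keys instead.
    return hashlib.sha256(key + message).digest()

def advance(committee_keys: list, update: dict) -> list:
    """One light client step: accept the next committee only if more than
    two-thirds of the current committee signed the handoff message."""
    valid = sum(
        1 for key, sig in zip(committee_keys, update["signatures"])
        if sig is not None and sig == toy_sign(key, update["message"])
    )
    if 3 * valid <= 2 * len(committee_keys):
        raise ValueError("fewer than two-thirds of the sync committee signed")
    return update["next_committee"]
```

Starting from the trusted bootstrap committee, calling `advance` once per daily update walks you forward to the present committee, whose signature over the latest block then gives you the root hash.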
It's just about 25 kilobytes per day, and to obtain the final root hash it's just about 300 bytes.

One thing that can be done for certain applications, if you are really offline for a long time: those committee messages can be combined into a zk proof, and then you can essentially jump from any point in time to the present in constant time with a very small proof.

The APIs to download this data are available. REST and libp2p are already standardized as part of the official specs; for the Portal Network there is a PR, and Lodestar and Nimbus are currently implementing the libp2p and REST APIs. So if you want to try those, feel free to do so.

Then, security: how secure is it? There is some research showing that with a few minor modifications to the protocol, it can be made so that you actually only need to sync every four months. This really opens up a lot of applications, such as IoT devices that are not connected to the internet: as long as you sync once every four months, it should be secure.

Then let's bring it together. This is where we started: our wallet just sent getBalance to Infura, and it returned us the 4.75 ETH. Not very secure, but it's the best that could be done before the merge. Now we can obtain the latest root hash from the beacon API and then use the getProof endpoint to actually get the 4.75 plus a proof that it is really part of the root hash we obtained. This essentially means that the beacon API provider could be the same as the web3 API provider. Why not? Infura could essentially just provide that data, and you no longer have to trust them for the correctness of the data, only for its availability. That is quite huge.

If you don't want to modify MetaMask, you can also put a proxy in between that does this translation, so MetaMask stays unmodified.
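Such a verifying proxy could look roughly like this. It is a sketch under assumptions: the toy binary Merkle fold stands in for verifying a real eth_getProof account proof, and `get_balance_with_proof` is a made-up server interface for illustration.

```python
import hashlib

def fold(leaf: bytes, index: int, proof: list) -> bytes:
    """Recompute the root from a leaf and its sibling hashes (toy binary
    Merkle scheme; a real proxy verifies eth_getProof account proofs)."""
    node = hashlib.sha256(leaf).digest()
    for sibling in proof:
        node = (hashlib.sha256(node + sibling).digest() if index % 2 == 0
                else hashlib.sha256(sibling + node).digest())
        index //= 2
    return node

class VerifyingProxy:
    """Sits between an unmodified wallet and an untrusted RPC server: the
    wallet keeps calling getBalance, while the proxy demands a proof and
    checks it against the root hash it tracks via light client data."""

    def __init__(self, trusted_root: bytes):
        self.trusted_root = trusted_root  # from the sync committee protocol

    def get_balance(self, server) -> int:
        # `server` is any object answering with (balance, leaf_index, proof)
        balance, index, proof = server.get_balance_with_proof()
        leaf = balance.to_bytes(32, "big")
        if fold(leaf, index, proof) != self.trusted_root:
            raise ValueError("response does not match trusted root hash")
        return balance
```

An honest server passes the check unchanged; a server that inflates the balance no longer hashes up to the trusted root, so the proxy can refuse the response and alert the user.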
It still does the old getBalance call, but the verifying proxy keeps track of the latest root hash using the light client data, translates the getBalance call to getProof, verifies that the returned data is correct, and can alert you if it was tampered with, like in our original case. This verifying proxy is available today from Nimbus; it's part of the nimbus-eth1 repo, in the lc_proxy subfolder. And yesterday someone announced, on the EthR&D Discord in the light clients channel, a product called Kevlar that also does this, including proof verification for NFT ownership and token balances.

So one piece is still missing: the Ledger device. How can we get it to show us the current balance? Of course that needs a modification to the Ledger software, but it could be done in a way where we just dump all of this data to the Ledger and it verifies for itself that it's correct. It uses the light client data to update to the latest public keys, then obtains the latest root hash, then verifies with the Merkle proof that the balance we give it is correct. Then it can show the balance with a timestamp, and you can verify that this transaction is really something you want to do.

So what else can we do with this protocol? For full nodes, right now it's always a bit tricky: where do you obtain that initial state from?
Usually you just go to Infura and grab the finalized state, and then maybe you go to a beacon chain explorer and check that it's the correct one. But with this protocol you can just hard-code the merge transition block, for example, into the client, then use the light client protocol to jump to the latest state, download that state, and use it as your bootstrap checkpoint without having to validate it against a beacon chain explorer.

Another use case: a decentralized wallet that doesn't need to talk to any server. Geth has a mode called LES; currently it's something you have to enable, but the key point is that it doesn't require a huge database. What the light client protocol gives you is access to the block headers, and inside the execution payload header there is a field called logsBloom. With that field you can filter all the incoming blocks and see whether a block contains a transaction that is interesting for your wallet. So you just need the little light client on the consensus side, 25 kilobytes a day, about 20 bytes per second, to follow the blocks continuously. Then you can pass the headers over to Geth LES, and it can filter for the blocks that contain interesting transactions and download just those few blocks, not every block for sure.

If we go a bit farther into the future: layer twos are getting more important, and their problem right now is that they get hacked all the time. They are usually operated by oracle nodes that are trusted; there is a multisig, maybe five-of-nine or four-of-seven, but then it turns out that one party owns four of those keys, and when they get compromised, everything gets hacked. If you add the light client protocol there, it can act as an additional safety net so that the oracle nodes cannot just send bogus information. For example, when you make a deposit into a bridge, you can create a Merkle proof that you made that deposit in the Ethereum state, and if there is a light client deployed to that layer two, you
can then use this as an endpoint: if you submit the Merkle proof to that bridge, it can verify, using that light client, that it's a valid deposit, and it can transfer the tokens to the layer two.

And also Internet of Things devices: you can put a rental pass for bicycles in your wallet on chain, like a weekly pass, and you can just send this data to the bicycle lock, like we have seen with the Ledger device before, and it can verify that you own that rental pass and are allowed to open that lock. Or, for example, when you have an electric car charger at home and you want your friends to be able to use it, but not just anyone: put a light client on there; it doesn't even need to be connected to the internet.

So, yeah, that's all from my side. These are the latest updates about the light client protocols. Feel free to contribute in the light clients channel on the EthR&D Discord; this is where we discuss the light clients. We also have time for questions if anyone has one.

Q: Hello, thank you for the great talk. I see a lot of FUD from Bitcoin maxis who say that this puts too much pressure on the nodes while you're part of the sync committee. So I just wanted to ask: what do the resources of the machine look like while you're part of a sync committee, in terms of CPU, RAM, and bandwidth?

A: As part of the sync committee, you are already doing extra work.
So doing this extra light client work is actually not that much on top. Basically, you need to hash the state and grab a bit of static data out of it, and because you already loaded the state, getting those hashes is basically instantaneous. It really doesn't add much. In Nimbus we collect the light client data by default, and no one has ever complained about any CPU or bandwidth spike due to that.

Q: Could you elaborate on what this means for wallets like MetaMask or Coinbase or Ledger? What's the future opportunity of using light clients, and how does that change how wallets operate?

A: I'm not sure I got the question, but: MetaMask right now has to use the getBalance endpoint, because that's just what was available before the merge. With this, it can actually provide a secure, verified display, so to say, so that the balance it shows is actually correct. Also your NFT balances, NFT ownership, anything can be verified.

Q: Is there anything the protocol can do to be even friendlier to light clients, or is it pretty much as good as it will get?

A: We are still working on the protocol. For example, right now getting access to the execution payload header is not that easy; you still need to download the full block for that. But we are targeting a couple of improvements for Capella that further reduce the amount of information that needs to be exchanged to be fully synced to the latest head.

Q: Okay, thanks for the talk. I have a question about the attack that you showed at the beginning with MetaMask and the solution you came up with at Nimbus to mitigate it. I'm just curious, and excuse my ignorance here: why don't MetaMask or these light client wallet providers just build a solution and instantiate it themselves? Why does Nimbus need to be brought in to mitigate this attack?
A: Well, the reason is that those protocols are only getting standardized right now. For example, the REST protocol was standardized on Monday; that's when it got merged. So there was just not enough time yet, and we certainly hope that MetaMask and Ledger will integrate those security enhancements directly into their products. You could also imagine a scenario where Apple and Google put it into the Android and iOS operating systems as a background service, so you can just ask that service for a secure balance, and as long as you trust that background process, that's fine. But yeah, the answer is simply that there was not enough time yet to implement those changes, because the protocols are really brand new.

Q: What are the assumptions being made, or the security implications, of just trusting the data that comes in the light client headers, and how do we mitigate this?

A: So, on the security implications: the sync committee is sampled at random from the full validator set. The full beacon chain currently operates under an honest majority assumption, so as long as a majority is honest, the beacon chain works. If we sample 512 validators randomly from that set, which is considered to be an honest majority, then the same can be assumed for the smaller set. There is also additional safety, which is why it's more than the weak subjectivity period: it has to be exactly those 512 validators assigned for that particular day that you need to compromise. If you look at how long it takes for
validators to exit, it takes about four months until enough validators have exited that they can sign conflicting histories to compromise this. So as long as you stay within the four months, and as long as we improve the protocol with those slashing methods, I think it's quite a secure way. What you have to keep in mind, though, is that the sync committee is 512 validators and each of them has 32 ETH at stake. If you combine all of those balances, it's about 16,000 ETH, roughly twenty million dollars. That's about the highest cost you can impose, even if everyone were fully slashed down to zero; any attacker who can offer the sync committee a higher amount than that can compromise this. So right now, for highly secure applications such as layer two bridges, I would recommend it as an additional safety net, so that in case the oracle nodes get compromised you can still verify that they are compromised. But I would not trust it solely for high security. For the wallet use cases, it's already an improvement compared to just trusting Infura.

Q: Hey, at the beginning of the talk you mentioned the Raspberry Pi with eight gigs of RAM and two terabytes of disk. Is that still the same for running a light client, or could it go lower, for smaller devices with constrained hardware resources?

A: A light client doesn't need a database; you just have to track those 512 validator public keys and the latest header. So you don't need eight gigabytes of RAM, and you don't need two terabytes of storage, because there is no database. And you only need 20 bytes a second instead of one terabyte a month. So, yeah.

Awesome, so please, one more round of applause.