And I see some computers, that's good. So we want to try to do some programming work together with you. I'm Stefan from Slock.it, and this is Simon Jentzsch from Slock.it. Today we want to talk with you about our Incubed protocol and minimal verification clients; Incubed is the smallest client for Ethereum. I will give you a short introduction to how the system works, and then we will have time to do some real programming together with Simon. He will show something, and then you can also practice it on your own computer and try out the system. To be prepared for the programming, if you want to participate live, it's good to have some software available. The easiest thing would be to use the Docker image. We know the internet is not that great here, so I have some USB sticks. If you want to install the Docker image directly on your computer, you can just take one of the sticks with the complete image, or you can download it directly. OK, what are we doing today? First, I want to talk about the Incubed protocol: what it is, how it works, what the benefits are, what the boundaries are. Then we will do some programming on different platforms, how to integrate the Incubed client. In the second part of the workshop, we will go a little bit more into detail on the proofs we are using to verify the information in your client, and then do some programming work on how these proofs work and how you can do it yourself. And in the end, we will show how to install an Incubed node, that means a node which connects to the network and provides services for the network. OK, first I want to introduce you to how Incubed works. When you see this, this is the vision everyone had in the beginning when Ethereum and other blockchains were constructed: we have a decentralized peer-to-peer network with nodes which are connected directly.
And we have dapps which interact with these clients and access the information of the blockchain. The thing is, this is not the configuration which is working at the moment. At the moment, it looks mostly like this: we have this nice decentralized network, but most of the dapps are accessing the information of the blockchain over centralized nodes. These centralized nodes are very important right now to build these dapps, but when you think about the future, we need a decentralized way to access the information from the blockchain. So when we talk about clients, we have different types of clients. This is an example for Ethereum; for other blockchains it's almost the same. We have the full nodes or even archive nodes, which are very big and need a lot of resources. Even a pruned full node is way too big to run in a browser or on an IoT device. And even light clients are way too big to run on an IoT device, for instance. The minimum configuration for a light client is a computer the size of a Raspberry Pi, but most IoT devices are way smaller. So when we designed the system, we had something like this in mind. You cannot see it well because this is a small microcontroller, the kind of microcontroller that is built into a door lock or other IoT devices. And of course such a microcontroller is not able to run a light client at all. What most IoT devices are doing today is using remote clients, meaning there is a node running on the internet and they have a remote connection to it. But with this approach, we are not decentralized anymore. We rely on a centralized node, we have a single point of failure, and we don't know if there's maybe an attacker in between. So our approach is the Incubed client: we want to have the security of a light client, but the small size and the simplicity of a remote client. And I will now explain how this system works.
With Incubed, we are able to have a completely decentralized system. That means we have dapps which run such an Incubed client, and these clients connect to a decentralized network of Incubed nodes, which are also nodes of the decentralized network of the blockchain. I will use the term minimal verification client several times, so I want to give you a short definition of what we mean by it. It is a client which is able to verify the information it receives and to validate that this information really belongs to the blockchain. And it's a very small client: it needs only a small amount of resources and only request/response communication. An interesting part of the concept is that it doesn't depend on a specific blockchain. Of course, we did it for Ethereum because we are coming from the Ethereum world, but it can work the same way with a lot of other blockchains. And a very important thing is that this client is a stateless client. That means at no time does it need to synchronize with the blockchain. Only when we need information do we connect to this network and get it; if we don't need information, we don't need to be online. The center of the Incubed system is the Incubed registry. We have some nodes, full nodes or maybe archive nodes, with an additional piece of software, the Incubed node software. These nodes register with a smart contract, the registry contract. They place a security deposit, and I will explain later why we need this, and give some information about themselves. The clients can then get this list, and with this list they know which nodes belong to the network and can interact with them. So how does it work? Let's start with one client, and this client has this list of possible nodes.
From this list, it knows how to access each node, it knows the security deposit, and it has a kind of reputation system: for instance, if it knows a node is not answering correctly, that node will be blacklisted in some way. So the client needs information. It accesses this list and selects one of the nodes. In this case, it selects node B and sends a request: I want to know, for instance, if I have access to this door lock. And it receives the answer, yes or no or whatever the answer is. So far this is the same thing a remote client would receive. But in addition to this, we receive the merkle proof for this information and we receive the block header. With this information, the client by itself can validate: OK, this information I received belongs to this block. But at this point we don't know if node B maybe sent us a manipulated block. We know this block and this information belong together, but whether this block really belongs to the blockchain, we don't know. And that's why we also send validation requests. We select other nodes from the list; this is all done by the client, the node has no influence on it. And then we ask, in this case, node A and also node C to validate that this block really belongs to the blockchain. And we get this validation by getting a signed block hash. Why do we use the block hash? Because the block hash is the only information from the past we can validate on the blockchain. Remember the smart contract, the registry: all these nodes are registered in this registry. And now with this information, we can go to this registry and check if this signed block hash is really the correct block hash. And if we find that one of these nodes is not responding correctly, meaning it signed a wrong block hash, we can convict it, throw it out of the list and take its deposit. This is what we call virtual watchdogs.
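The check described above comes down to one comparison: does the block header the answering node delivered really hash to the block hash the other nodes signed? Below is a minimal sketch of that step. The structures and field names are illustrative, not the real Incubed wire format, and SHA-256 from node's standard library stands in for Ethereum's keccak-256, which is not in the stdlib.

```typescript
import { createHash } from "crypto";

// Stand-in for keccak-256 (Ethereum's real block hash function);
// node's stdlib only ships SHA-256, so this sketch uses that instead.
function hash(data: Buffer): string {
  return createHash("sha256").update(data).digest("hex");
}

// Simplified picture of what node B returns: the answer itself plus
// the block header the proof refers to (field names are illustrative).
interface In3StyleResponse {
  result: string;      // the actual RPC answer
  blockHeader: Buffer; // serialized header delivered alongside the proof
}

// Simplified signed block hash, as nodes A and C would provide it.
interface SignedBlockHash {
  blockHash: string;
  signerAddress: string; // in reality recovered from the signature
}

// The client-side step: does the delivered header hash to the signed hash?
function headerMatchesSignature(
  res: In3StyleResponse,
  sig: SignedBlockHash
): boolean {
  return hash(res.blockHeader) === sig.blockHash;
}

// Tiny demo with made-up data.
const header = Buffer.from("demo block header bytes");
const response: In3StyleResponse = { result: "0x1", blockHeader: header };
const goodSig: SignedBlockHash = {
  blockHash: hash(header),
  signerAddress: "0xA...",
};
const badSig: SignedBlockHash = { ...goodSig, blockHash: "deadbeef" };

console.log(headerMatchesSignature(response, goodSig)); // true
console.log(headerMatchesSignature(response, badSig));  // false
```

If the comparison fails, the client holds a signed wrong block hash, which is exactly the evidence it can submit to the registry contract to convict the signer.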
That means if one of these nodes signs wrong information, it loses its deposit, and that's why the deposit is important, and it is thrown out of the list. With this, we know exactly with which security we can trust the answer. So now, with all of this information, the merkle proof, the block header and the signed block hash, we can validate that the information really belongs to this block and that this block is part of the blockchain. In addition to these virtual watchdogs, of course, we could also run active watchdogs, which is an additional security property. We can run a client which acts as if it were only a client, but it has a connection to the blockchain, and if it finds a node which gives it a wrong answer, it can directly convict it on-chain. So why is this so important? We want security so that when I as a client ask for information, I can rely on that information. And when I get information with all of these proofs, I can be sure that this information is secure. Of course, I'm only secure up to the amount of the deposit these nodes placed in the registry. For instance, if these nodes placed a deposit of one ether, I can be sure they will not risk losing this deposit to give me a wrong answer for a request with a lower value. For instance, if I send a transaction to pay for energy worth one euro and I know the deposits here are worth 10 euros or more, I can be secure. If I wanted to open the door to a very expensive apartment, of course I would need a high deposit to be sure these nodes are not trying to cheat. So at this point, we see that the only thing the nodes can do is answer correctly, or, if they answer wrongly, lose their deposit. So this alone is not a good incentive to run such nodes.
How can someone else check the claim by nodes A and C if they don't see what the client is asking for? The client will never know whether the answer is correct, as it cannot check it. So for the watchdog to be able to check it, it needs to see what the client is requesting. The watchdog in this example, of course, cannot check it directly. But for instance, if I have a door lock, or a complete hotel with a lot of door locks, then I can install a watchdog which asks the same questions a normal door lock would ask, randomly picks nodes in the network and asks them. These nodes do not know whether the request comes from the real door lock client or from the watchdog. But they would have to pay for it? Yes, the watchdog costs something, of course, it's not for free. It would cost me something to make these requests. Could I just ask the same question that the light client asked again afterwards, with another node? So I ask twice. Yes, you could do this. Yeah, but I think that's still a problem. Okay. I think the important part here is what it comes down to in the end: it's just the block hash, because everything else we can verify. The block hash is the only information we can't verify intrinsically, meaning we have to rely on signatures from the outside. And that's exactly what the watchdog is checking. It only checks the block hash, because node A and C are only signing the block hash. They don't even know the request; we are simply asking: please sign block number X. And that's why the watchdog can make sure that they will never sign something wrong, because it will convict them right away and they will lose their deposit. But do they have to publish these signatures? How do they do that? Indirectly they do, because the watchdog sends a transaction to the contract to convict them. Oh, they do that? Yeah. Okay, I didn't know.
That's why we use the deposit: because we can convict them directly on-chain. So when you ask node B, and node B asks node A and C, what they will actually do is sign that block hash? Right, because obviously node B cannot fake the signatures from A and C. That's what the watchdog relies on: node A and C are signing the block hash. How do you know who signed it? When they place the deposit on-chain, you know the public keys of A and C, and if you have a signature, you can recover the public key of the signer. But you have to know what they signed. You do know what they signed, because you get the block hash back from the node: you take the block header you got from node B, hash it, and apply that block hash to the signatures from A and C. You recover the public keys from those two signatures, and then you just look them up in the registry. So the recovered public keys correspond to the entries in the registry that I fetched before? Yeah. We will do this practically later; we can look at a proof manually and see exactly what we're doing. Because in the end, it all comes down to the block hash. Node B will provide the block header, and the block header doesn't prove anything on its own; that's why A and C need to sign its hash. So the initial list of nodes, which is on-chain: how is the client pulling that without having a fixed first node? In the beginning, yes, at first it ships with the address of the registry and a set of known good nodes, and then it asks these good nodes for the list.
Once it has the list, it can keep updating it with this mechanism all the time. But these initial good nodes could be bad actors, right? If I wanted to, I could run one. Yes, and that's why, of course, there are some default nodes. Even if you start Parity or Geth today, there are some default peers that you start with. But if you don't trust them, you can configure your own. Okay, so we discussed that these nodes have to give us signed information, and if they give wrong information, they can lose their deposit. That's not a good incentive for people to run such nodes, right? If the only thing you get is some work, and you lose money if you do something wrong, that's not the best motivation to run a node. That's why we also built an incentive system, which means if the client asks for validated information, it pays a very small amount of money to these nodes for this information. This is not really a micropayment, it's a nanopayment, a very, very small amount of money. We have some models; I will not go into more detail in this session. We can support different incentivization models: it could be an infrastructure incentive, where nodes earn reputation for answering correctly, or it can be a monetary incentive, where we pay such a micropayment. And with this system, we can build an ecosystem: the nodes run, give good, signed information and receive some money in exchange for it. As I said in the beginning, this client is not only able to talk to Ethereum; it can of course talk to several blockchains at the same time. Because this client never stores state and is not synchronized, it can, with the same interface, talk for instance to public Ethereum, to any other Ethereum-based chain, or even to other blockchains.
And not only that: all data services which can give us a proof that their information is correct can be included in this protocol, for instance IPFS, and other protocols as well. As I said, we have different implementations. There is a TypeScript implementation, which is good for inclusion in websites or mobile applications. And we have a C implementation, which is small enough to run on such a microcontroller. There we have several editions: the nano edition, which can only do the validation of transaction receipts and is the smallest edition, and the full Incubed client, which has its own EVM and can even do interactions with smart contracts, so we can run contract code in our own EVM. Okay, at this point, are there some additional questions? Some of the questions we already discussed, yes? What can the Incubed client actually ask the node? Will it be "give me the hash", or a full RPC request to give me the answer? You can send every RPC request that you could also send to a normal RPC client. So basically it's the same RPC as before? Yes, it is the same RPC, plus the additional information we need for the Incubed protocol, meaning the validation request. But otherwise you can use the complete RPC interface. Is the RPC request specific to your implementation? No, it's exactly the same, plus additional parameters. There's just one additional property; everything else is exactly the same. This additional property on the request defines exactly what kind of proof and what kind of signatures you want, and the response then contains the proof. But everything else is exactly the same standard JSON-RPC as we know it. Will you go into, later in the session, the part where, if I make an eth_call to get some state from a contract, how you prove that this response actually corresponds to the real state? Yes, we are going into exactly these details.
How we do the proofs: that's the interesting part, how to verify each detail of the RPC response. Okay, any other questions? Otherwise, yeah? You just mentioned the EVM. On the actual hardware deployments, how many devices do not run any EVM? Is it a big problem, is it a small problem? Do you mean how big our EVM is, or? No, on the actual deployments, on real pieces of hardware, how many actually do not run any EVM? So it depends on what you want to do with the client, of course. If you only need some information from the blockchain, you don't need the EVM. The EVM you only need when you want to interact with smart contracts. But if you want to read any storage data? The storage data you can read without the EVM. There's no API for that, right? You can make an eth_call to read anything, and the only way I know to check an eth_call result is by having an EVM. Right, exactly. That's why we have the EVM implemented. Of course, you can easily read storage directly, if you know what you're doing. But for eth_call, that was the reason why we started implementing the whole EVM, because the only way to verify that the result is correct is to first have all the storage values, verify the merkle proofs for them, and then execute the code directly inside Incubed. That seems very heavy to me. It's not as heavy as you think, because at least this chip can do it, and if this chip can do it, your computer can do it easily. So what is the chip? This chip here has 256 kilobytes of RAM. Is it an ESP32? It's an nRF, from Nordic Semiconductor. And it means we have one megabyte of flash, and the RAM is usually the biggest limit.
And usually the biggest issue is getting the code into the RAM, because you need to download the code of the smart contract before you execute it. So if you have a huge contract, you might have an issue, at least on very small devices. Okay, now Simon can take over. I'm Simon, I'm the CTO of Slock.it, and especially for the Incubed project I'm also the lead dev, so a lot of the code was written by me. In order to get prepared, if you want to actually code something, there is the Docker image. The image has all the toolchains you need to compile the C code, and it also has the Java tooling. If you want, you can get it now, or you can take one of the USB sticks if it's a little bit too much to download. Everything I will show, I will just run with docker run, using the same Docker container, so we all have the same environment and we don't have to figure out why something doesn't install on your computer. Okay, while you're downloading it, I would like to give you a short overview of where we are standing today. Just a few weeks ago we prepared release candidate one. It's not a final release yet, because we went into a security audit and the audit is not finished yet. That's why we will have a production release as soon as it is finished. But we already published everything, so it's ready to be used and tested, and we would also like to invite you to do this: test it, try it out and see how it works for you.
If you go on GitHub, you will see a lot of repositories, and I want to shortly explain what you can expect to find there. The first thing is, as Stefan mentioned, we have two implementations. The TypeScript implementation was the first one we did, a little over a year ago. When we started with TypeScript, it was first more like a feasibility study: can we really verify everything? And we can. So we said, okay, it works great with TypeScript, but of course TypeScript doesn't run on microcontrollers, and that's why we said we need a special implementation for the microcontrollers. Of course, we thought about what kind of language we were going to use. I would have loved to use Rust or something, but if you look at the industry today, for all the people building on IoT devices, on microcontrollers, all the toolchains and the whole environment are written for C. If you write something in C, it will run everywhere. And, as our tests showed, you can get very, very small binaries out of C. So we ended up implementing the whole thing in C as well. That's why there's one repository that is just in3. It should have been called in3-ts, because it's actually the TypeScript one, but we first released it under that name. And there is in3-c, which is the C implementation. So there are two repositories. And of course, there's one repository for the node we talked about, the one that provides the proofs. This is the in3-server; this is what we connect to. And then we have two other repositories. The common one is more like utils and default configuration, for example the default nodes; the boot nodes are configured there. And in3-contracts is the repository with the smart contracts. There are just two smart contracts that really do all the work, but of course they are very crucial, with all the tests and everything around them.
So that's the code on GitHub. We also have two Docker images published. One includes the client itself, so if you don't want to install it, you can run the Incubed client directly with a docker run command. This is sometimes very helpful if you want to replace a light client, for example: you simply run the Docker image, it opens up a JSON-RPC port, and every request that comes in, Incubed will answer by communicating with the nodes. So it can replace a locally running client. The other image is the server, which is one of these nodes; this is also what we are going to talk about later, how you set up a node, and usually you just run the Docker image for that. Then, for TypeScript and JavaScript, we have some npm packages. The most important one is the Incubed client itself, the TypeScript version. The C version we also compile to WASM, which is why we have a second package that is just the WASM build. At the moment it's experimental, but it seems to work very nicely. And it's really interesting to compare them, because they both produce the same result. The TypeScript version has all the typical dependencies that you all know, a lot of different node modules; if you pack it, it comes down to a size of about 2 megabytes. If you look at the WASM version, the WASM file is about 270 kilobytes, and everything is in there, the complete EVM and all the rest, with no dependencies at all. Which is pretty cool, especially from the security point of view: having a module with no dependencies, where nothing can mess around with prototypes and so on, is really quite helpful. And it's quite fast as well. The other package, the common one, covers a lot of utils: how to serialize, and things like that. So this is basically what we have out there.
And if you want, you can also look it up: on Read the Docs we have our documentation. We tried to put in as much information as we can, so hopefully you will find almost any answer in there. If not, let us know what we missed documenting. When you look for information, the docs are usually a good starting point: they explain the concept, just as Stefan explained the background, all the details, and even the API references. So, are you ready so far to start the Docker image? Or are you still downloading or installing? Maybe we can switch the cables now. Let me give you an overview of what we want to do. First, I want to show a small example of how to use the TypeScript client, because I think the TypeScript client is targeted especially at dapp developers. JavaScript is today the world's most used language when it comes to building user interfaces; there's almost no alternative right now. So if you are building there and do not want to rely on an extra environment where somebody injects a web3 object for you, you will be very interested in this. So the first thing is some TypeScript coding: how you use the TypeScript client. Then we will use the C client, and for the C client there are different targets we compile to. One is just a simple executable that you can even use in a bash script. We'll play around with that a bit and show what you can do with it; it's actually quite a lot that you can do just on the command line. Then we will look into C directly, with a small sample application in C showing how to use the Incubed client there. And then maybe we'll also look into Java, because we also have bindings for Java.
The Java client itself just uses the C client through a JNI interface, but the bindings give a natural feeling for Java. Why Java? Because especially when you want to develop native Android apps, that's where you would use it, so you can use the client directly on that platform. And in the end, I also wanted to do a deep dive into how the verification works; maybe we should think about switching the order, depending on whether you're more interested in the deep dive or in the practical things first. Maybe we'll do the deep dive now, because there were already a few questions about it, and I think it will help to understand a little bit more how exactly the verification works. We are flexible. Okay. As mentioned before, what we do is send RPC requests. This is a deliberate design decision. Sometimes people ask: why don't you use the light client protocol (LES)? Well, there are reasons why we don't. One of them is that these clients are not only stateless, some of them don't even have an internet connection. Using the light client protocol means you have to be part of the peer-to-peer network, and this is a requirement we do not want. This chip does not have an internet connection at all, but we can still use it, because it supports Bluetooth, for example, so we can use the internet connection of your phone. That's why we abstract the transport layer completely, and a simple request/response protocol is perfect for that. This is one reason why we cannot rely on the peer-to-peer network, where you have the requirement of an active internet connection. So it works also for offline devices; that's important. The other thing is that most of these devices will sleep all the time, a door lock for example, and once you want to open the door, it needs an instant answer. With the light client protocol, you would first have to start finding your peers before anything works.
So, in order to get an instant answer, we have these node lists: you know your peers beforehand, and you know exactly where to send requests. You randomly choose one and say: okay, I want this answer, and this proof, from you. And this is usually how it looks. You see, this is the actual request, and here you have the in3 property, where we define what kind of verification we want, and the signatures, meaning the nodes we want signed block hashes from. The node will then organize that. The response looks the same way: you see the result gets returned as usual, and then you see the proof down there. There are different kinds of proofs, and I'll talk in a minute about what these proofs exactly are, but an important part is usually the signatures, where we requested them. And then there is lastNodeList: this is the block number where the last event happened on the registry, meaning whenever a new node is added, this lastNodeList will change, and the client will figure out: oh, it's time to update my node list. The client needs to always be sure that these nodes are still available. And that they still have the deposit, too. That also, exactly. If they were convicted and got kicked out, of course you want to make sure you don't ask that node anymore. A question regarding this aspect: if I run a node and I deposited, say, 100, how can I withdraw it? You can withdraw it, but there is a timeout before you can, because one of the attack vectors would be to give a wrong answer and then immediately withdraw the money. So is there an expiration date for each node? Yes, when you register, you can define it and say there is a timeout of maybe one day or maybe one hour. This is also something the client needs to consider, because if I am getting a signature from this node, I may only have one hour to convict it if the answer is wrong.
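To make the "standard JSON-RPC plus one extra property" idea concrete, here is a sketch of how such a request could be constructed. The shape of the `in3` options follows the talk loosely; the field names are illustrative assumptions, not the authoritative Incubed wire format.

```typescript
// Sketch of an Incubed-style JSON-RPC request: a completely standard
// request plus one extra "in3" property carrying the verification wishes.
// Field names are illustrative, not the exact protocol specification.
interface In3RequestOptions {
  verification: "proof" | "never"; // what kind of proof we want back
  signatures: string[];            // nodes asked to sign the block hash
}

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: unknown[];
  in3?: In3RequestOptions;         // the single additional property
}

function buildRequest(
  method: string,
  params: unknown[],
  signers: string[]
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: 1,
    method,
    params,
    in3: { verification: "proof", signatures: signers },
  };
}

const req = buildRequest(
  "eth_getBalance",
  ["0x1234567890123456789012345678901234567890", "latest"],
  ["0xnodeA", "0xnodeC"] // illustrative signer addresses
);
console.log(JSON.stringify(req, null, 2));
```

Strip the `in3` property and this is a request any ordinary Ethereum node would accept, which is exactly the compatibility point made above.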
For other nodes I may have a bit more time, so this means I would probably choose somebody who gives me more time. The client has this information from the smart contract, so it knows exactly how long it can convict. Right. Okay. Now, what I am going to explain is a bit more about how Ethereum works internally, because this is how these proofs actually work. The information we get depends on what kind of information we ask for. If we ask, for example, for a block, that would be the easiest use case. If you get a block by number, you get the data of the block header, for example. The block header itself can be easily verified: all you have to do is take this data and serialize it as a block header. You just need to make sure that everything is in the right order, and there are some small things you have to consider, but if you do it right, you can create the RLP-encoded data, and if you hash this byte array, you get the block hash. So we can just take the data, create the block hash, and compare it to the signed block hash, and if they match, the block header data is correct. As easy as that to verify the block data. Now, the next thing is transactions. Let's say we call getTransactionByHash and we have the transaction data; we want to verify that this data is correct. We already have the block header; we always need to verify the block header first. And there's one field in the block header called the transaction root. The transaction root is just a 32-byte hash, and it is the root hash of a merkle tree, to be specific a Patricia merkle tree. Each transaction in the block is part of this merkle tree. And if you want to verify a transaction, you need to know its path inside the merkle tree, and this is the RLP-encoded transaction index.
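The "root hash pins down every transaction" idea can be illustrated with a toy merkle tree. Two simplifications, purely to keep the sketch stdlib-only: SHA-256 stands in for keccak-256, and a plain binary merkle tree stands in for Ethereum's Patricia merkle trie. The hashing-up-to-a-root principle is the same.

```typescript
import { createHash } from "crypto";

// SHA-256 as a stand-in for keccak-256; a binary merkle tree as a
// stand-in for the Patricia merkle trie. Both are simplifications.
const h = (b: Buffer): Buffer => createHash("sha256").update(b).digest();

// Build the root hash over a list of "transactions" (raw bytes here).
function merkleRoot(leaves: Buffer[]): Buffer {
  let level = leaves.map(h);
  while (level.length > 1) {
    const next: Buffer[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate last node if odd
      next.push(h(Buffer.concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

const txs = ["tx-0", "tx-1", "tx-2", "tx-3"].map((s) => Buffer.from(s));
const root = merkleRoot(txs);

// Tampering with any single transaction changes the root, which is why a
// signed block header (containing the root) fixes every transaction in it.
const tampered = [...txs];
tampered[2] = Buffer.from("tx-2-evil");
console.log(root.equals(merkleRoot(txs)));      // true
console.log(root.equals(merkleRoot(tampered))); // false
```

A real merkle proof then only ships the sibling nodes along one path, so the client can recompute the root from a single transaction without downloading the whole block.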
Take the transaction index, RLP-encode it, and you have the path inside the transaction tree. Knowing this, the proof that the server gives you is just the Merkle proof of this transaction. The Merkle proof means the server serializes each node on the way down to the transaction. We will go into detail on how this actually looks, but this way you can then create a hash, and if this hash ends up matching the transaction root exactly, you know the data is correct. The important part here, again, is that the transaction data needs to be serialized in the correct way. That's how you do it: you take all these values, the nonce, the gas price and so on, put them in there, use RLP encode, and you get the raw transaction. With this raw transaction you can then check whether the Merkle proof is correct or not. The transaction receipt works exactly the same way. If I ask for a transaction receipt, there's a different field in the block header called the receipt root, and then we verify against that in exactly the same way. The only difference is the way we serialize it; you can see there are different fields. All the events, for example, are part of the transaction receipt, so you need to serialize them as well, and this code gives you the raw bytes of the transaction receipt, which is then part of this Merkle tree. The next thing is when you call eth_getBalance, for example, or eth_getStorageAt or eth_getTransactionCount. This information comes from the account object, and account objects are verified as part of yet another Merkle tree, a different one. This is the state root: we have this state root as part of the block header, and then there is a Merkle tree where all the accounts are stored.
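The walk from the root down to the transaction can be sketched as follows. This shows only the control flow: RLP decoding and hashing of the nodes are abstracted away (Ethereum hashes trie nodes with keccak-256), and real tries also contain extension nodes, which are skipped here. All names are illustrative.

```typescript
// Sketch of walking a Merkle-Patricia proof along a nibble path.
type TrieNode =
  | { kind: "branch"; slots: (string | null)[] } // 16 child hashes
  | { kind: "leaf"; value: string };             // the raw, serialized value

function walkProof(
  nodesByHash: Map<string, TrieNode>, // the proof nodes, keyed by their hash
  rootHash: string,                   // e.g. transactionsRoot from the header
  nibbles: number[],                  // e.g. [8, 1, 0xa, 0xb] for index 171
): string | null {
  let hash: string | null = rootHash;
  for (const nib of nibbles) {
    if (hash === null) return null;               // empty slot: value absent
    const node = nodesByHash.get(hash);
    if (!node) throw new Error("proof is missing a node"); // invalid proof
    if (node.kind === "leaf") return node.value;  // a leaf ends the walk
    hash = node.slots[nib];                       // follow the child hash
  }
  const last = hash !== null ? nodesByHash.get(hash) : undefined;
  return last && last.kind === "leaf" ? last.value : null;
}

// Tiny example: a root branch whose slot 8 points at a leaf with the raw tx.
const nodes = new Map<string, TrieNode>([
  ["rootHash", { kind: "branch",
    slots: Array.from({ length: 16 }, (_, i) => (i === 8 ? "leafHash" : null)) }],
  ["leafHash", { kind: "leaf", value: "rawTxBytes" }],
]);
console.log(walkProof(nodes, "rootHash", [8])); // "rawTxBytes"
```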
Each account has four fields: the nonce, or transaction count; the balance of the account; the storage hash, which is again a root hash of another Merkle tree; and the code hash. In case there is no code, there is a fixed value for the code hash, which is the hash of empty code. So you can now verify that the account data is correct. The interesting part about this is, for example, the storage hash you would need for verification: at least until last year, it was not part of any RPC call. You were not able to get this information through RPC. That was one reason why we said we need another function for that, and that's why we created this EIP for eth_getProof. I implemented it for Geth and also for Parity, and they merged it, so it's been in there for almost a year now, and eth_getProof is exactly the function that gives you the Merkle proofs you need for this. The proofs for transactions are something you can build from the existing data: you can just collect all the transactions and build the proof yourself. But here you need a special function, and this is now in there. I guess we just need to push a little more to make it official, because I think it's still a draft EIP, but it should go through; a lot of people are already using it. That's why we have two steps here for the account object: first we verify the account object, and then all the storage values you need from this one contract are stored in another tree, and the root of this storage tree is part of the account object, so you go down in two steps.
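The eth_getProof call (standardized as EIP-1186) can be sketched like this. The request shape and the result fields below follow the EIP; the address, storage key, and all "0x..." values are placeholders.

```typescript
// eth_getProof takes an address, a list of storage keys, and a block,
// and returns the account fields plus the Merkle proofs for the account
// and the requested storage slots.
const getProofRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "eth_getProof",
  params: [
    "0x0000000000000000000000000000000000000000", // account address (placeholder)
    ["0x0000000000000000000000000000000000000000000000000000000000000000"], // storage keys
    "latest",
  ],
};

// A truncated result: the four account fields from the talk (nonce, balance,
// storageHash, codeHash) plus the two proof levels described above:
// accountProof against the state root, storageProof against storageHash.
const exampleResult = {
  nonce: "0x1",
  balance: "0x0",
  storageHash: "0x...",
  codeHash: "0x...",
  accountProof: ["0x..."],
  storageProof: [{ key: "0x...", value: "0x0", proof: ["0x..."] }],
};
```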
This is how you serialize the account object, including the storage root, and then you can go down and ask for the storage values. There are some small things you have to watch out for: for example, if a storage value doesn't exist, that doesn't mean you simply get zero out of it, because usually you get the empty hash instead. That's why you check whether it exists, which leads to a different kind of proof. This was also a challenge: Merkle proofs are great, but how do you verify that something does not exist? For example, if I call eth_getBalance and get zero, how do I verify with a Merkle proof that this is correct? It works; you can do it with a Merkle Patricia tree, even though it's a little tricky. There are certain constellations where you can be sure a value cannot exist: if a branch node, for example, has a zero or empty slot at the position your path selects, then you know the value cannot exist there. The hardest one was actually the proof for eth_call, because if you're calling a function, you need to execute it locally; there is no way you can simply trust a node. So what we did here: the first thing is, the node itself needs to collect all the proofs. How does that work? For Parity we actually used trace calls, which is quite nice, meaning we asked Parity to give us a trace of this call, and then we went through each opcode and looked for anything that relies on some kind of storage value. Whenever we found one, we knew we needed to verify it, meaning we needed to create a Merkle proof for this storage value, or even for a balance or some code. So for all SLOADs, BALANCE opcodes, and calls to other contracts, we collected them and created a proof out of this. Then we have a proof for the storage values, plus we needed, of course, the code itself. The code can at least be cached, so the client does not need to download it every time, but we do need to verify it, because we have the code hash as part of the account proof. After we collected all of this, the client needs to go through and execute the code itself. That's roughly what this code shows: we created an empty Merkle tree, we created a new EVM for verification, then we went through all the account objects, and after this we created a transaction and ran it in this EVM. Afterwards, we just pick up the result. Of course it's a little simplified, but that's basically what we do, and that's what the TypeScript client is doing here. For the TypeScript client we used the ethereumjs-vm implementation; we didn't want to reinvent the wheel. But for the C client we had to implement the EVM ourselves. It is done the same way: the client will first verify everything and then run the code, and while running, it also needs to check that all the values needed are part of the proof. There cannot be anything that is not verified. Can you explain again why you are getting the trace? Because we need the opcodes: we need to know which code will be executed when you do this eth_call, because we need to find the SLOAD and BALANCE opcodes in there. And the trace call at least makes it easy, because you get it all in one request, and then we can simply find the relevant ones and collect the proofs. For Geth it's a little more difficult, because the trace functions Geth currently supports only work for existing transactions that are already mined; that doesn't help with eth_call. That's why what we do there at the moment is a little slow: we get the code, we execute it ourselves, and whenever we hit something, an SLOAD for example, we ask Geth to give us the storage value, and we collect the proofs that way. But this means it's way slower, because you need more than one request to get all the data. And since the client is completely stateless, you need to know exactly which storage values you need to send to the client.
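The server-side proof collection just described can be sketched as a scan over the trace. The trace format below is invented for illustration (Parity's trace output and Geth's per-step fetching differ in practice); the point is only which opcodes create a proof obligation.

```typescript
// Sketch: given a trace of an eth_call, collect a proof obligation for every
// opcode that reads state (SLOAD for storage; BALANCE, EXTCODESIZE, CALL for
// other accounts). TraceStep is an illustrative stand-in, not a real API.
type TraceStep = { op: string; target?: string; key?: string };

function collectProofTargets(trace: TraceStep[]) {
  const storageKeys = new Set<string>();
  const accounts = new Set<string>();
  for (const step of trace) {
    if (step.op === "SLOAD" && step.key) storageKeys.add(step.key);
    if ((step.op === "BALANCE" || step.op === "EXTCODESIZE" || step.op === "CALL")
        && step.target) accounts.add(step.target);
  }
  return { storageKeys: [...storageKeys], accounts: [...accounts] };
}

// A call that reads one storage slot and one balance needs exactly those
// two proofs attached to the response.
const targets = collectProofTargets([
  { op: "PUSH1" },                        // pure stack ops need no proof
  { op: "SLOAD", key: "0x01" },
  { op: "BALANCE", target: "0xabc" },
]);
console.log(targets);
```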
Right, so the node collecting the proof has all the information locally, but the stateless client needs to receive everything it needs for verification within the response. That's how the eth_call verification works. Okay, let's do some practical things. What I'd like to do is manually verify a transaction. I think it's very interesting to do it at least once. I had to do it a lot of times when I was debugging all this stuff: something says "okay, the hash doesn't match", nice, why? So you go through the Merkle trees and do all these things, but when you do this, you really start to understand how it all fits together. So I think it's worth going through a regular proof manually at least once to see how this works. [Screen sharing is set up.] What I just did here: I started the same Docker container that we mentioned at the beginning, and part of the C client is a small RLP util, where you can simply pass in any data and it will be RLP-decoded. It's not pretty, because you will see a lot of numbers. Okay, let's see if it fits in there. It's too big, so let's pick some smaller data. This is actually from our test data: a request for eth_getTransactionByHash, as you can see here. The result is the transaction data itself, and this is the proof. Now let's go through this proof in detail. The first thing we see is that the proof contains the block header; this is the complete block header. If I decode it, you will see parent hash, miner, state root, transaction root, all this data, and the tool also calculates the hash of the data itself. The response contained this block hash; if we compare them, it matches, so we know this is the right block header. And of course, if we have a signature for this block hash, we can also confirm it against the nodes. Now that we know the block header is correct, we can look at the transaction data itself. This is the Merkle proof for the transaction. The first thing we need in order to verify the Merkle proof is the transaction index. We know the transaction index is 171; it's also the same number here. Now that we have the transaction index, we do exactly what I explained: we start with the transaction root, which is this one, and follow the path. The path itself we can simply calculate: it's the RLP encoding of the transaction index. What you see here is the path inside the Merkle tree. Now we take the first node of the Merkle proof, these serialized bytes, and what you see here is how a node in the tree looks. This is a branch node, and each branch node has 16 slots, you could say, and these are the slots, and this is the hash of the node. The hash of the first node must be the root hash that's also in the block header. If you look here (it's hard to see on a small screen), we have the transaction root, and it matches, so we know this is the right starting node. Our path is 8-1-a-b, so the first nibble is 8: at slot 8 we find the root hash of the next node. We take the next node, this is its hash, and it matches the entry at slot 8. I can do this all the way down. If we then look at the last node, the last node is a leaf node, meaning it contains the actual value; you don't need to follow the path any further. The value of this leaf node is the raw transaction. So I can take the data from this raw transaction and compare it to the result we have here: the gas, for example, this value matches; this one matches. And that's exactly what the Incubed client is doing, just very fast: it verifies the proof, as we've done manually here, and then compares each field to the response to make sure it's correct. And we actually have a lot of JSON tests where we fuzz around, manipulate the proofs, and make sure the client detects each change. Okay, are there any questions about how this works? We did not invent this tree structure, by the way; it's part of Ethereum, but we are using exactly this to verify the data in the Incubed client. This RLP decoder you are using, is it part of a library, or is it a custom implementation? We had implemented RLP for the C client anyway, so this is just a small command-line tool around it. So it's part of your codebase?
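The chaining checked by hand in this walkthrough can be sketched in a few lines: the hash of the first proof node must equal the root from the block header, and each following node must hash to the value its parent points at. Ethereum uses keccak-256; sha-256 stands in here only so the sketch runs with Node's standard library, and childHashOf stands in for "decode the node and read the slot our path selects".

```typescript
import { createHash } from "crypto";

// Placeholder hash: Ethereum hashes trie nodes with keccak-256, not sha-256.
function hashNode(node: Buffer): string {
  return createHash("sha256").update(node).digest("hex");
}

function checkChain(
  proof: Buffer[],                       // proof nodes, root node first
  rootHash: string,                      // e.g. transactionsRoot from the header
  childHashOf: (node: Buffer) => string, // pointer to the next node down
): boolean {
  let expected = rootHash;
  for (const node of proof) {
    if (hashNode(node) !== expected) return false; // node doesn't match pointer
    expected = childHashOf(node);                  // descend one level
  }
  return true;
}

// Two-node example: node1 "points at" node2 by containing its hash as text.
const node2 = Buffer.from("leaf-with-raw-tx");
const node1 = Buffer.from(hashNode(node2), "utf8");
const root = hashNode(node1);
console.log(checkChain([node1, node2], root, n => n.toString("utf8").slice(0, 64))); // true
```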
Yes, it's part of the C client, but it was quite helpful. I know Geth also has an RLP decode tool, or something like it. This one is just a little easier to read: I put a lot of debug output in there, because it guesses what it is looking at from the number of fields, whether it looks like a block header or like a transaction, and it gives you the field names, which makes it easier to read. It's more of an internal tool to play around with, but it really helps to understand a bit of what's going on inside. Okay, if you want, you can try it in the Docker image mentioned earlier. What I would like to do next is show you, or actually we'd love it if you do it yourself, how to use the client: first the TypeScript client, and then maybe the C client, how to install it and how to use it. Technically it's not very hard, because all these things we have seen here are going on directly inside the client. From a developer's perspective, you just want to use it; you don't even have to care about the details. The verification just takes place, meaning you simply use it. So let's create a very simple JavaScript example. I do it in plain JavaScript so it's even easier than TypeScript, because you don't have the extra compile step, but the implementation itself is completely done in TypeScript. In the image we have also downloaded all the node modules; everything is installed, so you don't have to call npm install. But if you were starting from scratch, you would call npm install in3, or of course you can save it to your package.json; I guess that's something everybody knows. The node modules are already there, so let's write a simple demo. The first thing you do is require the module, and here we take the Incubed client class. Now we can create a client; it's really easy and straightforward. If you do it like this, it will create a client with the defaults, but if you want, you can of course pass a lot of configuration, meaning you can even define the boot nodes you want to use, how many requests you want to send, how many signatures you want to have. One important parameter is the chain ID, which chain you want to use. At the moment we have deployed on Görli, on Kovan, and on the mainnet, but we will probably add more chains on demand. How do you establish that these proofs really go back to the chain, that this block actually belongs to the chain? The way we do this is by getting a signature from the other nodes; they need to sign that this block hash is correct. And the reason this works is that this signature can then be passed to the smart contract itself: the contract can call ecrecover to make sure it's the right node, and it can also verify that the block hash is correct, because there's a BLOCKHASH opcode, at least for the last 256 blocks, so you can find out directly in the smart contract whether it is correct. Then we can compare, and we know whether it was correct or not, and if it was not, the node loses its deposit. And even for older blocks: in the beginning we had only these 256 blocks, these are the low-hanging fruits, the easy ones. But to go even further back, we created a second contract, the block hash registry, where you can reconstruct all the blocks: you deliver the proofs, namely the block headers, and out of these block headers you can then prove that a certain block hash is the one for that block. Now, what happens with reorgs, where the nodes were not lying, but in the end they returned information that is actually not true? After some time, exactly. We've put a lot of thought into this as well, because you're right; this is something that too many developers don't
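The demo setup described here can be sketched as follows. The option names (chainId, signatureCount, requestCount) follow the talk, but the exact configuration keys and API may differ between in3 versions, so treat this as a sketch rather than the authoritative API.

```typescript
// Sketch of configuring the Incubed client, per the workshop demo.
// Option names are taken from the talk and may differ in your in3 version.
const config = {
  chainId: "goerli",  // which chain: the talk mentions Görli, Kovan, mainnet
  signatureCount: 2,  // how many signed block hashes to request
  requestCount: 1,    // how many nodes to ask in parallel
};

// In the workshop this roughly becomes (not run here, since it needs the
// in3 package and network access):
//   const In3Client = require("in3");
//   const client = new In3Client(config);
//   const block = await client.eth.getBlockByNumber("latest");
// The verification happens inside the client; the calling code stays a
// normal JSON-RPC style call.
console.log(config);
```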
think that much about: finality. At the moment there is a configuration option, a minBlockHeight, that says how many blocks a node waits before it actually signs a block hash. Because one of these nodes will simply get a request: please sign block number X. And what it will do is ask: is this really final? Because if it's not final, I would risk losing my deposit just because I'm on the wrong fork. If you look at Etherscan, for example, they have these nice stats: there were reorgs of at most around 3 blocks in recent months. At the moment the default is 6 blocks, so the node will not sign any block that is younger than 6 blocks. That's why, if you ask for something younger than that, you will not get any signature; the latest block is not final. But that basically means, when I use the RPC call and ask for the latest block, which is kind of the default, I would probably get the latest one without a signature? Yes, and I think this is an important point that nobody thinks about: asking for the latest block is not secure information. Here you learn a little bit through pain: if you ask for the latest block, you will never get a signature, because no node will sign it. You might still get a response, just not signed, because no node cares to sign something that might turn out to be wrong. We can test what happens: the Incubed client will then simply reject the response if, as you said, I want to have two signatures.
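The finality rule described here reduces to a one-line predicate: a node signs a block hash only if the block is at least minBlockHeight blocks behind the head (6 by default in the talk), which is exactly why "latest" can never come back signed. A minimal sketch:

```typescript
// Will a node sign this block hash? Only if it is far enough behind the
// chain head to be considered final (default 6 blocks, per the talk).
function willSign(blockNumber: number, head: number, minBlockHeight = 6): boolean {
  return head - blockNumber >= minBlockHeight;
}

console.log(willSign(100, 103)); // false: only 3 blocks deep, could be reorged
console.log(willSign(100, 106)); // true: 6 blocks behind the head
```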