So, today we're going to talk about storage proofs. I'm going to present storage proofs and explain why they're cool, how to work with them, why you need tooling to work with them, why they're even possible, and the trade-offs behind all of this.

So, a few words about storage proofs. I really believe that they are cool, especially nowadays. My thesis is that Ethereum is pretty sharded nowadays, and with storage proofs we can essentially read state in an almost synchronous manner, which is a pretty nice thing to do given the circumstances. And maybe let me also explain why it's even possible. A storage proof rests on the idea that the entire state is committed to in a cryptographic manner, using a data structure like a Merkle tree or a Merkle Patricia tree, so we can essentially verify any specific piece of state, at any point in time, on any domain. That's pretty nice, and it doesn't introduce additional trust assumptions: you just rely on the security of the base chain. So that's the TL;DR on storage proofs and why they're cool.

Now, a bit of a sponsored section: what we're doing at Herodotus. Our goal is to make smart contracts self-aware, in a way, by providing access to historical state. Like I said, my thesis is that Ethereum is pretty sharded nowadays. We want to unshard it using storage proofs, and we want to enable synchronous data access, because today we don't have really nice ways to access data synchronously without introducing new trust assumptions. How do we achieve that? Obviously with storage proofs, plus SNARKs and MPC. I'll get to why we even need all this tooling, but first a few words about what storage proofs are.

Okay, so what we're going to cover in today's workshop: all the basics required to properly understand this primitive, how to work with it, how we can generate these proofs, why they're pretty useful, how you can access these commitments (I'll get to what we call a commitment) in a trustless manner, and how we make smart contracts self-aware and enable historical data reads.

About the background I want you to have for this workshop: we're going to start from the very basics. What is a hash function: just a very quick reminder, I hope it takes less than a minute. Generalized blockchain anatomy, and what an Ethereum header looks like. Why Ethereum? We're not only Ethereum-focused, but for the sake of this workshop I think it's best to present on this concrete example. Merkle trees, explained like I'm five: I'll quickly explain the idea, how it works, and what a Merkle Patricia tree is, without going too much into the details. Then the anatomy of Ethereum state, which is pretty important for dealing with this primitive, and finally how to deal with the storage layout.

Cool. So, hash functions. Essentially, a hash function takes an input of any size and always returns an output of a fixed size. What's also important: there are no two inputs that will generate the same output, which is what we call collision resistance, and you cannot reverse the hash function, meaning that given the output, you don't know what the input was. A pretty useful primitive for building blockchains. I assume everyone is familiar with this.
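To make those properties tangible, here's a tiny sketch using keccak256, Ethereum's workhorse hash; the ethers v6 import style is my assumption:

```typescript
import { keccak256, toUtf8Bytes } from "ethers"; // assumes ethers v6

// Any input size, always a fixed-size (32-byte) digest:
console.log(keccak256(toUtf8Bytes("hi")));
console.log(keccak256(toUtf8Bytes("a much, much longer input string than the first one")));

// A tiny change in the input yields a completely unrelated output, and
// no two inputs are known that produce the same digest:
console.log(keccak256(toUtf8Bytes("storage proofs")));
console.log(keccak256(toUtf8Bytes("storage proofs!")));
```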
Okay, why is it important? Generalized blockchain anatomy: why do we call it a chain? Because we have a bunch of blocks linked together, since each block contains a reference to its parent hash. And let me remind you what the parent hash, or the block hash, is on Ethereum: it's essentially the hash of the header. This is pretty important for dealing with these primitives and making smart contracts self-aware, that is, for accessing historical state. So just keep that in mind, and let's get to the next part.

Okay. So this is an Ethereum block header. As I said, we're going to go through the concrete example of Ethereum, so a bit of anatomy. To access state, obviously we need the state root. The state root is the root of the Merkle Patricia tree of the Ethereum state. We also have the transactions root, which is pretty useful if you want to access historical transactions, including their entire bodies, and the receipts root, which is pretty useful for accessing events, logs, and so on. All of these are roots of Merkle Patricia trees; just think of them that way. And most importantly, we have the parent hash, and with the parent hash we can, in a way, go backwards. I think that's it.

Let's get to Merkle trees. Essentially, the idea is that I can take whatever amount of data and commit to it in a cryptographic manner using this data structure. On the left side we see a standard Merkle tree: all the data goes to the bottom and we hash it (you know what a hash function is now), then we combine pairs of hashes, hash them again, and keep doing that until we get down to a single hash, which is what we call the root.

The Merkle Patricia tree, a modified Merkle Patricia tree to be exact, is the data structure we use in Ethereum. What you see here at the top is the state root, and the state root is essentially the root of the tree. Now, how does it work, and how should you think of it? It's a pretty complex data structure, and I don't want to bother you with it today, but essentially we have three types of nodes: leaf nodes, extension nodes, and branch nodes. Leaf nodes contain the data, while branch nodes and extension nodes, at a high level, just help us navigate the tree. To be honest, to deal with storage proofs you don't really need to understand this part; but to build at the low level, as we do, you obviously need to deal with it quite a lot.
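Before the Patricia variant, here's a minimal sketch of the plain Merkle tree idea, pairwise hashing up to a single root; keccak256 and the helpers come from ethers v6, which is my assumption:

```typescript
import { keccak256, concat, toUtf8Bytes } from "ethers"; // ethers v6 assumed

// Minimal binary Merkle tree: hash the leaves, then hash pairs upward
// until one hash remains: the root. (Ethereum's state uses a Merkle
// Patricia trie, which is more involved; this is just the core idea.)
function merkleRoot(leaves: string[]): string {
  let level = leaves.map((l) => keccak256(toUtf8Bytes(l)));
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // Duplicate the last node if the level has odd length.
      const right = level[i + 1] ?? level[i];
      next.push(keccak256(concat([level[i], right])));
    }
    level = next;
  }
  return level[0];
}

console.log(merkleRoot(["a", "b", "c", "d"]));
```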
Okay, so Ethereum state: how is it constructed? The most important takeaway is that it's a two-level structure. I mentioned that the state root is a commitment to the entire state, but that's not quite the full picture, because Ethereum is account-based: the state root is the commitment to all the accounts that exist on Ethereum. And what is an account made of? It's made of a balance (the ETH balance), a nonce (a transaction counter), a storage root, and a code hash. The storage root is the root of another Merkle Patricia tree, and this tree contains a key-value database that holds the mapping from each storage key to its actual value. Finally, the code hash is essentially the hash of the bytecode. So the main takeaway: first we access the account, and once we have the account's storage root, we can access its storage.

Okay, cool. So, to sum up the background, the main takeaways are: given a block's state root, you can recreate any state for that specific block on that network; and given an initial trusted block hash, you can essentially recreate all the previous headers, which is pretty cool and important for the ideas I'll explain pretty soon.

Okay, so this is going to be a workshop. It's a short one, so I won't have you code, but I will show you some concrete examples. What I want to go through with you today is how we can prove ownership of a Lens profile on another chain. A bit of background: Lens profiles are represented as NFTs, and Lens is deployed on Polygon. I think that's it.

How do we get there? First of all, the question we need to answer is: how does Polygon commit to Ethereum L1? Because if we want to prove ownership of a Lens profile on, say, Optimism, we need to know the state root of Polygon, but Ethereum L1 sits in the middle, so how do we actually access this on Ethereum L1 in the first place? Polygon is a commit chain: it commits a bunch of things to Ethereum every so often, and on L1 we do not validate the entire state transition, we just verify Polygon's consensus. These checkpoints, as they're called, essentially contain state roots. Not directly, but we can access them, so let's get to that part.

This is taken from Polygon's documentation, and this is how a checkpoint looks. As you can see, the checkpoint is made of a proposer (who proposed the block), a start block and an end block (give me a second, I'll get to those), and most importantly the root hash. The root hash is essentially the root of a Merkle tree (not a Merkle Patricia tree) that contains all the headers in the range from the start block to the end block. Cool. So if we go back to the previous part, with this commitment we can essentially prove that we know a valid state root of Polygon for any block in that range.

Okay, a bit of hands-on. We want to prove that I own a Lens profile on Polygon. Step number one: we go to the contracts. We find the contract, go through it, and we see that there's a bunch of logic on top of this ERC-721. This is the base ERC-721; as you can see, it's an abstract contract, and it's slightly modified. Instead of having a standard mapping from token ID to its owner, we have a mapping from token ID to token data. TokenData is a struct, and this struct is 32 bytes in total: 20 bytes are the actual owner, and the remaining 12 bytes represent when the token was minted.

Okay, but how do I actually prove it? One more very important thing when dealing with storage layout: we have something called slot indices. Each variable is assigned a slot in the contract's storage layout, and this particular mapping has slot index two. I'll get to why it's two in a second. So we have a mapping from token ID to 32 bytes of data represented as a struct; just think of it as some bytes.
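Since those 32 bytes pack two fields into one word, here's a minimal sketch of splitting a raw slot value into them; the helper name is mine, and the layout (owner in the low 20 bytes, mint timestamp in the high 12) follows the struct described above:

```typescript
// Split a 32-byte storage word into the packed TokenData fields described
// above: high-order 12 bytes = mint timestamp, low-order 20 bytes = owner.
// (Solidity packs the first-declared struct member into the low-order bytes.)
function decodeTokenData(slotValue: string): { owner: string; mintedAt: Date } {
  const hex = slotValue.replace(/^0x/, "").padStart(64, "0");
  const mintTimestamp = parseInt(hex.slice(0, 24), 16); // first 12 bytes
  const owner = "0x" + hex.slice(24);                   // last 20 bytes
  return { owner, mintedAt: new Date(mintTimestamp * 1000) };
}
```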
Okay, so I guess most of you use Hardhat, so I'm going to present with Hardhat. There is a very, very cool tool for dealing with storage layouts, called, obviously, hardhat-storage-layout. Installing it is literally one `yarn add hardhat-storage-layout`. You add one line to your Hardhat config, you write a new script that contains literally eight lines of code, you run the script, and you get this weird table. What does it actually tell you? And by the way, here's why this tool is pretty useful: as you saw, this contract is abstract, so other contracts inherit from it, and obviously they inherit the storage layout too; with inheritance this can get trickier. Anyway, that's how we get the slot index: there's a column called storage slot, and as you can see, _tokenData is marked as two. And that's it.

Okay, but what do we do with it? How do we get the storage key? Let me check the time. Okay, so a bit of hands-on: how do we get the actual storage key? It sounds scary, and it's meant to be scary. We know the slot index. I want to prove that the address 0x35... owns the profile with ID 3594. How do we get this storage key? We do the following operation: we take the key in the mapping, which is 3594, because that's the token ID (as you know, we have a mapping from token ID to token data), we concatenate it with the slot index, and we hash it all together. That's our storage key. If you're interested in how to deal with more complex mappings and layouts, the Solidity documentation explains it pretty well.

Now, to make sure we got the proper storage key, let's just check it. How can we check it? Super easy: let's make one Ethereum JSON-RPC call to get the storage at this specific key, which is eth_getStorageAt. The parameters: whose storage do we want to access? The LensHub, the contract that is essentially the representation of these profiles; its address is 0xddd4 and so on. And the slot: the storage key, essentially the hash we just computed. The result is 0x000..., and we know it's 32 bytes of data split 20 and 12, so let's split it into 12 and 20 bytes. What do we have? First, a number (0x, a lot of zeros, then 62 through D) that looks small, so apparently it's a timestamp; and the second part is 35...57, which is literally our address. So we got it right: we have the proper storage key.

Cool. But how do we actually get to storage proofs? There is a standardized method in the JSON-RPC standard for Ethereum clients, and this method is called eth_getProof. Given the contract address (better to call it an account address in this specific case), it allows us to generate a state proof. The second argument is an array that contains all the storage keys we want to prove. And there is another argument, 0x1a..., which is essentially the block number for which we prove the state. Let's call this method. Oh, by the way, you might ask: how do we deal with this method on non-EVM chains? Because on some specific rollups, this method is simply not supported. It's actually not a big deal: if you think about it, we just need the database, and on top of this database we can literally build this method ourselves; we just need to know how the storage is constructed.
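Here's the whole derivation and check as one sketch, assuming ethers v6; the RPC URL is a placeholder and the LensHub address stays truncated as in the talk:

```typescript
import { JsonRpcProvider, keccak256, AbiCoder } from "ethers"; // ethers v6 assumed

const provider = new JsonRpcProvider("https://polygon-rpc.example"); // placeholder URL
const LENS_HUB = "0xddd4..."; // LensHub address, truncated as in the talk

// Storage key for a value in mapping(uint256 => TokenData) at slot index 2:
// keccak256(abi.encode(mappingKey, slotIndex)), per the Solidity docs.
const tokenId = 3594n;
const slotIndex = 2n;
const storageKey = keccak256(
  AbiCoder.defaultAbiCoder().encode(["uint256", "uint256"], [tokenId, slotIndex])
);

async function main() {
  // Sanity check: read the raw slot value (split it 12 + 20 bytes as above).
  const raw = await provider.send("eth_getStorageAt", [LENS_HUB, storageKey, "latest"]);
  console.log(raw);

  // The actual storage proof: an account proof plus one storage-slot proof.
  const proof = await provider.send("eth_getProof", [LENS_HUB, [storageKey], "latest"]);
  console.log(proof.accountProof.length, proof.storageProof[0].proof.length);
}

main();
```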
Okay, this is the proof. It looks scary, and it is scary: this entire object is 4 kilobytes of data. Now, I mentioned before that the state is a two-level structure: first we have a proof for the account itself, and then we have a proof for the actual storage slot. One proof is more or less 600 to 700 bytes; the bigger the storage is, the bigger the proof gets, and the more accounts we have, the bigger the account proof gets. So that's a lot of calldata, as you can imagine, and that's pretty bad, because we need to post this proof on chain. But okay, let's try: what's the cost going to be on an EVM chain? Around 600K gas. That's a lot; it kills almost every application you'd want to build on top of this otherwise nice primitive.

And why is it that bad? I explained at a high level what Merkle trees and Merkle Patricia trees are, and there's a trade-off: when using Merkle Patricia trees, the proof is slightly bigger and harder to decode (we actually need to do a bit of decoding there), but we need to do less hashing. So depending on where we actually verify the proof, it might be more feasible to verify a proof based on Merkle Patricia trees, or one based on plain Merkle trees.

But there is a solution: what if we SNARKify such a proof and verify it inside the SNARK? Why is that cool? Let's say I verify this proof inside a Groth16 circuit: the verification costs more or less 210K gas, and the proof is way less than 600 bytes. So it's good: we essentially get rid of the calldata, because the proof itself can be a private input to the circuit. And we can use multiple proving systems depending on the actual use case. Now, why is this very, very cool? First of all, it removes calldata. Second, it allows us to deal with hash functions that are very unfriendly to the EVM, the ones we don't have precompiles for, like, say, Pedersen: verifying such a proof on the EVM would be super expensive, because it's a lot of calldata and the hash function is pretty unfriendly, but we can do it inside the SNARK and just verify the SNARK. Another benefit is that it really helps in abstracting away how we verify these proofs: you don't need one generalized verifier for each type of proof; you can abstract it all behind the SNARK, which is great. These numbers were taken from a very nice article written by a16z a few months ago. And I think that's pretty much it.
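For intuition, here's what verifying those two levels looks like off-chain, sketched with the ethereumjs merkle-patricia-tree package; I'm assuming its v4 API (a static BaseTrie.verifyProof), and the library does the RLP decoding and node traversal mentioned above:

```typescript
import { BaseTrie } from "merkle-patricia-tree"; // v4 API assumed
import { keccak256, decodeRlp, getBytes } from "ethers"; // ethers v6 assumed

// Verify one trie level of an eth_getProof response: walk the supplied
// nodes from the trusted root down to the value. Keys in both tries are
// keccak256-hashed (the address for the account trie, the storage key
// for the storage trie). Throws if the proof doesn't match the root.
async function verifyLevel(root: string, hashedKey: string, proof: string[]) {
  return BaseTrie.verifyProof(
    Buffer.from(getBytes(root)),
    Buffer.from(getBytes(hashedKey)),
    proof.map((n) => Buffer.from(getBytes(n)))
  );
}

// Level 1: stateRoot -> RLP([nonce, balance, storageRoot, codeHash])
// const accountRlp = await verifyLevel(stateRoot, keccak256(address), res.accountProof);
// const storageRoot = decodeRlp(accountRlp!)[2] as string;
// Level 2: storageRoot -> the RLP-encoded slot value
// const value = await verifyLevel(storageRoot, keccak256(storageKey), res.storageProof[0].proof);
```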
Let's get to the next slide: synchronous cross-layer state access. How can a contract deployed on one layer actually access the state of another L2, or of L1? I mentioned that we always need the state root, but because all of these systems have a native messaging system, we can send small commitments, like, for example, the block hash, and it usually goes through L1. From that we can unroll things, or send the state root directly. We also don't have to rely on messaging: we can, for example, rely on the fact that Polygon is a commit chain, and all these systems commit their batches and so on from time to time. This is pretty important: we can get the commitment from which we'll recreate the state directly on L1 and then send it onward. So if, say, Polygon commits on L1, I can send that commitment to StarkNet, and StarkNet does the actual verification.

Cool. So how do we actually do that? Let's break the entire flow into the smallest pieces. The flow is the following. Step number one: we need access to the commitment, which is either a block hash or a state root. We can get it either by sending a message, or by relying on the fact that the chain commits (in a sense, that's still a message), or we can relay it in an optimistic manner, or we can go even crazier and verify the entire consensus. Step number two: we need to somehow get from the commitment to the state root, for a previous block or the actual block, because keep in mind that these commitments are often only block hashes, and with block hashes we can recreate headers, but we cannot access the state directly. Step number three: once we have the state root, we obviously need to verify the state and storage proofs. There are multiple approaches for doing all of that, all of them come with trade-offs, so let's go through these approaches.

Approach number one: messaging. I can send a message between, let's say, Optimism and Ethereum L1; I can get the block hash by just calling the proper opcode, and there it is. It takes some time, but I get it. So we rely on the built-in messaging system, which I think is fair, because its security is equal to the security of the rollup, and if you're deploying an application on this rollup, that's a fair assumption to make. Now, about the downsides: the message must be delivered, so it introduces a significant delay, especially when the withdrawal period sits in the middle. And it requires interacting with multiple layers: first you need to send the message, and then you actually need to consume it. So it's not ideal, but the trust assumptions are pretty decent.

Another approach: consensus validation. By the way, this gremlin is supposed to be verifying a bunch of BLS signatures; I hope it's self-explanatory. Okay, maybe a bit of an intro. Right now Ethereum's native consensus algorithm is proof of stake, which is pretty great for us, because verifying the consensus is finally doable. Before, the hash function used for proof of work, Ethash, was very memory-intensive, so verifying it inside a SNARK or on-chain directly was almost impossible. We also now have the fork-choice rule, LMD GHOST, which is implementable. But doing all of this directly is pretty expensive, so ideally we wrap it inside a SNARK, and there is another downside coming. A few words about the trust assumptions first: well, you verify the consensus directly, so it's fine.
Do you introduce any trust assumptions? Not really. But the biggest downside is that generating the proof actually takes some time. To be honest, this approach is feasible, but compared to messaging it quite often ends up with almost the same latency, you pay a lot in proving time, and it requires more advanced infrastructure.

Okay, the last approach, the one we actually use, is something we call an optimistic layer based on MPC. MPC stands for multi-party computation. Maybe before I explain how it works, let me explain the image; I hope it's self-explanatory. It's an MPC protocol: we have multiple parties, these parties attest to something, then we have an observer that can challenge it, and finally, once everything is fine, the commitment is delivered to a specific chain, in this case StarkNet.

How does it work? We have a set of trusted relayers, or validators, and they attest that a specific commitment is valid. If we want to get the commitment, i.e. the block hash of block number X, on StarkNet, then instead of sending a message that would be delayed, we can essentially make an off-chain call, get the latest block hash, and relay it directly to StarkNet. It comes with a few downsides, because we do introduce some trust assumptions, but it's still okay. It works in a way where a bunch of off-chain actors make these calls, and it behaves more or less like a multisig. The reason we use MPC is that the more validators you have, the more security you get; but in a standard multisig approach, more validators means more signatures, so the more decentralized it is, the more expensive it is to verify, because you need to verify multiple signatures and post them all, which is a lot of calldata. Such an approach is not feasible on chains where calldata is expensive, that is, optimistic rollups.

Okay, so what is the MPC part actually doing? It's very simple: it's essentially signing, over a specific curve, a specific payload, and the payload is the commitment itself. That's it. So this is how we attest; now, why is this approach called optimistic, and why is it still secure? First of all, we just posted something on the actual L2, and as you may know, we can send messages from L1 to L2, and such a message can contain the proper commitment. So even if the validator set lies, L1 will never lie: you can just challenge the claim with such a message. And participating in verifying this validator set is super easy, because it's literally two RPC calls: one call checks the actual commitment on the origin chain, and the other checks the claimed commitment. If they disagree, you just send a message; it costs roughly 60K gas, and that's it, everyone can do that. And again, the fraud-proving window is pretty short, because it's essentially however long it takes to generate the proof of consensus, if that's possible, or however long it takes to deliver the message. What's pretty cool about this approach is that it's not gas-intensive: we verify just one signature. So that's this approach.
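A sketch of what such a watchdog could look like, just two RPC calls and a comparison; the attestation contract interface, addresses, and URLs here are hypothetical stand-ins, not a real deployment:

```typescript
import { JsonRpcProvider, Contract } from "ethers"; // ethers v6 assumed

// Hypothetical attestation contract on the destination chain.
const ATTESTOR_ABI = [
  "function claimedBlockHash(uint256 blockNumber) view returns (bytes32)",
];

const origin = new JsonRpcProvider("https://eth.example");         // chain being attested
const destination = new JsonRpcProvider("https://l2.example");     // chain receiving claims
const attestor = new Contract("0x...", ATTESTOR_ABI, destination); // hypothetical address

async function watch(blockNumber: number) {
  // Call 1: what does the origin chain itself report?
  const actual = (await origin.getBlock(blockNumber))?.hash;
  // Call 2: what did the validator set claim on the destination chain?
  const claimed = await attestor.claimedBlockHash(blockNumber);
  if (actual !== claimed) {
    // Disagreement: this is where you'd trigger the challenge, e.g. an
    // L1 -> L2 message carrying the correct hash (~60K gas, per the talk).
    console.log(`Mismatch at block ${blockNumber}: claimed ${claimed}, actual ${actual}`);
  }
}
```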
Let's make a recap and identify the trade-offs. We have three approaches: the first is messaging, the second is validating the consensus, and the third is having this optimistic layer. I categorize them along four axes: latency, gas cost, trust, and off-chain computation overhead. Why do I even list the last one? Because if we do some sort of proving, it obviously takes time, since we need to generate the proof.

So, messaging. In terms of latency, we are quite sad, because the message needs to get delivered, and by the time it's delivered to some specific L2, L1 will already have produced new blocks, so we don't have access to the newest values. In terms of gas cost, it's not bad, but not perfect, because we need to interact with two chains: first send the message, then consume it. In terms of trust, we are pretty happy, because we trust the rollup itself, which is a fair assumption. Off-chain computation overhead: very happy, because there's no computation to do off-chain. Verifying the consensus: in terms of latency, we are sad, because we need to generate the proof, and that takes a while. In terms of gas cost, we are also sad, because we need to verify the actual ZK proof, which is way more expensive than just consuming a message or verifying a signature. In terms of trust, we are happy, because we verify the consensus itself. And the computation overhead is significant, right? Because we need to generate the proof. The final approach, the optimistic layer: in terms of latency, we are happy, because we simply make a claim and post it on the other chain, that's it. Gas cost: very happy, because we just verify a signature. In terms of trust, well, we are not that happy but also not that sad, because it can still be challenged in an optimistic manner using a fraud proof. Off-chain computation overhead: pretty happy, because we participate in an MPC protocol, so the overhead comes mostly from communication, not computation itself. Cool. So this is part number one, and these are the three approaches. I'm not going to say which one is the best, because all of them come with trade-offs.

Okay, accessing the headers. I hope this is self-explanatory, because we literally unroll everything from the trusted input, and the trusted input is, again, a block hash for a specific block X. If you followed the initial slides: given a block hash, you can recreate the block header, access the parent hash, and with the parent hash recreate the previous block header, and so on, essentially back to the genesis block. So given this very small input, we can unroll the state, or whatever was present on the chain, from this block back to genesis. As I said, I'm going to explain everything on the example of Ethereum, and today all the Ethereum block headers together are roughly 7 gigabytes of data, so it's quite a lot.

Okay, so that's the high-level concept; now, what are the approaches? The first one we call on-chain accumulation: we do this procedure, this computation, directly on chain. We provide all the properly encoded block headers inside the calldata, along with the block hash that we received as the trusted input (by sending a message, relaying it in an optimistic manner, or validating the consensus), and we recursively go through all these headers and verify them; a minimal off-chain version of this unrolling is sketched below.
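The unrolling itself is easy to sketch off-chain: start from a trusted hash and keep following parentHash, checking the linkage at each step. This assumes ethers v6 and a placeholder RPC URL; on chain you would verify keccak256(rlp(header)) against the expected hash instead of trusting an RPC:

```typescript
import { JsonRpcProvider } from "ethers"; // ethers v6 assumed

const provider = new JsonRpcProvider("https://eth.example"); // placeholder RPC URL

// Walk backwards from a trusted block hash: each header commits to its
// parent, so one trusted hash authenticates every earlier header.
async function unroll(trustedHash: string, steps: number) {
  let expected = trustedHash;
  for (let i = 0; i < steps; i++) {
    const block = await provider.getBlock(expected);
    if (!block || block.hash !== expected) throw new Error("broken header link");
    console.log(block.number, block.hash);
    expected = block.parentHash; // the next iteration authenticates the parent
  }
}

// unroll("0x<some trusted block hash>", 10);
```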
But there are many, many downsides. First of all, it's very calldata-intensive and very computationally intensive. Now, we could store all these headers on the actual chain, but even on an L2, storing 7 gigabytes of data is still a significant cost, because the state of an L2 is reflected as calldata on L1. So it's expensive either way. The cool thing, though, is that you get direct access to the state roots, or anything else you want to access.

The next approach is on-chain compression. We still use the same procedure as before, literally unrolling and processing the 7 gigabytes of data, but instead of storing the headers we just keep updating a Merkle tree. It's a nice approach, but again it comes with a few downsides. It's very computationally intense: if we have millions of headers, we need to perform millions of hashes on chain, which is expensive, but at least we save on storage. We also need to update the Merkle tree, which is another cost. The last downside is that we need to index all the headers that have been processed. Why do we need to index them? Because if I want to access a specific block header, I need to provide a Merkle path: since we keep updating the Merkle tree and only store a root in the contract itself, I need to know the path. So I need to index the data, and at the moment I want access, I provide a Merkle path. This approach is okay; I wouldn't say it's way better than the previous one, but it is way cheaper.

Last approach. There is a very cool primitive called Merkle Mountain Ranges; love it. The idea is: let's do the same thing we did previously, but inside a SNARK. We can provide this tremendous amount of data as a private input to the circuit and do the same computation, the unrolling, inside the circuit itself. And we have a public input, which is the block hash: the commitment from which we unroll, the trusted input. The public input can literally be asserted during the on-chain verification. And as we unroll, we accumulate into a Merkle tree or a Merkle Mountain Range. Why is a Merkle Mountain Range cool? Imagine you want to process 7 gigabytes of data in one go: the proving time would be horrible. And why would you even prove this commitment for the entire history at once? Do you really need that? Probably not. So let's chunk it into smaller pieces, and Merkle Mountain Ranges are a pretty cool primitive that allows exactly that. To give you a bit of intuition for how it works: think of it as a tree of trees. Once we do all this proving off-chain, we simply verify the proof on chain (as you know, verifying the proof is way cheaper than doing the computation directly on chain), and we still just provide a Merkle path, and that's it: we have access to any piece of data we want.
Let's do a recap again. Approach number one, on-chain accumulation; number two, on-chain compression; number three, off-chain compression. Three categories: prover overhead, gas cost, and storage cost (actually, gas cost should really be called computational cost). Okay, so, prover overhead. On-chain accumulation: do we prove anything? Not really, so we are happy. On-chain compression: we still need to update the Merkle tree; actually, I think there's an issue on this slide, so I'll just skip this cell. Off-chain compression: we're very, very sad, because we need to prove a genuinely significant computation, so the proving time is significant. Now, in terms of gas cost, on-chain accumulation is horrible: it just costs a lot, because we do the entire computation on chain. On-chain compression: we're a bit happier; it's still a lot of calldata and a lot of computation, but at least not so much storage. Off-chain compression: we just verify a proof, so it's cool. Storage cost: for on-chain accumulation, 7 gigabytes of data is horrible, so we are very sad. For on-chain compression, we only store a root of the Merkle tree, so we are happy. And for off-chain compression we're even happier, because we again just keep updating a tree, and we don't even need to post a lot of calldata: the calldata we post is literally just the proof. But again, I don't want to say that one of these approaches is the best one, because as you see, there are trade-offs.

Okay, so this next part is actually pretty easy. So far I was explaining the second step of dealing with storage proofs; now there's the last part, which is verifying the proof itself. Approach number one: verify the proof directly on chain. Approach number two: verify the proof inside a SNARK, then verify the SNARK. Approach number three: verify multiple proofs inside one SNARK, then verify that SNARK; we can also aggregate multiple SNARKs together, and so on. But obviously there are trade-offs, especially when it comes to proving time. Now, why is the first approach feasible on ZK rollups? On StarkNet, for example, calldata is very cheap, and calldata is exactly what we want to avoid with these proofs, so this approach is feasible there. But if you want to verify a proof on, say, Optimism, where calldata is very expensive, you want to reduce it as much as possible, and for that reason you might want to use a SNARK. And finally, if you have many slots you want to prove, why not verify them all inside one SNARK? You pay in prover time, but you present just one proof at the end. So this approach is the cheapest one, but only if you have multiple things to prove.

So there are trade-offs; let's identify them. Categories: prover overhead, latency, verification cost. Verifying the proof directly: prover overhead doesn't exist, and latency doesn't exist, because we don't need to prove anything; but the verification cost is significant, because we need to post the calldata and do the actual computation, walking the entire path, where each step in the path is one hash invocation. Oh, and also, let me get back to the previous slide; I forgot something very important: why wrapping things inside the SNARK matters. If you're dealing with a storage layout that uses a specific hash function, say Pedersen, which is not available on the EVM (it's not a precompile, you'd need to implement it yourself, and it's going to be costly), then you can do the verification inside the SNARK instead. Pedersen is pretty SNARK-friendly, so you just verify the SNARK on L1 and abstract the whole thing away, which is going to be way, way cheaper. But again, there are trade-offs.
Let me get back to the recap. So, the SNARKified proof: prover overhead exists, so we are not super happy; latency, we're also not happy, because we actually need to spend time proving this thing; verification cost, we are happy, because we just verify a proof, so it's fine. SNARKifying multiple proofs: the prover overhead is still there; the latency is still there, and it's even bigger, because it takes a bit longer in proving time; but on verification cost we are super happy, because we can essentially amortize the cost of verifying multiple proofs by verifying one single SNARK proof.

Okay, we've gone through quite a lot of things; let's put it all together. Imagine we have three chains and we want interoperability between them: chain Z, chain X, and chain Y. It all starts with a message, i.e. a commitment; we send the message in order to get the commitment. Let's say we send the message from chain Z to chain X, because on chain X we want to access the state of chain Z. So what do we do? Once we have the commitment, we literally recreate all the headers using one of the three approaches, and once we've recreated them up to the point for which I want to prove the storage, I just verify a proof; and again, for verifying a proof there are multiple approaches. But now let's say that on chain Y I want to access the state of chain Z, and there is no direct communication between chain Y and chain Z, so it must be routed through chain X. By the way, I'm talking about this in a pretty abstract way; by chain X, just think of an L1. So from chain X I'm going to send the commitment about chain Z as a message again, and then simply recreate all these headers again. As you may notice, this is pretty redundant, because we perform the same computation on two different chains, and we don't need to do that, especially if we use the third approach, generating the proof off-chain.

But now there's another problem: how do you actually know what you should do? You need to be somehow aware of what is happening, and for that reason we introduce an API. We don't expect developers to deal with all these complexities, like choosing the right approach for a given situation. Right now our API optimizes cost-wise; soon we'll also be able to optimize latency-wise. And essentially that's it, that's our API; I highly, highly encourage you to check it out. A few final words about the API: it acts as a coordinator, and it optimizes costs because we can batch multiple things together. Once the job is done, you get a notification, via a webhook, via an event, whatever you want. Essentially, you don't need to be an infrastructure maintainer; you can just focus on building on top of this primitive. And I think that's it. Questions?

So, the API is essentially a REST API for now; we also have a JSON-RPC interface. We have off-chain entry points, so you can request the data by making an off-chain call, calling the REST API or a JSON-RPC method; or, if your smart contract wants to access this data, you just emit an event, we catch the event, and after a bit of time we feed the specific data into your smart contract. So we have a bunch of interfaces. And by the way, speaking of the off-chain entry points: once the entire job is done on our side, you get a notification; it can be a webhook, we can send you the information over a websocket, it can be essentially whatever you want.
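To illustrate that event-driven entry point in the abstract (this is not the actual Herodotus interface; every name, ABI, and address below is a hypothetical stand-in): a contract emits a request event, an off-chain service catches it, runs a pipeline like the one sketched earlier, and writes the result back.

```typescript
import { Contract, JsonRpcProvider, Wallet } from "ethers"; // ethers v6 assumed

// Hypothetical request/fulfill interface, purely to illustrate the flow.
const REQUESTER_ABI = [
  "event StorageRequested(uint256 requestId, uint256 chainId, uint256 blockNumber, address account, bytes32 slot)",
  "function fulfill(uint256 requestId, bytes32 value)",
];

const provider = new JsonRpcProvider("https://l2.example");     // placeholder
const signer = new Wallet("0x<private key>", provider);         // placeholder
const requester = new Contract("0x...", REQUESTER_ABI, signer); // hypothetical address

// Stub standing in for the whole pipeline above: fetch eth_getProof on the
// source chain, prove/verify it, and return the slot value.
async function proveSlot(chainId: bigint, blockNumber: bigint, account: string, slot: string) {
  return "0x" + "00".repeat(32); // placeholder result
}

// The off-chain side: catch requests, do the work, feed the data back.
requester.on("StorageRequested", async (requestId, chainId, blockNumber, account, slot) => {
  const value = await proveSlot(chainId, blockNumber, account, slot);
  await requester.fulfill(requestId, value);
});
```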
Oh yeah, that's actually a great question. Different chains use different storage architectures, I would say: they might commit to a Merkle Patricia tree, a Merkle tree, maybe even a Verkle tree, and like I said, having a single generalized verifier is not a clean approach. So we abstract it using a SNARK: inside the SNARK itself we do the proper work, we go through the tree, through the elements of the proof, and we can use the specific hash function involved. For example, Poseidon is pretty popular now; I think Scroll uses Poseidon, and zkSync uses Poseidon too. On the EVM, Poseidon would be pretty expensive, so you cannot verify such a proof directly; but what you can do is run the entire verification inside the SNARK, and then on L1 you don't really care what the SNARK is doing, you just verify it. So that's how we deal with it: if we need it abstracted, we have it abstracted; if we don't, we don't.

Oh yeah, that's a good question too, because I think this went super technical. So, as for what we do at Herodotus: every two weeks we have internal hackathons, and right before the Merge we built a proof of concept that we call MergeSwap, which essentially allows anyone to dump their proof-of-work ETH on proof of stake. The way it works: we literally built a bridge on top of this technology. You deposit into a smart contract on the Ethereum proof-of-work chain, then on Ethereum proof of stake you prove that you've done it, and once the proof is verified, you can mint your ERC-20 token and do whatever you want with it. If you then want to withdraw back to Ethereum proof of work, you just burn it and prove on the other side that you burned it. And that's it. As for other use cases, I think cross-chain collateralization is pretty cool, because that's exactly the place where you want to avoid latency as much as possible and be as synchronous as possible, and essentially that's what we do here: our latency comes only from the proving time, and with some optimistic approaches and so on, there are a lot of things we can do. I hope that answers the question. Okay, I think that's it; I have about three minutes to wrap up, and yeah, thanks!