And now we're at our last session, called Decentralized Reflections on Consensus. This is an open Q&A, and I'll let Marco take the lead on this one.

Yeah, so this doesn't have a fixed format, I would say, but I prepared two questions that I would first like to put to the speakers who are present. I see Víctor, Alfonso, Sergio; I don't see Arno, Dankrad, and Sebastian, and maybe John will join later. Basically, I will ask two questions. For each, I'd first like to get a response from the presenters who are online, and then whoever raises their hand can chime in and give their view. I'll ask just these two questions; hopefully that starts the conversation, and then we get questions from others and not only me. So let's try this and see how it goes.

So the first question: from the talks you heard from the others today, what, off the top of your head, could you use in your own projects, and what synergies do you see between the things presented today? I'll give it to Sergio, because he started today; let's go in order.

Yeah, thank you. What I can see by comparing the talks I attended with the one I delivered is mainly that we, at least in the work I presented, which is just what a few of us have been working on (Cosmos is way more than that), tended to focus below the interface of consensus. In my talk, for instance, there's not much about transfers of funds or contracts, and I can see that this is something you cannot ignore totally. So some of the ideas I've heard from the others, for instance the speculative execution of contracts, are things we probably need to think about. I know that we are modular, so we need to keep away from tailoring what sits below to whatever the applications do, but those are very good ideas for thinking about how we would implement things in our ecosystem.
Thanks, Sergio. So, in order, Alfonso is next among the speakers present.

Yeah. Actually, I was excited to see a lot of the talks, because I feel that, at least from our side, there are a lot of things that can be integrated. For instance, we already explored the use of Tendermint as a consensus algorithm for subnets, and it would be cool to see how much further we can go, because we faced some limitations, and with ABCI++ I think they're completely fixed: the fact that we had to do a lot of workarounds in order to send sequential transactions, and other limitations that Sergio already mentioned. It would be really cool to have ABCI++ in our catalog of consensus algorithms and subnets, instead of the current ABCI interface.

Then, for Dankrad's talk: data availability is clearly one of our problems in hierarchical consensus; I mentioned it. We are exploring naive approaches right now, and it would be great to see to what extent we can integrate what Ethereum is already doing, piggybacking on all of their work.

Then the next one, from Sebastian, was the one on state channels. This is recurring; I was discussing it with Marco a few days ago. It's a recurring question to what extent we can mix the two. We have this fixed structure in our hierarchy, where we have to go through the consensus engine and all the subnets in order to propagate transactions, but there may be certain transactions that could instead be sent through a payment channel; even the atomic execution protocol could use the same semantics they propose in their payment channels. So I see a lot of overlap between all the talks and all the tech that folks are proposing.
But again, maybe I'm biased, because I'm seeing all of these building blocks that can be integrated. I don't know what others think.

Thanks, Alfonso. Víctor, would you like to say something?

Yeah, sorry, giving a talk pushes everything out of my brain, unfortunately, so I've had to reintegrate things. But I was actually very interested in the Cosmos-type stuff. The modularity of Cosmos has always been very appealing to me. I didn't actually realize that with ABCI there was no conversation between the consensus layer and the application layer for the proposal, so that was new to me; I'm interested to understand better how that really works. There are lots of things like that. I had to miss a bit of some of the talks, but the state channel stuff sounded definitely interesting; state channels are related to the kind of things we're doing, right? So I think that stuff was pretty cool. As I said, I had to miss part of it, so unfortunately I couldn't follow all of it, and I'm still trying to recall all the other stuff I listened to. But I think there's a lot there; I'm going to have to go through the talks again and listen to them again. I'm psyched for that, though.

Marco, may I maybe give a short answer to the Cosmos reference? Is that possible?

Yeah, yes, of course.

So, about the Cosmos reference: I realized that in the presentation I gave, I left a lot of material out, like the existing things that the Tendermint interface, the one Cosmos uses, already does. So my presentation might have given the impression that today there's no conversation between the two levels. That is actually not correct.
We have CheckTx, for instance, so we do have mechanisms today to make sure, to a certain extent, that the transactions we accept are valid. What we're doing now, in this new work, is getting strong guarantees, which we haven't had so far: a strong guarantee that a given transaction, wherever it's proposed and decided, is going to be valid. That's basically the extra mile we're walking in going from ABCI to ABCI++. So I just wanted to clarify: of course the interface talks to the application today, it's just that it does so in a less interesting way, and that's probably why I left it out of my presentation.

Yeah. I mean, the last time I looked at Cosmos was pre-pandemic, I think, so I'm definitely interested in thinking about those kinds of issues. Thanks.

Thanks, guys. So, does any other participant want to comment on this, informally? Do you see any synergies between the things presented today? ...Okay, so let's go to my next question. It's a bit more controversial, hopefully. When you develop the code for your project, and of course it relates to the ecosystem you're working in, are you thinking about its usability in other ecosystems? Are you saying: okay, I really want to make this usable by other people? What's your project's philosophy on that? Again, let's go in the same order. Sergio, I can presume what you will answer, but it's a tough one.

It's a tough one, yeah. I have some familiarity with other ecosystems, to tell the truth. Of course, our main customer is the Cosmos applications; that's basically what we focus on, and the work I presented today has been focused on Cosmos, for sure. What I can say, though, is speculative.
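Sergio's distinction above, between best-effort CheckTx at mempool admission and ABCI++'s validity guarantee over the whole proposed block, can be sketched with a toy model. This is illustrative Python, not the actual ABCI/CometBFT API; the class and method names are made up:

```python
# Toy model: ABCI-style CheckTx validates a tx against *current* state at
# admission time (best-effort), while an ABCI++-style ProcessProposal hook
# lets the application vet the ENTIRE proposed block before consensus
# decides on it. Names are illustrative, not the real interface.

class ToyApplication:
    def __init__(self, balances):
        self.balances = dict(balances)

    def check_tx(self, tx):
        # Best-effort admission check: a tx can pass here and still be
        # invalid by the time it lands in a block, since state may have
        # changed in between.
        sender, amount = tx
        return self.balances.get(sender, 0) >= amount

    def process_proposal(self, txs):
        # Block-level check: validate the whole block sequentially against
        # a speculative copy of state, so each tx's validity accounts for
        # the txs ordered before it. Vote to reject the proposal otherwise.
        speculative = dict(self.balances)
        for sender, amount in txs:
            if speculative.get(sender, 0) < amount:
                return False
            speculative[sender] -= amount
        return True

app = ToyApplication({"alice": 10})
tx1, tx2 = ("alice", 8), ("alice", 8)
# Each tx passes CheckTx in isolation...
assert app.check_tx(tx1) and app.check_tx(tx2)
# ...but ProcessProposal catches that they cannot both be valid.
assert app.process_proposal([tx1, tx2]) is False
```

The point of the sketch is the "strong guarantee" Sergio mentions: validity is established over the proposed block as a whole, not per transaction at admission time.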
To be honest, we didn't give it a lot of thought so far. But the fact that there is, first of all, a specification, and then a fairly clear interface, which is basically what we have been producing, means that as long as this interface can be assumed by another ecosystem, in terms of what sits on top of consensus, it should be doable. Of course, this is something we probably need to spend some time thinking about, and discussing with the relevant people in other ecosystems, to see to what extent we can actually make it usable for them.

Thanks, Sergio. Alfonso?

I think I mentioned it in the talk: I would love for all of these building blocks we have to be usable somewhere else. Of course, you need to target somewhere in the beginning. But if we could integrate, for instance, the data availability proposals from Ethereum right away in our system, and others could have an alternative implementation of the SCA actor and all the building blocks, it would be great; it would really foster the interoperability of ecosystems. So personally, that's our take, but of course you need to target an initial platform.

If I'm next: I think my answer is a little bit similar to Sergio's. I've been working at Algorand, so I'm thinking about what we're going to do for our own stuff; we're already facing challenges with that, so that's definitely where we focused. But one of the things we've been thinking about is writing this up. If you think about the structure, it's a very generic structure. The main thing it's doing, in particular with Algorand, is saying: look, there are certain features in Algorand that help us do certain things, and because we have them, we can just rely on them.
And so the main reason it won't port over to other places, I think, is that some other places don't have support for some of the things we're using. In fact, Algorand doesn't have support for some of the things we need; as I alluded to, there are some new features we need in Algorand to do this. I think one of the virtues of our project, whether or not it becomes the main story for how we do things, is that it's forcing us to list the functionality we really need at layer 1 in order to do the things we want to do at layer 2, not just our stuff, but other people's stuff as well. The VRF checks to do certificate checking, for instance: for any kind of election-type things, which people often want for various reasons, that's the kind of support you need. This transaction trie business, I think, is also something that's very useful.

One thing Algorand has that I think most blockchains should just do, because it's basically trivial for them, just an interface question, is this atomic transaction business. A blockchain is already taking a whole bunch of transactions and making them as if they were atomic. So the hard part is done; you may as well expose that in your interface so that you can actually specify it. Then you can do atomic transfers, three, four trades all together in one thing, without having to write a program to do it. So it's kind of an interface question. In that sense, I haven't tried to think how it would work on any particular other chain, but I've tried to think about it generally; I'm sort of an academic at heart, right?
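The atomic transaction groups Victor describes, Algorand's mechanism for all-or-nothing batches, might look roughly like this. It's a simplified sketch, not Algorand's actual encoding: the key idea is that every transaction commits to a group ID computed as a hash over the whole batch:

```python
# Sketch of Algorand-style atomic transaction groups: every transaction
# in the group commits to the same group id, a hash over the whole batch,
# so no transaction can be replayed outside its group and the batch
# applies all-or-nothing. Simplified; not Algorand's real encoding.
import hashlib

def group_id(txs):
    h = hashlib.sha256()
    for tx in txs:
        h.update(repr(sorted(tx.items())).encode())  # canonical-ish encoding
    return h.hexdigest()

def seal_group(txs):
    gid = group_id(txs)
    return [dict(tx, group=gid) for tx in txs]

def apply_group(balances, sealed):
    # Recompute the group id from the transactions themselves and make
    # sure each one committed to it; then apply all-or-nothing.
    stripped = [{k: v for k, v in tx.items() if k != "group"} for tx in sealed]
    gid = group_id(stripped)
    if any(tx.get("group") != gid for tx in sealed):
        return balances                     # tampered group: reject whole batch
    new = dict(balances)
    for tx in stripped:
        if new.get(tx["frm"], 0) < tx["amt"]:
            return balances                 # any failure rolls everything back
        new[tx["frm"]] -= tx["amt"]
        new[tx["to"]] = new.get(tx["to"], 0) + tx["amt"]
    return new

# A two-leg swap settles atomically: b can pay c only because a paid b first.
txs = [{"frm": "a", "to": "b", "amt": 3}, {"frm": "b", "to": "c", "amt": 2}]
assert apply_group({"a": 3}, seal_group(txs)) == {"a": 0, "b": 1, "c": 2}
```

As Victor says, the chain is already applying blocks atomically; the group ID is just interface plumbing that lets users name a batch and get the same all-or-nothing property.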
So I'm trying to think about what structure we need in the underlying chain so that we can build interesting applications on top of it. In this sense, the layer-2 speculative smart contract is the application, for us, over our layer-1 chain. Does that make sense? I think it does.

Yeah. Any comments on this topic, or any other questions for our speakers from the participants? Please don't be shy. Do we have something on Slack?

No, not yet.

Okay. I have a backup question, but then I'm really done.

So actually, I have something that's kind of interesting to me, which I alluded to. One issue I'd really like to understand better is storage: how to do third-party storage. If I don't want to put all the stuff on the blockchain, if I just want to put small amounts on the blockchain and move most of it off, what is a good way to do that? I was wondering whether people have ideas about how to do that well, because I think storage is a huge problem, right? How can we have a third party that is like an oracle, but a huge oracle, an oracle with tons of information? I'm willing to keep a commitment on the chain, the Merkle roots of things or whatever, so that I don't have to trust you to give me the right data, but I need you to store it for me, right?

I think in some sense Ethereum, and I don't mean any offense by this, because they had to do something early on, right?
But I think they kind of made a mistake in selling storage the way they did, because you'd really like to charge ongoing fees for storing stuff, since it costs me ongoing fees to keep your stuff, right? So how can we do that in a way that's compatible with all the systems we're building? I guess maybe Filecoin has an answer to this kind of question. I haven't really explored it; I feel I need to explore this better. But as I mentioned, it's on my list of things to do once we get this prototype going.

So, a suggestion: there's a project in the Cosmos ecosystem called Celestia; I mentioned it in my presentation. One of their strong points is that they came up with a solution for, basically, storage, and it's very creative. I'm not going to elaborate on it now, but if you're interested, I can give you a link.

Yeah, that'd be great.

Basically, what they are doing is scaling the system as a peer-to-peer layer, in the sense that they use an erasure code so that the information is redundant, and they have clients that probabilistically sample, pulling for data constantly. So as the network grows, so does the probability of catching missing data: they bind the amount of storage they can hold to the size of the network, so it all scales together. Of course, the guarantees they give are probabilistic, but they are very good. I've actually been looking into it because of work-related tasks I have, and I'm really impressed at how well designed the thing is in attacking storage.
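The probabilistic guarantee described above can be made concrete with a back-of-the-envelope model. This is simplified: Celestia's actual construction uses two-dimensional Reed-Solomon coding and namespaced Merkle trees, so the exact numbers differ, but the shape of the argument is the same:

```python
# Back-of-the-envelope data availability sampling: if a block producer
# withholds data, erasure coding forces it to withhold at least a
# fraction f of the extended shares (f = 1/2 in a simple 1D code, since
# any half of the shares suffices to reconstruct). A light client taking
# k independent random samples then fails to notice the withholding with
# probability at most (1 - f)**k. Simplified model, not Celestia's exact
# 2D scheme.

def miss_probability(k, f=0.5):
    """Chance that k independent random samples all land on available
    shares even though a fraction f has been withheld."""
    return (1 - f) ** k

# 20 samples already push the miss probability below one in a million:
assert miss_probability(20) < 1e-6
# and the guarantee compounds across independent sampling clients,
# which is why a larger network gives stronger assurance:
assert miss_probability(20) ** 10 < 1e-60
```

This is the "scales with the network" property Sergio mentions: each additional sampling client multiplies the withholder's chance of escaping detection down further.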
And actually, I have a question for you about your presentation, when you were talking about off-chain storage. Of course, I don't claim to understand all the details; you guys are doing the work, and I tried to follow during the presentation. But my question is: when we don't store things on chain, we need to store them off chain, and as you said, you tend to store some sort of hash so that you can at least check whether what you have is the correct thing. So what do you do with nodes that need to catch up? Originally, a node that needs to catch up relies on the blockchain itself, catching up by executing everything. Is there a way to make that more efficient? My view is that if you have external storage, or storage that is local at each of the nodes, which is basically what you explained, you cannot jump; you have to go block by block, right, so that everybody ends up representing the same store as all the other nodes. So how do you deal with late nodes, in terms of this storage, the things that are not stored on the blockchain?

Yeah, so I have kind of a bad answer to that and a hopefully better answer. The bad answer is: we don't do that at all. Right now, we just say you're running everything; as I said, we're at the prototype stage, and we're not doing that at all. But a slightly better answer, which I'll then try to refine, is that, first of all, we're keeping a commitment for each contract, right? And you don't need to run the whole blockchain: if someone says, here's the current state of the world, if they just write out the state of the world to you, you can verify it, right? No problem.
Of course, if the state of the world is huge, then that's a problem. But if the state of the world is huge, you have to get it anyway, so I'm not sure we can do much better there. Where you could do better is by asking for smaller pieces. That's actually what I'd like to get to. Right now, if I want something, I say: please give me the whole storage for that contract. That seems to me like not a good idea. We're imagining that these contracts are data-intensive; there's some computation, but if these are data-intensive contracts, then we'd really like to say: please give me a little piece of it, right? We'll do it on an individual basis, I guess; we haven't worked out that story yet. That's the storage story I'd like to work out. But I don't have to catch up, right? You can just give me whatever I need, and I can verify it's the right stuff. So I don't need to run everything.

So, your first answer I totally understand, and that's a correct way of doing it, of course. Your second answer, as I understand it, is that you are exploiting the orthogonality of the transactions: you don't need to care about the whole world, only about the transactions that are of interest to you, and you just follow those. Is that it?

Okay, that's part of it. That's true; we're doing that for one part, but that's not all of it. The other reason you're better off is that we're exploiting the fact that there's a blockchain underneath the covers, right? The reason that in a blockchain you need to run the whole chain is that who gets to vote is something that changes over time, right?
If that were fixed, I wouldn't need to do that; I'd just say, please tell me who the validators are and I'll go there, right? But the problem is I need to go all the way along, because I won't know who had the right to vote at any point in time without following it: that's changing, and the only way to know who had the right to vote is to see who had the right to vote before them and voted them into office, who in turn voted the next ones into office. That's the only reason I need to run it. If I didn't need to do that, I could just say: here's the state of the chain, here's a hash, and I'm claiming this is the current ledger state. If you trust that this is the hash of the ledger state, then I can give you the ledger state and you can see it's correct, without looking at the past history, right? And this is true for any blockchain, as long as we have these safe points somehow. The genesis block is, in some sense, a degenerate safe point, right? But if somehow everyone agreed that this point in year 2022 is a safe point, that's effectively a new genesis block: we never have to go before it again. In our case, the underlying blockchain is doing that for us, and we're always writing the commitments in. So as long as the underlying blockchain is good and you've caught up with it, these nodes never have to look back at history. They don't care; they just want to know what the current state of the world is. So the part where I don't have to run the whole thing, that's just because of the blockchain. The part where I don't need to take in the whole world, that's because we've divided the world up into these contracts.
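Victor's catch-up shortcut: trust the on-chain commitment, verify the state you're handed, skip history. And if the commitment is a Merkle root rather than one monolithic hash, a late node can fetch and verify a single piece of state. A minimal sketch (illustrative Python; production systems use authenticated tries and more careful leaf encoding):

```python
# Minimal Merkle tree: the chain stores only `root`; an untrusted peer can
# hand a client one leaf plus a logarithmic proof, and the client verifies
# it against the root without replaying history or fetching the rest.
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def _next_level(level):
    if len(level) % 2:
        level = level + [level[-1]]       # duplicate last node if odd
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, leaf-is-left)
        level, index = _next_level(level), index // 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

leaves = [b"slot0", b"slot1", b"slot2", b"slot3"]
root = merkle_root(leaves)
# A client holding only `root` can verify slot 2 in isolation:
assert verify(root, b"slot2", merkle_proof(leaves, 2))
assert not verify(root, b"bogus", merkle_proof(leaves, 2))
```

With a monolithic hash, by contrast, the peer must ship the entire contract state before anything can be checked, which is exactly the limitation discussed next.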
And as I said, I think we'd actually like to divide even the contracts' worlds into smaller worlds, so I can get part of their state: please just verify this part of your state, with Merkle-tree-like things, as opposed to the somewhat silly monolithic commitment we're doing now. It just doesn't matter for our purposes yet; we're prototyping. But I think even in version one we'll probably have a Merkle-root-like thing.

Makes a lot of sense. Yeah, thanks for that. I'll also chime in myself on this storage question. Of course, the Filecoin network is relevant here. I don't know how much you know about Filecoin consensus, but the way it works is that the Sybil attack protection is essentially the dedication of storage capacity. On top of that, we build a power table, similar to proof of stake, and then it's a weighted game similar to proof of stake, except it's not stake: no one stakes filecoin to gain power as a validator; power depends on the amount of storage.

And how fast is the storage? Like, how fast can I get stuff back from it?

That really depends, right? Alfonso, can you help me with top-of-the-head numbers?

I mean, it really depends on the deal; not so fast, I would say.

Are we talking minutes, hours, seconds, microseconds? What order of magnitude? That's what I want to know.

We are definitely not talking minutes and hours; we're talking something less. And we're also talking about reliability of storage, right? Okay, so it depends on the state of the content, because miners, or storage providers, can hold the content as a hot copy or a cold copy. A hot copy is unsealed and ready to be served, so there's just the transfer over the wire.
When it's a cold copy, meaning it's sealed inside a sector, with all the mechanics of sealing, it may take a bit more. The numbers have improved a lot; I don't know the current unsealing numbers. Jorge, do you have them off the top of your head?

Yeah, I was trying to recall; I don't. It used to be hours, because sectors are very large. Sectors are a fixed size, irrespective of the data you're retrieving, and you have to unseal the entire sector. By the way, just a note on sealing terminology: sealed capacity is what actually counts toward power in Filecoin, right? But miners are encouraged to keep a hot copy, basically an identical copy of the data they store, just in plain text, not sealed. That allows for instant retrieval, at the cost of obviously twice the storage space. And when you say retrieve content, you may be interested in just one piece of a sector, but data is mapped into sectors like traditional memory, which means you'd have to unseal the whole sector to access the specific data, if the storage provider doesn't have the sector as a hot, uncached copy.

So would you charge people differently depending on whether it's a cold copy or a hot copy? Or do you charge them flat, so you'd have to know whether it's cold or hot in order to do this?

You would make a bid. There are asks and bids; there's an order book of requests, you see, and you would make a bid.
You would know the kinds of prices the storage provider asks, but of course there's a problem, which is the ransoming of data: I could tell you that I'll ask only two filecoin, but then, once you come to retrieve the data, I've raised the minimum bid. So there are still things to figure out regarding retrievability. Data availability, the fact that the content is stored, you can be sure of because of the protocol; retrievability we are working on, because there are certain edge cases that could make it hard.

But I would say this is improving progressively in Filecoin, right? That's why my first answer was "it depends". If you go to retrieval.market, that's one of the sub-teams; we're working on different retrieval incentivization and market solutions, and retrieval.market gives you an overview of approaches to exactly the questions you're raising in the Filecoin ecosystem.

Yeah. I mean, for really hot things, we're keeping them in storage ourselves, right? The nodes that are executing keep them in storage; they don't need to ask anyone else. It's the cold or semi-cool things that they don't want to keep. What this suggests to me, and I'm trying to think what the right interface would be, because it will leak out into the programming language, is that it might be useful to have a notion of a prefetch: please warm up the stuff I need, thaw it out now, so that if it takes hours, it will be ready when I come in tomorrow. For instance, we might say: now is the time, let's thaw stuff out so it'll be ready for us when the moment comes.
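The prefetch idea floated here could surface in a contract-facing storage interface along these lines. This is a purely hypothetical sketch with made-up names, not an existing Filecoin or contract API; the `sleep` stands in for the cost of unsealing a sector:

```python
# Hypothetical two-tier store: hot (instant reads) and cold (slow unseal).
# `prefetch` "thaws" data ahead of time so a later read hits the hot tier;
# in a real system this would kick off an asynchronous unseal.
import time

class TieredStore:
    def __init__(self, cold, unseal_delay=0.01):
        self.cold = dict(cold)     # sealed / cold copies
        self.hot = {}              # unsealed, ready-to-serve copies
        self.unseal_delay = unseal_delay

    def prefetch(self, key):
        if key not in self.hot:
            time.sleep(self.unseal_delay)   # stand-in for the unsealing cost
            self.hot[key] = self.cold[key]

    def read(self, key):
        if key in self.hot:
            return self.hot[key]    # instant path
        self.prefetch(key)          # otherwise pay the unseal cost inline
        return self.hot[key]

store = TieredStore({"doc": b"payload"})
store.prefetch("doc")                      # warm up tonight...
assert store.read("doc") == b"payload"     # ...cheap read tomorrow
```

The design question Victor raises is exactly where such a `prefetch` call would live: in the contract language, the runtime, or the storage deal itself.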
And that might be an interface that people build in, because they understand that contracts work this way, if they have third-party storage, or they're saying: yes, I'm agreeing to put things over here. But we have to think for ourselves how to do that as well. That's actually the complication of storage: a lot of it is just deciding what the right interfaces are.

Makes sense, yeah, and I completely agree. I guess our approach with Filecoin was to build capacity first; we were capacity-oriented. Now we have 17 exabytes of capacity in the network, and with all this capacity we're shifting towards other things: usability, retrieval markets, and so on. And ConsensusLab works on a complementary set of problems, like scalability and all the stuff we're discussing today. I had another comment, maybe, and in the meantime, please prepare questions from the audience. So, another comment: I completely agree that we should store on chain only critical data, checkpoints, commitments, and whatnot, and store other data off chain. Maybe it's like in classical systems: you're probably looking at tiered storage. Víctor, you said your validators are essentially caching, in some sense, the data they need as hot data, but then you would push other things to cold storage, to other networks. That makes sense; any storage system is designed as tiered storage, right?

Any other questions or comments? I guess people are getting tired and we are dissipating.

Maybe one comment, or...

Go ahead, Giuliano, please, yes.

It's kind of a general comment, but it seems that people are more interested in, or working more on, scaling execution rather than ordering.

Ordering?

Yes. You have the problem of ordering transactions, which is really the consensus algorithm's job.
And then you have the problem of executing them. And it seems that a lot of scaling solutions are focused on scaling execution. Is that the more important problem, in your view?

I guess, what is the issue with ordering? Sorry, maybe I'm just missing it: why is that a scalability problem?

Well, consensus itself, ordering transactions even without executing them, can be a bottleneck, right? We've seen systems in the past like Narwhal, which really speed up ordering but do nothing about execution. And maybe that's a solution to the wrong problem, if your bottleneck is execution.

Yes. So the solution I presented deals with that problem. It's not the core of it, but it deals with it, in the part where I explained that we simplify the decide interface. The application is now free to execute: it has a whole block, so it can execute the transactions in order, or even in parallel, as it sees fit, as long as it stays deterministic. I haven't touched more on that, because it's highly dependent on the application, on the blockchain you put on top, but it's an improvement with respect to the status quo. The part we are not tackling is ordering transactions across blocks: you still execute one block after the other. I got a question on pipelining, et cetera; we haven't dealt with that, and it's still up in the air, so we're not going to deal with it for now. So, as an application, you are free to order the transactions within a block when you receive it; what we are not dealing with yet is transaction reordering across blocks. You still have to execute one block after the other.

And you're reordering them because you think that will make things go faster, or because...?

That depends on what those transactions mean.
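The "execute the whole block in parallel, as long as it stays deterministic" point can be illustrated with a toy scheme that partitions a decided block by the key each transaction touches. This is a simplification: real transactions have read/write sets rather than a single key, and real runtimes do conflict detection:

```python
# Toy sketch: with the whole decided block in hand, the application can
# partition transactions into groups touching disjoint keys and run the
# groups concurrently. The result stays deterministic because txs that
# touch the same key keep their block order within a group.
from concurrent.futures import ThreadPoolExecutor

def partition_by_key(block):
    groups = {}
    for tx in block:                       # tx = (key, delta)
        groups.setdefault(tx[0], []).append(tx)
    return list(groups.values())

def run_group(state, group):
    for key, delta in group:               # sequential within a group
        state[key] = state.get(key, 0) + delta

def execute_block(block, state):
    groups = partition_by_key(block)
    # Groups touch disjoint keys, so running them concurrently is safe
    # and the final state is independent of scheduling.
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda g: run_group(state, g), groups))
    return state

block = [("x", 1), ("y", 5), ("x", 2)]
assert execute_block(block, {}) == {"x": 3, "y": 5}
```

Under the old one-transaction-at-a-time delivery, none of this grouping was possible; the block's order was the execution order, full stop.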
In terms of performance, the obvious example, and we touched on this a little earlier, if you remember, is orthogonality: if you have transactions that are orthogonal, you can just run them in parallel, where possible.

Sorry, parallel execution I totally understand. I'm trying to see whether there's anything other than parallel execution that you hope to get from ordering. I can imagine a couple of other things. For instance, one is coalescing: here's a bunch of transactions that are all writing the same thing, so I just have to write the last one. So obliterating stuff, and also commuting stuff. Is there something else beyond those two?

What I was trying to say is that in our previous system you could not do any of that, because transactions were delivered one at a time, so you were tied to the ordering the block imposed. Now we are free to do the kinds of things you're thinking about. Another thing that might matter in terms of ordering: if you cannot guarantee that all your transactions are valid as they sit, in their place in the block, then, as we also discussed, at decision time you might look for an ordering that maximizes the number of valid transactions. That's more validity-related than performance-related, but it's something you can do, as long as you stay deterministic.

I'll say, then, that in Algorand we switched to a simpler approach to ordering. We basically build a block in the order we receive transactions: as soon as one comes in, we either throw it out or queue it up as the next one. Then, when it's our turn to produce the block, we just chop off the front of the queue and say: these are the ones that go in.
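The coalescing idea raised just above, many writes to one key collapsing into the last write, becomes trivial once the application sees the whole block at decide time (toy sketch):

```python
# With the whole decided block in hand, the application can coalesce
# redundant writes: only the last write to each key needs to hit storage.
def coalesce(block):
    last = {}
    for key, value in block:   # block = ordered list of (key, value) writes
        last[key] = value      # later writes overwrite earlier ones
    return last

block = [("x", 1), ("y", 2), ("x", 3), ("x", 9)]
assert coalesce(block) == {"x": 9, "y": 2}
# three writes to "x" collapse into a single storage write of 9
```

With one-at-a-time delivery, every intermediate write would have been applied; here the block order is preserved exactly in which write survives.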
And so it's a very simple system, where we used to have a more complicated one. There was some idea that we would do something more complicated, but we simplified it because that just made admission control and handling the data structures easier. Now, it might be that Algorand's blockchain is not at capacity, so to speak. Maybe if we were at capacity we would have different issues, but because we're not, this stays simple. It does mean that whenever a block comes in, we have to go through it and chop out of our queue all the stuff that was included, and say, well, hopefully it's mostly the same. And then all the stuff that remains at the front of the queue we have to re-check. As I alluded to earlier, we actually distinguish between the checks that we have to redo and the checks that we don't. For instance, if we're checking signatures, we don't have to redo that: if the signature checked out before, it's still good. But the stuff that depends on the blockchain state, the ledger state, all of that we have to rerun. So it might be useful, if there were enough of it, to have transactions remember which things they depend on, so the node can say, oh, I don't need to do anything, because these guys don't depend on anything that changed; basically, to keep a DAG of dependencies along the way. But it's not obvious to me. Narwhal, as I recall, basically only puts things in once it knows what's going to happen, so it's kind of building a dependency graph along the way. They gave us a talk, which was nice of them.
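The split between checks that survive a new block and checks that must be redone can be illustrated with a small sketch. This is not Algorand's implementation; `verify_sig` and `check_against_state` are made-up stand-ins for the stateless and state-dependent parts of validation.

```python
class Recheck:
    """Cache stateless results (e.g. signature verification) by tx id;
    rerun only the state-dependent checks after each new block."""

    def __init__(self, verify_sig, check_against_state):
        self.verify_sig = verify_sig
        self.check_state = check_against_state
        self.sig_ok = {}              # tx id -> cached signature result

    def recheck(self, tx, state):
        txid = tx["id"]
        if txid not in self.sig_ok:   # stateless: done once, stays valid
            self.sig_ok[txid] = self.verify_sig(tx)
        # the part that depends on ledger state must rerun every time
        return self.sig_ok[txid] and self.check_state(tx, state)
```

The dependency-DAG idea mentioned above would go one step further and skip even the state-dependent check when none of a transaction's inputs changed.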
But I was unfortunately in transit at the time, so I didn't have the best ability to listen and comprehend it, let's just say. But yeah, it seemed to me that Narwhal really allows you to order transactions at a super high rate. That might not be of any use to you, though, if you cannot execute them at that rate. So I was wondering what your sense is: whether the scalability bottlenecks are mostly in the ordering layer or in the execution layer. I would say they are in both places. We started this basically from leader-based protocols, PBFT-style, and as you scale and grow the network, of course you're not going to have great performance. So you need to work on both problems in parallel; I guess one problem is more important than the other depending on the progress on each. It depends on your particular system. Yes, and on the specifics: how much CPU do you have on your node, what's the computing power, and so on. And there is this other part: fine, let's assume you do the best you can do. If you still execute on a single node, you're bottlenecked by whatever a single node can do. At that point you're looking at sharding, partitioning, the subnet solutions, and you need to combine all of these together to get a full answer. I guess we are all working on some components of it, which is the point of an event like this, to bring us together. So one thing that's interesting about Cosmos, which I again learned today, is that the transactions are not validated before they're agreed to. I think this is also true, and it was surprising, in the Narwhal slash... I can't remember what the other talk's system was called.
They're basically saying, here are all the transactions, but some of them might not be valid; I've already decided they're in the block, and you just take the block and throw the invalid ones out. We don't do that. We only put valid things in the block, so we just throw them out preemptively if they're not valid. And then we reassess, but that means the steps have to be taken in order, right? And currently, because we're not full up, that's not a problem for us, I think, but maybe it would be if we were at the limits of what can be done. So, just to answer: yeah, today we have ways to filter invalid transactions. Most of the time, in practical terms, you won't find invalid transactions in a Cosmos blockchain among those that made it to the decide stage. The problem we're trying to solve here is taking that to formal guarantees. Today it's highly unlikely that you're going to find an invalid transaction in a Cosmos blockchain; with ABCI++, it will be formally impossible, if you structure your application the way we suggest. So just to make it clear, it's not like half of our transactions are invalid or anything like that; it happens only in extreme cases, like concurrency corner cases. Because we have something I didn't include today, since it's still there and didn't change, and I didn't want to focus on it: the CheckTx API, where every transaction that reaches a node is first checked with the application as to whether that transaction, in isolation, makes sense. That's where you check balances and such, and also format and signatures. This is true today, and since it didn't change, I didn't talk about it. We already had it.
We keep on having it. The problem is that that check is not guaranteed to be done at the right place, which means that some concurrency misfortunes may make you accept things. For instance, Alice, Bob and Charlie are transferring tokens, and there is some ordering there. You check the transactions independently and they all look good; when they finally make it to the blockchain, you realize that one of them, because of its ordering, cannot be executed. But today this is the exception and not the rule, so please don't walk away with the impression that loads of transactions are invalid. So, if I understand you correctly, the consensus node isn't generally consulting with the other guy; it's got a function, given ahead of time by the other guy, saying: all your transactions should pass this function, right? Yeah. It's not part of the dynamic interface, but it's part of the static interface, in which the application does have a way to tell the consensus node: just filter out all the ones that don't meet this. What is changing with the work we presented is that now you are in a position to evaluate whether a transaction is valid right at its place in the block, even before we've settled whether this is finally going to be the block or not. This is something we didn't have before; now we have formal guarantees. Before, I had a way to give you a function that said, here's a preliminary check, and if a transaction fails this check, just throw it out. And it was up to the application to see to what extent it wanted to do checks in advance.
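The concurrency misfortune described here can be shown with a toy example (made-up names and numbers, not real CheckTx code): each transfer passes an isolated balance check against the current state, yet once the transfers are ordered in a block, one of them can no longer execute.

```python
balances = {"alice": 10, "bob": 10, "charlie": 0}

def check_isolated(tx, state):
    """CheckTx-style test: does the sender cover this one transfer?"""
    return state[tx["from"]] >= tx["amount"]

def execute(txs, state):
    """Sequential execution; returns the transactions that actually applied."""
    applied = []
    for tx in txs:
        if state[tx["from"]] >= tx["amount"]:
            state[tx["from"]] -= tx["amount"]
            state[tx["to"]] += tx["amount"]
            applied.append(tx)
    return applied

block = [
    {"from": "alice", "to": "charlie", "amount": 8},
    {"from": "alice", "to": "bob", "amount": 8},  # valid alone, not after tx 1
]
```

Both transfers pass `check_isolated` against the starting balances, but executing the block applies only the first: Alice cannot fund the second after the first drains her account.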
And so there are some applications, for instance, where what they're doing is performing that check by keeping a copy of the state, and they basically execute the actual transactions against that tentative state in order to screen transactions, to do more advanced checks. But again, with what we have today, formally you cannot guarantee 100% that no invalid transaction will ever make it to the blockchain. Whereas now we can, without, how to say this, losing on performance or having to stop anything. The new design is allowing us to do that with no important cost, no real tradeoff. It is actually coming; it's all good. I would be tempted to say for free; nothing is for free, we know, but the way we have restructured things allows us to do that now, whereas before we couldn't. But again, even today we are still checking transactions, and it's very hard to come up, in production Cosmos chains, with an actual case of an invalid transaction that made it to the blockchain. Yeah, that's good to know. I mean, like I said, in Algorand we do distinguish between checks that we can do statically and checks that we can't. But we don't have the problem that you guys have: we're not trying to be one-for-everyone, right? And that's what causes you trouble: you have to say, please, Mr. Application, give me something, right? Exactly, and what you give me determines how much junk I give you back. Yeah, this is the price we're paying for being fully modular.
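The tentative-state screening mentioned here can be sketched as follows. It's a hedged illustration of the idea, not any chain's actual code: the application keeps a speculative copy of the ledger and executes each incoming transaction against it, so ordering-dependent conflicts are caught before a transaction is ever proposed.

```python
class TentativeChecker:
    """Screen incoming transfers against a speculative copy of the state,
    catching conflicts an isolated per-transaction check would miss."""

    def __init__(self, state):
        self.tentative = dict(state)   # speculative copy of committed state

    def admit(self, tx):
        if self.tentative.get(tx["from"], 0) >= tx["amount"]:
            # apply the transfer speculatively so later checks see its effect
            self.tentative[tx["from"]] -= tx["amount"]
            self.tentative[tx["to"]] = self.tentative.get(tx["to"], 0) + tx["amount"]
            return True
        return False

    def reset(self, committed_state):
        """Re-sync the speculative copy after a block commits."""
        self.tentative = dict(committed_state)
```

Unlike the isolated check, a second transfer that overdraws the speculative balance is rejected immediately, though as the speaker says, this is still a best-effort screen rather than a formal guarantee.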
If you want this clean boundary between what we call consensus and what we call the application, which tends to be a blockchain in its own right. Right. Yeah. So we have some of this problem, but it's less, because we have a specific guy we're talking to and we can tailor to that specific guy. Yeah. Great, great stuff. So, do we have more questions from the audience? If not, I think we can wrap up ten minutes early. It's been a long but very, very interesting day. So thank you. Thanks to the speakers, and thanks a lot for organizing this. You guys will eventually post all the videos, I imagine, so that we can... We actually intend to post them today, still. I'm not making any promises, but the goal is for them to go up on YouTube today.