Our next speaker is Alfonso de la Rocha from Protocol Labs. Alfonso's involvement in R&D began at the Polytechnic University of Madrid, where he worked on topics related to energy efficiency in data centers. Before joining PL, he was a blockchain expert at Telefónica R&D, where he was responsible for the design and development of core technology based on blockchains, distributed systems, and advanced cryptography. His R&D experience also includes research into the compression efficiency of video coding standards at Ericsson and projects related to inter-domain routing at KTH. His talk today is titled Hierarchical Consensus: a horizontal scaling framework for blockchains. The framework is designed around the concept of subnets organized hierarchically, which can be spawned on demand with their own state, consensus, and policies. Alfonso, the floor is yours.

Thank you. Having said that, of course, I'm not the only one behind this work: the whole ConsensusLab team is contributing to it — mainly Lefteris, George, and Marco — so you can blame them for the bugs. You've probably seen these slides already: we want a Web3 able to accommodate the kinds of use cases that we see in Web2, but if we want to do that, we have a bottleneck in one of the key substrates of Web3, which is blockchains. This is what we are trying to tackle at ConsensusLab. Marco already shared the requirements that we are targeting: it would be great to have a substrate for Web3 that can handle internet-scale throughput with fast local finality, among a number of other interesting features. What I'm going to talk about today is the architecture we've come up with, which we call hierarchical consensus, for horizontally scaling blockchains in general — we are focusing initially on Filecoin. The hope is that this architecture will let us target all of these features, especially the ones you can see inside the square, and that it will let us integrate many other technologies. Listening to all of your talks, I think many of the technologies you've presented can probably be integrated into our framework, which is great — so hopefully, after this talk, we can have a lot of discussions. If we go through the different lines of work that we have right now at ConsensusLab, hierarchical consensus would fall under sharding — we don't call it sharding, and you'll see in a moment why — but we can frame it as the architecture that starts accommodating the different building blocks that will allow us to scale consensus, or hopefully scale it, to the requirements we've been mentioning. The motivation behind hierarchical consensus is quite straightforward. Of course, there is a lot of work out there trying to scale blockchains: layer 2s, payment channels — we've seen a few of these proposals. But at the consensus layer, our take is that we probably can't come up with a one-size-fits-all consensus that accommodates every kind of use case. If we want to cover everything from the more decentralized DeFi use cases to the more Web 2.0-style use cases, we cannot come up with a single right tradeoff between security and performance, because all of these proposals, in the end, impose some tradeoff between the security guarantees and the performance of the blockchain substrate.
So, as we think we cannot come up with the single right, optimal tradeoff, our thinking was: let's make a flexible framework that allows users and developers to grow the blockchain substrate in a way that fits their needs. The idea is on-demand horizontal scalability — as I said, initially for Filecoin, but hopefully general enough to be adopted by any blockchain. We want to give developers a way to spawn a new state, a new blockchain substrate, that fits the security guarantees and the performance their use case needs, while keeping the ability to interact with the rest of the networks — the rest of the hierarchy and the subnets in the system. And something we realized while working on this — we already have a first MVP — is that it may also foster innovation at the consensus layer, because having this flexible framework with the right interfaces can allow others, like the talks and technical proposals we've seen, to integrate into it and see how things work together. Before I start describing hierarchical consensus in detail, let me throw out a few concepts early, in case they slip by later. Whenever I say subnet: a hierarchical consensus subnet is like a sidechain — an independent network with its own state and its own consensus algorithm, but one that is able to interact with other subnets in the hierarchy. When I refer to a parent subnet: new subnets are spawned from a specific network; that network becomes the parent, and the new subnet is its child. Once we build the hierarchy, this will make a lot more sense, but now you know what I mean if I start talking about parent subnets. When I say peer or node, I'm referring to full nodes; users and clients are light nodes, which don't necessarily need to sync the full state of a subnet. Then we have the native token, used to interact with and participate in all of these subnets as part of hierarchical consensus: the token we use for all these interactions is the token of the rootnet. The rootnet is the main chain — in our case, the Filecoin mainnet — from which the hierarchy of subnets starts being built. The circulating supply of a subnet is the amount of native tokens that have been injected into that subnet through cross-net transactions; and cross-net messages are, simply, transactions between subnets. Finally, the collateral is something I won't describe in detail in this talk, for lack of time. In order to spawn a new subnet, users need to stake some collateral. The reason is that we don't enforce any security guarantees on the consensus of child subnets, which means certain attacks and misbehaviors are possible there. The collateral is a way of making the validators of a subnet have skin in the game when they spawn it, and of being able to slash specific misbehaviors in the subnet.
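To tie these terms together, here is a minimal sketch in Go of the bookkeeping a parent might keep for each child subnet. The package, type, and field names are illustrative assumptions for this writeup, not the actual Filecoin implementation:

```go
package hierarchical

import "math/big"

// SubnetStatus tracks whether a subnet has staked enough collateral
// to participate in the hierarchy.
type SubnetStatus int

const (
	Instantiated SubnetStatus = iota // spawned, but not yet registered with the hierarchy
	Active                           // collateral staked; may checkpoint and exchange cross-net messages
)

// Subnet is the bookkeeping a parent keeps for each child subnet.
type Subnet struct {
	ID                string   // deterministic ID, e.g. "/root/t01"
	Parent            string   // ID of the parent, e.g. "/root"
	Collateral        *big.Int // stake put up by the subnet's validators
	CirculatingSupply *big.Int // native tokens injected via cross-net messages
	Status            SubnetStatus
}
```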
Again, I will try to introduce this briefly, but I don't think I will have time to describe it in detail. And a final disclaimer: I may say messages when they're actually transactions, because that's the way we refer to them in the Filecoin world, and whenever I refer to actors, those are the smart contracts in Filecoin. So, without further ado, let's see how hierarchical consensus works. As I've said, the idea is an on-demand scaling framework that simplifies the deployment of new networks and allows seamless interoperability between all of these chains. We can have, for instance, a rootnet — like the Filecoin mainnet — with its own consensus and its own state; it can have storage, as the Filecoin mainnet does; and we could even have a parallel chain, like Bitcoin, to which we periodically checkpoint in order to protect against long-range attacks. At some point, a subset of users of this rootnet may want to deploy a use case for which the throughput of the rootnet is not enough, or maybe its finality is not enough. With hierarchical consensus, they are able to spawn their own subnet. What they are spawning is, in the end, a completely independent network with its own state and its own consensus algorithm, so these users can choose the consensus algorithm that fits the needs of their use case. Since all of these subnets run their own consensus algorithms, they validate transactions in parallel; the consensus algorithms of different subnets are not connected, but subnets can interact with each other through what we call cross-net transactions. And these child subnets — the rootnet being the parent of the two new child subnets here — anchor their security to that of their parent, in this case the rootnet, through checkpoints. We'll talk more about checkpoints in a minute. Finally, the main requirement that we enforce on subnets regarding security is the firewall requirement. Since we don't know which consensus algorithm a subnet will run — we want to leave this open to users — what we can enforce through hierarchical consensus is that, if there's an attack in a subnet, the maximum impact of that attack in terms of native tokens is bounded by the circulating supply: the amount of native tokens that are held in, and have been injected into, the subnet. In this way, we protect to some extent against attacks in a subnet. We also have another mechanism to cover potential misbehaviors, the collateral, but as I said, I'm going to leave that for some other talk. And this hierarchy can be built recursively: if a subset of users of this first subnet find that its consensus algorithm is not enough for their use case and they want to scale horizontally even further, they can also deploy new subnets from a child. In this way we build the hierarchy of subnets, and users end up building the actual architecture of the system.
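Before moving on to the implementation, here is a rough sketch of how a parent could enforce the firewall requirement just described, building on the hypothetical Subnet type from the earlier sketch — an illustration of the idea, not the real code:

```go
package hierarchical

import (
	"errors"
	"math/big"
)

var ErrFirewall = errors.New("release exceeds subnet circulating supply")

// ReleaseFunds runs in the parent when a bottom-up message tries to move
// native tokens out of a child subnet. The firewall requirement bounds
// the damage of any attack on the child by its circulating supply:
// whatever the child's (untrusted) consensus decides, it can never
// release more tokens than were previously injected into it.
func (s *Subnet) ReleaseFunds(amount *big.Int) error {
	if amount.Cmp(s.CirculatingSupply) > 0 {
		return ErrFirewall
	}
	s.CirculatingSupply = new(big.Int).Sub(s.CirculatingSupply, amount)
	return nil
}
```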
Now, the implementation. Hierarchical consensus works thanks to two main actors: all of the logic is handled by two actors — smart contracts — in the different networks. First, we have what we call the subnet coordinator actor, the SCA, which is a system actor. It is deployed in every subnet — in this case, it would be deployed once we do the upgrade on the Filecoin mainnet — and it implements all of the logic of the hierarchical consensus protocol: it enforces the firewall requirement, it is the gateway from a subnet to the rest of the hierarchy, and it handles all of the cross-net messages — all of the mechanisms of hierarchical consensus. Then we have the subnet actor. What we actually provide is a reference implementation of a subnet actor plus a subnet interface, because subnet actors are user-defined actors that determine the consensus algorithm, the governance policies, and so on of a subnet. Whenever a subset of users wants to deploy their own subnet, the first thing they do is deploy a subnet actor in the parent. With this subnet actor they say: here is a new subnet that I'm deploying from this parent, with these governance policies, these joining policies, this minimum stake, and so on. With this, the subset of users can spawn their new stack — a new network with its own consensus and state — but they won't be able to interact with the rest of the hierarchy until they register with it. The way a subnet registers with the hierarchy is by meeting the collateral threshold: staking the minimum amount of collateral to cover potential misbehaviors. Once the subnet is registered, it can start anchoring its state through checkpoints and exchanging cross-net messages with the rest of the hierarchy. Any time we want another subnet, we do the same again: deploy a subnet actor that determines the consensus algorithm and all of the policies, start the subnet, and register it with the hierarchy so that it can interact with the rest of the networks. If we look at the peer level, how is this implemented inside a peer? A subnet is just the instantiation of a new blockchain stack. When we spawn a new subnet, or start syncing one, what we instantiate inside the node is a new transport layer to broadcast messages, a new message pool, a new consensus algorithm, a new state tree, and a new API — but we share all of the semantics. This is what allows us to interoperate with the rest of the networks: we are all running the same semantics for the state tree, and we are all running the same VM. Another really interesting thing is that subnets are uniquely identified, with each ID inferred from the ID of the parent. The rootnet always has the same ID, root, and the unique ID of a subnet is derived from the ID of its subnet actor in the parent. Here we can see a child subnet with the ID /root/t01; if we had another subnet, like the one to the right, its ID would be /root/t01/ followed by the ID of its subnet actor. The advantage of this deterministic identification of subnets is that we don't need any kind of discovery service to exchange messages or interact with subnets in the system, because the IDs follow the architecture of the hierarchy.
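Since the scheme is deterministic, deriving and navigating subnet IDs needs nothing more than string manipulation. A small illustrative sketch — the ID syntax follows the /root/t01 example from the talk, and the function names are made up:

```go
package hierarchical

import "strings"

// RootID is the well-known ID of the rootnet.
const RootID = "/root"

// ChildID derives the deterministic ID of a child subnet from its
// parent's ID and the address of the subnet actor deployed in the
// parent. No discovery service is needed: the ID encodes the full
// path from the root.
func ChildID(parent, subnetActorAddr string) string {
	return parent + "/" + subnetActorAddr // e.g. ChildID("/root", "t01") == "/root/t01"
}

// ParentOf walks one level up the hierarchy.
func ParentOf(id string) string {
	if id == RootID {
		return RootID // the rootnet has no parent
	}
	return id[:strings.LastIndex(id, "/")]
}
```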
So the IDs map directly onto the architecture of the hierarchy. As I already mentioned, subnets use checkpoints to interact with the higher levels of the hierarchy and to anchor their security — we'll see that in a moment. And, as I also mentioned, we enforce the firewall requirement, so we limit the impact in terms of native tokens. Once a misbehavior affects state rather than the exchange of native tokens, we need additional things, like fraud proofs and the collateral that is staked in order to spawn a subnet; but at least for native tokens there is a bound on the impact, even if there is a misbehavior or an attack in a subnet. So let's talk a bit about checkpoints. Checkpoints include two core pieces of information. First, we commit a proof of the state of the subnet. In our reference implementation right now, what we propagate is the tipset at a certain epoch in the subnet, so these can be seen as actual checkpoints of the state of the blockchain that we can use afterwards to validate the state of the subnet, or even to build fraud proofs. But we are already exploring more complex proofs — for instance, a zk proof of the state of the blockchain — so that we can verify the state directly and simplify fraud proofs. So a checkpoint has a field where the subnet, through its subnet actor, can choose what to put in this blob of data that serves as the proof of state committed to the parent. That's the first part of the checkpoint. Second, we also use checkpoints to propagate cross messages to the rest of the hierarchy; once we talk about cross-net messages, you'll see how checkpoints help propagate messages through the hierarchy. Now let's look at the checkpointing period and how it works. Subnets, through their subnet actor, are free to choose their checkpointing period — how often they want to commit checkpoints to their parent. I like to think of these checkpoints as the clock of the system: they flow through the system, and they are the key mechanism that orchestrates the communication between the different subnets. So let's consider a subnet with a checkpointing period of 100 epochs. We differentiate two stages in the checkpointing protocol. First there's what we call the checkpointing window, the window of a checkpoint period in which we populate the checkpoint with the corresponding information. Then there's what we call the signing window, in which the validators agree on the checkpoint and commit it to the parent. Consider the checkpointing window that goes from epoch 100 to epoch 200. In this period, we populate the checkpoint template for epoch 200: as cross-net messages arrive at the subnet, the ones that need to be routed through a checkpoint are included in the template, and we also include the checkpoints of our children subnets, so that they are aggregated, propagated up the hierarchy, and recursively anchored to the upper layers. Once we reach epoch 200, the checkpoint template for epoch 200 closes, which means we accept no more cross messages or checkpoints from children.
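Here is a hedged sketch of what a checkpoint and its two windows might look like, using the 100-epoch example. The fields are inferred from the description above — state proof, cross-message CID, aggregated child checkpoints, signatures — not copied from the spec:

```go
package hierarchical

// Checkpoint is the object a subnet commits to its parent at the end
// of each checkpointing period.
type Checkpoint struct {
	Source     string       // ID of the subnet committing the checkpoint
	Epoch      uint64       // epoch at which the template closed, e.g. 200
	StateProof []byte       // subnet-chosen proof of state: a tipset today, perhaps a zk proof later
	CrossMsgs  string       // CID of the aggregated cross-net messages propagated upward
	Children   []Checkpoint // checkpoints from child subnets, aggregated recursively
	Signatures [][]byte     // validator signatures collected during the signing window
}

// WindowFor returns the checkpointing window [start, end) that an epoch
// falls into for a subnet with the given period. With period 100, epoch
// 150 lies in the window populating the epoch-200 template; that
// template's signing window opens at epoch 200, in parallel with the
// checkpointing window for epoch 300.
func WindowFor(epoch, period uint64) (start, end uint64) {
	start = (epoch / period) * period
	return start, start + period
}
```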
At epoch 200, the signing window for the epoch-200 checkpoint starts and, in parallel, the checkpointing window for the checkpoint of epoch 300 begins. In the signing window, the validators of the subnet take the checkpoint template, inspect it, validate that it is correct, include their proof of the state of the subnet — be it a zk proof, a tipset, or whatever the subnet has decided — and then sign it and commit it to the parent. How these checkpoints are signed and verified in the parent is also up to the subnet actor and the founders of the subnet. Right now, in the reference implementation, we require a supermajority — two-thirds of the votes from the validators in the subnet — to commit the signed checkpoint to the parent, but we could use something else; Sarah is working on threshold signature schemes, so we could also use a threshold signature to sign these checkpoints and propagate them up to the parent. In parallel, as I've said, cross messages and children's checkpoints are included, as they arrive at the subnet, in the template for epoch 300; when it closes, there's a signing window, there's a checkpoint, and we start all over again. In this way we have this clock that populates checkpoints and propagates them up to the parent for two reasons: to anchor the subnet's security through the proof of state, and to propagate cross-net messages. In hierarchical consensus, we have two and a half types of messages. First, what we call top-down messages: messages that go from a parent to a child, from an upper level of the hierarchy down. All validators in a subnet are required to sync with their parent, which means that propagating a message from the parent to the child is quite straightforward — whenever we see that there's a new message for a child, or one that needs to be routed through the child (we'll see in a moment how this works in detail), we can pick it up from the state of the parent right away. Then we have bottom-up messages. These are a bit harder, because parents are not required to sync with their children; in this case, the way we propagate cross-net messages up the hierarchy is through checkpoints — as we saw before, we include the cross messages in the checkpoint and propagate them up. And finally, we have path messages, the ones you see here in pink, which are a combination of bottom-up and top-down messages: they are propagated in checkpoints up to the closest common parent, and from the closest common parent a top-down message goes down.
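Because subnet IDs encode the full path from the root, the routing direction of a cross-net message can be derived from the source and destination IDs alone. A minimal sketch, assuming the path-style IDs shown earlier:

```go
package hierarchical

import "strings"

// CommonParent returns the closest common ancestor of two subnet IDs,
// e.g. CommonParent("/root/t01/t03", "/root/t01/t05") == "/root/t01".
func CommonParent(a, b string) string {
	as, bs := strings.Split(a, "/"), strings.Split(b, "/")
	var common []string
	for i := 0; i < len(as) && i < len(bs) && as[i] == bs[i]; i++ {
		common = append(common, as[i])
	}
	return strings.Join(common, "/")
}

// RouteKind classifies a cross-net message by its endpoints. Top-down
// messages are read straight from the parent's state (children sync
// with their parent); bottom-up messages ride in checkpoints; path
// messages go bottom-up to the closest common parent and top-down
// from there.
func RouteKind(from, to string) string {
	switch {
	case strings.HasPrefix(to, from+"/"):
		return "top-down"
	case strings.HasPrefix(from, to+"/"):
		return "bottom-up"
	default:
		return "path" // via CommonParent(from, to)
	}
}
```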
As I'm running out of time, I'm going to go super quick here. At the low level, sending a top-down message means sending it to the SCA, the subnet coordinator actor; this triggers a state change in the parent. Subnet validators have two kinds of message pools: the plain, traditional message pool, which collects the unverified messages that are submitted and executed within the subnet, and the cross-message pool, which listens to the state of the parent — and for cross-net messages in general — for messages that are not yet validated and need to be proposed. Whenever the cross-message pool sees that there's a new unverified cross-net message in the parent's state, it proposes it: the message goes through the consensus engine of the subnet, is proposed in a block, and once it's validated in a block, it is executed in the subnet. For bottom-up messages it's similar, but not the same, because in that case what we propagate up is just a link to the list of messages carried inside the checkpoint. To avoid state explosion as we propagate messages up, when we close the checkpoint template we don't include all of the cross messages themselves, but a CID of the list of messages that need to be propagated. Once the checkpoint arrives at the parent, there's a state change, and the cross-message pool is notified about new cross-message meta, as we call it — new links to messages that need to be applied in this network and are available but not yet validated. In this case, though, the cross-message pool has to resolve the messages behind that CID, because it doesn't have the individual messages, just the link to them. To resolve them, we have a content resolution protocol that can resolve content from the state of the children: the pool broadcasts a message through the transport layer, requesting the resolution of the messages behind that CID. With this, once the cross-message pool has the list of messages, it can pass them through the consensus engine, and once these messages are validated inside a block, they can be executed like plain messages in the subnet. As I mentioned, this content resolution protocol is used not only for cross messages: it can resolve any object in the state of a subnet that sits behind a CID. We use it for cross messages, but we can also use it for locked state in the atomic execution protocol, as we'll see in a moment. Really briefly, we have two main approaches here. Whenever the validators of a subnet propagate a checkpoint, they proactively broadcast a message with the objects behind the CIDs included in the checkpoint, for the case where the validators of the receiving subnets — Root and Sub-1 in this example — want to cache the information locally and use it as soon as they need it. If they haven't cached the information by the time it's needed, there is also a pull fallback: they can pull the content behind the CID by sending a message to the source subnet. Another important point, as we see here with the CID CA of messages for Root and the CID CB for Sub-1: as we go up in the hierarchy, we aggregate the messages recursively, so that what we propagate up is just a single CID that is resolved recursively with this protocol. In this way we keep checkpoints as small as possible, and we never need to propagate raw messages.
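A tiny sketch of the pull-fallback idea — a cache filled by the proactive push, with a pull from the source subnet when the cache misses. The Resolver type and its pull callback are assumptions for illustration, not the actual protocol API:

```go
package hierarchical

import (
	"context"
	"fmt"
)

// Resolver sketches the content resolution protocol: any object in a
// subnet's state that sits behind a CID can be resolved, first from a
// local cache (filled by the proactive push that accompanies
// checkpoints) and, on a miss, by pulling from the source subnet.
type Resolver struct {
	cache map[string][]byte                                                 // CID -> raw object
	pull  func(ctx context.Context, sourceSubnet, cid string) ([]byte, error) // pull fallback over the transport layer
}

// Resolve returns the object behind cid, caching whatever it pulls.
func (r *Resolver) Resolve(ctx context.Context, sourceSubnet, cid string) ([]byte, error) {
	if obj, ok := r.cache[cid]; ok {
		return obj, nil // pushed to us earlier alongside a checkpoint
	}
	obj, err := r.pull(ctx, sourceSubnet, cid)
	if err != nil {
		return nil, fmt.Errorf("resolving %s from %s: %w", cid, sourceSubnet, err)
	}
	r.cache[cid] = obj
	return obj, nil
}
```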
All right. Up until now, we've been talking about exchanging messages between different parties. But what if we want to perform an atomic execution using state that lives in different subnets? For this, we came up with the atomic execution protocol, which is actually quite similar to the payment channel approach — with its differences, of course — but using an account-based approach. We want this protocol to enforce the following properties: atomicity, timeliness, abortability and, of course, consistency — after the execution, we want the output states to be consistent with the histories of the different subnets involved. Briefly, how the protocol works: there's an off-chain agreement between the different parties — here we see two parties, but we could have N parties in N subnets — on the function to be executed, the actor or smart contract to execute, and the input states to use. Once they agree on this, the first thing they do is lock the state of their actors, so that new messages cannot change that state. When we lock the state, we create a unique CID that identifies the locked state and lets it be fetched with the content resolution protocol. All parties pick up the input state they are missing and perform the execution off-chain: they execute, obtain the output state, and then delegate the orchestration of the atomic execution to the closest common parent. They could use any other subnet, but we choose the closest common parent because both parties already have some shared trust assumptions over it. One of the parties initiates the execution by sending the output CID, the function being executed, and the input states; the other parties then have to commit their output states. The parent subnet orchestrating the execution checks that the output states match, and if they do, the SCA in the common parent sends a top-down message with the output state, unlocking the state in the corresponding subnets and merging the output state into each subnet's state. At any moment — if the protocol blocks or a party disappears — any of the parties can abort, in which case the SCA sends a top-down message that directly unlocks the state instead.
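Here is a rough sketch of the output-matching step that the SCA in the common parent performs. The types and the byte-for-byte comparison are illustrative assumptions — comparing CIDs of the output states would be the natural real-world equivalent:

```go
package hierarchical

import (
	"bytes"
	"errors"
)

// AtomicExec is the state the SCA in the closest common parent keeps
// while orchestrating one atomic execution across subnets.
type AtomicExec struct {
	Parties   []string          // parties (in their respective subnets) taking part
	Submitted map[string][]byte // party -> claimed output state
}

// SubmitOutput records one party's off-chain result. Once every party
// has submitted, the outputs must match; the caller (the SCA) then
// sends top-down messages that unlock the locked input state in each
// subnet and merge in the output. A mismatch, or an explicit abort,
// instead unlocks the state unchanged.
func (e *AtomicExec) SubmitOutput(party string, output []byte) (done bool, err error) {
	e.Submitted[party] = output
	if len(e.Submitted) < len(e.Parties) {
		return false, nil // still waiting on other parties
	}
	var first []byte
	set := false
	for _, out := range e.Submitted {
		if !set {
			first, set = out, true
			continue
		}
		if !bytes.Equal(first, out) {
			return true, errors.New("output states diverge: abort and unlock")
		}
	}
	return true, nil
}
```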
And that's it: we have an MVP already, we have a spec, and we hope to have this on the Filecoin mainnet soon — I don't know what soon means, but soon. Of course, there are other scalability approaches; I really loved all of your talks, because there's a lot of inspiration we can draw on, and we can even integrate many of your solutions directly. We also have a bunch of open problems and things we still have to fix in hierarchical consensus. One example is data availability: the content resolution protocol you've seen assumes at least one honest participant, so we are exploring how to relax that requirement a bit. Then there's the question of censorship resistance: child subnets in the hierarchy place a lot of trust in their parents, and a parent that goes wrong can do a lot of sketchy things, which is why we are designing fallbacks — for instance, letting subnets migrate to another parent if things go wrong. There's also the full design of the crypto-economic model: how gas would work here, and how the checkpointing fees would work. All of this is public, so if you want to participate in these discussions or contribute, let us know — we can chat in Slack and point you to the right resources. There's also the question of fault proofs and fraud proofs: we have the firewall requirement, but once a misbehavior impacts state, we may need additional mechanisms to report and slash that misbehavior so that it's economically irrational to attempt. And before going to production, we want to do some performance measurements: we have a theoretical idea of what the performance of this framework would be, but we need to work a bit more on that. Of course, we may be missing something, and there may be more to do, so let us know if you've spotted a blind spot or something that needs improving in this design, because we would love to hear your thoughts. So: we have an MVP you can test — I can't demo it for lack of time — there's a short paper that describes the idea, and we have a work-in-progress spec where you can get the low-level details; it should be good enough for now, at least. Thank you very much, everyone. If there are questions — I guess we don't have time, but we can chat in Slack. Thanks.

All right, thank you. We are indeed at time for the break. There has been some discussion already in Slack, but I will read out a question by Dave Costinato — I don't think I see him in the Zoom room. The question is: is having collateral pledged by validators as a prerequisite to spawning a subnet a novel Filecoin innovation?

Let me make sure I got the question: is having collateral pledged by validators as a prerequisite to spawning a subnet an innovation, or does this happen in other systems? That's a great question — I don't know; if someone knows, please say so. It's the most straightforward, proof-of-stake-style way we could think of to incentivize well-behaved validators in such an open system, but I don't know whether others are doing it. The inspiration came from proof of stake more than from subnet spawning, I guess.