Hello everybody. Let's start with our demo of the reconfiguration of the state machine replication protocol. This is about reconfigurable state machine replication in Mir. Just a quick recap: what is Mir? Mir is a framework for implementing distributed protocols that we are developing at ConsensusLab. It focuses on consensus protocols, it is modular and flexible, and it is available under the link here, which is clickable in the version of the slides that I linked in the Google Doc. It is part of a ConsensusLab project on scalable consensus that you can also check out. That was our recap, and I went through it fast.

In this demo, I'll be showing reconfiguration. So what is reconfiguration of such a system? I don't know exactly what the backgrounds of the people present are, so I'll give a quick intro to what it actually means. In a distributed system, reconfiguration usually means dynamically, that is, at runtime, changing the set of nodes running a distributed protocol. We have some nodes, called validators in Lotus, that are executing an agreement protocol, and we want to add nodes while the system is running, without disrupting its functioning. We also want to make it possible for other nodes to leave, seamlessly transitioning to new configurations.

Here is the background on how that works in our system in particular. State machine replication in our system is implemented as follows. First, we have the mempool, which stores incoming transactions unreliably. All the transactions coming in from clients are stored in the mempool, but the mempool gives barely any guarantees about whether those transactions survive a node restart or a storage failure. It is a best-effort pool of transactions.

From there, transactions make it in batches to the availability component, which in our system is basically reliable transaction storage. It also executes a protocol: when it receives a batch of transactions, it makes sure that enough other nodes reliably store all of them, such that it is certain they will be available to anybody who asks for them. Once the availability of the stored transactions is ensured, it issues an availability certificate; so for each batch it receives, it issues an availability certificate once those transactions are available.

However, it is not guaranteed that all nodes issue their availability certificates in the same order. That is why the certificates go to the ordering component of our system, which establishes a total order over them. This is the core of the consensus protocol: it agrees on the order of the certificates. At the output of the ordering component we get an ordered sequence of availability certificates, which goes to the execution stage. There, each availability certificate is transformed back into an actual batch of transactions fetched from the availability layer. Each certificate corresponds to some transactions; we fetch those transactions, which we know are available because the certificate vouches for them, we get the actual transactions including their payloads, and we can execute them.
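To make the pipeline concrete, here is a minimal Go sketch of the flow just described. The types (Tx, Batch, Cert) are hypothetical and stand in for Mir's actual module interfaces; a channel of certificates stands in for the consensus-established total order.

```go
package main

import "fmt"

// Tx is a client transaction; Batch groups transactions pulled from the mempool.
type Tx string

type Batch struct {
	ID  int
	Txs []Tx
}

// Cert is a hypothetical availability certificate: a small handle, later ordered
// by consensus, vouching that the batch it names can be fetched back.
type Cert struct{ BatchID int }

func main() {
	// Best-effort mempool: transactions arrive unreliably from clients.
	mempool := []Tx{"tx1", "tx2", "tx3", "tx4"}

	// Availability component: cut batches, store them reliably, issue certificates.
	store := map[int]Batch{} // stands in for the replicated batch storage
	certs := make(chan Cert, len(mempool))
	for i := 0; i < len(mempool); i += 2 {
		b := Batch{ID: i / 2, Txs: mempool[i : i+2]}
		store[b.ID] = b     // "enough nodes reliably store these transactions"
		certs <- Cert{b.ID} // certificate issued once availability is ensured
	}
	close(certs)

	// Ordering component: consensus agrees on one total order of certificates;
	// here the channel order stands in for that agreed order.
	for cert := range certs {
		// Execution stage: resolve the certificate back into the full batch,
		// which is guaranteed to be fetchable because it is certified.
		for _, tx := range store[cert.BatchID].Txs {
			fmt.Println("execute", tx)
		}
	}
}
```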
In more detail, what happens is that when an availability certificate arrives at the execution stage, it comes ordered with respect to the other events that happen. Our system works based on what we call epochs. This is not the Filecoin Expected Consensus epoch; it is a different notion of epoch. So what we get here is certificates interleaved with new-epoch events, all totally ordered. Then, as I said, we fetch the batches of transactions from the availability layer, and we have another ordered sequence, this time of batches and new-epoch events.

Some of these transactions can be special configuration transactions, and this is what matters for the reconfiguration I am going to show. These special transactions change the configuration of the system. They are filtered out here, and they too are ordered with respect to epochs. The system also maintains some configuration state, and whenever there is a new epoch, all the pending configuration transactions take effect, and every component of the system receives an event telling it to reconfigure. That means creating connections to the newly joining nodes, possibly closing connections to the old nodes, and doing a bunch of other things required for the system to smoothly transition to the next configuration. The transactions that are not configuration transactions are just application transactions; they are assembled into blocks and shipped to actual execution. (A minimal sketch of this filtering loop follows after the demo walkthrough below.)

We dynamically change the set of nodes; this is what we will show in the demo, using the chat demo application that I also used last time to show the fault tolerance of the system. Then Dennis will show how this is integrated into Filecoin and how we can add Filecoin nodes running Mir consensus and still reconfigure.

All right, here I have already prepared four nodes running the demo chat application. When I run them, they have an initial configuration, which you can see here in the argument: a static configuration of four nodes that each of them loads in order to know how to connect to the others. It is a simple chat demo application: I can say hello from one node, and the others receive the message. If everybody says hello at the same time, or let's say they say hi, everybody gets the chat messages in the same order.

I modified the application so that I can have special messages: when you type a special chat message, it is interpreted as a configuration transaction, and I just need to tell the system which node will join and at which address. I have another node ready here. I run the chat demo and give it a new configuration that already includes the four nodes and itself; I tell it to use the libp2p network transport, and I tell it that its own ID is 4. Basically, each node can be configured at initialization with a static membership file saying that the node with ID 4, for example, which would be this node, is at this IP address, at this port, and so on. Let me copy this, because it will be useful. I just run the newly joining node with the new configuration. And here I can now send a special message that starts with "config" and "node", and I paste the ID of the newly joining node and its address. The system interprets the message as a new node joining, and it adds the node.
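As promised above, here is a minimal Go sketch of the execution-stage loop, again with hypothetical types: configuration transactions are filtered out of the ordered stream and applied only at the new-epoch boundary, while application transactions are executed immediately.

```go
package main

import "fmt"

// Hypothetical event types: an ordered stream interleaves transaction batches
// with new-epoch markers.
type Tx struct {
	IsConfig bool
	Payload  string
}

type Event struct {
	NewEpoch bool
	Txs      []Tx
}

func main() {
	ordered := []Event{
		{Txs: []Tx{{Payload: "hello"}, {IsConfig: true, Payload: "add node 4"}}},
		{Txs: []Tx{{Payload: "hi"}}},
		{NewEpoch: true},
	}

	var pending []Tx // configuration state accumulated during the current epoch
	for _, ev := range ordered {
		if ev.NewEpoch {
			// At the epoch boundary all buffered configuration transactions take
			// effect at once: every component is told to reconfigure, connecting
			// to newly joining nodes and possibly disconnecting from old ones.
			for _, c := range pending {
				fmt.Println("reconfigure:", c.Payload)
			}
			pending = nil
			continue
		}
		for _, tx := range ev.Txs {
			if tx.IsConfig {
				pending = append(pending, tx) // filter out configuration transactions
			} else {
				fmt.Println("execute app tx:", tx.Payload) // ship to actual execution
			}
		}
	}
}
```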
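And here is a rough sketch of how a chat application could recognize the special configuration message typed in the demo. The "config node <id> <addr>" syntax and the multiaddr format are assumptions for illustration; the demo's exact message format may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// parseChatMessage decides whether a typed chat line is an ordinary message or
// the special configuration command. The "config node <id> <addr>" syntax is
// an assumption made here for illustration.
func parseChatMessage(line string) (isConfig bool, nodeID, addr string) {
	fields := strings.Fields(line)
	if len(fields) == 4 && fields[0] == "config" && fields[1] == "node" {
		return true, fields[2], fields[3]
	}
	return false, "", ""
}

func main() {
	for _, msg := range []string{
		"hello everyone",
		"config node 4 /ip4/127.0.0.1/tcp/10004",
	} {
		if ok, id, addr := parseChatMessage(msg); ok {
			// Submitted as a configuration transaction: it is ordered like any
			// other transaction and takes effect at the next epoch.
			fmt.Printf("config tx: add node %s at %s\n", id, addr)
		} else {
			fmt.Println("chat:", msg)
		}
	}
}
```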
Now, this node was complaining for some time that it could not find the other nodes, because they had not talked to it yet. But the newly joining node has now downloaded the state, which consists of all the messages sent so far, and it can send messages that I, and all the other nodes, receive. They found it; it is integrated into the system now. So thank you very much.
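Finally, a minimal sketch of the state transfer that lets the joining node catch up. The Snapshot/Restore pair here is a hypothetical interface illustrating the idea; Mir's actual application interface for state transfer may be shaped differently.

```go
package main

import "fmt"

// ChatState is the demo application's replicated state: all messages so far.
type ChatState struct {
	Messages []string
}

// Snapshot copies the state so an existing node can hand it to a joiner.
func (s *ChatState) Snapshot() []string {
	return append([]string(nil), s.Messages...)
}

// Restore installs a received snapshot on the newly joining node, after which
// it orders and applies new messages like any other node.
func (s *ChatState) Restore(snap []string) {
	s.Messages = append([]string(nil), snap...)
}

func main() {
	existing := &ChatState{Messages: []string{"hello", "hi"}}
	joiner := &ChatState{}
	joiner.Restore(existing.Snapshot()) // state transfer on join
	fmt.Println("joiner caught up with", len(joiner.Messages), "messages")
}
```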