So hello, Romani, I think it's my turn now. Thanks for the previous demos. For those who don't know me yet — I think there are quite a few of you — I'm Matej, from ConsensusLab, and I'm working on a consensus algorithm that is fast and scalable enough to be deployed in the subnets Alfonso was just showing. Let me share my screen; I hope you can see everything.

I will be talking about MirBFT, a scalable consensus implementation for everyone, not just for the subnets, hopefully. Since this is a project most of you haven't seen yet, I will first spend a few minutes introducing it — what it is, how it works, and how it can be used — and then I will show a little demo of how it can actually be used in an application.

MirBFT is a framework for implementing distributed protocols. Its focus is on consensus protocols, but ideally it should be able to express a wide variety of distributed protocols. It is available on GitHub, and it is part of ConsensusLab's wider project on scalable consensus; you can find more details about the project at the link. I will post the link to this presentation in the chat so you can follow along and click on the links. One little heads-up: the name MirBFT and the GitHub location might be updated in the very near future, so stay tuned for that and don't focus on the naming for now.

All right, so how does the framework work? A distributed protocol always has some nodes that interact: they send each other messages and collaborate to perform some common task. So the basic abstraction is the node, and every machine running the protocol instantiates one such node.
The implementation is as modular as possible. The node itself basically just provides an internal mechanism for its different modules to communicate with each other and for each module to perform its task. There is an application module; a module that contains the actual protocol logic; a module that takes care of network communication, i.e. sending actual messages over the network; and a module that stores the payloads of the requests being agreed upon by the consensus protocol. There are a few other modules that are not really that important for this explanation.

Once you instantiate the node, there are three functions you can call on it. Run starts all the machinery and processing necessary for the node to function. SubmitRequest is how requests enter the system: when a client wants to submit a request for ordering, for agreeing upon, they call SubmitRequest to insert the request into the node. And Status is just for debugging purposes; we don't need to know much more about it.

The node itself basically implements a slightly fancier event loop: it takes all the events produced by the modules, stores them in a buffer, processes them, and dispatches each event to the module it should go to. Each module then processes whatever events it receives, potentially creating more events, and so on. So it is just an event loop.

OK, so that is the very high-level architecture. Now, how do we use it? This is an excerpt from the code showing how MirBFT can actually be used to implement a distributed application, offloading as much as possible from the programmer, so that the programmer can just implement the application or protocol they need without worrying about much else.
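The event loop just described can be sketched roughly as follows. This is an illustrative sketch, not the real MirBFT API: the Event, Module, and Node types here are simplified stand-ins (the real library uses typed protobuf events and a more elaborate dispatch mechanism), but the shape — modules consuming events and emitting follow-up events through a shared buffer — is the same.

```go
package main

import "fmt"

// Event is a minimal stand-in for Mir's internal events: a destination
// module name plus an arbitrary payload. (Illustrative only.)
type Event struct {
	Dest    string
	Payload string
}

// Module is the simplified contract every node component satisfies here:
// process one event, possibly emitting follow-up events for other modules.
type Module interface {
	ApplyEvent(ev Event) []Event
}

// Node wires modules together through a single buffered event loop.
type Node struct {
	modules map[string]Module
	buffer  []Event
}

func NewNode(modules map[string]Module) *Node {
	return &Node{modules: modules}
}

// Inject puts an external event (e.g. a submitted request) into the buffer.
func (n *Node) Inject(ev Event) { n.buffer = append(n.buffer, ev) }

// Run drains the buffer, dispatching each event to its destination module
// and appending whatever events that module produces in turn.
func (n *Node) Run() {
	for len(n.buffer) > 0 {
		ev := n.buffer[0]
		n.buffer = n.buffer[1:]
		if m, ok := n.modules[ev.Dest]; ok {
			n.buffer = append(n.buffer, m.ApplyEvent(ev)...)
		}
	}
}

// EchoProtocol is a toy stand-in for the ordering protocol module: it just
// forwards each request to the application module.
type EchoProtocol struct{}

func (p *EchoProtocol) ApplyEvent(ev Event) []Event {
	return []Event{{Dest: "app", Payload: ev.Payload}}
}

// RecordingApp is a toy application module that records delivered payloads.
type RecordingApp struct{ Delivered []string }

func (a *RecordingApp) ApplyEvent(ev Event) []Event {
	a.Delivered = append(a.Delivered, ev.Payload)
	return nil
}

func main() {
	app := &RecordingApp{}
	n := NewNode(map[string]Module{"protocol": &EchoProtocol{}, "app": app})
	n.Inject(Event{Dest: "protocol", Payload: "hello"})
	n.Run()
	fmt.Println("app delivered:", app.Delivered) // prints: app delivered: [hello]
}
```

The point of the sketch is the flow: an injected request goes to the protocol module, which emits a new event for the application module, all through the same loop.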
So let me show you a few lines of the code. Say we want to implement a simple chat application where everybody running a node participates in a group where they can exchange messages. We need to implement the logic of the chat application, and this one is very simple: the only state the chat application has is an array of messages, in total order, from all participants. It also holds a reference to the request store — the module through which it can access the payloads of the messages being sent around.

To implement a distributed application with MirBFT, you create an object implementing an interface that consists of only three functions. Apply receives a batch of requests and, whatever the requests are, applies them to the state. In this concrete case, we just cycle through all the requests in the batch, and for each one we create a chat message — "client so-and-so sent message so-and-so", where the message is just the request data — print it, and append it to the application's list of messages. Then, so that a node can restart and catch up with the state, the application needs to be able to create a snapshot, which simply serializes all its state into an array of bytes, and to restore its state from such an array of bytes — the details of which are not that important for now.

All right, so how do we actually wire this up? As we saw here, a node consists of several modules, and this is exactly what it looks like in the code. First we create the modules. The MirBFT library has sub-packages that provide implementations of some of them: we use the gRPC-based network transport module, and for the request store, for now, we just use a volatile request store, also provided by the library itself.
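The three-function application interface can be sketched like this. The types here are illustrative assumptions, not the real MirBFT API (the real interfaces work with protobuf messages and fetch payloads via the request store module), but the shape of Apply, Snapshot, and RestoreState matches what was just described.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Request is an illustrative stand-in for an ordered request: the
// submitting client's ID plus the raw payload.
type Request struct {
	ClientID uint64
	Data     []byte
}

// App is the three-function interface the talk describes: apply an
// ordered batch, snapshot the state, restore from a snapshot.
type App interface {
	Apply(batch []Request) error
	Snapshot() ([]byte, error)
	RestoreState(snapshot []byte) error
}

// ChatApp's only state is the totally ordered list of chat messages.
type ChatApp struct {
	Messages []string
}

// Apply cycles through the batch, printing each chat line and appending
// it to the application state.
func (a *ChatApp) Apply(batch []Request) error {
	for _, req := range batch {
		msg := fmt.Sprintf("Client %d sent message: %s", req.ClientID, req.Data)
		fmt.Println(msg)
		a.Messages = append(a.Messages, msg)
	}
	return nil
}

// Snapshot serializes the whole application state into an array of bytes.
func (a *ChatApp) Snapshot() ([]byte, error) {
	return json.Marshal(a.Messages)
}

// RestoreState rebuilds the state from such an array of bytes, so a
// restarted node can catch up.
func (a *ChatApp) RestoreState(snapshot []byte) error {
	return json.Unmarshal(snapshot, &a.Messages)
}

func main() {
	app := &ChatApp{}
	_ = app.Apply([]Request{{ClientID: 2, Data: []byte("hello")}})
	snap, _ := app.Snapshot()

	restored := &ChatApp{}
	_ = restored.RestoreState(snap)
	fmt.Println("restored:", restored.Messages)
}
```

Snapshot/restore is what lets a node that fell behind (or restarted) catch up without replaying the entire history of requests.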
We also need to tell the node which distributed protocol it should actually be executing — which protocol logic to use. In this case, we use the only protocol implemented so far. It is not even fully implemented yet — it's quite stubby — but it can already be used for demo purposes: ISS, a total-order broadcast protocol, i.e. a consensus protocol. We create some configuration for it and instantiate the protocol using a library function, since the ISS package is also provided by the library.

Then we assemble the node the same way as shown on the slide: we create a new node, give it its own ID and some configuration parameters, and tell it which modules to use — the net module, the request store, and the protocol module we just created — and which application should process the agreed-upon requests. As you can see, the node also needs a crypto module; we only have a dummy crypto module implementation so far, but this will change soon, hopefully. Finally, there is some boilerplate code for actually passing requests to the implementation: we read messages from the command line and submit them as requests to the node.

So how does it work? I have already prepared a deployment of four nodes. We just run the chat demo application — the main file I was just showing — four times: one instance with ID 0, one with ID 1, one with ID 2, and one with ID 3. Let me start all of them. They all initialize and connect to each other. (I pressed Enter once more on client two, which is why everybody already sees that client two sent an empty message.) Now, when I type in some message — say a "hello" message — and press Enter, the application creates a request for the total-order broadcast system and submits it to the node.
The nodes agree on that request, and each of them delivers it to its chat application, which prints it on the screen. And given the implementation of the protocol, all of these deliveries are in total order: if I typed something really quickly in different windows — I would have to be very fast, it's not really possible manually — everybody would still receive the messages in the same order, because they are totally ordered. Now, this is a demo application, but the same principle applies to the consensus protocol implemented in the subnets, and the goal for the next months is to actually make this part of the subnet consensus protocol. So that's it for the first demo. Thank you very much, and I leave the floor for the next demo.