You're now live, John. Okay, perfect. Thanks, David. Everyone, welcome to the first Hyperledger virtual meetup of 2022. We're excited you could join us for this great session today. 2021 was a great year for Hyperledger and the community, but we're looking forward to another great year in 2022, and we're really excited that everyone can join us today. One of the things I really like about Hyperledger is that there are always great projects coming out of the Hyperledger labs, and today we're going to highlight one of those great projects coming out of labs. So I have with us today Matej Pavlovic, and he's going to talk to us a little bit about Mir-BFT and ISS. To give you a little background on Matej: he's a distributed systems researcher specializing in Byzantine fault-tolerant consensus protocols. He obtained a master's degree from Vienna University, and then he worked on permissioned blockchain solutions. Now he's joined Protocol Labs. We're going to have a great presentation here about scalable consensus. I'll monitor the chat, so anyone who has any questions for Matej as we go along, feel free to post them in the chat and we can answer them as we move through the presentation. Otherwise, when we wrap up the full presentation, we'll go ahead and do a Q&A session. So Matej, I'm going to turn it over to you, and we're looking forward to the presentation. Thank you very much. Thank you very much. Can everybody hear me? Sounds great. Perfect. So I hope everybody sees the slides and everybody hears me. Thanks a lot for this opportunity to show what we are currently trying to get going, namely Mir-BFT, which is both a protocol for highly scalable consensus and state machine replication, and a library that is implementing this protocol.
So I've been working on this together with Theresa, Marco and Jason, my former colleagues from IBM, who are partly still there, partly not anymore. So let's jump right in. Maybe one more remark: if anybody has any question, I actually like to be interrupted, and if this can be interactive, please make it interactive. If you have something that you want to ask immediately, just go ahead and shout, because I think it's better to have the questions in the context where something is being explained, not at the end when we need to reload all the context. All right, so first a little bit of background, putting things into place in terms of consensus, TOB, meaning total order broadcast, blockchain, and state machine replication, and how these terms actually fit together. Let's start with state machine replication. State machine replication is a concept, a method for replicating a service, usually an online service, to make it highly available and resilient to faults. State machine replication is based on having several replicas, each of which has a copy of the whole service state, and they all start with some initial state. Let's go to the next slide. Then they apply operations to this state in some specific order. Each operation submitted by a client potentially updates the replica state and creates some response that is sent back to the client. Each replica is appending these operations to a chain, one after the other, and each replica is executing these operations in the same order. Now, the clients have to broadcast their operations to these replicas, and since the replicas need to execute the operations in the same order, they need to receive those operations in the very same order. So this broadcast of the operations by the clients to the replicas needs to happen in some total order that is consistent across all the replicas.
That's why it's total order broadcast. So the clients use total order broadcast to distribute the operations to the replicas. Now, these operations, we can see them as forming a chain; sometimes we call it a log of operations. At each replica, locally, they're executed one after the other, and since we require each operation's execution to be deterministic, each operation will produce exactly the same state at each replica. So... Do we have a question? Yes. Sorry to interrupt you, just curious about the latency. Do these requests have to be fulfilled at the same time, or can they go off, be asynchronous, and come back whenever the process is finished? Or do they all have to act at the same time? Well, when you're talking about real time, it's actually impossible for each operation to be applied at exactly the same time. In general, the model we're considering is the so-called partially synchronous model, where there's not really a strong notion of real time. The only thing we care about here is that each replica executes the operations in the same order; in real time, the replicas can very well be shifted compared to each other. Does that answer the question? Yes, thank you. Perfect. Okay, so we have such a chain of operations. In practice, these operations can also be blocks of operations; this is really just a mechanism to make the system perform better. Instead of broadcasting a single operation at a time, each operation can be replaced by a block of operations appended to this chain, which makes this chain a blockchain, right? So this is where the term blockchain comes in: if we have blocks of operations appended to each other in such a chain, that's a blockchain. Now, another way to look at it is that each position in this log, the first, the second, the third and so on, needs to be agreed upon by the replicas.
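The determinism argument above can be sketched in a few lines of Python. This is a toy illustration, not the Mir-BFT API; all names are hypothetical. As long as every replica applies the same deterministic operations in the same order, all replicas end up in the same state.

```python
# Toy state machine replication. Each replica starts from the same
# initial state and applies the same totally ordered log of
# deterministic operations.

def apply_op(state, op):
    """A deterministic operation: here, ops are (key, value) writes."""
    key, value = op
    new_state = dict(state)
    new_state[key] = value
    return new_state

def run_replica(log):
    """Execute the whole log in order, starting from the empty state."""
    state = {}
    for op in log:
        state = apply_op(state, op)
    return state

# The totally ordered log delivered by total order broadcast.
log = [("a", 1), ("b", 2), ("a", 3)]

# Three replicas execute the same log independently...
replicas = [run_replica(log) for _ in range(3)]

# ...and all reach the identical state, even if at different real times.
assert replicas[0] == replicas[1] == replicas[2] == {"a": 3, "b": 2}
```

Note that nothing here synchronizes the replicas in real time; only the order of operations is shared, which matches the partially synchronous model described above.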
They need to achieve consensus on what the first operation is, what the second operation is, what the third operation is, and so on. Achieving consensus on each of these positions yields total order broadcast, which can then be used to implement state machine replication, and the underlying data structure can sometimes be called a blockchain. So this is just to level out the terminology here. Most of these terms are used interchangeably very often, because one is the means to achieve the other, and so on. We talk very often about consensus, but why do we need scalable consensus? I guess most of you, since you are here, already know this, so I'll skip fast through it. In private blockchain systems, a classic consensus algorithm is necessary to implement such a thing, for example for banking purposes, for supply chain management, for consortium applications and so on. In public blockchain systems, committees sometimes need to execute such a protocol to agree on the order of operations or transactions. Very often these are tens or hundreds of mutually untrusted participants, and this is where our protocol comes in and where it can be used. So how do we approach the problem of a consensus protocol that scales? Through modularity, log segmentation, duplicate prevention and censoring prevention. I will talk about each of those in detail, but very quickly: we take any leader-based protocol, of which there have been so many throughout the years. Some of them are new, some are old, but they share many commonalities. There are many leader-based protocols known today, and what we do is look at them as just modules that order some requests or operations. We don't care how they do it; we just take them as black boxes and plug them in as modules.
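In code, treating a leader-based ordering protocol as a black box might look like defining one small interface that any such protocol implements. This is a sketch of the idea only; the class and method names are my own, not the actual Mir-BFT interfaces.

```python
# Sketch of "ordering protocol as a module": any leader-based protocol
# (PBFT-like, Raft-like, etc.) is hidden behind one small interface.
# All names are hypothetical, for illustration only.

from abc import ABC, abstractmethod

class SequencedBroadcast(ABC):
    """Black box that eventually fills a fixed set of log slots."""

    def __init__(self, leader, slots):
        self.leader = leader      # which node proposes in this instance
        self.slots = slots        # the log positions this instance owns

    @abstractmethod
    def propose(self, slot, operation):
        """Called on the leader to propose an operation for a slot."""

    @abstractmethod
    def deliver(self):
        """Returns (slot, operation) pairs once they are committed."""

class TrivialProtocol(SequencedBroadcast):
    """A stand-in 'protocol' that commits immediately (no faults)."""

    def __init__(self, leader, slots):
        super().__init__(leader, slots)
        self.committed = {}

    def propose(self, slot, operation):
        assert slot in self.slots  # may only fill its own slots
        self.committed[slot] = operation

    def deliver(self):
        return sorted(self.committed.items())

p = TrivialProtocol(leader=0, slots={0, 1})
p.propose(0, "op-a")
p.propose(1, "op-b")
assert p.deliver() == [(0, "op-a"), (1, "op-b")]
```

The point of the abstraction is that the surrounding system never looks inside `propose` and `deliver`: a real PBFT instance and this trivial stand-in are interchangeable from the log's point of view.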
Then what we do is partition the log, the log of operations, into what we call segments, and each segment is handled by a different instance of this leader-based protocol, in parallel. This creates some problems that need to be resolved. Specifically, what happens if the parallel instances are appending the same operations to two different segments of the log? That's pretty useless, because it's just wasting resources; if each instance appends the same operations, they've just wasted their time. We solve this by also partitioning the set of operations that each instance can propose and append to the log. This in turn creates another problem: if only one instance, one leader, is responsible for some operation, and that leader decides not to propose certain operations, because that leader is malicious for example, then those operations would never be appended to the log. How we deal with this is by rotating the assignment of these operations to the different leaders, achieving that every operation will eventually be appended. So this is just an overview, and I'll talk about exactly how we do this. The rest of this presentation is divided into two parts. One is the theory, basically what I was just talking about, but a lot more in depth, and then the practice, how we actually implemented it. We always look for a hands-on approach from people who might want to contribute, and we're always open to enthusiasts who would like to get their hands on it and help implement this. All right, so, replicating a log. This log is what we basically want to have in the end, where each replica comes to have a copy of such a log with certain operations inserted in it. Each replica initially starts with an empty log of operations, and one of the classic approaches to filling this log is using a leader-based protocol, as I've been saying.
And most of the leader-based protocols work, in some way, in three conceptual phases. First, the leader proposes some operation for some position in the log. Then the other replicas confirm that yes, we are fine to commit this operation. And then there's a commit phase where everybody actually commits: yes, I'm sure that this operation will be in the log. There are variations to this, but this is the basic idea of many of those protocols. So what happens is that the leader proposes some operation and eventually everybody confirms and commits it, then some other operation, and so on. This log is filled, not necessarily in the order of the free spaces in the log, but operations are being proposed by the leader and committed, and eventually all those operations are filled in, and that's when they can also be executed by the application. Whenever there's an operation such that all of the previous operations have already been committed and executed, the application can execute the next one. Now, the issue is that if we have a single leader in the protocol, the leader is the only one to propose operations, and this propose phase is usually by far the most expensive in terms of resources spent. So the leader becomes the bottleneck. All the other nodes are mostly just sitting there waiting for the leader to propose something; then they quickly confirm and commit, but they don't have much to do, and the leader is doing most of the work. And the leader is just one. This is why many of the protocols don't scale. So what we do is take a leader-based protocol and call it a sequenced broadcast protocol. It's just an abstraction saying that it has some leader and it's filling in some operations in the log; we don't care how this is done. We say, hey, Mr.
Protocol, just make sure that some operations are eventually committed; we don't care how. So then what we do is segment the log. We take this log, we take a part of it, which we call a segment, and we spin up one instance of sequenced broadcast, of that leader-based protocol, for it. Then we take another segment of the log and spin up a separate instance of the protocol, and we do this for each part of the log. I think there are a couple of questions that maybe we want to take a look at real quick in this section. From DE, the question is: is log segmentation then something similar to sharding? Well, in a sense it's similar to sharding as a concept. However, when I hear sharding, what I first think of is having really separate logs, where the operations in those logs are not necessarily ordered with respect to each other, with certain operations going to one log and certain operations going to another log. Although this is some form of sharding, the difference from the classic notion is that here it's a single log; there are just different parts of it. Each segment is basically a subset of the slots, and each instance of the protocol is only responsible for filling in those corresponding slots in the log. Okay, then we had another question here from Alan, and the question is: is the commit phase an eventual or strict consistency model? That's the first part of the question. And then, do we assume that each node in the network has the same compute capabilities as the others, with no latency consideration for compute-time mismatch? Yeah, so this is a very interesting question, and it's a good point. First, the first part of the question: yes, this is strong consistency.
I mean, state machine replication is a method of achieving strong consistency among the copies of the service, because every two operations are ordered with respect to each other. So if I'm a node and I have executed the first two operations and I execute the third one, I know that this is exactly the same sequence of operations that everybody else executes, and my state is at any time completely consistent with the state of any other replica after having executed as many operations. It can happen that I'm slightly behind, still executing operation three while somebody is already executing operation eight, but I will catch up, and the sequence of states I'm going through is exactly the same as at every other replica. Now, for the execution speed, this is a good point, because in the first approach we consider everybody to have roughly the same computing power and thus everybody progressing at roughly the same pace. If this is not the case, it doesn't break the protocol in any way. It's just that a leader that is, let's say, slow would need to have a smaller segment than some other leader that is faster, that has a better network connection, that can handle more requests per second. So this is definitely possible to tune, but for simplicity let's stick to the assumption of roughly the same computing and communication resources per node. It's a valid point and it has to be addressed, but it can be addressed. Any more questions? Looks like we had one more question from DE: how do you avoid that this protocol now has real-time constraints? For example, the leader processing segment six requires that the leader of segment five broadcasts its results, as it requires this as input and needs the current state to complete its task. Yeah, exactly, that's a good point. So the leader of the first segment, let's say, hasn't managed yet to broadcast what goes in slot five. However, let me actually go a little bit forward.
I'll show what exactly is happening; I have it on the slide. So basically all these segments and all the instances of the leader-based protocols are almost independent. One only cares about filling in these slots, the second one only cares about filling in these, and so on. This can go in parallel, and we spin up these instances such that each instance has a different leader, so the load is spread among several leaders. This leader doesn't really care what's happening in that segment, and that one doesn't care what's happening here, and so on. Each is proposing operations in parallel, independently of the others, and this situation, I guess, is what you were referring to in the question. Correct me if I'm wrong. And feel free to come off mute if that would be helpful to describe it. Yes, because actually, I don't see the chat and I don't know where to look. Yeah, just at the bottom of your screen: if you mouse over it, it'll pop up a little bar that says chat; click on that. Ah, chat, here we go, yes. DE, if you want to come off mute and describe your question in greater detail, feel free to do so as well. Yeah. So for now, I assume that this situation is what you were referring to, right? That we have some gaps in the log: there's nothing at six, but we already have operation seven. This just means that we need to wait. No replica can execute operation seven before operation six has been committed. Whenever there's a gap-free prefix of the log, that's what we can execute. But what we guarantee, and this is a property of each single leader-based protocol, is that eventually each of these slots will be filled by something. Eventually it's guaranteed that all of those positions will be filled, and all of them will be executable.
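The "execute only a gap-free prefix" rule he describes can be sketched like this. The helper below is hypothetical, for illustration only, not library code.

```python
# A replica may execute an operation only once every earlier slot is
# committed. This returns the executable prefix of a partially filled log.

def executable_prefix(log):
    """log maps slot number -> committed operation (missing key = gap).
    Returns the operations of the longest gap-free prefix from slot 0."""
    prefix = []
    slot = 0
    while slot in log:
        prefix.append(log[slot])
        slot += 1
    return prefix

# Slots 0, 1, 2, 4 and 6 are committed; slot 3 is still pending.
log = {0: "a", 1: "b", 2: "c", 4: "e", 6: "g"}
assert executable_prefix(log) == ["a", "b", "c"]

# Once slot 3 commits, slots 3 and 4 become executable too.
log[3] = "d"
assert executable_prefix(log) == ["a", "b", "c", "d", "e"]
```

This is why a gap at slot six blocks execution of slot seven without blocking the protocol itself: the instances keep committing slots in parallel, and execution simply catches up once the gap is filled.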
All right, now, what happens when leader two, for example, crashes? This is what I just said: the other nodes that are not leaders in the segment somehow need to agree that, okay, this leader has crashed, these two positions will never be filled, and we just need to skip them. An important point to note is that every node participates in every instance of this leader-based protocol; it's just that it is only leader in one of them. In all the other instances it's still participating, just as a follower, confirming and committing, but not proposing. When leader two crashes, the other participants, as is the case for any leader-based protocol that is live, agree that, okay, leader two crashed, we're just not committing anything here and we're skipping those slots. So this can happen, and in the end everything will be filled except for these two. But since everybody knows that these will never be filled, because the leader crashed, we can just skip those positions and execute the rest of the operations. Okay, now we have a problem with this: the problem of operation duplication. What is operation duplication? We said that these segments are independent of each other. The leaders propose operations for their own segments, but they don't really know what's going on in the other segments. And if a client comes and wants to broadcast an operation, the client needs to send it to multiple leaders; otherwise it risks that the leader it is sending the operation to is malicious and would never propose the operation. So the client needs to send the operation to many leaders. But since the leaders don't know about each other's segments, it might happen that everybody proposes the same operation.
And if this happens consistently, which in practice it actually does if we don't do anything about it, then we lose the benefit of having parallel leaders, because here, after the whole thing, we only committed four different operations. We're basically wasting resources. So how do we deal with this? We partition the operation space as well. What does that mean? It means that for each leader, we only allow a certain set of operations to be proposed. We assign some set of operations to the first segment, a different set of operations to the second one, and so on. Like this, the first one can only propose and fill in its own operations, and the second one as well, and so on. We call such a set a bucket. Every leader gets a bucket of operations and is only allowed to propose those operations. If it happens to propose some operation that is not in its bucket, then all the other nodes that are supposed to confirm and commit these operations would say: no, no, no, I'm not committing this, because you're not allowed to propose this. Okay, we have a question: your operations in the chain don't have state dependencies on prior operations? I'm confused. Parallel leaders appear to run consecutive operations in the chain that depend on the computation of the prior operation. Well, if they do depend, then the dependency would have to be executed first. It's the client's responsibility not to submit two operations that depend on each other, because before an operation is committed, it is not ordered. So if I'm a client and I submit two operations that depend on each other, I still have to count on the fact that they might be reordered in the final log. But if some operation is committed and executed and has its place in the log, then, since it's executed, I see the result of it. Otherwise, if it wasn't executed, I wouldn't see the result of it.
And then, based on that result, I can freely submit another operation that depends on it, because I know that it will never be ordered before the operation that has already been executed. So please let me know if more explanation is needed. Okay, so we have these buckets, and each leader is only allowed to propose operations from one bucket. Now, this in turn creates another problem, namely operation censoring. What does censoring mean? Remember the situation where we had one leader that crashed and couldn't propose its operations. Well, what if that leader wasn't just crashed but actually malicious? If leader two says, you know what, I'm never proposing operation six, then that's a problem, because if operation six is only in the second leader's bucket, then the client who wants to propose operation six will be sad all the time, because nobody else can propose its operation, only leader two. The operation would actually never be appended to the log. This is a problem, and since we do not know which leader may be malicious, it is not an acceptable situation. The solution we propose is bucket rotation. What is bucket rotation? So we have our segments and our log, and let's say in this particular example there are 16 positions in the log, divided into four segments. We call this epoch one. But the log continues, right? The log continues indefinitely as new operations are added. So we say that another section of the log, again divided into four segments, is epoch two, and the log is divided into a succession of epochs like this, with each epoch subdivided into segments. Now here comes the trick: in the first epoch, we had different buckets assigned to different instances of the leader-based protocol, and in the next epoch, we reassign these buckets differently.
So here we see that bucket one was assigned to the first segment in epoch one, and the same bucket is assigned to the fourth segment in epoch two. I see a question: if bucket rotation needs to happen, then shouldn't it be dynamically updated to exclude executed operations? Yes, that is definitely true. In fact, a bucket is basically an infinite subset of the operation space. Imagine, in this case, it's like modulo four: each operation has a class, a bucket where it belongs. It's not a list of operations that can be proposed, it's really a class. Imagine that you take the operation, hash it, and take the last two bits of that hash, and that assigns the bucket. Okay, I see more questions. Similar to before, doesn't this introduce another real-time constraint? You can only switch epochs after all operations in the previous one are committed. Exactly, that is true. We go from one epoch to the other once the first one is filled with operations. And does clock synchronization need to be dealt with for the epoch strategy? Not necessarily. Since we know that each segment separately is guaranteed to eventually fill all its slots, we know that this will eventually happen, and in our model we don't really care when. In practice, of course, we do, and we can tune the implementation such that it's very unlikely that we would need to wait for a long time. But in theory we don't care; in theory we are just guaranteed that all of these operations will eventually appear in the log and the epoch will be filled, and then we switch to the next epoch. And here, if we look at operation six, for example: imagine that the second leader would not submit operation six here. Then in the next epoch, operation six is in the first leader's bucket, so if the first leader is correct, it would propose operation six.
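Putting the two mechanisms together, hashing an operation into a bucket and rotating bucket-to-leader assignments across epochs, a toy version could look like this. The hash-and-modulo scheme matches what he describes; the cyclic-shift rotation policy below is my own illustrative guess, not necessarily how the protocol rotates.

```python
# Bucket assignment by hash, plus per-epoch rotation of buckets to
# leaders. Illustrative sketch only; names and policy are hypothetical.

import hashlib

NUM_BUCKETS = 4

def bucket_of(operation: bytes) -> int:
    """Assign an operation to a bucket from the low bits of its hash."""
    digest = hashlib.sha256(operation).digest()
    return digest[-1] % NUM_BUCKETS

def leader_for_bucket(bucket: int, epoch: int, leaders: list) -> str:
    """Rotate bucket assignments each epoch via a simple cyclic shift."""
    return leaders[(bucket + epoch) % len(leaders)]

leaders = ["node0", "node1", "node2", "node3"]
op = b"transfer 10 from A to B"
b = bucket_of(op)

# The same bucket is served by a different leader in consecutive epochs,
# so a censoring leader only delays an operation, never blocks it forever.
assignments = {leader_for_bucket(b, epoch, leaders) for epoch in range(4)}
assert assignments == set(leaders)
```

The guarantee this buys is exactly the one stated above: as long as at least one leader is correct, every bucket, and therefore every operation, is eventually served by a correct leader.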
So eventually, at least after a few rotations of the buckets, each operation will land in a bucket assigned to some leader that is correct. Okay, I see another question: what happens if a malicious leader does not fill the bucket? Well, if a malicious leader does not fill, you mean does not fill the slots in the log. If a malicious leader doesn't fill a slot in the log, it's basically this situation, where all the other nodes that are participating in that protocol would declare the leader faulty and agree that those slots will be skipped, or maybe filled with other operations. Okay, so we have seen how the buckets are rotating and how we can avoid the censoring of requests. So, to summarize the theory. Sorry, maybe one question to the previous slide. Yeah, so, to the previous slide. In the case where the leader crashed, the others were able to propose something, like to agree on an empty or nil operation for the slot. What about the case where the leader is simply malicious and withholds some operations, although the clients already announced that they wish to execute certain operations? In protocols like PBFT, there is a mechanism to trigger a view change, to kick out the malicious leader and replace it with another replica that could propose the requests that have been waiting for too long. So do you just remove this functionality when you use, let's say, PBFT as a segment protocol? Is it completely replaced by the epoch mechanism, or is there some way for the replicas that notice within epoch one that a certain leader is misbehaving, not proposing some operation for too long although it's known that some client requested it long ago, to kick it out or do some replacement early on? Yes, yes.
So in fact, in our implementation, we do use exactly PBFT as the single-leader protocol for a segment, and PBFT has the view change sub-protocol for when the nodes find out that some operation has not been submitted or nothing has been committed to some slot of the log. In this case, well, for two reasons, one of which is simplicity, when the view change protocol in PBFT is initiated, we just declare all the uncommitted slots empty forever and call it a day: okay, this leader is faulty, we're done, and this is the result. Of course, we could switch the leader in PBFT, have some other node become leader of that instance of the protocol, and propose some other operations. The slight issue with that, and it's only a performance-related thing, is that when you switch the leader in one instance, the new leader of that instance probably already is leader in some other instance, and it would have double the work to do: propose operations for its own instance and propose operations for the other instance. That's why we take a shortcut there and say, okay, a new leader is actually elected, but the only task of the new leader is to propose empty batches for whatever has not been proposed or committed yet. Like this, the epoch finishes, and the uncommitted operations that the clients already announced they want to commit will be dealt with by somebody else in the next epoch. And what's the condition for epoch change then? The epoch change happens periodically, like after a fixed number of operations? Yes, after a fixed number of operations. Of course, we can try to tweak this and change it dynamically; it's totally possible, and the implementation would actually support deciding, based on whatever criteria, how long an epoch should be. But for simplicity, currently it's just after a fixed number of operations.
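The shortcut he describes, where the new leader after a view change only closes out the segment with empty batches, can be sketched as follows. The helper and the `NIL` marker are hypothetical names for illustration, not the library's representation.

```python
# When a segment's leader is suspected, the remaining slots of that
# segment are committed as empty ("nil") batches, and the pending
# operations are left for a later epoch under a different leader.

NIL = None

def close_segment(segment_slots, committed):
    """Mark every still-uncommitted slot of the segment as a nil batch."""
    for slot in segment_slots:
        if slot not in committed:
            committed[slot] = NIL
    return committed

# Leader of slots {4, 5, 6, 7} was suspected after committing 4 and 5.
committed = {4: "op-x", 5: "op-y"}
committed = close_segment({4, 5, 6, 7}, committed)

# Slots 6 and 7 are skipped; execution can proceed past them, and the
# withheld operations get another chance in the next epoch's buckets.
assert committed == {4: "op-x", 5: "op-y", 6: NIL, 7: NIL}
```

The design choice here trades a little throughput in the faulty epoch for simplicity: the segment always terminates, so the epoch always terminates, and bucket rotation handles the rest.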
And what actually already happens when a leader crashes, for example here: if leader two misbehaves or crashes and some nil values appear in the log, then in the next epoch that node will not become a leader anymore; it will be kicked out of the leader set, and epoch two, for example, would only be partitioned into three segments, not four. For simplicity I didn't mention this, but yes, this is all possible, and it actually happens. You can specify whatever policy you want for kicking suspected leaders out of the leader set and including them back in. And then, was this your question: operations are not redistributed if a node fails to perform those operations? Yeah, the client sends the operations to multiple nodes, so if some leader fails to commit an operation, then some other leader will succeed later. That's basically the gist of it. Okay, I'm glad that there were actually interesting questions. We are running slightly out of time, so I'll need to skip over some other things. Just to summarize: we have a scalable consensus protocol that is based on modularity, namely on using any leader-based protocol as a module. We use multiple instances of it, one for each segment of the log, and this happens in parallel, which is why the resources of each leader can be used optimally. This creates one problem, which is how to deal with duplicates, how to make the leaders not propose the same operations at the same time. We solve this by partitioning the operations into buckets by hashing them: you just look at the hash of an operation and you know in which bucket it goes, and we assign one bucket to one leader. This in turn creates another problem, with censoring: if the leader that has a certain bucket happens to be malicious, operations from that bucket couldn't be proposed by anybody else anymore.
And that is resolved by periodically, namely after a certain number of operations committed in the log, rotating the assignment of these buckets to leaders. So eventually each operation will be assigned to a bucket of some leader that is correct and will be proposed. All right, so that was the theory, and we're basically out of time, so I'll just show quickly how we implement this in practice. This is a new protocol, and we're just starting the effort of having a really nice production-ready implementation of it. If you're interested in contributing to this, or in looking at how it works, or in using it, just drop me a message at my email; I think it must be somewhere on the slides, and if not, I will supply it. Let me know if you are interested in actually looking at the code and contributing, or maybe even using the protocol. It's at an early stage of development; it's a rather fresh Hyperledger lab. I'm just going to show you in a few slides the architecture of the implementation, so you know what I'm talking about and can see how it works. So, the architecture of this library. It's a library that provides you with one main object, the node, which represents the replica of the system we were talking about. A node is basically a data structure, and it has a few methods: you can create the node, you can run it, you can submit requests to it, and you can ask it what its status is, mostly for debugging purposes. A node consists of many modules; some of them, probably the most important ones, are shown here. One module is the application. The application is basically the service that is supposed to be replicated; it is the implementation of the service. Even the protocol that I've been describing is actually a module, and it can be switched out for another one.
Then there's a module for sending and receiving messages over the network, and there's a write-ahead log module, which is crucial for situations where, for example, the node needs to restart. If you restart a node, the node needs to keep persistent track of what it has done, what messages it has sent and so on, such that when it restarts, it can continue where it left off, or at least not do anything contradictory. Now, all these modules produce events. For example, when a message is received from some other node over the network, it's received by the net module and put into a buffer as an event, and then there's a processing function that takes these events from the event buffer and distributes them to whatever modules need to process them. For example, a message comes in that is part of the consensus protocol: it goes into the buffer, the processing function says, oh, we have a message relating to the protocol module, the protocol module processes it and generates, let's say, some new messages that need to be sent. Those go into the buffer, the processing function reads them and gives them to the net module to send to whatever other nodes they need to go to. All these modules work in parallel, so we think this will be very efficient, although we haven't benchmarked it yet because not all of it is implemented. The most important module from the user's perspective is obviously the application module, which has a simple interface. It has an apply function that receives a request batch and basically tells the application, these are the requests you need to process. Then the system can ask it to create a snapshot of the application state, for example when the node needs to restart, and you can restore the state of the application by giving it a certified version of the state.
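The event-buffer-and-dispatch loop described here can be sketched roughly as follows in Go. All names (`Event`, `Module`, `process`, the toy `protocol` and `net` modules) are illustrative assumptions, not the library's actual types, and the sketch is single-threaded where the real implementation runs modules in parallel.

```go
package main

import "fmt"

// Event is a minimal stand-in for the library's event type.
type Event struct {
	Dest    string // name of the module that should process this event
	Payload string
}

// Module processes one event and may emit follow-up events.
type Module interface {
	ApplyEvent(e Event) []Event
}

// protocol is a toy consensus module: it answers every incoming
// message with an outgoing one addressed to the net module.
type protocol struct{}

func (protocol) ApplyEvent(e Event) []Event {
	return []Event{{Dest: "net", Payload: "send: " + e.Payload}}
}

// net is a toy networking module: it just records what it would send.
type net struct{ sent []string }

func (n *net) ApplyEvent(e Event) []Event {
	n.sent = append(n.sent, e.Payload)
	return nil
}

// process drains the event buffer, routing each event to its module
// and appending any follow-up events back onto the buffer.
func process(buf []Event, modules map[string]Module) {
	for len(buf) > 0 {
		e := buf[0]
		buf = append(buf[1:], modules[e.Dest].ApplyEvent(e)...)
	}
}

func main() {
	n := &net{}
	mods := map[string]Module{"protocol": protocol{}, "net": n}
	// A message for the protocol module arrives and is dispatched.
	process([]Event{{Dest: "protocol", Payload: "hello"}}, mods)
	fmt.Println(n.sent) // [send: hello]
}
```

The point of the sketch is the shape of the flow: modules never call each other directly, they only exchange events through the buffer, which is what makes it possible to run them in parallel and, as described next, to intercept the whole event stream.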
Then the protocol module has a very simple interface, but it's probably the most complicated module, because all the theory I was talking about is implemented in this protocol module. It has some protocol logic and some state, and all these events, submitted requests, received messages, ticks of some timer and so on, are processed by the module as events, and then new events are produced for other modules. Oops, there's a copy-paste error in the slide here with the write-ahead log module; I'll skip over it, it's not that crucial and I'm running out of time. What is also very nice about this kind of approach is that we can have a separate module called the interceptor: all the events that are happening here can be intercepted, recorded, and fed into some debugger program, and the debugger program can then analyze exactly what happened inside the node. It can even feed those events back into the node, and this is very, very nice for debugging and for showing what's happening inside. For example, you run the system, you record the events, and something goes wrong. You have the complete trace of the system, and then you can even update your code, increase a debug level and so on, feed those events back in, and see step by step exactly what's happening. This is all still in development, though, but it promises to be a very, very nice debugging tool. So this is how you use it: when you want to deploy some application, on each node you basically take the Mir-BFT library, you create a new node, you give it some configuration parameters like its own numeric ID, you tell it where to write the log messages, and then you tell it which modules it should use. Most of these modules are planned to be, or already are, implemented by the library itself, but the user can replace them with fancier versions if they want to.
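The record-and-replay idea behind the interceptor can be sketched like this in Go. This is a hypothetical illustration of the concept only; `interceptor` and `replay` are made-up names, not the library's debugging API.

```go
package main

import "fmt"

// Event is a minimal stand-in for the library's event type.
type Event struct{ Payload string }

// interceptor records every event passing through the node so a
// debugger can later replay the exact trace.
type interceptor struct{ trace []Event }

// Intercept records an event and passes it through unchanged.
func (i *interceptor) Intercept(e Event) Event {
	i.trace = append(i.trace, e)
	return e
}

// replay feeds a recorded trace back into a handler, step by step,
// e.g. into an updated build of the node with more logging enabled.
func replay(trace []Event, handle func(Event)) {
	for _, e := range trace {
		handle(e)
	}
}

func main() {
	ic := &interceptor{}
	// During a normal run, every event is recorded as it happens.
	for _, p := range []string{"req-1", "req-2"} {
		ic.Intercept(Event{Payload: p})
	}
	// Later, the same trace is replayed deterministically.
	replay(ic.trace, func(e Event) { fmt.Println("replayed:", e.Payload) })
}
```

Because all inter-module communication goes through events, capturing this one stream captures everything needed to reproduce a run deterministically.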
The most crucial module for the user is the application; for example, here we have a chat application. So now I only have maybe, what's the time? Yeah, maybe I have a few minutes left; I'll show quickly how it works. You have 10 minutes left for questions or any further description or demo that you want to do. Perfect, perfect. So I'll give you a quick pass over how this actually works, and then we can have some more questions if you have any. First, what do I need to do if I want to use this library to replicate my application? Let's say we have a small chat demo application. Do you see the code on my screen? Yep, we can see it. Perfect. So I've created a simple chat application. It is just a data structure that stores the state of the application, and the only state of the application is the history of the messages that have been sent over the chat. One more thing it requires is a reference to a key-value store that stores the payloads of the requests, which are disseminated separately. I didn't talk about that much in the interest of time, but the application basically receives batches of references to requests and then goes to a storage module to retrieve the request data. The only relevant thing I need to do is implement the application logic: what the application has to do when new requests arrive. We model the chat so that each chat message is basically a request to the application to append the new message to the chat history. So here we just go through all the requests in the batch, retrieve the data from the key-value store (the request data is the actual message), construct a nicely formatted message that we can print, add the message to the history, and print the message. That's it. That's the application logic; it's simple. Now, we also implement the snapshot and restore-state functions. I'll skip very quickly over this.
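The chat application logic just described can be sketched roughly like this in Go. This is an illustrative reconstruction, not the demo's actual code: the `chatApp` type is made up, and a plain map stands in for the key-value request store.

```go
package main

import "fmt"

// chatApp is a toy replicated chat application. Its only state is the
// history of chat messages; the store stands in for the key-value
// store holding request payloads that are disseminated separately.
type chatApp struct {
	history []string
	store   map[string]string // request reference -> payload
}

// Apply processes one batch of request references in order: resolve
// each reference to its payload, format it, append it to the history,
// and print it.
func (a *chatApp) Apply(batch []string) {
	for _, ref := range batch {
		data := a.store[ref] // the request data is the actual chat message
		msg := fmt.Sprintf("client says: %s", data)
		a.history = append(a.history, msg)
		fmt.Println(msg)
	}
}

func main() {
	app := &chatApp{store: map[string]string{"r1": "hello", "r2": "hi"}}
	app.Apply([]string{"r1", "r2"})
}
```

Since every replica applies the same batches in the same total order, every replica ends up with the same chat history.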
For the snapshot, we basically just serialize everything into a protobuf message and return the bytes. And for restoring the state: if we lag behind and really need to catch up, the system might ask us to restore the state from scratch, and this is basically just restoring the chat history, and we print everything. Okay, so here's the main function of the application; I'll only show you what you saw on the slide. Basically, we have a new node. We create some modules, a request store, a networking module, a write-ahead log, and a protocol instance, and we pass them to the new-node function at creation of the node. The ISS protocol, for example, is part of the library. ISS is the name of the protocol I was describing; I should have said that earlier, actually. So we just create an instance of the protocol. It has a well-defined interface that is in the documentation and that you saw on the slide, and then we just pass it to the node. Then we basically just run the node and submit requests to it. So how does it work? I already created a VM, and I basically run the demo application; this could be running on different machines on the network. I just run each of the nodes and give each its own ID. They initialize, and when I type something in, say "hello," and press enter, then everybody receives the request: hey, please add "client zero says hello" to the chat history. And here I can say "hi," and it is appended at the other ones as well. This is running on my local machine, but it's communicating over the network, and it could actually be distributed. So this is just a hello-world application showing what can be done, and you can of course imagine the application being much more involved. I took a long time, but if you have any more questions, please go ahead. I'm going to go ahead and post a link to the labs in the chat as well.
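The snapshot-and-restore pair just mentioned can be sketched like this in Go. This is an assumption-laden stand-in: the real demo serializes with protobuf, whereas here newline-joining is used just to keep the sketch self-contained, and the `chatApp` type is made up.

```go
package main

import (
	"fmt"
	"strings"
)

// chatApp's only state is the chat history.
type chatApp struct{ history []string }

// Snapshot serializes the whole application state. The real demo uses
// a protobuf message; newline-joining is just a stand-in.
func (a *chatApp) Snapshot() []byte {
	return []byte(strings.Join(a.history, "\n"))
}

// RestoreState replaces the state with one received in a snapshot,
// e.g. when a lagging node needs to catch up from scratch.
func (a *chatApp) RestoreState(snap []byte) {
	a.history = strings.Split(string(snap), "\n")
}

func main() {
	a := &chatApp{history: []string{"hello", "hi"}}
	snap := a.Snapshot()

	// A fresh, lagging replica restores the full history from the snapshot.
	b := &chatApp{}
	b.RestoreState(snap)
	fmt.Println(b.history) // [hello hi]
}
```

As long as `RestoreState(Snapshot())` round-trips exactly, a replica restored from a snapshot is indistinguishable from one that applied every request itself.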
So if anyone wants to get engaged with that, I put in the link just now, and then if you want to go ahead and post your email, so anyone with follow-up questions can reach you, that'd be great. So this is my email. And let me also go ahead and post the GitHub repository too. Ah, yes. Actually, the labs website is maybe not as up to date; everything that's happening is happening on the GitHub repository. Okay, perfect. Yeah, that's what I kind of figured, but I wanted to give them the overview as well. Yes, yes, absolutely. So please go check out the GitHub repository, and if you have any questions, just drop me an email. Okay, let's see if there are any other final questions here. I think the thing is, you know, with the labs, we always want to get as much engagement from the community as possible to build out a labs project. So I welcome anyone to jump in and work with the team to put new code out there. And let's see if we have any... Yeah, it looks like everybody's very excited about the presentation. Thank you very much. Thank you very much. I see one question. That we can provide to the team as well. Yes, you have a copy. I sent it to you, right? Yeah, I'll go ahead and use the presentation that you sent me earlier. Yes, perfect. So there's one question: is this compatible with, e.g., Hyperledger Fabric deployments? Well, it is not, but it is designed to be. Currently Hyperledger Fabric has an ordering service, and I think it is only a matter of time until this library will be pluggable into Hyperledger Fabric to give it proper scalable BFT support. It is not possible yet, also because the library itself is at an early stage of development; many things that I've talked about are partially stubbed or not quite yet implemented. That's why we need your help. But this is supposed to be integrable with Hyperledger Fabric. Okay, perfect. I don't know if there are any more questions. Yeah, go ahead. I'll share my email again. I already wrote it there.
Oh yeah, please share your email, yeah. I'm pretty new to this technology itself, so I would like to participate in that. Yeah. So do you see my email address? Or contact me on GitHub; I think there must be some email on GitHub as well. But here's my email again. Okay, well, thank you very much for the wonderful presentation. Everyone should have the email there, and I'll share the presentation with everyone who signed up for the meetup as well. We look forward to seeing you at our next session, which should be coming up over the next month. So have a great day, everyone, and we'll talk to you soon. Thank you. Thank you very much. Thank you very much. Have a nice day or evening or morning, wherever you are. Okay.