So I work for Canonical in the LXD team, as Stefan mentioned. I'm going to do a demo of a feature that is being worked on. It's not yet in master; it will be released at the end of February. First, let me ask who has no idea what LXD is. There was a presentation before, but in short it's container management for system containers, which means your container will look pretty much like a virtual machine. So I'm not going to speak about that; I will assume you know something about it.

I have three machines here. Each one is running a plain LXD daemon, freshly built from the clustering branch. As you know, an LXD daemon normally knows only about the containers on the machine it runs on. I will create a cluster out of these three daemons on these three VMs, and they will appear to you as one. You can use the regular LXD API transparently against the cluster. There are also fault tolerance features that I will show.

The first thing is to create the cluster, which can be done with the init command. It asks whether you want to actually create a cluster: yes. It asks for a few details, but it's just the name of the node, its address, whether I'm joining an existing cluster (no, this is the first one), and a trust password. I create a storage pool with the default values, which will use a ZFS pool backed by a file. All right, this is the bootstrap node, the first node of the cluster, and you can see it's been successfully configured. I can list its containers and it's empty; it has no containers yet.

I can go to the second node and run init again. Let's change the name of the machine and the address. I'm joining an existing cluster now, so I have to give it the address of the first one and the trust password, and accept the fingerprint: yes. It warns me that all data on this node will be lost, and yes, I don't mind right now. It asks for the storage configuration, which has to match the original node, so I match that as well. Now, if I run a list, I still have no containers, but if I run, let's see, cluster list — let me adjust the font here — you can see there are now two nodes in the cluster.

I'm going to add a third one with essentially the same sequence of commands: yes to clustering, the node name and address, joining the existing cluster, the address of the first node, the trust password, and the fingerprint is okay. Yes, yes. And the cluster now has three nodes.

So I can use the LXD command line transparently across the cluster, to spawn containers for example. I can launch an Ubuntu 16.04 image as a container called c1, and with this new flag, --target, I can choose one of the three nodes. For example, I'm on one node and I can target another one, which will download the image and start the container over there. Once the container is started, it is visible from all nodes in the cluster. I can launch another one targeting the second node — this is still work in progress. In the meantime, I can go back to the third node, here on VM3, and if I list the containers I can see c1 is there, even though it was created from a different node; launching c1 again tells me it already exists, of course. From the second node they are visible as well; the one I just created is still stopped, but it should be starting right now. Yes, both are running.

And if, for example, I shut down the first node — I shut it down, so it's not there anymore.
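For reference, the command sequence in this part of the demo looks roughly like this. This is a minimal sketch: the node name node1 is a placeholder, while lxd init, lxc cluster list, lxc list and the --target flag are the actual LXD commands being used; lxd init asks its questions interactively rather than taking them as flags.

    lxd init                                    # on the first node: answer yes to clustering, give a node name,
                                                # an address, a trust password and accept the default storage pool
    lxd init                                    # on each further node: answer yes, choose to join an existing cluster,
                                                # give the first node's address and trust password, accept its fingerprint
    lxc cluster list                            # from any node: shows all cluster members and their status
    lxc launch ubuntu:16.04 c1 --target node1   # download the image and start c1 on a specific node
    lxc list                                    # containers across the whole cluster, visible from any node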
If I list the containers now, I will see that the first container is down; there's an error on c1. The cluster list will also show me that this particular node is offline. I can still operate the cluster from the nodes that are alive. For example, I can change global cluster configuration, such as the trust password. Well, let me show that it's there — yes, there is a trust password. I can unset it, for example, and it's not there anymore. The first node doesn't know about this change, of course, because it's down. I can even shut down the third node, the one I just made this modification on, and restart the first one. The change will be replicated automatically: if I run a config show, you can see that the trust password is not there. So even if a node is down, when it comes back up the state gets replicated to it. I can list containers again and they're still there; node 2 was up the whole time, so its container is still running.

The way this is done is by replicating the SQLite write-ahead log using Raft. The point is that it's very convenient to operate: you don't need a separate fault-tolerant, distributed data store running alongside. All you need to run is the LXD process, and it will take care of failover and replication of your data.

You might ask why. Cluster management is considered by some a solved problem, since there's Kubernetes, and I was wondering that myself. At the beginning I asked why we don't just create a CRI plugin for Kubernetes and let people drive single LXD nodes managed by Kubernetes as a cluster. That is possible, but the drawback is that Kubernetes itself is a big and difficult beast to drive. One of the use cases, which Stefan also mentioned, is system applications that were not designed with the cloud in mind, applications that are not cloud native. In many cases they expect a full system, which works in a system container but not so well in an application container. So we hope this clustering feature makes it easier to run those workloads without the operational overhead associated with, say, Kubernetes or similar solutions.

So that was it. I kept it short; I know I went through it rather quickly, so if there are questions or anything, let me know. Questions?

How does failover for containers work? The question is how failover for containers works. If the node your container is running on dies, of course your container process dies; there's nothing you can do about that. What you could do, for example, is use Ceph as the storage backend for the filesystems of your containers. In that case, if the node a container is on dies, you can pick another node and start the container again, and it will have the very same state it had, because what's written on disk is replicated by Ceph. That's one example. LXD clustering by itself does not manage container failover; that's out of band. It gives you some primitives to build on, and you have options — one is storage replication with Ceph, and there are others. But yeah, this is more or less it.

What is the behavior in case of a split brain? Let me explain. As I mentioned very briefly, what we've done is create a little patch to SQLite which replicates the SQLite log using the Raft algorithm. In order to make a database change, for example an UPDATE SQL query, with this machinery you need a quorum of nodes that acknowledge the write-ahead log entry. If you don't have a quorum, the transaction fails. So in case of a split brain, writes just fail.
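To make the quorum point concrete, here is the arithmetic under the usual Raft majority rule (this is the standard formula, not something specific to this demo): with n voting nodes, the quorum is floor(n/2) + 1, so with the 3 database nodes here the quorum is 2. After a split, a partition containing 2 of the 3 nodes can still commit writes; a lone node cannot, and any write attempted there simply fails.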
The trade-off here is that Raft, in the CAP spectrum — consistency, availability, partition tolerance — gives you consistency and partition tolerance, but not availability. In that case the cluster is not available anymore, but you won't get inconsistency. It won't work, but it won't do harm. You need to have a quorum of nodes, and the required quorum is kept in the database itself. If there's a split brain, the nodes all know exactly what the quorum should be, regardless of how many nodes they can currently see, so they will just hang.

You showed that you can configure the cluster from every node. Is it possible to have a plain worker node? Can you repeat the question? A worker node that is not part of the configuration, so it doesn't configure the whole cluster. I'm not sure I understand the question. You have three nodes, and every node can configure the cluster. Is it possible to have a node that is configured by the other nodes, but does not itself configure anything? Okay. So the question is: can you have one single node which is your point of reference, so that all configuration changes go through that node, and the other nodes you don't touch, you don't use them for changing the configuration. The answer is probably no, because the design is that each node is equal in some sense, with some caveats. Perhaps in the future what you will be able to do is set users and permissions to limit, for example, who can do what. But for now, all nodes are equal.

Yeah, there are two things I'd mention there. One of them is: if you've got, say, a 40-node cluster, you only have three database nodes. There's no reason to have more than three, because past that point you'd just get performance problems as writes are replicated to more nodes, and so on and so forth. So we will actually always make sure you've got three of those. That's one thing — so you do have LXD nodes that are not active database nodes, but when you talk to them you just go through the API, and you don't actually see a difference; it's an implementation detail. The other thing that might cover your question as well is that LXD by itself supports copying containers, copying images and all that between LXD nodes, even without a cluster. So you can totally have a non-clustered LXD node and an LXD cluster and copy your containers back and forth; that's perfectly fine. Just like you can have two LXD clusters, one for staging and one for production, and copy your containers and your images between the two, using the normal LXD API.

What will happen when I create a container without a target? For now it creates the container on the node you are executing the command on. By the release it will pick the node with the least number of containers. Yes, so it tries to load-balance the workload in a very simple way: by default, if you don't specify a target, it will pick the node with the least number of containers. This is very simplistic scheduling; it's what we will be doing at the start, but we'll see how it goes. Thank you.
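Since copying containers between a standalone LXD node and a cluster came up in the questions, the workflow is roughly the following. This is a sketch: mycluster is a hypothetical remote name, the container names are made up, and the cluster address placeholder has to be filled in; lxc remote add and lxc copy are the standard LXD commands for this.

    lxc remote add mycluster <cluster-address>   # on the standalone node; prompts for the cluster's trust password
    lxc copy c1 mycluster:c1                     # push a local container into the cluster
    lxc copy mycluster:c2 local:c2               # or pull a container out of the cluster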