All right. Hey everyone, I'm Stéphane Graber. I'm the project leader for LXD at Canonical. You might usually associate LXD with containers; that's no longer quite the case, so I'll go through some of that and see how things line up now.

First, we'll start with what we're really best at, which is containers, and system containers specifically. System containers share the host kernel. They're the oldest type of container; that's what you might associate with BSD jails or Solaris Zones, OpenVZ, and these days LXC and LXD. They behave like a standard Linux system, in much the same way as virtual machines do. They have low overhead, there's no virtualization overhead or anything like that going on, and they're pretty easy to manage: it's just a bunch of processes on the same kernel. So that's what we're mostly known for; it's what we've been doing for a long time.

Now, on the other side, if you look at virtual machines, you've got a bit more separation going on: virtual hardware, and virtual firmware as well. Because it is emulating hardware, it pretty quickly requires hardware acceleration to be meaningful. You can run it completely in software, but performance kind of sucks at that point. The main benefit, obviously, is that you can run just about anything: there's no constraint on it being Linux of the right version and everything that containers require. In case anyone didn't know what a VM was.

Now, as for LXD itself, it's a modern system container manager, and now a VM manager too. It is written in Go. On the container side, it uses liblxc to talk to the kernel and drive containers. On the VM side, we're using QEMU to run virtual machines. It exposes a REST API to its clients. The REST API has been designed to be quite simple, with multiple bindings available, and there's a number of tools supporting it: OpenNebula, Ansible, our own CLI, or whatever other tools you might want to run. It's designed so that you can run multiple LXD servers, either just on your laptop or on multiple machines, and it supports migration and operation across multiple systems: either individual systems that exchange containers, VMs and images, or a unified cluster, which I'll also go through in a bit.

Now, for where you might have seen LXD: LXD is used on Chromebooks. When you install Linux on a Chromebook, for example, it effectively runs an LXD host that then runs a Debian-based container, with a lot of fancy pass-through features in place to get you GPU, USB, whatever access, all that stuff. There's also a nice integration to do snapshots, backups and file transfers straight from the Chrome OS interface. Another place you might have seen LXD, possibly without knowing it, is Travis CI. If you run Travis jobs that are non-x86, so ARM64, PowerPC or IBM s390x, those workloads on Travis are running inside LXD containers right now.

OK, back to what LXD is. LXD is designed to be very simple to use. It's got a clean command-line interface, a simple REST API, and pretty clear terminology. In many ways, it acts like a small, local cloud on your system; originally only with containers, now we've got multiple options there. It is fast: it's image-based, and we support multiple storage drivers with whatever copy-on-write features they might have.
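A quick aside on that REST API: the CLI is just one client of it, and you can query the same endpoints yourself. This is only a sketch; the exact endpoints depend on your LXD version, and the socket path shown is the one used by the snap package.

    lxc query /1.0              # general server and API information
    lxc query /1.0/instances    # the list of instances, containers and VMs alike
    # or talk to the local unix socket directly
    curl -s --unix-socket /var/snap/lxd/common/lxd/unix.socket lxd/1.0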
Those copy-on-write features also get used for migration, and we support direct access whenever possible. We usually aim to be safe by default. For containers, that means using all the kernel security features: the namespaces, LSMs, seccomp, capabilities, et cetera. For virtual machines, we do intend to use more of that; we've not actually done the AppArmor and seccomp confinement of QEMU yet, that's coming very soon. Right now we do privilege dropping and chroot, and then the VM itself uses secure boot inside the VM. And it's pretty scalable: you can go from a single install on your laptop to clusters of hundreds of nodes running tens of thousands of containers, and these days virtual machines too.

As far as what we can run, the picture for containers right now is that we generate about 300 images of different distros and releases, on six different architectures, daily. For VMs, we will get there. We are working on our tooling right now so that as it builds the container rootfs, it has a follow-up step of taking that rootfs, installing a kernel and bootloader, and turning that into a VM image. So we end up having the exact same content, as far as the operating system goes, for containers and virtual machines. That's coming up soon. Right now, the only image that works out of the box is Ubuntu, because we already had ready-made cloud images available in the right format and everything.

On the clustering side, that's an interesting aspect of LXD: we've got an extremely easy way to cluster multiple systems together. We don't have any external dependencies; it uses dqlite as the cluster database. There was a talk in the main track about an hour and a half ago on that database; that's what we use for LXD clustering. The API, when you talk to a cluster, is the same as the one you get when you talk to a single-node LXD on your laptop. You can actually take a command-line tool or script or whatever that doesn't know what a cluster is, run it against a cluster, and it will just work. The cluster will do some amount of balancing for you and make its best guess as to where to put things. If you are cluster-aware, you can obviously do a lot more and pick exactly which machine you want, and get hardware details and such for the different cluster nodes. But you don't have to; it's just a superset of the API, effectively. It can scale to thousands of containers on dozens of nodes. We've actually gone all the way to 100 nodes and tens of thousands of containers now, and that works fine. Density for virtual machines obviously goes down; you can't quite run tens of thousands, well, even multiple thousands of full-OS virtual machines on one system, it gets a bit tricky. But that's not really an LXD problem so much as a question of how much your hardware can actually handle. And we've got support for multiple architectures, and for mixing multiple architectures within the same cluster; LXD then does the right thing based on what image you're using and picks a node that is actually capable of running that workload.

Now, for the VM side itself. As I mentioned briefly, we've gone pretty much legacy-free, because it's all new to us. We don't really need to start supporting old machine types to run DOS or Windows 95 or something; it's not something we really care about. We care about acting like a modern local cloud, effectively.
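A quick aside on the clustering just described: a minimal sketch of what bootstrapping a cluster looks like from the command line. The member name and image here are placeholders, and the interactive lxd init used in the demo later asks for the same information.

    # on the first machine: answer yes to clustering, give it an address and a trust password
    lxd init
    # on each additional machine: answer yes to joining an existing cluster and point it at the first one
    lxd init
    # from any member
    lxc cluster list                                     # lists the members and their roles
    lxc launch images:ubuntu/18.04 c1 --target member2   # optionally pin an instance to a given member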
So for VMs we really care about modern distros, which means that for both x86 and ARM64 we've gone with UEFI as the only firmware, with secure boot that you can turn off if your workload can't boot with secure boot. We do virtio devices only, and currently we're based on QEMU 4.2. Technically you can go to older QEMU versions, but that's what we test with, really. The API and everything is pretty much the same as for containers; so far, all the tools we've seen that were interacting with containers can interact with VMs and they don't really notice a difference. There's no particular VM knowledge needed: any tooling that today targets LXD can target LXD VMs and it will just work the same way, effectively. The VMs integrate seamlessly with any existing LXD configuration you might have. Your LXD networks, your LXD storage pools, your LXD projects, your LXD profiles and configuration are effectively shared between VMs and containers on the same system. You don't need to duplicate anything, which is really the main benefit of using LXD to manage your VMs. If you had two different solutions on one system it would get a bit annoying to manage, but now you can just use one thing and it does it all.

LXD VM support was introduced in LXD 3.19, which was released mid-January, so it's pretty new. We've been working on it for maybe the past six months or so, on and off. Ironically, supporting VMs was the easy part as far as we're concerned: the actual work to drive QEMU instead of LXC and run virtual machines probably took us just a few days. The main issue was actually refactoring our storage layer so that we can store block devices as well as filesystems and handle that properly. That took us way longer than the VM piece ever did.

A quick review of the LXD API and how things are structured. Instances are obviously what most people care about; that's either containers or virtual machines, with an instance type that indicates which of the two it is. You can snapshot them or back them up. They're created from images, and images can have nice aliases so you don't need to refer to them by a SHA-256 or whatever. LXD can be clustered, so we've got the cluster part of the API that lists all the nodes and how things behave. You can also manage networks, which creates bridges that you can then connect your instances to. Storage pools: it's pretty obvious what they're used for, but you can create multiple storage pools on different storage drivers and different block devices and then assign containers or VMs to them. You can create custom volumes as well and attach them to your instances, and those custom volumes can also be snapshotted. Then we've got some internal components that are mostly for authentication and tracking of what's going on, and some APIs against the instance itself, for things like file transfer, executing a command inside it, attaching to the console, or publishing it as an image.

Alright, now for the interesting part: let's see if that stuff actually works. OK. So what I'll be doing, hoping that it works, and it does, is this: I've got two systems. We'll just do an initial install; they've never had LXD set up before, it's installed but never been configured. Make it bigger? Yeah, I can try. It's probably about as big as it's going to get without cutting everything off afterwards. So, because I've got two systems, I might as well create a cluster, why not? So we'll just create one.
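A quick aside before the demo output: to make that API structure a bit more concrete, here's a rough sketch of the matching CLI commands. The pool, network, volume and instance names are just placeholders, not something from the demo.

    lxc storage create pool1 zfs                           # a storage pool using the ZFS driver
    lxc network create lxdbr0                              # a managed bridge to connect instances to
    lxc launch images:centos/8 c1 --storage pool1          # an instance created from an image
    lxc snapshot c1 snap0                                  # snapshot the instance
    lxc storage volume create pool1 vol1                   # a custom volume
    lxc storage volume attach pool1 vol1 c1 data /mnt/data # attach it to the instance
    lxc publish c1 --alias my-image                        # publish the instance as a new image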
I just need to enter an IP, which is this one. We're not joining an existing cluster, we're creating one, with a password. Yeah, storage would be nice; it can pick whatever it wants, ZFS is what it prefers on this one. The default size is OK and the rest should be mostly fine out of the box. There we go. And that failed because... well, because it's a demo, yeah. And also it's cut off on my screen, which is not super convenient. Yeah, OK, that's annoying. What's the actual error? Oh, there we go. Did the server actually change IP? That would be annoying. No, it didn't. OK, let's try that again; I probably just typed the wrong IP address. Oh, was it? Hold on. Yeah, it is .46. I just swapped the IPs between the two. That's nice. Because I redeployed the servers and, yeah. OK. So, not joining an existing cluster, I want to set one up. I just hope it didn't get too far in the setup earlier, because it might get a bit confused now. Let's see. Please don't be confused. Please don't be confused. Ooh, OK. All right, so I just need to update my notes, because those IPs are not the way I was expecting them. OK, so let's go on to the second one now, and hopefully this one is .47. It is. Sweet. OK, so they literally just swapped IPs. OK. Yes, we're joining an existing cluster this time, which is on .46. Yes. Password. There's nothing to wipe, but sure. OK. And, yeah, we've got the cluster working. So now if I do cluster list, we'll see we've got two systems in there. The first one is running the database, because you need quorum for the database and you can't have quorum when you only have two systems, so only one runs the database. Once you reach three, then all three run the database.

So, the first thing we'll do is edit the default profile, for instances, to add a bit of cloud-init magic in there. That's normally not needed if you only run containers. Ah, the text editor is messed up, hold on. OK. But because we're running virtual machines and we don't have any other way to get that config in, this effectively just sets the password to "ubuntu" through cloud-init. So we just put that in place in that profile. Then let's create a container real quick, just pulling a CentOS 8 image. OK. There we go. All right. So, there we go, we've got CentOS 8 running in a container. Now to show the difference with running a VM: we effectively still tell it what we want and just add --vm, and it's going to go do the same thing.

I'm going to do things in parallel because we're running a bit short on time. As far as other things we can do: security.secureboot set to false, because of Windows. That one's on my own laptop; I'm just spawning another VM based on a Windows image I created. We give it a few CPUs and 8 gigs of RAM and turn off secure boot, because I don't have a WHQL-signed driver for the disk, so it's not happy and doesn't boot otherwise.

OK, back to the cluster. I was waiting for that VM to start; it's unpacking the image. Obviously a VM image is quite a bit larger than a container image, so it takes slightly longer. There we go. We need to add a config drive right now, which gives it access to the cloud-init config; we just add a disk device for that. And then we can start the VM. It just takes a tiny bit. Come on. That's running on really old hardware; it's a 10-year-old server that I've got lying around in the basement, so it's a bit slow. So, attach to the console.
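To recap the demo steps so far in command form: this is only a sketch, and the instance names, the Windows image name, the CPU count and the exact cloud-init snippet are assumptions based on what was described, not the literal commands typed.

    # edit the default profile and add a cloud-config blob, roughly:
    #   config:
    #     user.user-data: |
    #       #cloud-config
    #       password: ubuntu
    #       chpasswd: {expire: false}
    lxc profile edit default
    # a CentOS 8 container, then the same thing as a virtual machine
    lxc launch images:centos/8 c1
    lxc init images:centos/8 v1 --vm
    # a Windows VM from a custom local image, with more resources and secure boot turned off
    lxc launch win10 win1 --vm -c limits.cpu=4 -c limits.memory=8GB -c security.secureboot=false
    # give the Linux VM a config drive so cloud-init can find its configuration, then start it
    lxc config device add v1 config disk source=cloud-init:config
    lxc start v1
    lxc console v1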
That's GRUB. It's booting, and it will get to a login prompt eventually. I didn't give it any extra CPU or memory, so it's running with one core and one gig of RAM right now; if I'd been a bit more generous it would probably have booted faster. OK, cloud-init is running, which means we're about to get a login prompt. Come on, you can do it. There we go. Yeah. Oh, that's interesting; I think I probably broke something with cloud-init. Anyway, the login prompt works fine.

So right now, if we look outside of... there we go, outside of this, we'll see the VM is running. We've got its IP, which was retrieved from the DHCP server; that's fine. But if we try to look at more info on the VM, it doesn't have the list of processes or the extra detail, because it's a VM and we can't just attach to it like we would a container. But that's where we can do things a bit differently. We've got a 9p drive that's exposed by LXD out of the box, with an install script that just adds some systemd units. If we then reboot the VM, it will start back up, start those units, and they run an agent in the VM that then talks back to us.

While that's going on, I can just show that the local VM has started. Windows has started and got an IP, and because Windows is weird, it can do SSH these days; you can actually SSH into it. If you prefer PowerShell, you can also spawn it from there. If I can type PowerShell properly. There we go. The thing is, I can never remember which is which: for PowerShell is it exit and the other one is quit? Exit and exit, OK. And obviously, there's RDP as well. So that's Windows.

Now back to the Linux world. That VM should have rebooted. Yeah, we've got a login prompt. Now if we look at the list, we should still see the same thing. Let's see if that works. Yep. And with that, I just spawned a shell inside the VM. That's done through our agent in the VM, so if we look at the process list, we actually see bash being a child of the agent. That goes through vsock, so it doesn't rely on the VM's network at all; the network happens to be up, but the shell would keep working either way. The same API can be used to do file modifications, so you can pull a file through it, it comes from the agent, and just push it back into the VM. So we've got a shell back inside it; it's there. The agent also gives us more detail: if you do info now, it gives you the number of processes, IP addresses, network interfaces, counters, stats and stuff.

All right, I'll rush a bit now because I'm a bit behind. So, what's next? We want images for all the distros; that's obviously a priority. We want to be able to live-update a bunch of devices and config on the VM like we do for containers; right now, you need to restart the VM. More security: I mentioned AppArmor and seccomp, we want to add those. We've got all that generation code already for containers, we just need to wrap the VMs with it. There's a number of feature gaps we've got compared to containers that we want to fill quickly. And the agent right now only works on Linux; we want to make it work on Windows, and given that there's a new vsock driver for Windows that's been in the works, we hope to use that. LXD itself is available on Linux, macOS and Windows, but the daemon only runs on Linux; on the other ones, you can use the client to connect to a remote LXD.

Contributing to LXD: it's written in Go, it's fully translatable, it's got API client libraries in a number of languages, and it's Apache 2.0 licensed; there's no CLA or anything.
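The agent part of the demo, in rough command form; again just a sketch, with v1 as a placeholder VM name.

    lxc exec v1 -- bash                      # spawn a shell inside the VM, via the agent over vsock
    lxc file pull v1/etc/hostname .          # pull a file out of the VM through the same agent
    lxc file push hostname v1/etc/hostname   # and push it back in
    lxc info v1                              # now shows processes, interfaces, counters and stats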
We've also got a pretty active community that can help you with issues. And that's it. I've got a bunch of stickers here if you want to grab some after this, and we might even have three minutes for questions.

OK. Yes. Yep. So, yeah. Right, sorry. So the question was about the API when talking to a single node versus talking to multiple nodes. Yes, it is the same API. There's just an extra field in some of the objects that tells you where they are. So in those structs you get from the API, there's going to be an extra location field telling you where something is. But if you don't know about that, you can just ignore it and it just works.

OK. That's going to be hard. I think you had your hand up already. Yes. Is there any plan to integrate this in MAAS? OK. So the question is whether there's any plan to integrate some of this in MAAS. The answer is yes. MAAS has KVM pods right now; that is being replaced by driving LXD. That was one of the drivers for this work. So the idea is to move away from MAAS talking to libvirt and towards MAAS talking to LXD.

Let's go there. OK. So the question is whether we plan to backport the VM support to the stable channel. The answer is no, because we've got the 3.19 and 3.20 releases with that support. The snap itself is available all the way back to Ubuntu 14.04 or even CentOS 7, quite old distros; we test on those and that should work fine. We've got an LTS release, LXD 4.0, coming up within the next two months, and that's going to have VM support. So for those who want the five-year LTS guarantees on LXD, 4.0 is what you're going to want.

Let's go here. Sorry. Oh, to spawn which one? Yeah. So the question is about which image gets used to spawn the virtual machine. For a lot of images, for the Ubuntu images, the exact same name lines up for both containers and VMs, because you've got an Ubuntu 18.04 for both. And that's why we've got the --vm flag: it just tells LXD, hey, I actually want the VM version of that image, and then that's what it uses. Right now it pulls that image from the Ubuntu cloud images.

What's the actual image format? Oh, OK. So the question is about the actual image format of the images. The image format we use for LXD VM images is qcow2, with the LXD tar.xz metadata on the side for properties and stuff. We'd probably add support for raw because that's easy, but qcow2 is what we support right now.

We're out of time, so if you've got any more questions, you can grab me outside. The stickers are over there in front. Thank you.