Our next talk is going to be by Stéphane Graber. He's the project leader of the LXC and LXD projects, and LXC incidentally has just reached 10 years right now. And he's going to talk about turning physical systems into containers. Me again. All right, so very briefly, I'm Stéphane Graber. I'm the project leader for LXD. And as Christian said, we've just reached 10 years now for the LXC project. So we've been doing containers on upstream Linux way before everyone else. So today I'm going to be talking about LXD. Very briefly, LXD mainly manages system containers. The definition of a system container is that you run an entire Linux distro in it. They are extremely similar to virtual machines, with the one difference that they share the kernel with the host. Feature-wise they're pretty much identical to virtual machines, but they're super lightweight; no virtualization extensions or any of that stuff are needed. It's a great way of making systems faster by removing a lot of the overhead when you're running Linux on Linux. Now, for LXD itself, well, it's been about three years we've been working on this thing now. It's effectively the next step in user experience for the LXC project. It is a system container manager with a REST API and a simple command-line utility. It is network-aware, so you can drive multiple daemons and move stuff around, as David showed earlier. It's secure in that we use every single kind of security feature at the same time, and it defaults to using user namespaces. We did the work to implement user namespaces in the kernel originally. We've done a lot of kernel work here and there to add extra namespaces. John presented the work on LSM support for LXC to make it possible to load AppArmor profiles inside the container that are different from the host profile. We have just about every security feature you can think of. It's very scalable: it's the exact same tooling you run on your laptop as you would run on a really big machine.
We've got some clustering work coming up that Free will be talking about later this afternoon. But right now, what we're interested in is moving a physical system into a container. So let me get that. That looks reasonably big. I think that should be overflowing at the bottom, but that's fine, I don't have that much text. Hopefully it's going to stay fit. So first things first, I've got two virtual machines running, vm06 and vm07. vm06 is an Ubuntu VM that's fresh and has nothing on it. So I'll just install LXD real quick; it's already downloaded here, because otherwise the download would be slightly annoying. I'm going to use `lxd init` with the defaults, which creates a ZFS storage pool for containers and creates a new bridge for the network. Actually, I wanted to change an option; I guess I'll just do it by hand. We need to set a trust password, and then tell it to please listen on the network. So now you've got an LXD that's running with no containers in it, no nothing, but it is listening on the network and I can use that password. Let's move to another system. This one's a bit different: this thing is a CentOS 7 system. The system is currently running an Apache server, which you will not see. Let me just try to fix that font a bit. There we go, that's much better. So you've got an HTTP server running here, at the bottom there, pretty much nothing fancy. We can just touch a file, just for kicks. And then we get to run a command-line tool I wrote called lxd-p2c. So that's physical-to-container, the same concept as P2V, which was done back in the VMware days for moving physical systems into virtual machines. In this case, it can stream your running system. You could point it at something else, but in this case I'm just streaming /. So I'm giving it the URL endpoint for the LXD I just set up, and giving it the container name I want. It's just creating a new certificate. Now it shows me the fingerprint of the server.
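The setup steps just described can be sketched roughly like this (the password is a placeholder for the demo; `lxd init` prompts interactively for the pool and bridge defaults):

```shell
# On the fresh Ubuntu VM:
sudo apt install lxd     # install LXD
sudo lxd init            # accept defaults: ZFS storage pool + new bridge

# The two options set by hand in the demo:
lxc config set core.https_address "[::]:8443"   # listen on the network
lxc config set core.trust_password demo         # set a trust password
```

After this, the daemon accepts remote API connections authenticated with that trust password.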
I just confirm it and enter that demo password. And we just wait a bit for the entire file system to be transferred. Thankfully it's a fresh CentOS install, so it's not going to take particularly long. It does some stuff to try and prevent changes: it basically creates a new mount namespace and mounts just the parts you need. So it's not going to suck in, like, your /proc or /sys or any of that stuff. If you've got multiple mounts, you can pass those. So you could do / and then /home, if /home is a different mount, and they will be stacked in a clean mount namespace, and then that result will be streamed to the container. Anyway, we see it's finished. So now let's go back to our other system. And we can see we've got a new container here. It's called centos-vm07; it's the name we gave it. Let's start this thing. The first start takes a bit longer, because LXD using user namespaces means that the file system it received wasn't shifted to the UIDs and GIDs of the user namespace. So it does that on the first startup, and it takes a little while to shift your entire file system. Now I can get a shell inside it. We see it's vm07. If I look at the process list, we've got Apache running. And if I look at /root, well, we've got my blah file in there. So that's how you do it. Why? Why would you actually do that? I mean, it's nice and cool and everything, but why would you do that? So the usual case: companies tend to have a large number of old systems that are sitting there doing something. They don't really know what, but also, if they turn them off, it will probably break something, and they don't want to deal with that. Those use up quite a bit of rack space, and power and management too. And so it might make sense to move them to containers, because then you can slam, like, I don't know, 200 of those into one machine and save yourself, like, four racks in the process.
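As a rough sketch of the commands in this demo (the address and container name here are placeholders matching the talk's setup):

```shell
# On the CentOS 7 machine: stream / to the remote LXD as a new container.
# Usage is: lxd-p2c <target URL> <container name> <filesystem root> [mounts...]
./lxd-p2c https://vm06.example.com:8443 centos-vm07 /

# If /home were a separate mount, it could be appended:
#   ./lxd-p2c https://vm06.example.com:8443 centos-vm07 / /home

# Back on the LXD host:
lxc start centos-vm07          # first start shifts UIDs/GIDs, so it's slower
lxc exec centos-vm07 -- bash   # get a shell; `ps aux` shows Apache running
```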
But you could also use virtual machines for that. So why would you want to move virtual machines into containers? Well, if your VM is CPU-bound and uses all its CPU all the time, you probably don't; it's fine to keep that as a virtual machine. But if your VM is idle 99% of the time, which is quite often the case with those kinds of workloads, VMs are not very good at being idle. They will still trigger interrupts, they will still use CPU time. Like, have you ever tried running 2,000 VMs on one server? Because we can do that with containers just fine, but with VMs, not so much. On a very beefy host you can usually run 100, maybe 200, but even mostly idle VMs are still using a lot of your resources. It's perfectly reasonable if the VM is very busy, but if you're looking at a Linux-on-Linux use case with a VM and it's idle, you can pack a crap-ton of those into containers, which you couldn't do with VMs. The other thing that came up pretty recently, because, you know, Meltdown, Spectre, all that mess: we unfortunately have a number of people that are running completely end-of-life systems. You know, they're running CentOS 3, CentOS 4, those kinds of systems, with production workloads. Obviously they're not going to get a Spectre or Meltdown fix for the kernel, because it's been end-of-life for years. But that CentOS system we were just seeing, if I move back here, well, it is CentOS running, but if I look at the kernel, it's an Ubuntu kernel. It's running the Ubuntu 4.4 kernel, which is patched for Meltdown and Spectre. So say you move your CentOS 3 container over to a machine that's got a fixed kernel, and your workload still works fine with that: well, you just fixed yourself a pretty nasty security issue in the process. It will not work for everything. Some workloads will depend on the crazy old kernel that came with the system, but for a lot of workloads it will work just fine.
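The "old distro, new kernel" check from the demo looks something like this (container name as in the earlier demo; the exact output depends on the host):

```shell
# Inside the migrated container: the userspace reports its own distro...
lxc exec centos-vm07 -- cat /etc/redhat-release   # reports a CentOS release

# ...but the kernel is the host's patched Ubuntu kernel:
lxc exec centos-vm07 -- uname -r
```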
To show that some more, on my local system I've got some containers running here, such as... oh, I hadn't even used that release back then. Whoops. So, CentOS 3. I think you can still run yum because the archive still exists. So, yeah, you can run yum update. There's not going to be anything, but technically it's there; you could install packages and stuff. And that is quite happily running on a 4.15 kernel right now. And the exact same is true of CentOS 4. The exact same thing: the container is running, and it's not really doing much, but it's definitely running. We did try to go as far back as we possibly could with that. It didn't work so well for us, because at some point we tried running something even older, and the problem was that it predates the ELF format. And it turns out you can't actually run the a.out binary format on a 64-bit Intel system. You could on a 32-bit system: so on 32-bit, with the right kernel option, you can actually run one of those ancient containers on it. But, yeah, if you still have that, then maybe you've got other problems. Anyway, I believe I'm running out of time. And that's the second demo I just did. All of this is written in Go; you can find it on GitHub. There's no CLA, no nothing; you can just contribute to it if you want. It's translated and all that stuff. And we might have, like, a minute for questions, maybe. Otherwise, we've got stickers and swag and stuff up front; you might want to take some of that on the way out. Questions? [Inaudible question.] If it's very busy at the time you're trying to stream it? Yeah, don't do that, effectively. I mean, we expect the p2c tool will eventually let you run it against a snapshot if you want, at some point, so that you don't have your system changing constantly while it's going on. Right now it's pretty much: turn all your daemons off, then p2c works fine.
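The a.out-versus-ELF point is easy to check yourself: every ELF binary starts with the 4-byte magic `0x7f "ELF"`, while the old a.out format (which 64-bit kernels never supported) has a different header. A quick sketch on any modern Linux box:

```shell
#!/bin/sh
# Read the first 4 bytes of a system binary and keep the 3 ASCII ones.
magic=$(head -c 4 /bin/sh | tail -c 3)
echo "magic: $magic"   # "ELF" on a modern Linux system
```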
If it's very busy at the time, then you're going to get some snapshot of something, but you might not be happy with the result. Yes? Okay, wait. Which distros do you test? So, what did we try? We tried pretty much everything that LXD supports right now, and we support something like 15 distros or so. So you can do CentOS, Slackware, Gentoo, Fedora, openSUSE, Plamo, which is like a Japanese Slackware derivative, I guess. Whatever we've got right now, they all work fine. You do get some weird issues: like, we noticed that back on CentOS 3 a bunch of utilities were expecting that /proc/meminfo would fit in one KB of memory, and otherwise they'd fault. Turns out it doesn't fit in one KB of memory anymore. So for those kinds of things you need some tweaks. Thankfully we do have a FUSE filesystem that we can mount on top of /proc that can fake those things very easily. So we just remove all the fields that are recent, which CentOS 3 doesn't know about anyway, and it works fine. But there's some amount of fiddling that might be needed here and there; by and large, things work pretty well. And that's it. We're going to switch to the next speaker. So thank you very much. Thanks.
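Conceptually, the /proc/meminfo trick amounts to filtering out the newer fields so the result fits legacy 1 KB buffers. A minimal sketch of the idea (not the actual FUSE filesystem code; the field list and values here are illustrative):

```shell
#!/bin/sh
# A modern-ish /proc/meminfo sample, including fields a 2003-era
# userspace has never heard of (MemAvailable, KReclaimable):
meminfo='MemTotal:       16384256 kB
MemFree:         8192128 kB
MemAvailable:   12288192 kB
Buffers:          204800 kB
Cached:          4096064 kB
KReclaimable:     102400 kB
SwapTotal:       2097152 kB
SwapFree:        2097152 kB'

# Keep only the fields an old CentOS 3 userspace expects:
filtered=$(printf '%s\n' "$meminfo" |
    grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):')
printf '%s\n' "$filtered"
printf '%s\n' "$filtered" | wc -c   # comfortably under 1024 bytes
```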