Hello, okay. Here we go. So hi everyone. I'm Stéphane Graber. I work at Canonical on the LXD team. I'm the project leader for LXD, LXC and LXCFS. And today we're going to be talking about five years of providing root shells to random strangers on the internet. So what's all that about? Well, we develop a piece of software that you can install. It's a container manager. It lets you run containers on your machine. But we know that a lot of our users are not running Linux. And so to evaluate our software, see how it works and play with it before they actually start using it on some cloud instance or on some physical server, we figured it would be quite convenient if they could just go on our website, click a link and then try the software right there for a few minutes. And if they think it's useful for them, then they can go and install it in production. We've been doing that for a number of years now, as I mentioned, and it's been quite successful. We've had tens of thousands of users trying LXD this way, several thousand more every month. But there were a few issues with that. As a fairly complex system service, LXD runs as root. So we had a bit of a problem: giving root shells to random strangers on the internet and trying to make it all safe, because we didn't really want to have to deal with a lot of security issues and people breaking stuff left and right. So just a quick brief on what LXD is, because it might better explain what we're running in there and also what we've been using to protect those various sessions. LXD is a system container and virtual machine manager. That latter part is somewhat new; we've only been managing virtual machines with it for the past four to six months. For demo purposes, we really only focus on containers; letting random strangers on the internet run virtual machines is a bit trickier and not really something we're super keen on doing right now. So we really just focus on running containers. LXD, unlike some of the other container managers, runs system containers. Those are full Linux distributions that can install and run normal packages and normal workloads. We don't know what's going to run in there, and so to some extent we can't really build a very tight profile or set of restrictions around the expected workload, because we've got no idea what the workload is going to be. LXD supports a variety of storage and network options. For storage we support ZFS, btrfs, LVM, Ceph and plain directory. For networking we support just about anything, from SR-IOV for physical passthrough, to macvlan, normal Linux bridging and ipvlan. We support a lot of different options there. And we offer a lot of knobs, especially around security, to restrict what a container can do, or to allow an otherwise unprivileged container to do some amount of privileged actions that we deem to be safe. So that's LXD in a nutshell. Now, what we would like to demo through our website is the ability to create and start containers using images from an external image server, make snapshots of those containers, copy containers around, install software, whatever software the user feels like, we have no idea what they're going to be installing, apply resource limits on those containers, and copy those containers to another system. So that includes two of those sessions connecting to each other and then being able to exchange data.
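(For reference, that demo flow maps roughly onto the following lxc commands. This is a sketch assuming the standard LXD client and the public ubuntu: image remote; the container names are illustrative.)

```sh
# Create and start a container from a public image server, then install software in it.
lxc launch ubuntu:20.04 f1
lxc exec f1 -- apt install sl

# Snapshots and local copies.
lxc snapshot f1 clean
lxc copy f1/clean f1-copy

# Resource limits, applied live through cgroups.
lxc config set f1 limits.memory 256MiB
lxc config set f1 limits.cpu 2
lxc config set f1 limits.processes 200    # a fork bomb only exhausts its own container
```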
So it's a reasonably complex demo, but it's a good base overview of what LXD can do, and that's what we've been providing in one way or another for a little while now. So, doing all of this safely, well, that's where things get funny. First of all, let's look at container security for a tiny bit. If you were in the talk I just gave with Christian, you're going to recognize this slide. I'm not going to go into quite as much depth as we did there, because otherwise it would take another hour and a half, but we can quickly go through the main components of container security. You've got the filesystem layer for containers. That simply means the container has its own filesystem. We might run a different distribution, we might be running the same as the host, we might be running, you know, Android, which comes with its own filesystem. We need that to show up as / inside the container. That's done through chroot or pivot_root, pivot_root being the safe variant of the two. That's what LXD does; we just don't do chroot. We use namespaces. UTS lets you set the hostname, mount lets you create a new mount table, the PID namespace gets you a new process tree, the IPC namespace protects things like shared memory. The network namespace gets you a clean slate as far as networking devices, firewall rules and routing go. The user namespace gets you a shifted subset of UIDs and GIDs from the host, so that should you be able to escape the container somehow, you are not root on the host. The cgroup namespace is mostly there to allow nested containers and system containers to manage their own sub-cgroups; that's why it's important here. And there's another namespace that was added very recently, which we started supporting last week: the time namespace. That one lets you set an offset compared to the base system clock for your container. It's not particularly useful for most users; LXD doesn't drive it right now. Now, on the security layer, in all the examples we're going to be giving here for online demos, we will not be doing privileged containers, because those are just unsafe and never something you should provide to anyone on the internet, or they're going to get full root access on your system in a matter of minutes. No, we're going to be using user namespaces, so unprivileged containers, which means that the security items you see there are effectively an extra safety net. The user namespace on its own means that the container is running with the same privileges as a normal user. It is given extra privileges on resources that it creates itself, but doesn't have any rights on anything outside of that namespace. Now, we can use LSMs on top of that, in our case AppArmor, to prevent potential issues. Say you've got a kernel issue that lets you bypass some of the user namespace; AppArmor maybe won't be affected by that bug and might prevent you from still accessing resources you should not be able to see. Same thing with seccomp. Syscalls that are not suited for unprivileged use would not be allowed because of the user namespace, but there's no harm in having a seccomp policy which also blocks them. Again, if there's some kind of bug where in some situations you might be able to use a privileged syscall as an unprivileged user, having a seccomp policy ahead of that which rejects it anyway can only help security. And the same goes for capabilities: if you know there are some capabilities you just don't need, you can start dropping those. And same thing, even though you were not supposed to be able to use those in the first place because of the user namespace, now you really, really don't have them. So even if you escape, you shouldn't have any problems there.
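(To make the unprivileged part concrete, here's a small sketch of how those layers look once a container is running. The container name is illustrative and 100000 is only the common default base of the UID map; your configuration may differ.)

```sh
lxc launch ubuntu:20.04 c1

# User namespace: UID 0 inside the container maps to an unprivileged host range.
lxc exec c1 -- cat /proc/self/uid_map     # e.g. "0 100000 65536"

# Seen from the host, the container's processes belong to that shifted UID, not root.
ps -u 100000 -o user,pid,cmd | head

# LSM safety net: on an AppArmor host, container processes carry a per-container profile.
lxc exec c1 -- cat /proc/self/attr/current
```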
For resource control, we'll be using cgroups, which give you some amount of control over the resources the container is allowed to use and can avoid a whole bunch of issues around denial of service and just bad-neighbor type situations. Okay, let's go through the requirements for the service. First of all, we wanted it to be fast, so new sessions need to start within five seconds. We wanted it to be scalable. Right now the public service is configured for 32 sessions, because we didn't have that many concurrent users, so we just removed some of the RAM and made it lighter, but we could easily move it back to 64 concurrent sessions, no problem. It needs to be safe. We don't want a user on there to start attacking other users or outside services, or be able to escape the container in some way. And it needs to be both reliable and low maintenance. This is going to be the entry point for a lot of our users; we don't want those users to have their first experience with LXD be a crash. That'd be quite bad. So we want the service to pretty much always work. We're also a very small team developing a piece of software. We are not sysadmins; our job is not to monitor this service. So we want it to just run itself. The only time we should be involved with the service is to update it every time we do a new release, to move it onto the next release of LXD so that people can experience that. Now, before we go into some of the challenges and problems and how we pull this off, we can go through the existing service and I'm just going to show you how it all works. So let me move the mic here because I need to be closer to my other screen. Let's switch to this. So that's the entry point for our online demo service. You've got some basic terms of service, which are mostly a legal safety net in case someone does abuse the service or finds some way around the security measures we've put in place. You hit the button saying you want a session, it takes about five seconds, and you get a root shell. So yeah, we weren't kidding when we were saying we're giving root access to random people on the internet. We definitely are. Now, if you run id, it's running as root in the root group. If you look at processes, we've trimmed the container as much as possible, not for security concerns, just to not waste resources. So even though we do run systemd, we only run the one unit we care about, which is the LXD daemon, and everything else is off because it's just plain not needed. Now, a normal user would then go down there and go through some of the instructions. For example, you can click on an instruction and it executes the command immediately up there. There's an instruction to create a new container, so they can click on that and we'll see it pull the image, unpack it and then create the container. So now that user has created a container running on LXD. Let's launch ourselves another container. Let's base this one on Ubuntu 20.04. Give it a few seconds. There we go. It should be up at this point, and it is. Now let's get a shell inside there. This one, we can see, has a lot more processes running, because it's a clean Ubuntu 20.04 image, not a customized slim image or anything. And let's install sl. So you can see I'm perfectly capable of hitting the internet and actually downloading packages from that environment. We can run sl and see a nice train crossing the screen.
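(As an aside, what makes this possible is that the demo session is itself an LXD container that runs its own LXD daemon, so the containers the user creates are nested. On your own machine you'd get a similar setup with something like the following sketch; security.nesting is the real LXD option, the container name is illustrative, and LXD still needs to be installed and initialized inside that container before the second command works.)

```sh
# An unprivileged container that is allowed to run containers of its own.
lxc launch ubuntu:20.04 demo-session -c security.nesting=true

# After installing and initializing LXD inside that container,
# nested containers can be created the same way, still unprivileged:
lxc exec demo-session -- lxc launch ubuntu:20.04 f1
```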
Now, that nested container, well, the container it's running in has access to all the resources of its parent. We can see it sees two CPUs and, what is that, 256 MB of RAM effectively. But we can change that. So the user, while playing with the service, can totally reduce the memory and apply further memory limits on the workloads. And we see the memory has been reduced. The CPU limits don't always get applied without a restart, but there we go. Now, as I mentioned, we can do some fancier things there. Say I get another session here. At the top I'm given a command which is used to connect two sessions together, so I can copy that into the first one. Say yes. And that means that this particular LXD client can see both itself and the other session. That's another funny aspect, because we actually need networking working between them. Let's create a snapshot of that f1 container we created. And now let's move, well, let's copy that snapshot onto the second instance as a new container called f2. So that's doing a btrfs send and receive of that container from one demo instance to another demo instance. There we go. And we can start it. So if we list locally, that's what we have, and if we list the other one, that's what we have over there. And we can get a shell into that remote and just interact with it perfectly normally. So that's the kind of stuff that's possible. I mean, there are a lot more features available. One of the things we've got in our examples is just showing you some other distros. In this case I'm starting a CentOS 8 container, for example. The entire image server is available and people can use it as they wish, up until they hit the disk limit we've applied. CentOS is a bit larger, so it takes a tiny bit longer to unpack. There we go. So I can get into CentOS 8 and confirm it's running, right, it's CentOS in this case. Now for the things that are possible and not possible. In the base session, the base root terminal you're offered, you can totally update your package list. That's perfectly fine, we let you do that. We even let you install packages; there's also no real problem with that, you can do it. Why is that last percent always longer than any of the others? Okay. So let's say we install curl. Again, no problem there; we're just pulling a bunch of packages from the internet and installing them. Processing triggers. There we go. Now, at last, some things that do not work. Let's try to access Google. That doesn't work. Try to access some random IPv4 address. That's weird, what's going on here? Well, we don't give you any IPv4 addresses. We only have IPv6. We don't really need IPv4, so why would we give it to you? And if you try to access something over IPv6, you'll notice the same thing: we don't let you access a lot of things. So that's pretty much the state of this. Let's go back.
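(For reference, the copy-between-sessions part of that demo boils down to commands roughly like these. The remote name, address placeholder and snapshot name are illustrative; in the real demo, the exact remote-add command, including authentication, is generated for you.)

```sh
# On session A: add session B's LXD as a remote (by hand you'd point at its
# address and authenticate when prompted).
lxc remote add sessionB <ipv6-address-of-the-other-session>

# Snapshot f1 and send that snapshot over as a new container f2 on the other side.
lxc snapshot f1 snap0
lxc copy f1/snap0 sessionB:f2      # a btrfs send/receive between the two demo instances
lxc start sessionB:f2
lxc exec sessionB:f2 -- bash
```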
So let's go through some of the problems with running a service like this, roughly from most obvious to least obvious, or from easiest to deal with to worst to deal with. The first one would be networking. The most common issue on there is really people trying to attack outside services, because they feel like they've got an easy root shell, and they're reasonably sure they're anonymous to some extent, and that they can then use that to do mass mailing or attack servers or scan people. That's a bit of a common pattern in what we've seen as far as networking goes. The other thing, which fewer people try, is interfering with other containers, effectively just making the entire experience miserable for the other users. It's a bit less rewarding, because you're not actually doing something that might earn you money, you're really just annoying people, so eventually you're probably going to get bored of doing that. But it's something you have to keep in mind. You've got some solutions for that. The first one is to firewall the hell out of everything, which is exactly what we're doing. As you've seen, we can't reach Google, we can't reach a lot of other things. The only services that are effectively allowed are services that we ourselves own, or whose owners we have an agreement with. In this case, I do work for Canonical, we make Ubuntu, so I don't really have a concern with letting someone access the Ubuntu archive servers, because I know they're static web servers anyway, so there's not much you can do there. And it's pretty easy to set up rules to just allow those very specific use cases we care about. Same thing with the image servers: we run the image servers, we don't have any concerns with people trying to attack those, and we also know that they're fully static, so we just allow those. As far as attacks between containers go, that's where things get a bit weirder. You can restrict internal networking using filters. That doesn't always work, depending on what kind of workload you care about. The filters in LXD work perfectly well; you can filter IPv4, IPv6 and MAC spoofing, no problem. But if your workload requires bridging or getting a separate MAC address on the parent network, then that just can't work. So it's a bit of a balancing game there. The other thing to cover is privileges. I kind of hinted at that earlier around the user namespace, but container escape is a real issue with this kind of service, and you want to be careful about what you're doing there. That mostly means no privileged containers, using only unprivileged containers. The other aspect is being able to mount denial of service attacks based on shared UIDs and GIDs between containers. That's possible even between unprivileged containers, so naive implementations of unprivileged containers can still get you into trouble. You need to be quite careful with that. The solution for the container escape case is what I covered in the security slide earlier: use the user namespace, set up seccomp, set up an LSM, so that your main mitigation is the user namespace with the rest as safety nets should something go wrong. For the denial of service part, what you need are isolated user namespace maps. That's something LXD supports; it's a configuration flag. It means that every single container gets its own range of 65,536 UIDs and GIDs that do not overlap on the host with any other container. That does restrict the number of containers you can run, but it restricts it to 65,000 or so, which is usually not a problem. And it completely prevents that entire class of attack; it is no longer possible to do ulimit-type denial of service attacks.
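(A sketch of what those mitigations look like in practice. The LXD keys shown are the real security options; the ip6tables rules and addresses are purely illustrative, not our production ruleset.)

```sh
# Isolated idmaps: each container gets its own non-overlapping 65536-UID/GID range,
# which kills the shared-UID denial-of-service class of attacks.
lxc config set c1 security.idmap.isolated true

# Spoofing filters on a bridged NIC (only usable when the workload doesn't need
# extra MAC/IP addresses, which is why the demo service itself can't use them):
lxc config device override c1 eth0 \
    security.mac_filtering=true \
    security.ipv4_filtering=true \
    security.ipv6_filtering=true

# Egress firewalling on the host: only allow traffic from the demo bridge to the
# handful of services we trust, drop everything else (addresses are illustrative).
ip6tables -A FORWARD -i lxdbr0 -d 2001:db8::10 -j ACCEPT   # e.g. the archive mirror
ip6tables -A FORWARD -i lxdbr0 -j REJECT
```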
Resource consumption. That's the next aspect of this. We need to deal with two different things. One is just load on the service as many people use it. We obviously see that every time we do a major new release, because the service is used at 100% for a number of hours or days, and we don't want one user to be able to use the bulk of the resources and give a very bad experience to everyone else. The other aspect is that there are going to be some people there just trying to annoy everyone else. They're going to try to do denial of service attacks, they're going to try to run fork bombs, they're going to try to use all the CPU or the memory or the network. The solutions are pretty much the same in both cases, which is cgroups. So you want to set CPU, memory and network limits to just make it fair for everyone. Set an amount of memory that everyone gets, an amount of CPU that everyone gets. Overcommitting is perfectly fine, there's nothing wrong with that, but you need to know roughly what your maximum usage pattern will be and what your normal usage pattern looks like, and set the limits somewhere in between. Now, for the DoS attacks, you need some extra limits around things like process limits, so that if someone tries to run a fork bomb, all they're going to do is effectively kill their own container. They're going to run themselves out of processes, but they're not going to take out the entire system. The interesting thing here is that when they do it, to some extent it will look to them like they succeeded, because their terminal won't work anymore, and if they try to spawn another terminal, it won't work either. They'll be like, hey, we took down the service! When they've really just taken down themselves, which is great, exactly what we want. Same goes for the memory limits: they're going to get weird memory-related errors, they will trigger the out-of-memory killer, they will trigger a lot of nasty looking errors, but it really only affects themselves, so it's fine. You probably also want to restrict the length of the session. In our case we give 30-minute slots; it just makes it a bit more fair so that everyone has a chance, and it also prevents people from poking around for too long. And we also restrict the number of sessions per IP, because, same thing, you need to be able to have more than one of them, otherwise you can't do the copy demo I did, but if you allow a lot of them, then someone might just end up using half of your slots, which would be unfortunate. Okay, the next class is kernel bugs. That tends to be the main point a lot of people bring up around these kinds of container deployments, and it's a perfectly fair one. The Linux kernel is not perfect; there are security issues. Not that many of them tend to apply to containers these days; that definitely wasn't true at the beginning of the user namespace, where we had a bunch of pretty nasty security issues. These days it's reasonably good, but it's still something you need to keep a close eye on, a pretty close lookout, because it can be an issue. The ones that are denial of service attacks are obviously a problem, but not devastating in many cases. Privilege escalation is the big issue. And for those, the only thing you can really do is make sure you're up to date. So the solutions tend to be: if you can live patch, if your distro supports live patching and is quick at releasing live patches, then do that.
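(On Ubuntu, kernel live patching boils down to roughly the following; the token placeholder is yours to fill in from ubuntu.com/livepatch, and other distros have their own equivalents.)

```sh
# Enable kernel live patching so most privilege-escalation fixes apply without a reboot.
sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-livepatch-token>
sudo canonical-livepatch status
```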
On top of that, or as an alternative to that, you should be extremely aggressive in applying updates and reboot immediately after a kernel security fix. That's definitely what we've been doing. We rank kernel security updates above everything else. If someone gets disconnected and has to get a fresh session a minute later when the service has restarted, that's fine with us. And, well, hardware can't really be trusted anymore, so that's also a bit of a problem. Spectre and Meltdown are real things that have caused a bit of havoc around that. For those kinds of things, you have two options. One is that it's a dedicated physical server and you don't care, because nothing else is running on it, so people being able to potentially eavesdrop is not a concern to you. If the system is shared in some way, then you're going to need to deal with things like CPU pinning. So effectively making sure that you dedicate a number of physical CPU cores to a service like that, and you never put anything else on those CPU cores. That way, even if you've got multi-threading enabled on those, it's not really an issue. Your users might be able to eavesdrop on each other within the demo service, but they're not going to be able to do much more than that. If that's not an option for you, then you should at the very least disable SMT, so effectively turn off hyper-threading. An alternative to that, which has been worked on quite a bit recently, would be core scheduling: configuring the kernel so that it knows that those specific workloads can not be scheduled on the same core as some other workloads. At which point that guarantees there's no hyper-thread type attack possible, because the only time the two threads would be used at the same time would be when it's the demo service using both of them. That effectively avoids those kinds of problems.
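(A sketch of what the pinning and SMT side can look like; core numbers are illustrative, and the smt control file only exists on reasonably recent kernels.)

```sh
# Pin a container to a dedicated set of physical cores so nothing else ever
# shares those cores (core IDs are illustrative).
lxc config set c1 limits.cpu 2-5

# If dedicated cores aren't an option, at least turn off SMT/hyper-threading
# (needs root on the host).
echo off | sudo tee /sys/devices/system/cpu/smt/control
```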
There are a few trade-offs and additions we made for the actual production service. I did mention network filtering and how it can prevent IPv4, IPv6 and MAC address spoofing. Unfortunately, because our own use case is to run a container manager, we can't do that, since we are using nested containers that are connected to the same network as their parent. That means those containers each put a new MAC address on the network, as well as a new IPv4 and IPv6 address, and that effectively prevents filtering. But if you're using something like this for anything other than running a container runtime inside, then you should totally do it, because it will save you a lot of potential headaches from people trying to be annoying. The other thing, and it's kind of the same issue to some extent, is that we are using btrfs for storage rather than something with stronger quota enforcement like ZFS. The reason is that btrfs is the only filesystem that lets you do proper container nesting without wasting disk space. In this case we decided to, you know, bite the bullet and use btrfs, knowing that some of our users might try to bypass the limit, rather than use ZFS, which would have guaranteed they couldn't, but would also have significantly increased the disk usage per container. That was a compromise we had to make on that one. We're still hoping for ZFS to eventually support proper use in containers, but it's not there yet. We've made a few additions on top of everything that was discussed. In our case, the host that we use is actually based on Ubuntu Core 18, which is an image-based, effectively read-only, transactional system. That does help a bit, in that even in the worst case scenario where someone manages to escape, there's not much they can actually write to or do on the host, because pretty much everything is read-only and checked through hashing whenever it's installed or updated. We also do automatic updates every 15 minutes, so that usually gets any kernel update applied pretty much immediately. And we've got kernel live patching enabled on top of all of that too. So, now for another bit of demo. I'm just going to show you how you can run your own such service, because we've made that pretty easy. I've got a system here that's just a random development machine I've got in my basement, and I'm going to start by installing LXD. So, install LXD. Then we'll be installing a second package called lxd-demo-server. Now we configure LXD. We don't really need any of the fancy features or anything at this point, so we're going to just hit Enter on everything, which makes it pick ZFS by default for storage. That's okay for this particular use case. There we go. Then let's create a new container using, let's say, Ubuntu 20.04, and we'll call it myDemo. Okay, so it grabs the image, unpacks it and starts the container. Now, let's go into that container and install, again, sl, why not? Let's do that. Okay, let's just make sure it works, but I'm pretty sure it does. Yep, all good. So our container is now ready. Let's run the lxd-demo-server configure command. It drops us into a text editor. In there, you've got the choice of either creating your demo sessions from an image that's available in the store or available from some server you manage yourself, or by using an existing container. In our case we'll do the latter, so we're going to comment out that image line and we're going to be using the myDemo container. We'll use the default profile, that's fine. The command, we're going to customize so that the new demo session just starts sl. We can allow people to leave feedback, that's all right. One CPU, 200 processes, 256 MB of RAM, two sessions per user, that's fine. Half-hour sessions. And there's the disk quota, because why not, let's do one gig. For the rest, you can customize what port it binds to, and there's a list of IP addresses that are banned from using the service because they broke the rules. You can also configure things like whether you're on IPv6 only or not, whether the service is currently under maintenance, and there's an API to retrieve the feedback that users can give at the end; that's what those tokens are for. And you can write the terms of service down in there. So we'll just save that and start the service. Okay, now let me switch back to a web browser. And that's what the user is going to get when they go to that web server. They see the terms of service, they can hit the button to accept them, and the container is created and the command is run; when the command exits, the session disconnects. If they click on Reconnect here, they're just going to be attached to the exact same session again, which is going to spawn the command again and they get to see the train another time. So that makes it pretty simple. If you wrote a command-line based piece of software, you can effectively make the demo session drop straight into that piece of software, and when they close it, that resets the session.
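(The whole setup boils down to something like the following. Treat it as a sketch from memory rather than exact syntax: the upstream project is lxd-demo-server, but the package and command names may differ depending on how you install it.)

```sh
# Install LXD and the demo server (here assuming both are available as snaps).
sudo snap install lxd
sudo snap install lxd-demo-server

# Initialize LXD, accepting the defaults (ZFS storage pool, lxdbr0 bridge, ...).
sudo lxd init

# Build the template container the demo sessions will be cloned from.
lxc launch ubuntu:20.04 myDemo
lxc exec myDemo -- apt install -y sl

# Open the demo server configuration in a text editor: point it at the myDemo
# container instead of an image, set the command to run, the session limits,
# quotas and terms of service, then save and start the service.
# (Exact command name may vary with packaging.)
lxd-demo-server configure
```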
Now, if I switch back to the terminal here, on that system, if I look at what's running, we can see we've got a container with a tryit-prefixed, automatically generated name, and that's the container that's running. After 30 minutes, it will automatically get stopped and deleted. I'm already at the conclusion; the demo went a tiny bit faster than I expected. So, it's possible to offer root shells to random strangers on the internet and not get pwned immediately, which is nice. But you need to be careful; there are lots of things to keep in mind. As I said, we've done it for five years and we've seen a number of abuses. I mean, the list of problems I ran through earlier comes from real experience; we definitely had users try to attack it in a number of ways. Prior to even doing that service, I was doing something even riskier to some extent, for a Linux distribution called Edubuntu, which is an educational variant of Ubuntu. We had an integration in the desktop where you could try any application you wanted. That would spawn a remote container, install the graphical application, and then export it through the NoMachine protocol, which is like a slightly nicer version of VNC effectively, back to the client. And the client would then just run that application as if it were local, for up to, I think, 10 minutes, then it would disconnect. We pretty quickly had people installing all kinds of shells and then trying to do mass mailing, trying to port scan the entire internet and doing a whole bunch of weird things. We didn't have a very strong firewall on that at the beginning, so it was definitely a lesson learned there. And that's not a problem we had with the LXD one, because that was a few years later and I'd already learned from that. But still, looking at some of the initial sessions, maybe for the first couple of days or so that we had that service for LXD, we definitely saw people installing a whole bunch of the usual port scanning and vulnerability scanning tooling and then trying both to scan the infrastructure they were running on, to see whether they could attack us, and to attack the internet. Thankfully, firewalling was blocking all of that and our own systems are up to date, so it wasn't an actual issue, but still, people definitely tried. Also, script kiddies, that was actually a bit of a problem at the beginning of that service, because the pids cgroup is pretty recent and did not exist back then. Five years ago when we launched the service, we actually had no way to prevent a fork bomb. The only way we could do it was by limiting memory, specifically kernel memory, and hope that they would run themselves out of kernel memory before there was a very negative impact on the system. That kind of worked, but the pids cgroup is definitely much better and just stops that kind of attack in its tracks, no problem. Otherwise, the main thing I would recommend is, you know, make sure everything is well isolated, everything is kept up to date, and that you monitor things as closely as you can. You can pretty much count on some people abusing it, no matter what you write in your terms of service, no matter what kind of limits you put in place. People are gonna be people. The best you can do is try to notice repeat offenders and just block their IPs for a while. That tends to send them away. That's definitely what we've been doing.
We've not actually had any very nasty security issues on the thing. We would just notice that, hey, the system load is particularly high, and every time we noticed that, we'd see the same IP address connected to it. So, like, okay, there's definitely something wrong going on there. But otherwise, this whole thing has been a great tool to onboard new users. I mean, we've definitely had tens and tens of thousands of users going through it by now, probably hundreds of thousands. And getting to test our software, and especially the latest version of it, online, without having to install anything anywhere, has been a very, very good tool for our users. And that's it. So that gives us about 10 minutes for questions. I don't believe we've got any right now. Let me just open that thing. Oh, actually, no, we do, never mind. Not sure why I didn't see that immediately. Let's see. Okay, let me just read the question first and then I'll repeat it. The first question is from someone who had permission issues accessing something that was NFS-mounted inside an LXD container. So in that case, that would be an NFS mount on the host system, passed as a bind mount into the container, and then accessed from within that container. What you would most likely see in those kinds of cases is everything in that share showing up as nobody:nogroup. That's because of the user namespace in place, which will prevent you, well, which will cause that shift. I can better show it here. If I look at my containers, we see that right now everything is running as 100000 inside them, even though those processes show up as root inside the container. Now, let's create a file blah in /srv, and, what's the name of my demo, myDemo is the name of my container. So I'm gonna add a new disk to myDemo. Just call it test, it's a disk device. The source is gonna be, oops, the source is gonna be /srv, and we'll mount that as /mnt/srv in the container. So if I go in the container and look at /mnt/srv, my blah file shows up as nobody:nogroup, whereas on the host that same file shows up as root:root. That's because of that gap. There's effectively no way to represent root:root, so 0:0, in that container, because the container is shifted and that UID just plain doesn't exist there. The easiest way to solve this is through a filesystem called shiftfs that we've integrated in LXD. It's currently disabled by default because of some remaining restrictions on it, but say we enable it and then restart LXD. I'm gonna stop that demo container now, and remove that device we added, which was called test. And what we're gonna do is just add it back with shift set to true, and that should work. There we go. So with that, I had to enable shiftfs support in LXD, and then I removed that device I passed from the host and re-added it with an extra property saying shift equals true, which means that we've now got a kernel translation layer which, for that particular mount and that particular mount alone, lines up the IDs outside and inside the container. So now root outside the container appears as root inside the container. And if, in that container, I were to create a file foo in that share and, say, I don't know, chown it to some arbitrary UID and GID, there we go. Now if we look for the same thing on the host, we'll see that it just went through without the UID remapping that normally happens with the user namespace. So that's our best solution around that; it lets you still run fully unprivileged containers, including with isolated maps, and yet share data either with other containers or with the host, by using the shiftfs kernel filesystem to shift things for you.
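(Roughly, the commands behind that answer look like this. The shift property on disk devices is a real LXD option; the snap set key for enabling shiftfs is from memory and only applies to the snap-packaged LXD, so treat it as an assumption.)

```sh
# Bind-mount a host directory into the container; ownership shows up as
# nobody:nogroup because of the user namespace's UID/GID shift.
lxc config device add myDemo test disk source=/srv path=/mnt/srv

# Enable shiftfs in the snap-packaged LXD and restart the daemon
# (key name from memory; only relevant to the snap package).
sudo snap set lxd shiftfs.enable=true
sudo systemctl restart snap.lxd.daemon

# Re-add the device with shifting enabled: UIDs/GIDs now line up on both sides.
lxc stop myDemo
lxc config device remove myDemo test
lxc config device add myDemo test disk source=/srv path=/mnt/srv shift=true
lxc start myDemo
```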
Another question, let's see. So the other question was around using TOMOYO for security on Linux. My understanding of TOMOYO is pretty limited. I understand it's another mandatory access control system, so I think it's an LSM, a Linux security module. In our case we use AppArmor because we run on Ubuntu, but TOMOYO would be an alternative to that. There is work being done upstream to allow mixing and matching LSMs; that's called Linux security module stacking and namespacing. Right now it's not quite there yet, which means that if your host system, say in my case an Ubuntu server, uses AppArmor, I can't have the container use something else. We hope this will change within the next couple of years, hopefully, at which point we'll be able to have a host running AppArmor and a container protected by SELinux or TOMOYO or Smack, or the reverse: a host running SELinux and a container protected by AppArmor. That'd be really nice. Those implementations effectively mean that the rules would combine, which would then let you use a combination of the host profile, which could again be AppArmor or SELinux or TOMOYO or whatever, and your container itself having its own set of rules in a different LSM. That lets you get the best of all worlds by combining LSMs that might have slightly different feature coverage. So it's definitely something very exciting. We'd love to have that. It would also unblock use cases like running an Android container on an AppArmor-protected kernel and having that container use an SELinux policy for the apps, or, same thing, if you're running a Red Hat based distro on top of something like Ubuntu, that would let you run SELinux inside the container and AppArmor on the host, or the reverse, when running Ubuntu or openSUSE or something like that on top of a Red Hat based distro, that would let you run AppArmor on top of SELinux. So it's something we're really excited about, but it's not quite there yet. And for now, our expertise tends to be around AppArmor, if only because that's the default in Ubuntu, and so it's what we get to play with the most, given that's the kernel we interact with the most. Okay. I believe those were the only two questions we got. So we can wait another 30 seconds to a minute or so, just to see if there's any other last-minute question, otherwise we're gonna be wrapping up. I'm giving you another 20 seconds, 15, 10. Okay. I'm getting a few people saying thanks. Well, thank you all for attending and for watching. It was definitely a lot of fun. And if you've got any more questions, I'll be on Slack for a little while, so you can always ask there. Thank you. Bye.