Welcome everyone. I'm Stéphane Graber. I'm the project leader for LXC and LXD, and I work for Canonical as a technical lead there. Today I want to talk about what happens to all the physical devices that you usually deal with on physical machines, and in some cases with virtual machines, how those map into the container world, and what's possible with all that stuff.

First of all, a very quick intro. I'm really sorry if anyone was in Christian's talk next door just before, because this is going to be a bit repetitive. In this talk I'll mostly be focusing on what we call system containers. That is pretty different from the application container model, which is what Docker, rkt and those kinds of tools do. System containers tend to be better suited when talking about passing physical devices and dealing with physical disks and NICs and all that kind of stuff, because their main goal is to provide the same semantics around containers as you would usually get from a virtual machine or a physical system.

System containers are the oldest type of containers. They started with BSD jails. Then a similar approach was taken as a kernel patch called Linux-VServer. Solaris did something pretty similar to BSD jails, but with networking sorted out, which BSD didn't quite have, when they implemented Solaris Zones. Then we got OpenVZ, which was another big kernel patch set to get system containers going on Linux. After that, work happened in the upstream kernel community to get containers in general supported in the Linux kernel, which in userspace led to the LXC project, which then recently evolved into the more user-friendly LXD project that we've got right now.

System containers very much behave like a standalone system. They run clean distro images. They don't need any kind of specialized software that's container-aware, they don't need custom images, none of that. They boot a full system exactly like you would on a normal machine or in a virtual machine, the only difference being that they share the kernel with the host. So long as you're not trying to run Windows or some other operating system, you can run whatever distro you want inside a system container. And there's no need for any kind of virtualization, because you're dealing with containers after all. So this runs on any architecture, even architectures that don't support any kind of virtualization extension.

And, we'll go into more details on this later, when talking about physical devices that also means you don't need any special firmware or hardware platform to support physical device pass-through. If anyone has been dealing with that in the VM world, you're usually looking at whether your hardware is VFIO-capable, whether your motherboard supports it, whether you've got the right setup of PCI lanes, whether any lanes are shared, that kind of stuff, to try and get devices into a shape where you can move them into VMs. With containers, we use a single kernel, so we don't need any of those hardware and firmware details at all.

Now, quickly talking about use cases: in what cases might you actually need physical stuff? We can virtualize a lot of things, that's very nice, but occasionally you do need to get down to hardware. And one of the most common use cases these days is computation.
Whenever you want to deal with GPUs, whether you're doing deep-learning type things or you just want to get rich mining coins, whatever you like, you usually end up having to deal with GPUs. You can do very fancy virtualized GPUs: if you go with advanced server-type GPU models, they can let you slice a card into different virtual chunks. Those virtual chunks show up as virtual PCI devices, and you can bind those to virtual machines. But there's still some overhead in all of that: whether the slicing is dynamic or not, how you actually assign resources, and then the fact that you have to pass those into virtual machines, which can be a bit painful or carry some overhead. And you may not want to buy a multi-thousand-dollar card just to be able to do that, especially when all your virtual machines are going to be running Linux anyway. So GPUs are one of the pretty big use cases we've got for device pass-through in LXD, and in containers in general.

The other thing you usually run into is very fast networking. Those are the cases where you want to pass devices capable of 40 or 100 gigabit connectivity, or, if you're doing HPC-type workloads, you might care about features like RDMA to directly access memory on other hosts over the network. Those you could also handle with PCI pass-through, but either you need a very fancy card that lets you do virtualization and slicing, or you need one physical card per virtual machine on your system, which isn't always that great a fit, really.

And then there's a whole bunch of other devices that more people tend to deal with day to day. In your company, I don't know, you might be doing Android app development and want to be able to test your code against physical devices. Sure, you can get a bunch of Raspberry Pis and connect stuff to them, that's perfectly fine. But you could also get some beefy servers, attach all your devices to those, and just pass the physical USB devices to whatever container is running a given test against them. So USB pass-through is very useful for that kind of stuff. In similar environments, it means you can pass access to scientific equipment, be it fancy scales or whatever else you might have in the lab. Those tend to be pretty simple serial-over-USB type devices that are pretty darn trivial to pass to a container as well.

And then you've got some of the weirder cases. You've got things like HSMs, for anyone who needs to store private keys very securely: you might have a container that you want to do your key management in, one that doesn't have any kind of internet access but needs access to the HSM, so that it can have files dumped into it, do signing, and then push the files back out, or whatnot. And then even weirder cases, like phone cards. It's not uncommon for companies to still get a good old T1 or similar type of phone line, and that usually means either you've got a physical PBX or you've got something like Asterisk running with a fancy phone card. Maybe you're willing to waste a full machine just to run your PBX, but maybe you're not. And if you've got the infrastructure set up, you might as well run all that stuff inside a container rather than on a dedicated machine.

And storage is always a pretty important part of that. Very fast storage and virtual machines don't always align that well.
For anyone who's been dealing with cloud instances, I/O performance tends to be a bit of a bottleneck, and people usually turn to physical machines whenever they need to deal with really, really fast storage, be that NVMe or whatever else. Again, system containers let you have something that feels exactly like your own virtual machine or your own tiny cloud, but running on a single kernel, talking directly to the hardware rather than going through X layers of virtualization and indirection.

I already went through some of this to some extent, but here's how the whole device pass-through works for containers. Rather than what you would usually get with a virtual machine, where you pass the PCI device, the actual device, into it, and then you need to load your drivers in there that interact with the virtual device and eventually access the hardware, in a container the host kernel just loads whatever modules are needed to talk directly to the hardware, and then the device nodes that show up as a result are exposed to the containers that you want to have access to them. That means no virtualization anywhere along that path: direct hardware access if you want it. You just need your host kernel to have support for whatever device you just attached.

The tricky part, to some extent, is figuring out exactly which device nodes you need to move into the container. That's especially difficult with GPUs, when you've got a system with a mix of cards from multiple vendors, some of which come with drivers that give you extra device nodes, think NVIDIA's /dev/nvidia* type devices. That needs some extra logic to try and figure out exactly what is tied to a specific piece of hardware. But once you do, it's pretty simple to move those into the container.

The very interesting feature you get from all of that, especially when talking GPUs, is that you can share those devices with as many containers as you want, without needing any fancy hardware feature to do slicing and that kind of stuff. Linux will perfectly happily let you have 10, 15, however many different processes talk to the same /dev/nvidia* nodes for rendering, and there's nothing wrong with having multiple containers that all see the exact same device. Sure, they will share time on the card itself, but you can do it, and then it's effectively up to you how you dole that out, rather than up to whatever the hardware gives you.

The other thing that's pretty interesting is that you can attach and detach those devices on the fly, exactly when you want. You don't need to take the system, in our case the container, down to replace hardware: remove a card, add a card, that kind of stuff. Hot-plugging into a VM is usually pretty reliable, that part works pretty well; removing from a VM, not always so much. And if you're dealing with physical hardware, then you don't really want to start hot-plugging and hot-removing hardware from your running server.

The bulk of this talk is going to be a demo of all of those different features and the way they work in LXD, so I just want to quickly go through what LXD is before we do that. LXD is, as I mentioned, an evolution of LXC. That means it's a modern container manager. It's got a REST API, it's got nice scripting support, and it's fast by default.
It uses optimized storage for any kind of storage driver you might think of: we support ZFS, btrfs, LVM, Ceph and directory-based storage, and it knows how to do extremely efficient cloning and copying of containers, all that kind of stuff. It's secure by default: we use every single kernel security feature we can think of, all the LSMs, we use cgroups for restricting resource usage, all our containers are unprivileged by default, which means that root inside your container is not real root outside of it, and we use all the namespaces as well, to get you as safe a container as you can possibly think of. And it's designed to be very scalable: while the demo I'm going to give is on a few individual systems with a few containers, it works just as well if you've got hundreds of different systems each running hundreds to thousands of containers. And because it's a network daemon, it supports moving your containers from one host to another, using one of your hosts as an image server for the others, all those kinds of nice network interactions you can think of.

As far as what we support for device pass-through in LXD, we've got five device types right now, well, six technically.

We've got nic, which is network interfaces. We support virtual interfaces, which just means your normal Linux bridge plus a virtual device connected to it; that's the default mode. But we do support passing a physical device through: the device just disappears from the host and effectively shows up in the container.

For disks, we support either simply mounting a path from your host into the container, so you can say you want /home/yourusername to show up at /mnt in the container and LXD will do it for you, or you can point at a physical block device, whether a full disk or a partition, and just have LXD mount that for you at a given path in the container.

For GPUs, we support passing an existing GPU. We figure out which DRI render nodes we need to move and whether there's anything additional, like the CUDA-type devices, that we also need to expose to the container, and we do that for you as well.

For USB devices, anything that's supported by libusb, so anything that looks at /dev/bus/usb and accesses those device nodes, is supported as well. You can effectively configure a container saying that anything that's plugged in with that vendor ID and product ID gets passed to the container, or even just anything from that vendor ID. So if a given piece of hardware shows up as, say, three different devices all from the same vendor, you can have them all automatically passed to the container. And we support hot-plugging for USB as well: as soon as LXD notices that a matching device has been plugged into the host, it passes it to the container immediately. There's no need to restart or anything else.

And for low-level use cases, we also support passing any UNIX character device or UNIX block device that you might think of. One of the use cases there is running virtual machine hosts inside LXD containers: you install libvirt inside an LXD container and pass /dev/kvm from the host into the container.
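To make those device types concrete, here's roughly what adding one of each looks like with the LXD command-line client. This is a sketch: the container name c1 and the device names are made up for illustration.

    $ lxc config device add c1 eth1 nic nictype=bridged parent=lxdbr0      # virtual NIC on a bridge
    $ lxc config device add c1 home disk source=/home/user path=/mnt/home  # bind-mount a host path
    $ lxc config device add c1 gpu0 gpu id=0                               # pass a GPU by DRM card id
    $ lxc config device add c1 phone usb vendorid=0fce                     # pass matching USB devices
    $ lxc config device add c1 kvm unix-char path=/dev/kvm                 # pass a character device
    $ lxc config device add c1 sde unix-block path=/dev/sde                # pass a block device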
So that was the boring speech. Now let's go look at how that thing actually works. I'm hoping it's going to be big enough; the problem is that if the font is any bigger than that, then tables and stuff don't render anymore, so I'm hoping this will do the trick.

The first thing we'll look at is just installing LXD on a test system real quick. So, just downloading the latest version of LXD and installing it, using the LXD snap package in this case, but we also have native packages and whatnot. And it's a bit slower than it was earlier, too. Whenever you prepare your demo, you're downloading at like 20 megabytes a second and everything is great, and when you actually do it, the Internet goes to being... Seriously? Are you kidding me? If this were the Internet being down, that would explain it, I guess. That would also be very sad, because I can show some stuff locally, but I don't have a GPU I can do anything useful with on this machine, so that would be pretty unfortunate. Fine, I'm going to bounce the Wi-Fi and see if I get lucky; if not, I'm just going to demo whatever I can locally, but that would be unfortunate. I just reconnected and it seems to be working way better. Okay, now I'm reconnecting the VPN. VPN is back on. Come on, SSH, you're reconnected, you should do something about it. Do I seriously have to reconnect? Oh, yay. Look at that.

All right. So LXD is installed. That was a bit harder than I thought it would be, but it is. I'm just going to do the initial config for LXD, which usually involves pressing enter quite a few times. It's effectively going to create a new storage pool for you, using ZFS by default in this case, and it creates a default bridge with an unused subnet, some netfilter rules, IPv4 and IPv6. There we go, it's created.

Now let's create a container. I'm just going to use, say, a CentOS 7 image, because why not? That's how a network is supposed to work. There we go. Okay. So if we list right now, the container only has an IPv6 address; the IPv4 one should turn up soon. There we go. That container was just created with default LXD settings, which means it's connected to a local bridge; it doesn't have any physical devices on it, that kind of stuff. We can list its entire config and see what's going on there. You might see, down in devices, there's eth0; we say it's connected to lxdbr0, type bridged. Okay, so that's the default config.

Now let's change that a bit. I do have an awkwardly named enp11s0, I think, yeah, a physical device on that specific machine that I'm not using right now. So I'm going to tell LXD to replace the eth0 device I've got in that container with that device instead. Now I'm just going to bring it down and bring it back up, at which point we'll see how long DHCP decides to take today; it took a while earlier. But the idea is that the container hasn't been stopped or anything: eth0 has effectively been removed, and a new one has been moved in directly from the host, with the exact same name. I'm now inside the container, just bouncing networking without restarting the container, because I thought it would be faster. Maybe not. We shall see. Yeah, there you go. So it did something, and it does have an IP. If you remember, the IP before was on a 10.something subnet, which is what the default bridge uses. Now it's connected directly to the physical network, which is 172.17.something in this case.
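For reference, that physical NIC swap boils down to something like this. A sketch: enp11s0 is the interface on the demo machine, the container name c1 is made up, and adding a container device named eth0 overrides the one coming from the default profile.

    $ lxc config show --expanded c1                      # see the profile-provided eth0 device
    $ lxc config device add c1 eth0 nic nictype=physical parent=enp11s0 name=eth0
    $ lxc exec c1 -- systemctl restart network           # bounce networking inside (CentOS 7 style)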
Now, this next one is not really related to hardware pass-through, it's just for kicks, because it's always fun. Whoops, what's going on here? Copy and paste is not working today. There we go. Okay. So I'm removing the eth0 device from earlier, and I'm going to re-add it, but connected to the original bridge, and then restart that container. Okay, yeah, there we go. It's back connected to the bridge, and I'm going to enter the container. There we go. And I'm going to download a file from the local network. We can see it's downloading at about 110 megabits per second. Actually, no, that's megabytes, so it's downloading at gigabit rate from the network. What I can do now is dynamically add a setting on that device, limits.ingress, 10 megabits, and we'll see the speed going down and down and down and down. Oh, there we go: 10 megabits. Now we can move it to, I don't know, 100, and we see, well, the file might actually finish downloading first, unfortunately, but it's going back up. Yeah, it almost reached 100 megabits again just before finishing. So that's it for the network stuff. We can pass any physical NIC you've got on your host: just pass it in, and it behaves exactly like a normal network device as far as the container is concerned.

Now, let's go take a... oh, I'm in the container, and it's a bit laggy. There. Creating a new container, this time Gentoo, because why not, to look at passing through disk devices and what that looks like. I swear this node was faster when I tried the demo. So, Gentoo is surprisingly big. All right, at least it's not recompiling itself when you launch it, there's that. It takes a little bit of time once it reaches 100%, because since I'm using ZFS, it's unpacking the image and creating a ZFS dataset, which then allows creating containers very cheaply from that point on. The initial image import always takes a bit longer because of that. And apparently Gentoo is rather big. Yay. Okay, so the Gentoo container has finally started.

The first thing we'll do is pretty simple: I want to expose /home from the host into that demo-disk container I created, at the mount point /mnt/a. So I'm just going to do that. Then I can enter the container and look at what we've got in /mnt: we've got a directory, and inside it we see my home directory. That's the pretty simple case of just passing through any path you want from your host.

Next, I'm going to format, with ext4, a spare SSD I've got on that system, and then tell LXD that I want to expose that /dev/sd* device into the container under the mount point b, also in /mnt. So if we look at /mnt, we've got b; it's just got lost+found, because it just got formatted. And we can see at the bottom that the /dev/sd* device is mounted on /mnt/b.

And lastly for storage, let's try something even more fun. We recently added support for Ceph in LXD, so I'm creating a new Ceph storage pool in LXD itself. Now if I list my storage pools, we can see default is ZFS, that's what was created at install time, but then we've got the Ceph one that's been added. Now I'm going to allocate a new volume on Ceph called data. It always takes a bit of time because the Ceph cluster needs to replicate and whatnot. There we go. And now we can attach that particular Ceph data volume to the demo-disk container, with a device called c, mounted at /mnt/c. Go in the container, and if we look at /mnt, we've got c in there, lost+found again, and if we look at df, we see that /dev/rbd0 is mounted on /mnt/c. So that's a Ceph volume attached to that container.
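The disk and Ceph steps above map to commands roughly like these (a sketch; the /dev/sdX path is a stand-in for whatever the spare SSD shows up as):

    $ lxc config device add demo-disk a disk source=/home path=/mnt/a      # bind-mount a host path
    $ mkfs.ext4 /dev/sdX                                                   # format the spare SSD
    $ lxc config device add demo-disk b disk source=/dev/sdX path=/mnt/b   # LXD mounts the block device
    $ lxc storage create ceph ceph                                         # Ceph-backed storage pool
    $ lxc storage volume create ceph data                                  # allocate a volume on it
    $ lxc storage volume attach ceph data demo-disk c /mnt/c               # attach it at /mnt/c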
That can be very useful for people who need to store big databases or whatnot and need the replication and clustering of Ceph: they can very easily attach Ceph volumes to their containers.

This time, let's try yet another distro; we're going to run out of choices pretty soon. The next thing I want to show is passing through UNIX devices. That's what LXD does for you under the hood with all of the other, higher-level abstractions, but this lets you attach anything else you might come up with, any weird device you've got that we've never heard of. If it's got a UNIX character device or a UNIX block device, which is usually a pretty safe bet, you can pass it through with this.

The first example I'll go with is KVM. So I'm going to attach to my demo-unix container a new device called kvm, a UNIX character device at /dev/kvm. Now I'm inside the container, and if I look at /dev, I don't have much in there, but I sure do have a kvm device. I don't really have the time to install it now, but I could install QEMU there and then run virtual machines from inside that container. And the container itself is completely unprivileged; there's nothing running as real root in there. It just has access to the /dev/kvm device.

The other thing we can do is UNIX block devices. In this case, I'm passing /dev/sde directly to the container. Same thing: if I go look in /dev, we now have /dev/sde, and you could format it or do whatever you want with it, except mount it, which is a bit awkward, but unprivileged containers are not allowed to mount block devices except for a very limited set of filesystems. If you've got something that uses FUSE, it can interact with it. For ext4 on Ubuntu, you've got a kernel module option that lets you bypass that particular kernel check and mount ext4 if you want to, but all the other filesystems don't allow it. It's a safety feature: allowing that would effectively let an unprivileged user mount a block device and hit the kernel's superblock parsing code, so they could handcraft a nasty block device that would effectively execute kernel code. That's not allowed by default. You could make the container privileged, at which point you would be allowed to mount.

Okay, that's it for this particular machine. The next thing I want to show is GPUs. We've got another system that's got two NVIDIA GPUs, as we can see there. Unfortunately they're two of the same model, so it's not that easy to tell them apart, but it gives us two NVIDIA GPUs. We also have one container running on there, called cuda. Now, if I go in that container and run nvidia-smi, we'll see it fails, because the container itself doesn't have any GPU yet. But I can tell LXD to pass it a GPU; in this case the first one, so I said id 0. And that is not working. That's a bit special. Fine, I'm just assuming it's the index that's confusing it, let's just try that. Yep, there we go. Oh, I think I know why. Let me just recheck whether I'm right. I believe that system used to have just two GPUs, but I think now it's got three, and zero, I'm guessing, is an Intel GPU. So assuming that's the case, that means that if I remove that GPU... oops. Okay, so I removed the GPU, and I'm going to re-add it, but this time with index one, which should be an NVIDIA GPU, which means it should show up there. There we go, that's much better. And I should be able to add another one; I need to give it another name, so gpu1, with id 2. And there we go.
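The unix-device and GPU steps above correspond to calls roughly like these (a sketch; device names like kvm, sde, gpu0 and gpu1 are just labels I picked):

    $ lxc config device add demo-unix kvm unix-char path=/dev/kvm    # /dev/kvm for nested VMs
    $ lxc config device add demo-unix sde unix-block path=/dev/sde   # raw block device
    $ lxc config device add cuda gpu0 gpu id=1                       # first NVIDIA card
    $ lxc config device add cuda gpu1 gpu id=2                       # second NVIDIA card
    $ lxc config device remove cuda gpu0                             # detach on the fly, no restart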
Okay. And that means that in there, we've got card1 and card2. And if I now pass id 0, it will not show up in nvidia-smi, because it's an Intel GPU, I think, but it does show up as a DRI node, so you could do OpenCL or something against it if you wanted to. That's the same way an AMD GPU would show up: they don't have a specific device node for compute, but they do show up as render nodes under DRI.

Now, for actual computation stuff, we can run one of the simple benchmarks from NVIDIA's CUDA samples, which does a quick bandwidth test against the GPU. And just to show what I mentioned earlier, I can copy that container, creating a second one from it, and then start that second container. It takes a little bit of time because it needs to rewrite a bunch of stuff on the filesystem real quick, and that system is a bit slow. But the idea is that the cuda container is still running, it still has those GPUs attached, and I copied it, which means that cuda1 now has the exact same config and can see the exact same GPUs as the first container. So if you have some reason to run different workloads and don't really care about dedicated GPU time, you can just share the same GPUs with as many containers as you want.

And lastly in the demo, this time I'm actually on my local laptop, so I'm sure the network is not going to be a problem this time. I've got my cell phone here, and I've got a container running called android-dev. That container has the Android tools installed. So if I run adb devices, it's going to scan what it's got on the USB bus and tell me I've got nothing, which is kind of expected, since the phone is not connected anyway. Now, before I connect it, I need to enable USB debugging on the phone itself so that it can work with adb. So, turn on development mode and turn on USB debugging. There we go. Now, if I look on my laptop itself, I should be able to see a Sony device. There we go, it's showing up on USB. It's still not showing up in the container, though; that's because I've not passed it to the container. So I can tell LXD that I want to pass to that android-dev container a new device called phone, a USB device with the vendor ID and product ID I showed earlier. Now the phone device has been passed. On my phone it now shows the prompt for the security key, which means the container has found something, but it shows as offline because it's not trusted on the phone yet. Now it is. Which means I should be able to... it doesn't trust it hard enough, because it should prompt. Let's try this. Oh, there we go. Yeah, this is convincing. So now I can run adb shell and I get a shell on the phone from inside my container. What happened under the hood there: if we look at /dev/bus/usb, you can see the device node that was created under there, which is my phone. Now if I unplug the phone, LXD will detect that and remove it immediately. And if a device is connected that matches the configured pattern, it just gets created immediately whenever it's plugged in.

And that's it for just about all the devices we support passing through right now. I've got five minutes left, so the timing seems to have worked out pretty well.
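The phone pass-through is just a USB device entry matching on vendor and product IDs, and hotplug then happens automatically. A rough sketch (the product ID here is a placeholder; you'd take the real pair from lsusb, 0fce being Sony's vendor ID):

    $ lsusb                                                  # find the vendor:product pair for the phone
    $ lxc config device add android-dev phone usb vendorid=0fce productid=xxxx
    $ lxc exec android-dev -- adb devices                    # the phone now shows up inside the container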
Just to recap what I showed until now: we can really pass just about any device into the container. The requirement is that your host kernel must support it. So if it's some fancy piece of hardware that requires its own kernel module that you need to hand-compile and load, it only works if you can get whoever runs your host to do that for you. So long as they do, you can pass it to your container; you just need to figure out what device nodes are needed. If it's a GPU or a USB device, we've got nice abstractions that do all of that for you. If it's something fancier, then you just need to figure out what the actual path in /dev is, and you can pass that to the container, and your container can then access it without any of the usual overhead.

All of that stuff doesn't require any kind of fancy hardware. You don't need VFIO, you don't need the right version of BIOS and motherboard and all of that stuff that usually comes with trying to do this with virtual machines, because we don't virtualize PCI devices; we effectively just pass the resulting device nodes. None of that stuff applies. You can use consumer-grade hardware if you want to, and it just works. And we can share those devices with as many containers as you want. It's pretty simple: the device nodes are exposed to more than one container, and so long as the kernel driver supports multiple accesses, it's going to work fine. If the kernel driver doesn't support it, the first container attaches to it and the others get an error back from the kernel saying it's already in use, and they can try again later.

So that's it for what I've got here. My goal was really to show you that system containers are, in a lot of cases, a very good replacement for virtual machines. In my mind it doesn't really make sense to run Linux on Linux, like running a Linux virtual machine on top of Linux. In most cases you don't really need to do that: you could use a container and not get any of the overhead and complexity that comes with a VM. There are some cases where you do need one, because of special kernel modules, because you need some very specific kernel version or something like that, or because you need to run another operating system, or you've got some very specific security constraint, maybe. But in the bulk of cases you can use containers just as you would virtual machines, you don't get the overhead, and you get crazy density. It's also usually much simpler and faster.

So that was really it for my talk. I do have a bunch of LXD stickers if any of you like those. And that's it. Does anyone have any questions for me?

In the container you can call them whatever you want, really; it doesn't matter. For the GPU case, when you say that you want the first card, we align it at the DRI level, because there are occasionally some weirdnesses. If we renumbered things, if you said pass us the second card and we were to show it as card0, that would confuse some software that goes into /sys to get extra information; that might be problematic. So we usually pass it as-is as far as naming goes, and we've not really run into any problems there.

I forgot to mention physical NICs: that's the one case where sharing is tricky, because the way it works at the kernel level is that a single network interface can only be in a single network namespace. So if you move it to a container, it actually disappears from the host; that's the one case where that does happen. If you want to share it, the only way you can really do it is by creating a bridge on the host, bridging that NIC into it, and then attaching containers to the bridge.
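A rough sketch of that bridged setup, plus the macvlan alternative that comes up in the next answer (the bridge and interface names are made up):

    # Option 1: bridge the physical NIC on the host, then attach containers to the bridge
    $ lxc config device add c1 eth0 nic nictype=bridged parent=br0
    # Option 2: macvlan on top of the physical interface, avoiding promiscuous mode
    $ lxc config device add c2 eth0 nic nictype=macvlan parent=enp11s0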
It is the one case where you can't just share a device node, effectively, because NICs don't have device nodes in /dev; they're their own kind of structure.

Yeah, so what you can always do, if you do have the fancy hardware, is configure that hardware to show up as a bunch of separate network devices on the host. Say you have one physical network card that then shows up as, like, 10 interfaces on the host; you can then dispatch those into your containers. But if you've got a card that cannot do that split by itself, then you should bridge it and bridge the containers into it. Or you can use macvlan. We also support macvlan directly at the LXD layer, which might be faster than straight bridging, given that most of the new cards are capable, at the hardware level, of listening for more than one MAC address. That's usually the big bottleneck with bridging: if you bridge, it usually means you go into promiscuous mode, which means a lot of extra traffic actually hits the kernel. If you use macvlan, you don't need to enter promiscuous mode until you actually exceed the number of MAC addresses that a given NIC can listen for, and that means the card only sends up the traffic the kernel needs, and not everything else.

It does the exact same thing. As far as Linux is concerned, you still have processes; there's some fancy namespacing around them, so instead of being in the host network namespace you're in the container's network namespace, but it's still a single process actually listening, and it still goes through exactly the same code path.

It depends what you're doing, but you get the exact same thing as you would with a normal network device on Linux. It shows up as an Ethernet device, the network stack is still in the kernel, and in your container you bind a given address and you get your connections on it. That part is no different. The packet, just as it usually would, hits the network card, arrives in the kernel driver for that card, and goes into the normal kernel networking stack. At that point it goes through the normal IP stack: figure out what IP it's targeted at, whether that port is bound by something. The only difference is that all of the netfilter stuff is namespaced, so all of the firewalling is per container. The kernel knows that since that NIC is in that particular container, the packet needs to go through that container's set of netfilter rules rather than the host's, and then it checks whether something is bound to that given IP and port and hits it at the process level, exactly as it normally would.

On shared GPUs: the main benefit is for cases where, say, you're a development lab or something, and you've got one machine with a couple of big GPUs and 20-something people who may want to run workloads against it. If they've got raw access to the machine, they might just mess it up by installing crap all over the place. With containers, it's their own thing, so if they break their userspace, well, they just broke themselves; they don't break anyone else. And yes, it would be nicer for them each to have a dedicated GPU, because then they would have guaranteed timing and performance, but that's very expensive.
So for a development environment, it tends to be nicer for them to just be able to hit some shared GPUs and run their tests against those, and if you then go to production and want something that's always got the exact same performance, you can pass a single GPU per container and get guaranteed performance.

No, you can pass it to multiple containers; they all see it. It's the exact same thing as running multiple processes that talk to the GPU on a normal laptop, effectively. Just as you can have more than one piece of software doing rendering at any given time, you can have more than one container doing rendering at any given time. It's just that you share the GPU's memory and time.

No, we don't. In the vast majority of cases... take GPUs: you could do it for a long time by knowing exactly what device nodes you needed to pass, but that was a bit awkward, so we added this abstraction on top to make it nicer, and that's the kind of thing we might end up doing for other devices. But usually there's really nothing to do: you just plug in some device, it shows up as some device node in /dev, and you can pass that into the container. The only case where you might want kernel development is if a device would in theory be able to handle multiple connections to it, but the kernel doesn't support that; then you might want to try and get that resolved so that you can have multiple containers talking to it at the same time. But again, that wouldn't be any different from having multiple processes talking to it on the host directly.

You should be able to pass the TPM to multiple containers if you want to; whether it's a good idea or not might depend. But yes, we're out of time, so if anyone has more questions we can talk outside. There are stickers there if you want them. Thank you very much.