Surprisingly enough, that's me and that's what I'm here to talk about. So hopefully you're here to hear about that; if not, feel free to go learn something else. My name is Drew, I'm with Toradex, and I'm here to talk about containers, specifically containers built using OpenEmbedded: how we do that and, more importantly, why in the world you would want to. I've got a lot of material here, probably more than can reasonably be covered in the time we have, because this is a big topic and there's a wide range of expectations and experience levels when it comes to containers, especially in the embedded world. So I've got some overview material about containers, why containers show up in embedded, and how we use them at Toradex, and then I'll talk about how to do this specifically with OpenEmbedded, with code snippets and that kind of thing. I don't have a completely functional demo that I can hand you in a Git repo, but copying and pasting from the code samples should get you up and running as long as you have a usable OpenEmbedded configuration. And finally, I want to talk a little bit about meta-virtualization at the end, which is where a lot of the new technology in the OpenEmbedded/Yocto space is landing when it comes to all things virtualization, not just containers.

Quick disclaimer: I've only ever used Docker for anything. I know there are other options out there, and I know very little about them; I'm happy to discuss them, and if anybody has specific feedback, I'd love to hear it. Most importantly, I am not putting together a production-ready, secure system here. So don't take what I do, ship it, and then call me and yell at me later, because I haven't given security even a second thought in these examples.

So, obligatory marketing slide: we make hardware, we make software. Torizon is the background here and the impetus for this talk. We have a container-based distro that we provide for our customers, and we get these questions all the time: how do I make my container smaller? How do I do proper license checking and that kind of thing? If you're used to the Docker world and you pull things from Docker Hub, there are certainly some shady images out there, along with some good ones. So I got to thinking about what you can do with OpenEmbedded to actually build the containers directly, and that's where the idea for this talk came from.

All right, for a moment, let's pretend we're on a Zoom call. I just want to get an idea of who knows what about containers here; if everybody knows all the basics, we can blow right past them. I've got the obligatory picture of a shipping container, which is mandated for any talk about Docker or anything like that, so make sure you add that. First question: who has used containers at all? Or maybe more importantly, who has never even launched a container? Okay, I figured that was the right answer. Have you ever used them on an embedded device? Okay, a pretty good number still. Have you ever tried a commercial offering similar to Torizon? Balena's got one; there are probably half a dozen of them out there. Okay, the numbers are smaller again. What about building containers directly through OpenEmbedded? Okay. And finally, are any of you meta-virtualization contributors or maintainers? Okay, so why are you here?
Tim, you're not going to learn anything from me, I can tell you that right now. Okay, so everybody's used containers. Let's talk for a few minutes about what they are and what they are not; I think that's always an important place to start. First thing: they're not virtual machines. It took me a long time when I started playing with Docker to really divorce my mind from the idea that they were lightweight virtual machines. Most people with a background like mine come in thinking, okay, this is a lightweight virtual machine, and it kind of behaves like one, so the mental model gives you an idea of what it is, but it isn't accurate. They're not universal applications either. Yes, you can run Docker on Windows and you can run Docker on Mac, but typically under the covers there's a Linux VM of some kind running those applications for you. You don't have to know about that, but it's the case. And they're not magic; this is standard Linux stuff.

What they are is a convenient way to package software with all the dependencies your application needs, and that gives you a very convenient way to ensure a consistent runtime for your software. If you need a specific version of, I don't know, an MQTT library, you put that in your container, and what's in the host OS is irrelevant; you're not touching the host OS. If you want to run another container right next to yours that needs a different, maybe incompatible, version of that MQTT library, so be it. That is ultimately what containers are intended for.

So, the difference between virtual machines and containers, as you see here: with containers, you typically have multiple processes running in the same system, and each of these would be considered a container, but the important thing is that they share the kernel. With a virtual machine, the kernel is actually part of the virtual machine and you're virtualized at a lower level. That's an important distinction to make. A lot of times you'll run containers on top of a virtual machine, and sometimes you might even have a much more Inception-like view of the world where you're running containers in a container on top of a virtual machine, and it gets complicated very quickly.

I wanted to bring up this slide; I usually include it. Normally I would show it as a direct link to Docker's description. It's a very long page, and it's actually a pretty good description of what containers are. For whatever reason, the PowerPoint plugin that pulls in live websites wasn't working for me, so I grabbed a screenshot; if you really want to look at it, the link is there.

So how do we make containers? What are the pieces used to build one? For the most part, they're just standard Linux features. Namespaces and cgroups are ways to limit what your individual processes can see or do in a system, and then there are networking components, software bridges and things like that. There are not a whole lot of building blocks in the container world that are not standard kernel mechanisms. What's important about that is that when you run containers on top of a Linux system, you are running standard Linux processes. They are scheduled by the Linux kernel scheduler, just like any other process; they just might have some limitations, managed by the kernel, in the task control block.
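To make that concrete, here is a minimal sketch of my own, not from the slides, driving those same kernel features by hand; the rootfs path is a placeholder:

```sh
# Namespaces by hand: a new PID/mount/UTS/network namespace around a shell.
# Requires root and a populated root filesystem at the placeholder path.
sudo unshare --pid --mount --uts --net --fork \
    chroot /path/to/rootfs /bin/sh

# cgroup-backed resource limits, the same mechanism Docker exposes as flags:
docker run --rm --memory=64m --cpus=0.5 busybox sh -c 'echo hello from a constrained container'
```

The point is just that a container runtime orchestrates exactly these mechanisms; there is no additional virtualization layer underneath.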
So this is one of the very first questions we always get from more deeply embedded folks coming to the container world: no, there's not a separate scheduler in there adding overhead to your runtime. It's the same as if the containers were running natively on the kernel. Now, that said, you can still implement containers poorly and cause performance issues that way, but you can do that running natively as well. There's nothing inherent in these container setups that adds runtime overhead. There is some storage overhead, and typically a little initialization overhead while everything gets set up and running, but at steady state you're usually in pretty good shape.

Just to mention a few implementations: Docker, which I've obviously mentioned already. LXC is, I think, an older technology; it's a container technology that's been in Linux for a long time. And, I don't know, I must have mistyped something on the slide, because I don't know what 'run' is. systemd-nspawn is actually part of systemd: you can launch containers directly from systemd if you're running it. I've seen posts where people use systemd-nspawn to basically run a container of their entire operating system, and your head hurts if you think too hard about it, but I know the systemd developers use that a lot for their own development. They don't mess up their main system, but they're still able to install new things and try new things without jumping through too many hoops. LXC, I know OpenEmbedded has good support for. And that 'run' was supposed to be runc, another container runtime; again, one I know nothing about, but I know it's available in meta-virtualization.

So what are the high-level benefits? These are the kinds of questions we get from our customers, typically from the management types: why would I care about this? The biggest thing is no dependency hell. We all know dependency hell is a thing; there's a link here that's kind of fun to look through. As I mentioned, if you need incompatible versions of things, you can have them, all managed directly by the container runtime. From our perspective, the biggest thing we get out of containers is convenient packaging and delivery of software to the system. We use containers to implement over-the-air updates for the application stack. It's all built into Docker: you can say, create me an image that has all these things in it. We didn't have to reinvent any of that, so we were able to take advantage of that convenient packaging and delivery. It's standards-based, and I know the slide didn't work very well rotated 90 degrees, but we do what we can; there are several standards we'll look at here that are well implemented and well supported. Another thing, especially in the embedded space, is all the modern DevOps workflows. Embedded tends to lag behind the more enterprise, web-type world in the ability to do a git push and have a whole chain of things happen to build, deploy, and test your code automatically. Containerized workloads lend themselves to that very well.
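As a rough illustration of the systemd-nspawn case, here's a sketch of mine assuming a Debian-style rootfs and a systemd host:

```sh
# Populate a root filesystem, then run it as a container.
sudo debootstrap stable /var/lib/machines/demo
sudo systemd-nspawn -D /var/lib/machines/demo      # chroot-like interactive shell
sudo systemd-nspawn -b -D /var/lib/machines/demo   # -b boots the tree with its own init
```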
Those DevOps workflows are one of the big selling points we try to provide with our system: our users can go and quickly deploy things, get their nightly builds automatically deployed to the hardware, so that when their developers start in the morning they've got the very latest and don't have to spend a whole lot of time jumping through hoops. And there's lots of readily available software. I just picked a few random logos here from things I found on Docker Hub. It's very easy to go out to Docker Hub and say, launch me a MongoDB instance, and you don't have to know anything about MongoDB other than copying and pasting from a blog post somewhere, and now you're able to run whatever software you want. I've got some blog posts about using MotionEye, which is webcam monitoring software for surveillance cameras and things like that; it's very easy to set up. I probably have a dozen different containerized workloads on my home network doing various things, Home Assistant and Pi-hole DNS and so on. For those kinds of things, it's very convenient.

So that's the high-level stuff. Now, I kind of like this picture. It illustrates the whole idea of microservices and the ability to split things up into multiple blocks. That's very well supported, and it's very much the common use case for containers. We talked about the dependencies; from the technical perspective, that's probably the biggest one. The ability to define these multi-service architectures allows each individual service to potentially be simpler than a whole monolithic system where you put everything in a single block. Another very useful thing for a lot of our customers, especially those who are maybe not familiar with Yocto/OpenEmbedded and are used to coming from a desktop world: you can basically just say docker run ubuntu and you're at a shell prompt in an almost complete Ubuntu system, except you're sharing the kernel with whatever your base operating system is. So you can do apt installs and all kinds of things, which our customers really like because they can get started quickly. They know how to do stuff on their desktop, and they can do very similar things in their embedded workloads.

A lot of people get hung up on the isolation provided by containers. With the cgroups and namespaces we talked about, there is some isolation involved. It does provide a level of protection beyond just running things in the base operating system. It's not as secure as, say, a full hypervisor setup, but it is better than running things natively, where everything can see the entire root filesystem.

Typical objections: the runtime performance hit, which I already discussed; it's pretty much the same as native. What about increased storage and RAM? There is typically some extra storage needed, but if you're using Docker in particular, its layering mechanism largely mitigates that: if I have two containers based on the same components, they share those components and I don't have to download everything twice. You've got to be a little careful how you structure these things, but they're not terribly complicated. Another technical objection we get a lot is, my people don't know this. It's new, but it's quicker to learn this than to become a Yocto/OpenEmbedded expert, so we think we've got a good answer for that.
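That desktop-like workflow really is this short; a quick sketch, with the package choice arbitrary:

```sh
# On the device: pull and enter an almost-complete Ubuntu userspace.
docker run -it --rm ubuntu:20.04 bash

# Inside the container, the familiar desktop workflow applies;
# none of this touches the host root filesystem.
apt-get update && apt-get install -y python3
```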
And some people object that the overall design complexity, when you're dealing with multiple blocks in a microservices architecture, might be a little higher, even if each individual unit is a little simpler. So those are some of the objections we get. Specifically in embedded, the common concerns are startup time and runtime performance; they're not unique to embedded, but we hear them a lot. How do I define the multiple services, and what languages do I use? You need to learn Docker Compose, and you need to learn YAML, to be able to specify that multi-service architecture. But the biggest concern we get is: how do I get access to hardware? I'm inside a container, I'm kind of virtualized, I don't have direct access. Fortunately, in a Linux system, typically everything is a file, so as long as you map the device node into your container, you can generally access it. You might have to grant some extra permissions and that kind of thing; the container systems have ways to specify capabilities and so on. There are some more complicated cases, like getting access to the display. We actually provide some reference containers that show you how to run Wayland and actually light up a display, all from within containers. We have yet to find anything our customers wanted to do inside a container that they couldn't do with the capabilities provided by the container runtime we're using.

So what are the benefits for the developers? It's that familiar environment: apt install, whatever they're used to. I mentioned over-the-air updates; completely unattended updates of the application stack in the field are well supported by container workloads. It gives you an increased pool of potential application developers: if you're out interviewing strictly for people with Yocto experience, you've got a much smaller pool than if you say, I want somebody who's used Docker. From a management perspective, that's a good thing. And in some ways it makes it easier to do your development on your desktop, get it working there, and then port it over to your embedded device for testing later on.

I also wanted to mention a couple of things I came across in my research. I haven't spent a whole lot of time with them, but they're useful container resources and tools. The Open Container Initiative is primarily a standards effort to define standard formats for containers. There is support in meta-virtualization for building OCI-compliant containers and things like that. I couldn't get it to work; there's some builder issue I was unable to troubleshoot in time, but it's out there, and it's certainly helping to drive compatibility across these container runtimes. Podman is a daemonless container runtime whose command line is basically compatible with Docker's; a lot of people just set alias docker=podman and don't even have to know they're running Podman instead of Docker. Buildah is a tool for building container images, kind of similar to what we're doing with OpenEmbedded, so I didn't spend a lot of time on it. And Skopeo is a tool for copying images around, converting them from one format to another, and things like that.

And with my thanks: most of the material I pulled in here came from one of these three talks, from Scott, Robert, or Bruce.
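On the hardware-access question from a moment ago, the device-node mapping usually amounts to a single flag; a hedged sketch, where the image name and device are placeholders:

```sh
# Map one device node into the container:
docker run -it --rm --device /dev/ttyUSB0 my-app-image

# Coarser fallback when a single node isn't enough (broad access; avoid in production):
docker run -it --rm --privileged -v /dev:/dev my-app-image
```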
Those three talks go into a lot more detail about the mechanics and the specifics. I wanted to cover the higher-level, I'm-just-getting-started material, so I encourage you to spend time with them if you want to learn more, especially Bruce's talk. He gives a lot of motivation and goals for the future, which I found a very helpful overview for figuring out what's going on in the meta-virtualization world.

So how do we create containers? This is a standard Dockerfile; this is the Docker way. You create a Dockerfile and you tell it what you want to do. We typically start with a FROM line, which says I'm going to inherit from some already existing container. In this case, I'm pulling one of the CROPS Yocto containers. It's based on Ubuntu 20.04, so I've got a full Ubuntu setup, and then within this Dockerfile I'm modifying the image and making my changes. I actually keep a local Dockerfile where I add things that I like in my container: you can see I'm running apt commands there, installing things, pulling down Git LFS. At the end of running docker build with this Dockerfile, I have a customized version of the image. It's very flexible, very easy to use, and pretty self-explanatory; the syntax for these files is not terribly complicated, and you can do a lot with it.

But that FROM line is the scary part. Now, I'm pulling from a reasonably trusted source here, and they're pulling from a reasonably trusted source. But as we've all heard with the supply-chain issues, things like PyPI getting bad code injected into it, this kind of thing can be scary: you're pulling essentially prebuilt binaries from somewhere on the internet and running your stuff on top of them. For my local containers at home, probably not a big deal. If I'm shipping a million units and I have to support them for 15 years, that can be a little bit dodgy. So that's the big thing that scares me about them.

Now, the majority of the OpenEmbedded functionality I used for this is actually in the core OpenEmbedded metadata; it's not in meta-virtualization. There's this image-container.bbclass, and what's on the slide is the entire class; it's not a terribly complicated thing. It sets up a few things: it clears out some of the kernel-related pieces that you're not going to need in a container, it declares that you need a tar.bz2 file, which is essentially just a tarball of your container's root filesystem, and it adds some extra error checking. You want to make sure you're using the linux-dummy kernel as your preferred provider so that you're not building a full kernel. And with this image-container class, you're able to build any container you want, based on just about anything your Yocto build can produce.

So this is the first thing I tried to set up. In my local.conf, I simply specify that I'm building a container filesystem type, and I set the preferred provider for the kernel so that I'm not dealing with a full kernel. Then I have a custom image, a pretty basic one: it inherits from image, and all I'm installing is busybox. Because OpenEmbedded tracks dependencies, the nice thing is I don't have to specify them manually: when I do my BitBake build here, it's going to pull in busybox and anything busybox needs. In this case, I'm doing a build for qemuarm64.
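Reconstructed from the talk rather than copied from the slides, the whole setup is roughly this; the recipe name and file layout are mine:

```
# conf/local.conf
MACHINE = "qemuarm64"
IMAGE_FSTYPES = "container"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"

# container-minimal.bb, a hypothetical minimal image recipe
SUMMARY = "Minimal busybox container image"
LICENSE = "MIT"
inherit image
IMAGE_INSTALL = "busybox"
IMAGE_LINGUAS = ""

# then build it:
#   bitbake container-minimal
```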
I chose qemuarm64 because it's kind of the lowest common denominator of ARM64 compatibility; I didn't have to do anything crazy like building a custom machine config. At the end of the build, inside the deploy directory, I have my busybox container tar file, and I simply copy that over to my hardware. Now, in this case, my hardware is running Torizon, so it already has the Docker runtime on it; we'll talk about how to get the Docker runtime onto an image in just a bit. But assuming you already have a Linux system with Docker on it, once I've copied that tar file over, I simply do a docker import and give it a name, and now in my list of images I have my minimal busybox image at seven megs, and I can run it. Note that there's no entrypoint here. In a Dockerfile, you would typically specify an entrypoint, the default command that runs when you launch an image; the way these images are generated, you have to specify the command yourself. So in this case, I simply say run sh, and now I'm running a shell in a container on my ARM64 system. One interesting thing you can do: if you build for x86-64, or whatever your desktop machine is, you can run these containers on your desktop machine. You get very, very small containers that typically won't have a whole lot in them and may not do a whole lot for you, but you can do it, and it's kind of fun.

This is how I started thinking about it: if people with Yocto experience want to do this, they're used to building images, and you can actually build a full image and run it in a container. In this case, I'm running core-image-full-cmdline, with the same config we had previously, with the image filesystem type set to container. There's probably a lot more in that image than is appropriate for a container, but we've explicitly stubbed out the kernel stuff, so we know we're okay there. The one thing I did differently this time: instead of copying the archive over to my device and doing a docker import, I pushed it up through Docker Hub, to my namespace there. So from the board, it looks just like any other Docker container. The board goes to Docker Hub, or whatever repository I have set up, pulls down the container I built, and now I'm running inside a core-image-full-cmdline build. Like I said, this is how I envision users who want to migrate to this approach doing it, because most of them already have an image of some kind, so they can get it up and running on top of our system pretty quickly. Then, over time, they can pull things out, re-architect, and look at what's in the image that's now provided by the base operating system and no longer needed.

So the next question is: how do you make it a little bit smaller? The simplest answer in the Yocto/OpenEmbedded world is typically to use musl. It's a one-line change in your local.conf; you rebuild, and you can see some of the differences here. Adding Python in didn't make a huge amount of difference, surprisingly; I thought it would be more than that. But you can see down here that for the container with just busybox, we dropped quite significantly.
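The device-side commands, roughly as demoed; the image and namespace names are mine:

```sh
# Import the deploy artifact as a local image, then run it:
docker import container-minimal-qemuarm64.tar.bz2 my-minimal-busybox
docker images                                # lists my-minimal-busybox at ~7 MB
docker run -it --rm my-minimal-busybox sh    # no baked-in entrypoint, so name the command

# Or route it through a registry so the board pulls it like any other image:
docker tag my-minimal-busybox mynamespace/minimal-busybox:latest
docker push mynamespace/minimal-busybox:latest
```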
This is a big concern for a lot of our users, because when they're deploying updates to their application, they want those updates as small as possible, so depending on how much is in their application, switching over to musl might make things easier. This is something you would have a much harder time doing with standard Docker mechanisms, because most of the images you build from are going to be glibc-based. The closest you could get is switching from, say, a Debian variant as your base to something like Alpine Linux, which is built on musl; you'll see that a lot of the containers on Docker Hub are based on Alpine for exactly that reason.

So, I mentioned meta-virtualization, and Bruce gave me permission to steal this slide. He did want me to give the warning that it's about six months out of date and he's got an updated version. I don't know all the details of everything in meta-virtualization, obviously, but this gives you an idea of what's included. It's a lot more than just containers; there are all sorts of hypervisors and things like that, and the timeline gives you an idea of what was available when. If you start using this stuff with OpenEmbedded, you will eventually need meta-virtualization.

That leads into: how do I build a Yocto configuration that can run these images? We want to build Docker into the image, and it's pretty straightforward. We add meta-virtualization, which brings in a few other layer dependencies; we turn on virtualization in DISTRO_FEATURES, which may not strictly be necessary for Docker, but I figured it didn't hurt; and then we simply add Docker. A couple of options here: you can switch to docker-moby, or you can switch to Podman. I think docker-moby is really just the open-source upstream of Docker CE, something like that, so functionally it's equivalent; I couldn't tell you what the differences are. Everything I tried worked just as well with any of the three options: docker-ce, docker-moby, or Podman. And here you can see the running QEMU system. In this case, I'm running Docker CE; you see the version number there, and information about the Docker server. If I were running Podman, there would obviously be no server; that's one of the potential benefits of something like Podman.

So the next question really is: why jump through all these hoops? We could just go create a Dockerfile and run it. I've alluded to some of these things, but a couple are worth calling out explicitly. Reproducibility and repeatability of builds: this is one of the things Yocto does better than any other system I've used. I want to know that I'm building the system I ship to my customers, and that if they come back to me in two years with a support issue, I can do a complete, exact same build of my two-year-old version. Yocto allows me to store the downloads, the sstate cache, all of that, so I have exactly the same bits I had two years ago. If you're using Docker Hub and you're not careful, it's very easy to lose control of that, because you're dependent on so many other sources for caching that information. There are probably tools for this; I know there's a lot of development in that space.
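Going back to that host-image configuration for a second, plus the one-line musl switch from earlier, my configuration fragment looked roughly like this; the layer path is illustrative, and meta-virtualization also pulls in several meta-openembedded layers:

```
# conf/bblayers.conf: add the layer (plus its dependencies)
BBLAYERS += "/path/to/meta-virtualization"

# conf/local.conf: enable the feature and pick a runtime
DISTRO_FEATURES:append = " virtualization"
IMAGE_INSTALL:append = " docker-ce"    # or docker-moby, or podman

# and the one-line switch to musl:
TCLIBC = "musl"
```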
On reproducibility, there are probably tools that let you do that, but this is my hammer, so I know how to use this one. And it can be completely self-hosted. That's what I just said about saving those downloads and saving that sstate cache: at some point, I can cut off my network connectivity and do a complete rebuild, because I've already downloaded everything that was needed in the first place. That's obviously important. There are plenty of customers who aren't allowed to have network access on their systems, so they have to do this all in a staging area, package it up, and do most of their work in a completely network-isolated environment. And the other big item is source archival and the software bill of materials. If I go pull the Docker Ubuntu image, even though it's the official image sponsored and hosted by Canonical, how do I get back to the actual sources that created it? Again, there may be a way; I don't know how to do it. This is obviously very important for the long-term lifecycle of your product. And even more important is license tracking and compliance; I think we're all familiar with that. Yocto does a very good job of creating a manifest of the licenses in your image, and it does a great job of letting you restrict which licenses you're allowed to include, so that if I say no GPLv3 and I try to build something GPLv3, it'll throw an error. That's kind of hard to do with prebuilt binaries from Docker Hub. And in general, it gives you better visibility into what goes in there. You can go into your metadata and your recipes, and you can configure things, turn things on and off, with much more granularity than you can when you're downloading prebuilt binary stuff. All the PACKAGECONFIGs and similar standard Yocto functionality apply here, and you can still use the results as containers.

So that was about it for the content, in terms of the demos and things. Just real quick, and then we'll have a few minutes for questions: future work to make this more usable. Setting up some kind of container registry; Docker makes it very easy to host your own. And figuring out what a usable set of containers would be for any particular application stack: Docker Hub obviously has everything in the world, and if you're building with OpenEmbedded, you're probably not going to have nearly that many things, but a bitbake world build is probably more than you want as well, so you've got to balance that against your application needs. The next thing for me is to test this in some kind of production system; obviously, in my case, that would be Torizon. I've done some manual testing where I launched containers from the command line, but I want to go back, set up that registry, and actually use it as my over-the-air deployment of applications and containers to my devices, so that when I hand this off to a customer, they can use it in the context of Torizon. I also want to investigate, and this may be overkill, using a proper machine config instead of QEMU. Whether that will make any difference in these containers is hard to say. The containers you download from Docker Hub are not built specifically optimized for the SoC and everything on my chip, so this may or may not make a huge amount of difference, but it's relatively low-hanging fruit to play with.
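For reference, the knobs behind the self-hosting and license points from a moment ago are standard local.conf fare; a hedged sketch, noting that the exact license names vary by release:

```
# Keep rebuildable copies of everything the build fetches:
BB_GENERATE_MIRROR_TARBALLS = "1"
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"

# Prove the build is self-hosted by forbidding network access entirely:
# BB_NO_NETWORK = "1"

# Fail the build if a forbidden license sneaks in:
INCOMPATIBLE_LICENSE = "GPL-3.0* LGPL-3.0*"
```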
Also on the future-work list: I did that build of core-image-full-cmdline, and as I said, I'm sure there's a lot of stuff in there that can be stripped out, so I'd like to take a look at that and see what the minimum amount of stuff is that I need to build it. Some of the other talks I've looked at went in depth there, and there were definitely things that could be removed. So that's one of the next steps.

And then, this is the big one: investigating the multi-config setup. One of the questions, and I've seen probably half a dozen different approaches to this, is how do I do a Yocto build that includes the base OS and the containers and then bundles them together at the end? It seems like today the canonical way to do that is a multi-config setup, where one of your configurations is the base operating system and the other is the container payloads, and the dependencies are set up so that the base operating system includes the container payloads. For our Torizon system, it may or may not be interesting: we distribute Torizon with reference containers, and our customers use our external tooling to bundle in their own containers. So the multi-config stuff for our commercial usage is probably not all that interesting, but that seems to be the way things are going. I've seen some other mechanisms for doing this, but I see some heads shaking that seem to indicate this is the best practice right now, and I know Bruce has some ideas and goals to make it even easier in the future. As I mentioned, I was also playing with the OCI container types to try out some of the other container runtimes, runc and LXC and that kind of thing. I couldn't get it to build, and I want to figure out why. A few other folks have posted reference builds with it, so I just need to go look at theirs, figure out what stupid mistake I made, correct it, and get it tested and working. So there may be future versions of this talk. And for me specifically, I want to learn more about what's in meta-virtualization. There's a lot in there, specifically the OCI container stuff. So that's a sneak peek at what next year's version of this talk might be about, and hopefully we'll have something even more interesting.

So with that, I think we've got about six minutes for questions. Anybody got any questions? Yeah, go ahead, Tim. Do we have the mic on up here? Is it on? No, it isn't. So, as you indicated, there have been a number of talks on this. Yes. And I'm wondering, with your fresh eyes, what kind of opportunities we have for documentation, since we also have one of our big documentation folks here in the front row. That's probably an open question, right? But can you think of ways to do that? Well, just in general, documentation on this beyond the informal documentation would be nice. Meta-virtualization doesn't seem to have much; it seems to be a lot of tribal knowledge, a lot of mailing list posts, and a lot of YouTube videos, but no formal documentation that I could find. There was some material scattered throughout the source code, but even there, no canonical how-tos or anything like that. I would love to see something like that; when I started the research for this, I wish I had found two or three posts that laid it all out, because image-container.bbclass isn't very complicated.
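In that spirit, here's the rough shape of the multi-config arrangement I described, as a sketch rather than a canonical how-to; the multiconfig name and recipe names are mine:

```
# conf/local.conf of the base-OS build:
BBMULTICONFIG = "container"

# conf/multiconfig/container.conf: settings for the container-payload build
MACHINE = "qemuarm64"
IMAGE_FSTYPES = "container"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
TMPDIR = "${TOPDIR}/tmp-container"

# In the base-OS image recipe: make the payload build a hard dependency
do_image[mcdepends] = "mc::container:container-minimal:do_image_complete"
```

The base-OS recipe can then pick the payload tarball up out of the container build's deploy directory and install it wherever its runtime expects preloaded images.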
But I don't know that there's an actual how-to anywhere for any of this. For those of you with even less experience than I have: if that's something you'd like, please let us know. We definitely want to make this better moving forward. Okay. If there are no other questions... The second part of it is, for your customers, it sounds like they're pretty comfortable having the container registry as the mechanism of delivery. Yes and no; some are, some are not. So the reason I'm asking is, from the perspective of multi-config, what we actually need, I think, from a Yocto-world perspective, and not the way containers normally work out in the rest of the world, is a package that just installs the container, so there's no docker import needed. Right. And the problem there is that now you have to know what super-secret path to put it into, how to generate the hash for the directory it went into, and all these other things, to finally package it up into just a container package. And this is something I've been looking at; I know Bruce is working on that as well. So that's an area where I think we could use some more help. Sure, especially understanding how people are actually really using it; I think you have more experience with actual customer usage versus theoretical usage. Right. Well, yeah, and I will say many of our customers are concerned about Docker repositories, and they want their own private repositories and things like that, and we have mechanisms to allow them to do that. Similarly, for the base operating system, we're OSTree-based for our underlying OS, and we're now rolling out a feature to allow customers to self-host that as well. And as far as getting the containers bundled in, we have external tooling that our customers use, called TorizonCore Builder, which basically takes the prebuilt binary image, pulls it apart, and puts everything back together, so we sidestep that issue. We do still support people doing Yocto builds, but at the moment, if one of our customers came and said, I want to do a Yocto build but also bundle in the containers at the same time, I think our answer would be: you can do that, but you've got to figure it out on your own. We don't have a good answer for that, for sure. All right, anybody else? Do we have any online questions? Don't we end up with two different SBOMs, one for the host OS built with Yocto that runs the container runtime, and one for the container content, mostly composed of artifacts from a registry like Docker Hub? Absolutely, we would, and that's where the whole multi-config comes in. Everything I built here was just the container payloads; I didn't capture the details of building the base operating system. The way I did it, there were two completely separate builds, and I would have to maintain the downloads, the shared state, and the license manifests independently. If you use multi-config, it's still actually separate builds within the BitBake world, but the license files and everything are maintained properly by that multi-config setup. All right, anybody else? Very good. Well, thank you all for your time; I appreciate you coming. Enjoy lunch. Thank you.