Hi, my name is Bruce Ashfield. I'm a principal engineer at AMD, and today I'm going to be talking about embedded containers as a deployment component, and in particular how they interact and can work with the Yocto Project. Unfortunately, if you're watching this, you're watching the recorded session as part of the virtual summit, because I could not be there this time around. Hopefully I can give this in person at a future conference. I will be around for questions at the end, so hopefully we can fill in any extra details then.

The agenda: I'll start with a quick overview of embedded containers: what they are, how you might want to use them, where they come from, and how they compare to containers in general. Then we'll go into more detail on the five W's, as I described this presentation: the who, what, where, when, and why of containers. You might not think those questions lead you to the Yocto Project, or map to the requirements of a typical embedded platform, but the questions and answers will show that they can in fact lead to the Yocto Project and an embedded device. I'll summarize some of the Yocto Project's capabilities around containers, then do a quick sample deployment and update of a container, and then we'll have questions.

I've been working with embedded containers for probably almost ten years now, and one of the biggest things that has changed is the increased compute power and resources available on devices that were traditionally called embedded. We're not necessarily talking about a really tiny device, but something that runs on the edge of the network, or something that is part of a larger solution: more powerful than it used to be, but not a server in the cloud. The power of these devices has increased over time, maybe not to the point where you're running multiple virtual machines, but you do have plenty of resources to run containers, and you'd like to leverage some of their benefits.

We're also seeing on embedded devices the need, or the desire, or the requirement to leverage modern, deeper software stacks. By that I mean frameworks like Kubernetes, K3s, and even some of the Podman-based frameworks. With embedded containers you also get the ability to do orchestration, control, and updates of devices at scale without needing a custom solution: combine my first two points, deeper software stacks and these frameworks, and you get monitoring, orchestration, and control at scale without writing anything custom. Finally, you may have application developers working on these more powerful edge devices who are familiar with containers and the workflow of updating modern application stacks, but who don't want to interact with a low-level embedded build system. So usability also comes into play in why you might want containers on an embedded device.
We've also found that the really slimmed-down, small-footprint embedded distributions have been adding more functionality; they've been building up. At the same time, there are efforts to slim down or remove components from the larger enterprise distros. So there's a meeting point, a size of device where you might run a built-up embedded distribution or a cut-down enterprise distribution. The comment I always make is that it's better, or at least easier, to build up than to tear down, in particular if you want to do it in a controlled manner.

Another advantage of running containers on your embedded device is the flexibility to address the requirements of different verticals. What you want for networking or automotive is different from a security application, for example, but you have the flexibility to change the framework, change the configuration, use sandboxing, and use different network namespaces in order to meet the requirements of each vertical.

The footprint of a container runtime can also be tailored to the device. In the Yocto Project's meta-virtualization layer, which I'll go into in a little more detail, we have everything from runc, the standard Golang-based runtime used by many container frameworks, to crun, a C-based one; we have Kubernetes, we have K3s, all these different frameworks with different resource requirements and different security surfaces. You can tailor the footprint and the attack surface of your runtime to your needs; it's not one-size-fits-all. (I'll show a sketch of enabling this in a build a little later.) And if you're already running your application in a container-based environment, your applications can either be streamlined to take advantage of that lower footprint, or they can run largely unchanged. You can run the same application in a container on these embedded devices as you could anywhere else, because there are no modifications to the platform: we're talking about standards-based, unmodified container runtimes, just like you would find on the Linux box running at your house.

I briefly mentioned some of the things in meta-virtualization on the previous slide. We've been doing virtualization, both in the sense of hypervisors and system virtualization and in the sense of containers, since 2012, over time adding more complex applications and frameworks as well as more of them. In 2012 it was basically LXC, libvirt, and Xen: that type of containerization, plus the virtual machine options. As we march through time, Docker showed up, then runc and containerd; all of a sudden you're into components you'll have heard about in all the current container-based platforms and stacks. Then there was the breakup of Docker into its components, runc and containerd, and the way they talk to the deeper frameworks like Kubernetes; then CRI-O appeared as a different backend; then the standards started to happen, and the OCI (Open Container Initiative) tools showed up in 2017.
Marching forward from there, we now have Podman, Skopeo, and crun; we have K3s, nerdctl, and related utilities from the containers ecosystem. So there's a pretty broad set of options you can map to what you need, and I've done demos of different parts of these at different conferences. Today we'll be using Podman, together with podman-tui, a curses-based interface to Podman, as part of the demo: something we haven't demoed before, showing how it can be used as part of the deployment and update mechanism.

Now I want to go into more detail on, as I mentioned in the description of this presentation, the W's. This is the Q&A about the questions you might ask, and my description of how they can lead you toward the Yocto Project as well as embedded containers. The questions and answers do vary based on your device, your target, your platform, and your requirements, but as you could see while I was walking through that set of technologies, flexibility is critical: you can chart a path through the questions, the answers, and the options we have in the Yocto Project, and pick what meets your requirements. The overall goal is to avoid ad hoc or one-off solutions with a magic set of cut-and-paste instructions that only one developer knows, and to avoid lock-in: you want to be able to change your container runtime, or other parts of the system, in the future, and not have to follow one technology line until the end of time, or at least the end of your product. We also want determinism in the build. Can everybody build it? Can it go into a CI/CD pipeline? We want reproducibility: everybody able to build it, and the same result every time. And we want the ability to upgrade the system, upgrade the platform, and in fact upgrade and change the underlying technologies that are part of the platform and actually running the containers.

So, for "who": some of the questions that came to mind when I was preparing this were: who needs containers? Who creates them? Who consumes them? Who uses them? And then questions that follow from that, which you may be aware of given recent developments around the software supply chain: can we tell who created a container? Can we tell what's in a container? I'll cover that later.

The answers, in the context of this talk: who needs a container? It could be an application developer or a system developer. We can deliver high-level application functionality in a container, or containers can deliver low-level parts of your system: bitstreams to FPGAs, device drivers, firmware updates. All of these can also be developed and delivered through containers. The big thing is that, whether it's the application or the system developer, they want different visibility into the container, the system, and how it's built. With the Yocto Project, that is backed by the build system and/or an SDK, meaning you can go back to the build system to regenerate and update containers, or maybe you just work from an SDK, binary artifacts, or a container registry somewhere.
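Since the build system keeps coming up, here is the sketch I promised earlier of wiring this into a build. It's a minimal example only: the branch, image, and recipe names are illustrative, so check the meta-virtualization README for the combination that matches your release:

```
# Fetch the layer and register it with the build
git clone -b langdale https://git.yoctoproject.org/meta-virtualization
bitbake-layers add-layer ../meta-virtualization

# conf/local.conf: meta-virtualization expects the 'virtualization'
# distro feature, and you choose the runtime/framework for the target
DISTRO_FEATURES:append = " virtualization"
IMAGE_INSTALL:append = " podman podman-tui"
# ...or a smaller, C-based runtime stack instead:
# IMAGE_INSTALL:append = " crun"

# Then build an image as usual
bitbake core-image-base
```

The point is that swapping Podman for, say, K3s or crun is a configuration change, not a platform rework, which is the flexibility argument above in concrete form.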
The other question we had was: who created the container? Part of what you get with the Yocto Project is traceability: the licensing information, the SBoM (software bill of materials), the source archiving. There are a number of pillars of the Yocto Project that provide traceability into the software running in the container, and those artifacts can be delivered along with the container. So we can figure out how it was built and when it was built, and do analysis based on that (see the sketch after this slide). How they created the container also depends on the use case. Is it just a test system we're doing the container work on? Is it meant for debugging? Are we talking about DevOps? Is it a deployment? Is it a pipeline? Depending on the case, who created it could be an integration engineer, or the application developer doing local testing or deploying to test platforms, or again that system integration engineer providing the containers.

As for who consumes them: pretty much anybody that uses the platform. Developers can absolutely consume the containers as part of their work. Users can too, meaning somebody that buys your device or your platform could be consuming the containers; they might have nothing to do with the development or creation of the platform, they might just be a user. Or it could be somebody in the middle, a third-party integrator, integrating their own software onto the device before it goes to end users. The thing to point out is that, depending on where you sit in that spectrum, no OpenEmbedded or Yocto Project build system knowledge is required. I've done talks in the past about binary outputs and artifacts of the Yocto Project: you can consume and use these containers without ever touching, or even knowing, that they were built with OpenEmbedded and BitBake. The build system and the SDK are there backing it when you get to the point that you do need them.

The next of our five W's is "what". I've covered a bit of this in the lead-up to this slide, but some of the questions around the what theme are: what are containers used for on an embedded device? What are the contents of the container (similar to the "who created it" question)? What container runtime is being used? What cloud-native framework is being used? Following the theme, these are the more technical questions about what is used and how. The answer is that containers, again depending on your requirements, can be used for deployment, in the sense of software delivery; for security and isolation of applications; as an update mechanism for a device once it's up and running in the field; for maintenance; and as application or system containers. So what can they be used for? Many different parts of the system.
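Back on the traceability point: in recent Yocto Project releases the build itself can emit SPDX SBoM documents and archive the corresponding sources alongside the other artifacts. A minimal local.conf sketch, with class names as found in current oe-core (adjust for your release):

```
# conf/local.conf: generate SPDX SBoM documents for everything built
INHERIT += "create-spdx"

# Optionally archive the original sources so they can be shipped
# (or audited) alongside the container
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"
```

The resulting SPDX files land with the image artifacts and can be published next to the container itself, which is what makes the "who built this, and from what" question answerable later.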
What are the contents? This again goes back to who created your container and what is in it. As I mentioned, the Yocto Project produces artifacts detailing the contents of the container as outputs from the build system, and they can be delivered along with the container, signed and provided.

The technology timeline I showed for meta-virtualization answers the question of what runtime you should use. Of course we don't, won't, and haven't developed our own container runtime; standards-based solutions are supported, in particular CNCF projects and OCI (Open Container Initiative) technologies. But it's not a prescriptive layer within the ecosystem, if you will: you have the flexibility to choose what works for you. Different configurations of the system will work, but there will be a bit of configuration on your own after you've chosen the technologies you want. And again: do you want a framework at all? Which cloud-native framework should you run? Do you want K3s? Do you have enough horsepower to run full Kubernetes? Do you want Podman? Do you want none of them, with systemd launching your containers on startup? You can absolutely do that (a minimal sketch follows below). The "what" questions are the more technical ones, and the answer is: we provide a whole set of technologies, and you choose the one that makes sense for your device.

Then "where". Where are the typical devices that can leverage containers, where is this running? And also, where can a Yocto Project-created container run? Those were the two where-questions I came up with for the theme of this presentation. The big thing, given the platform flexibility and standards-based approach I keep mentioning, is that these containers can run anywhere from the edge to the core to a small embedded device. You don't have to change your application to run it on any of these different devices; it can be air-gapped, you can have a local container registry, whatever you need to do. So where can they run? Anywhere. Almost anywhere; somebody will catch me on claiming anywhere. Containers created by the Yocto Project can also run anywhere, because they're cross-built just like everything else in the Yocto Project and OpenEmbedded. Whether or not the platform you're running on was built using the Yocto Project, you can run these containers, because of the underlying core container technologies: the isolation and everything else you need. And because the Yocto Project is multi-architecture, multi-C-library, multi-everything, you can build for anything from ARM64 to x86-64 to PowerPC, tailor your build, and deliver containers that run on almost any kind of device, whether or not it was built with the Yocto Project itself.

Our next W is "when". When can containers be used, in particular in the life cycle of a device? And the other question we get when looking through requirements: when should packages, full images, or containers be used?
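Before the answers to those, a quick aside on the "no framework, just systemd" option mentioned above. A minimal sketch of a unit that starts a container at boot might look like the following; the unit name, image, and port mapping are hypothetical, and for production use Podman's own unit generation (podman generate systemd, or Quadlet on newer versions) is the more robust route:

```
# /etc/systemd/system/banner.service (hypothetical example)
[Unit]
Description=Start the banner container at boot
Wants=network-online.target
After=network-online.target

[Service]
# --replace tolerates a container left over from a previous boot
ExecStart=/usr/bin/podman run --replace --name banner -p 80:80 quay.io/libpod/banner
ExecStop=/usr/bin/podman stop banner
Restart=on-failure

[Install]
WantedBy=multi-user.target
```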
So the answers I came up with, the ones leading us toward the Yocto Project, are that containers can actually be part of almost any phase of development, which is what I was hinting at with the who-question as well. They can be used during development. They can be used during production. They can be part of your debug and your tests, whatever you need. So when can they be used? At almost any point in your development cycle. They can be baked into the image and used for update, locally provided (as I mentioned, from a local registry) or fetched remotely. So they can be used at any of these points in the life cycle, whether you're producing the device or it's in the field and you're updating it later. Again, the when depends on how you built it, but both ends of the spectrum are supported.

As for when to use containers versus packages versus images: it's not an exclusive question. Containers can coexist with Yocto Project image creation, with the package feeds the Yocto Project can create, and with other binary artifacts, which, as I mentioned, could include bitstreams, bootloaders, or first-stage boot. There's no need to choose one or the other. In fact, you can use containers to pull in packages, and you can use packages to create containers on the target. You don't have to choose, and you can of course migrate from one to the other.

The last W is the why, plus the how. Why use containers? Why can't we just use a container build service and distro X? How can an embedded device use containers (which we've mostly answered already), and how can embedded developers leverage containers (which we've also pretty much answered)? For why use containers, I would say: why not? But no, they're not actually appropriate for every scenario, because there is some extra configuration and some, if minimal, overhead, whether disk or memory footprint or that sort of thing. You need to look at your requirements and decide if you need any sort of containerization. And to answer why can't I just use any distro or any build technique for creating my containers, whether a Dockerfile or some other build service: absolutely, any distribution or any method for building the containers themselves can be appropriate, and can be part of your product in a commercial situation. The only thing we say is: be aware of all the requirements around delivering container-based platforms and applications. The Yocto Project has many supporting capabilities it provides as part of the base project that can be leveraged by the containers; again, the licensing, the software bill of materials, and the other things I was talking about earlier. And of course, many or all of the benefits of containers that you'll read about anywhere, the usual answers to why use containers and what are containers, apply just as well to an embedded platform and embedded developers.
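Picking up the local-registry option mentioned under "when": a sketch of what that can look like with the standard tooling. The host name, image names, paths, and the plain-HTTP flags are illustrative only:

```
# Somewhere the device can reach (or on the device itself), run a registry
podman run -d -p 5000:5000 --name registry docker.io/library/registry:2

# Push a locally built OCI image into it; skopeo is available
# in meta-virtualization just as it is on a workstation
skopeo copy --dest-tls-verify=false \
    oci:tmp/deploy/images/qemux86-64/app-container-oci \
    docker://buildhost:5000/app-container:latest

# On the device, pull from the local registry instead of the internet
podman pull --tls-verify=false buildhost:5000/app-container:latest
```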
So, summarizing the Yocto Project's container capabilities: isn't it all about the application, and why would you care about building from source? The thing is, yes, a lot of the time it is about the application, but there are also the supporting issues around delivering your platform. That's why you care about knowing what the source is and where it came from: it makes it easier, or potentially much easier, to generate those artifacts. The Yocto Project's underlying capabilities are solving problems you may not even know you have yet. We are standards-compliant and compatible, and it's about building-block technology, so it's about choice: we're not looking to pick winners among container runtimes and lock you in; it's flexibility. Elements of the solution are spread all through the ecosystem, whether the meta-virtualization layer, openembedded-core, meta-openembedded, or meta-security; different parts of the solution live in different layers. And it's all about configurability and tunability. Containers are, as I've been saying, just one of many Yocto Project outputs, alongside the SDK, packages, the SBoM, and images; they're only one thing, and you have the ability to use any or none of them. Different parts of the value proposition, if you will, come from different parts of the system.

That's the talking part of the presentation. I'm now going to show a quick demo of the system in action, using Podman to do a small-footprint, really basic service deployment, driven through podman-tui, the ncurses interface from meta-virtualization. We're going to deploy a third-party container and a Yocto Project container together, and then do a quick update and extension of the container. Let me switch my share... I will share my console. I think I found it, finally.

All right, here we go. This is a system I built just a few days ago: a QEMU x86-64 (64-bit) target, a really minimal system, using the 5.19 kernel that will be part of the upcoming Yocto Project release. That's our base platform, with Podman and the supporting technologies underneath. On this target, in a different terminal, I also have podman-tui running: a bit of a monitoring solution, if you will, for seeing what's going on with Podman. Right now you can see our runc version there, the conmon build, different parts of the system. There are no pods, no containers, no volumes, no images; there's just the default Podman bridge network. So there's nothing actually on the system.

There are two parts to the demo. First, pull and run "banner", a standard banner application, if you will, that is used quite often to show how Podman works. Then I'm going to deliver curl, as a minimal container built by the Yocto Project, so we can interact with that banner; and there are two different forms of this curl container, which is what I'll be showing. So let's come over to the main console.
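For anyone following along at home, the first phase of the demo boils down to a handful of commands on the target. The banner image path follows the public libpod examples, and the exact flags and port mapping are illustrative:

```
podman images                     # empty list to start
podman pull quay.io/libpod/banner # the third-party image, ~12 MB
podman run -d --name webserver -p 80:80 quay.io/libpod/banner
podman ps                         # the banner container is now running
```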
A podman images shows we don't have any images, so first we're going to pull a third-party container, one that I didn't build. We now have an image on the system; if we jump over to podman-tui we can see it: the libpod banner image, created two months ago, 12.1 megabytes, but still no running containers. Before we start pulling our own container, we'll launch it with Podman by hand; we could absolutely launch it through K3s or something else, or through other tooling, but we're doing this by hand. We give it a name, webserver, and we run it, so we now have a running container. If we jump back over to the interface: there's our container, eight seconds old, running the banner.

In this demo, the way we will interact with that is through the curl application container image, which basically just delivers the Yocto Project-built curl in an OCI container. That will be pushed to a Docker Hub repository and then pulled onto the target. Over here, you can see I can do a quick BitBake on this application container. We build, and since I had the system primed it doesn't take very long; it's wonderfully built. We now run Skopeo to copy it to my Docker Hub, under an account called metavert, as curl, using the latest tag. So we now have that container available on my Docker Hub.

If we head back over, and if I can just find the right command (I think I found it), we will pull in this container. The pull runs and the container arrives. If we go over to the interface, it's still not running; it's an image, I should say. So we now have my meta-virtualization curl-based container. If we want, of course, we can bring this up and inspect its details, and that will tell you, again, when it was built, the entry point (which is the curl binary), and the other details of the container. Quite simply, and I should show this in the application definition, the OCI image entry point is set to curl, and it's got BusyBox backing it. That's all it does: when you run the container, curl runs as the entry point.

So when we actually run the container, right here, with no arguments whatsoever, the entry point is executed, and of course curl says: well, you didn't tell me to do anything. So we run it again and add an argument on the end: localhost, port 80. It has now talked to the banner application, and it's printed out "podman". Okay, that's the first iteration. Congratulations: you wrote a simple container that knows how to run curl, and it's interacting with a different part of the system. But say we don't like passing options; we don't want to type localhost port 80 every time. So I made a slight tweak to the same image, just for the demo: I gave it a different tag, called "easy" (as in easy to use), and I provide some arguments to the entry point, so that http://localhost:80 becomes the entry point's default arguments. These are all parts of the meta-virtualization framework around building containers. If we now BitBake the container again, it finds something to do, because I changed the configuration, and we rebuild the OCI image. Congratulations, we now have that.
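The configuration change just described lives on the build side. Here is a sketch of the relevant recipe fragment, using the image-oci support from meta-virtualization; the variable names follow that layer's image-oci class, but treat this as an approximation and check them against your layer revision:

```
# Fragment of the curl application container recipe (names illustrative)
inherit image-oci

# First iteration: curl is the entry point, run with whatever args you pass
OCI_IMAGE_ENTRYPOINT = "curl"

# "easy" iteration: bake in default arguments and publish under a new tag
OCI_IMAGE_ENTRYPOINT_ARGS = "http://localhost:80"
OCI_IMAGE_TAG = "easy"
```

With that rebuilt, the same skopeo copy step pushes the OCI output up under the new tag, which is exactly what happens next in the demo.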
We change our tag as we push it up, and up to Docker Hub it goes as "easy". Of course, this would normally be part of a pipeline or an application development workflow, but for our purposes, this is what we have. Now we need to pull the new tag onto the target. If we run pull again, this time using the easy tag, we get an update; if we hadn't provided the tag, it would have told us everything is up to date. We jump over to the interface, and we now see two tags for that container: latest and easy. If we run it like we did before, well, in this case we want to make sure we run the right version, the right tag, because I used a tag instead of reusing latest. We can actually have them both coexist on the target, which is of course useful for many things, as you can imagine. So when we run this easy version of the container, if the demo works, it automatically prints "podman", because the http://localhost:80 argument was baked into the container definition on the build side, which built the container, which we uploaded to Docker Hub, which we pulled down; it always passes that http://localhost:80. And yes, it's on the system; it ran. Those both work, and of course we can always go back to the default, which will be latest; it doesn't know what to do on its own, and we can run it against localhost. There you go.

That is the tour of Podman and the podman-tui interface we have to talk to it. You can see we now have multiple different containers that have run and exited. You can re-execute any one of these containers at any time, even through this interface, or you can delete them, remove them, inspect them, whatever you want. It's a system you can explore and inspect like any other.

That is the demo. It looks like we have just a few minutes left for questions, so we can either take some now, or that can be done offline later; you can find me in the usual locations. Thank you very much for listening. I appreciate the chance to present this, and again, hopefully I can do this in person at some point in the future.