Awesome. Thanks, Christina. And thanks, everyone, for joining me today, this morning, this evening, or wherever you are. I hope this is an interesting topic for some of you. Today I'm going to be talking about building and maintaining your own secure container OS, and the example we're going to use is Bottlerocket OS. To dive right in, I want to give a bit of an explanation of what Bottlerocket is. Essentially, Bottlerocket is a minimal, secure Linux operating system that's been purpose-built from the ground up for running containers. It's secure by design and follows best practices for container security: it only includes the tools that are needed to run containers, which significantly reduces the attack surface and the impact of vulnerabilities. And by virtue of being minimal, nodes running Bottlerocket have a fast boot time, which lets clusters scale quickly as traffic patterns or workloads change. In conjunction with an orchestrator, Bottlerocket enables various update strategies to new versions of the software with little to no operational overhead. Node-level updates are handled in an atomic manner and provide safety and visibility throughout the entire update process, so customers can always have the latest and greatest version of the OS running on their hosts with minimal effort. I'm going to dive into some of these areas in a moment, so I'll breeze through them here. Separately, Bottlerocket also provides a suite of tools that help users build a custom, community-supported variant of the operating system directly from the source on GitHub, and that's going to be the bulk of what we talk about today. But before we dive into that, I want to touch on a few of these points first. One of the first areas to talk about with Bottlerocket is security. (And Jose, I see your question; it's a good one, and I'll get to it at some length a little later.) The idea is that we wanted to build a secure operating system right out of the gate. Everything we build at AWS is security first, and we wanted this operating system to reflect that. So we built it with SELinux enabled in enforcing mode by default. The goal of the SELinux policy here is to separate the containers running on a particular host from the underlying operating system itself. If you have scenarios where your workloads need elevated permissions, we want to provide the flexibility to do that, so there are ways to elevate permissions; but by default we separate those workloads as much as possible for safety and security. In addition to that, we've treated the operating system much like we treat the workloads running on it. Typically, when you're building a container workload, you want it to be read-only, and the file system for Bottlerocket is likewise largely read-only. It uses something called dm-verity, which lets us check a hash on every block that is read from the disk. If any changes have actually been made to the disk or the file system, those are detected on read and the I/O operation is blocked. What this allows you to do is detect whether any changes have occurred on your system and flag that system to be quarantined so a security team can do a root-cause analysis on it.
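To give a feel for the mechanism, here's a conceptual sketch using the stock veritysetup tool from cryptsetup. Bottlerocket wires this up for you at image-build time, so you never run these by hand, and the file names here are made up:

```sh
# Build a Merkle hash tree over a root filesystem image; this prints a
# root hash that the system then trusts at boot.
veritysetup format rootfs.img rootfs.hash

# At runtime, every block read is checked against that tree; a tampered
# block fails verification and the I/O is rejected.
veritysetup verify rootfs.img rootfs.hash <root-hash>
```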
But we understand that, as an operating system, we do need some ability to write configuration. A directory like /etc, for example, which is where the system holds the configuration for the applications running on the operating system, needs to be writable. But leaving it writable is a risk. So what we've done is make it a stateless file system that is empty at boot, and we use a number of helper programs to populate it with the necessary components when you build your image and when you boot it up. Essentially, it starts blank and gets populated with just the things you need to run successfully. These three things together really help ensure that changes can't be made on the system itself, and that if they are made, you can detect them quickly. This security-first mindset is pervasive throughout the OS. In addition, we don't include any shells or interpreters, so there's no way for you to log into the underlying host OS directly. We have mechanisms for host access that I'll talk about in a moment, but for the most part everything is walled off. Even the binaries we do include are built with hardened compiler flags to ensure, as best as possible, that they can't be leveraged or abused. We also wanted the operating system to be flexible. We understand there are multiple container orchestrators and multiple cloud environments, and we wanted an operating system that could support all of that. So we allow for these different builds, which we refer to as variants. Every variant of Bottlerocket is essentially a combination of software, settings, and disk layouts that is then used to build an actual image of the OS. I think the easiest way to think of Bottlerocket in this context is as a container host OS builder: you provide your spec and your settings, then use the Bottlerocket SDK to build the finalized OS you're going to use, or you leverage one that we provide through AWS for running on AWS. In addition to security and flexibility, we also wanted updates to be easy and far more simplified than what you'd get with a general-purpose OS. For this we're using The Update Framework (TUF), which lets us create secure repositories for updates, so that when a Bottlerocket host needs to download a new version, it can do so in a way that minimizes the opportunity for security concerns during the pull of a particular update version: we verify cryptographic keys and make sure you're pulling from the right location. The updates themselves are handled atomically. Rather than a general-purpose OS, where one package has an update, then 50 dependencies downstream of it also have updates, and before you know it a simple update has you validating thousands of packages, the way this works is that it downloads a full image of the updated OS to another partition on the host itself. So imagine you have the partition that's running the current operating system, and then a secondary partition where it stages the new version of the OS. We use a tool called updog that handles swapping the priority of those partitions: it validates that the download is complete, handles the reboot, and deals with any failures that might occur during that reboot. If it detects an issue, it automatically rolls back to the previous partition, and you're good to go.
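For a rough feel of what that looks like when driven by hand, here's a sketch. Normally the API or an update operator drives this for you; you'd run this from the admin container, and the exact updog subcommands can vary by release:

```sh
# From the admin container, drop into a root shell on the host itself
# (sheltie is the helper the admin container provides for this).
sudo sheltie

# Ask updog to check the TUF repository for a newer image, stage it on
# the inactive partition set, flip partition priorities, and reboot.
updog check-update
updog update
reboot
```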
And so we've got a couple of questions. We've already talked a little bit about variants, and we've got another one here. The first question is: is Bottlerocket available to run outside of AWS? Today it is not, but that is the long-term goal. We want to give you the flexibility to produce variants that will run on bare metal, on other clouds, and on premises. Because we're the ones producing this, we've started with our two orchestrators first, so we support EKS and ECS today, and I'll go into that a little more. But the long-term goal is to support and build variants that can run wherever your containers are running. The second question is: why would we need Bottlerocket if we already have Linux operating systems with the Docker engine installed on them? That's what I'm going to dive into right here. Because we've stripped everything out of the operating system, it only includes the things that are necessary to run containers. There are no additional binaries and libraries, no package-management solution, nothing installed on that OS outside of the specific configuration for that variant and, essentially, plain Docker or containerd. The way we've provisioned our variants for EKS and ECS, there are essentially two container runtimes on the host. There's the runtime for the scheduled workloads, the ones your orchestrator places tasks on or schedules pods onto, and then a separate runtime that runs the administration tasks on the host itself. By doing this, we're providing a far more secure, far more lightweight environment for running your containers. It comes with security configuration out of the box, without you having to do anything additional, which is what we'd call undifferentiated heavy lifting if you were using a more general-purpose OS. It gives you a foundation of security for your container workloads on day one, versus having to figure it out and grow it organically on your own. The idea is that we've built this to be flexible and secure while still allowing you some access to the underlying host for things like development, debugging, and troubleshooting. Like I said, we've got the runtime specifically for your workloads, and then we've got what we call host containers.
There's a containerd or Docker runtime specifically for the host containers. We provide two of them by default, and you can add more; I'll go into this later in the talk. The first is a control container, which is on by default. It exposes an API, and that API is how you interact with the underlying host: you can make configuration updates through it, add additional host containers through it, and check and modify settings through it. If you want to make modifications directly on the underlying host, you can use the API to enable what we call an admin container. This is off by default, but when you activate it, the host downloads a container into that secondary container runtime with additional permissions. That gives you a shell prompt and much deeper hooks into the underlying OS of the host. It should be used for things like debugging or exploration during build processes, and it should be used sparingly. Now, is it possible to run on a Raspberry Pi 4? Not today, but we'll get into the fact that we can cross-compile to ARM64. So if you wanted to build a variant that could run on a Raspberry Pi, that's something the community could absolutely do; it's just a matter of writing the config for it and going through the image build. And we'll dive into that question right now as we go over some of the high-level build concepts. As you start to build your disk image for Bottlerocket, you're going to need a few things, and the components look like this. You've got a build machine that you use to do your builds, and on that machine you have some required tooling. Everything first-party that we've built is written in Rust, and we use Cargo to handle our packages and the build process. We use Docker to do the build itself (the SDK is essentially a Docker image). We use RPM, but not in the sense of installing RPM packages through a package manager; we use it as the package spec format, and I'll talk about that in a bit. And then there are the usual Linux build tools. On that machine you pull the Bottlerocket source code from our Git repository, which also downloads the SDK, and if the variant you want to build has particular dependencies or packages, those are downloaded to the build machine as well. As you go through the build process, it produces a couple of different outputs, or there are options for different outputs. The primary output is a Bottlerocket disk image, and there are some optional outputs you can create along with it. One is a Bottlerocket update repository: if you create your own variant of Bottlerocket and want an update repository for it, you can create one and publish it wherever you want it to live. For the Bottlerocket variants we manage at AWS, these are published to an S3 bucket with a CloudFront distribution in front of it. The key point is that it needs to be accessible to the hosts, so they can check for updates regularly and, if there is an update, download it to that secondary partition.
A third option for an output is an AMI, an Amazon Machine Image. This is what we use at AWS to run Bottlerocket on actual hosts: the build creates an AMI, publishes it to a particular region, and when you provision a host, it uses that AMI as its disk image. And I've got some questions. Is Bottlerocket the underlying OS for Fargate? No, it is not. The underlying host for Fargate is essentially a lightweight VM running on Firecracker; if you want to look that up, search for AWS Firecracker and you'll find more details. It's somewhat similar in concept, designed to be very lightweight, stripped down, and secure, but Bottlerocket is more purpose-built for the task of just running containers, so it's a little different. Now, on tooling: some of the tools we need for the build process are things like Cargo. Like I said, all of our first-party tooling is written in Rust, so we leverage the Cargo package manager that comes along with it, and it handles a lot more than just our packages. We essentially use it as a dependency solver for our first- and third-party packages, so it handles much of the orchestration of the build process on our behalf and invokes other tools to handle different components as each portion of the build executes. I'll explain that more in a moment. We're also using RPM. As I said earlier, we don't install the RPM package manager on the system itself; we use RPM spec files to identify each package you want to include in your variant, along with its dependencies and its source code. The idea is that we download all of that source code locally during the build process. When you execute a cargo make, it downloads all those requirements and builds an RPM from them, so it's not pulling from a typical package-manager location; it pulls the source and compiles it locally on your machine, then caches the result on the build machine, so subsequent builds can leverage that cache. Then there's the Bottlerocket SDK, which includes all the tools we need to build the toolchain and actually produce the output; this is essentially a Docker image. And like I said, we use Docker throughout the whole stack to produce the exported image. There's a question now about whether Bottlerocket is being used in production, and yes, it is. We've got a number of partners and customers using it safely in production on EKS. ECS support is in preview right now, so if you want to test that, by all means give the preview a try, and as soon as it goes GA I'd recommend it for production workloads there too. But for EKS, customers are using it today. The build process itself is pretty straightforward. What you need is a machine capable of actually performing the build, and this is probably the most challenging part: the build process and its artifacts can run in excess of about 80 gigabytes.
So you want to make sure, depending on the variant you're building, that you have enough disk space on the underlying build machine to support that storage requirement. If you're using something like a Cloud9 environment, you'll probably want to add disk space, because I think those start with 20 gigabytes, and you'll blow through that pretty quickly. You also want to consider that the build is pretty resource-intensive, but it scales well with additional cores. We've got build environments that scale up to 32-plus cores, and builds on those systems typically take about 12 minutes. On my MacBook, which has four cores, it took about three hours to go through. So it requires a couple of libraries and tools, as I mentioned, and some patience: if you trigger a build, I recommend strong coffee and some kind of hobby to pass the time. If you want to get into needlepoint or crochet, maybe look up some Star Wars patterns and have some fun, because it's going to take a while. The way this works is pretty straightforward. You start with a simple Cargo command; it's similar to make, but we're using cargo make to handle the make process, so you trigger builds with that. There are additional flags you can specify: if you want to build a specific variant, or you're managing multiple variants and want, say, a specific version of the EKS variant, you can specify that variant. And like I said, we support multiple architectures, so if you want to run this on, say, a Graviton or another ARM64 processor on AWS, you can build and compile a variant for that by specifying the architecture you want to target. At the core of cargo make is a Makefile: a TOML file that specifies all the environment variables and lists out the tasks and build dependencies needed to execute the build. It evaluates this, sets up those dependencies and some specific paths, and then starts building out the dependencies. If a dependency was built previously and is cached, it leverages the cached version; if there's a delta between what was cached and what's available today, it of course grabs a new version and rebuilds from that. You can even invalidate the cache yourself: there are arguments you can provide on the command line to make sure it grabs a fresh copy.
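To make that concrete, here's a minimal sketch of triggering a build from the root of the source tree. The variant and architecture values are just examples, and these environment knobs are the ones I've seen documented; check the Makefile.toml in your checkout:

```sh
# Build the default variant for the default architecture.
cargo make

# Build a specific variant, cross-compiled for ARM64; BUILDSYS_VARIANT
# and BUILDSYS_ARCH are the variables the Makefile.toml exposes.
cargo make -e BUILDSYS_VARIANT=aws-k8s-1.19 -e BUILDSYS_ARCH=aarch64
```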
Once the build is triggered, we use a feature of each package in the project known as build.rs. This is a Rust script that sits at the base of every package, in the same directory as its Cargo.toml file, and it's compiled and executed before Cargo builds the package in which the build was invoked. This gets a little convoluted, but the idea is that there are two things being built here: the variant, which we triggered through our cargo make, and the packages, which are the dependencies. Once we've triggered the variant build, it goes through and finds all the dependencies. The build.rs script itself is pretty simple: it specifies one of two things, build-variant or build-package, and it uses a tool called buildsys to do that. That's all the build.rs file does. What buildsys actually does is walk the Bottlerocket tree, identify all the packages that need to be built, and spin up a Docker build for each one using the Bottlerocket SDK. It leverages the SDK to build the RPM package for that dependency, then copies the desired artifact out of the container onto local disk, into a particular directory that's used later when we actually build the variant. This may sound confusing, but essentially it goes one by one through all the dependencies, builds each one as an RPM from source, puts the output in that directory, and moves on to the next. It leverages each package's Cargo.toml to identify any additional dependencies that package needs, and this can get pretty nested, which is why I say it can take quite a bit of time: if a package has other dependencies, the build spiders through each of them, and each time it does, it spins up another build process, builds it, and puts it in the output directory. For example, take our EKS variant, which runs Kubernetes 1.19. In order for the finished host to connect, imagine we've gone through the build process and have our image, certain configuration needs to be on there for that host to communicate with the EKS control plane, the Kubernetes masters. So we have the aws-iam-authenticator; we've got the CNI and CNI plugins so the networking can connect; we have the specific kernel we want to build; and we have the Kubernetes version we want as part of that variant. By using this combination of the Cargo.toml files and our build.rs and buildsys applications, this very complex tree of dependencies gets built into a meaningful output that the build can then use to produce our disk image. Once it has gone through those packages and built all the dependencies, it focuses on actually building the variant. What that looks like is a simple docker build command, with specific directories specified for it to work through. (I need to make sure I'm on time.) It takes all the RPMs from the output directory, the artifact directory we've created, and installs them into a disk image that becomes the root file system for our image: it uses rpm2img to create that file system and makes sure everything that needs to be installed for the variant is installed and configured from those packages.
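For reference, those per-package build.rs files follow roughly this pattern. This is a sketch from memory, not the exact file in the tree:

```rust
// Sketch of a package-level build.rs: all it does is delegate to
// buildsys, which runs the real build inside the Bottlerocket SDK
// container and copies the RPM artifact to the shared output directory.
use std::process::{exit, Command};

fn main() {
    let status = Command::new("buildsys")
        .arg("build-package") // variant-level crates use "build-variant"
        .status()
        .expect("failed to execute buildsys");
    if !status.success() {
        exit(status.code().unwrap_or(1));
    }
}
```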
And then it outputs a final disk image to the build/images directory. At that point you have your completed image, but it's just a disk image, so you need to put it to use. For something running in AWS, like I said, we want to convert that disk image into an AMI, and we have a cargo make target that does that: it takes the disk image, converts it to an AMI, registers it in a specific region that we specify, and can make it public so hosts can use it. It's pretty straightforward, and it's optional. Once it's published and out there, we can go through the update process, and we create a repository to handle the updates. This portion is also somewhat optional. If you want to run in-place updates, say, leveraging them along with our update operator for Kubernetes, which keeps those hosts up to date, you can publish a repo. We publish the repos for the variants we manage, but if you create your own variant, you can publish your own. It just requires some metadata, some cryptographic hashes, and some signing keys, so that when a host connects to that repo, it does so securely and pulls a validated version from the repository you've created. Cargo make will generate this for you, and then you basically just copy the artifacts to that repository wherever you want it stored, whether that's S3 or elsewhere. It just needs to be exposed in a way the hosts can actually reach; if it's on an internal network, as long as the hosts can reach it, you should be fine. But again, this is optional; you don't have to do in-place upgrades. One of the beauties of Bottlerocket is that updates are atomic, so as an alternative, instead of updating in place, you can do wholesale host replacement: spin up a number of new hosts running the latest version and simply replace the existing ones. If you wanted to run your operations that way, you could even build a variant that doesn't include the secondary disk partition or the update process at all. So I say this is optional because we want to provide a mechanism where, if you want to handle updates yourself, you can do so in a secure way that keeps the operating system and hosts on the latest version at all times; but if you want to run a different process, you have that flexibility. (There are a couple of questions; I'll get to those in a little bit.) At this point, you basically have everything you need to run the operating system.
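Pulling those optional publishing pieces together, here's a rough sketch. The exact cargo make targets and variables have shifted across releases, so treat these names as illustrative and check the Makefile.toml in your checkout:

```sh
# Register the freshly built disk image as an AMI in a target region.
cargo make -e PUBLISH_REGIONS=us-west-2 ami

# Build a TUF update repository (signed metadata plus targets) that you
# then copy to S3 or any endpoint your hosts can reach.
cargo make repo
```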
But there are ways you might want to extend it. I've shown you how to build the variants, but the idea is that not everything needs to be baked into the host OS. In fact, we'd prefer that you build as minimally into the OS as possible and use one of these extension methods to add features and capabilities as needed. One of the obvious ways you might need to extend Bottlerocket is with additional permissions. Like I said at the beginning, we have the SELinux policy in place, which defines the transition rules for the container runtimes, and we've set it to a very restrictive mode by default. There are three labels involved in elevating those permissions. Everything running as an ordinary workload on top of the host runs as what we call container_t; this is the default for ordinary containers and is essentially a walled garden separated from the underlying host. Anything that requires additional privileges, say a container running in privileged mode, or the control host container, to use an example, runs as control_t, which gives it elevated privileges. You need to be careful here, because if you think about it, the control host running on that instance provides the API, and because it can add additional host containers to the host, there are ways to leverage control_t to get super_t access, which is superpowered: essentially unlimited access to the underlying host OS. So if you're going down this path, be very specific about why you're doing it, and have processes in place to ensure you're doing it safely, without introducing security holes along the way. If you have, say, a solution that needs tighter access to do runtime scanning or file-access scanning, something that requires a few more permissions, do it in a way that's cognizant of defense in depth, maintaining that security while providing the access. The way this typically works for something like Kubernetes or ECS is that you specify these labels in your pod spec or task definition. For a Kubernetes pod spec, you come in and specify the security context and the SELinux options you want: the user, the role, the type, and even the level. And you want to be very careful here, because, like I said, what you provide can be used to gain additional access. If you want to dive deeper into how this works, we have a guide that goes more in depth on the options and guidance we have around granting your workloads additional privileges.
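Roughly, that pod-spec example looks like this. This is a sketch; the pod name and image are hypothetical, and super_t in particular should be requested only with good reason:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-agent          # hypothetical
spec:
  containers:
    - name: agent
      image: example.com/agent:latest   # hypothetical
      securityContext:
        seLinuxOptions:
          user: system_u
          role: system_r
          type: super_t   # or control_t for a lesser elevation
          level: s0
```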
I would say, like I said, use elevated privileges sparingly; if you want additional features or capabilities, I'd target something like a host or bootstrap container instead (and we'll dive into eBPF and other options in a moment). The idea here is that you can add additional host containers to the host itself. So if you need a workload running with tighter access, say you're an ISV partner or solution provider and you need an agent running on that system at all times, you can potentially add that as a host container. You can give that host container privileges so that it runs at boot and has access to that system. There are things to be mindful of with host containers: they're not orchestrated, so they only stop and start according to whether their enabled flag has been set. And while they aren't orchestrated, they are managed and monitored on the host, so if they stop, they can be started again. They run, like I said, in that second instance of containerd, so they're in a separate walled-off area of the OS. And they're not updated automatically: imagine a container that's downloaded and run on the host in another containerd or Docker environment; if you have updates to your host container, you'll have to disable the container, download the latest version, and then enable it again, so there's an update process that's managed separately. And if you set one to be superpowered, it essentially has unlimited access to the host, so be mindful of that. Recently, we added the concept of bootstrap containers, which are designed to help bootstrap the host before other services start. Unlike normal host containers, they can't be given superpowered access, but they do have access to the underlying root file system at /.bottlerocket/rootfs, so be mindful of that. They start and run after systemd's configured.target unit is active; once that's running, the bootstrap containers are executed, and they don't run in a deterministic order. The boot process waits for them to execute, and if one exits with a non-zero value and is marked essential, it will actually stop the boot. Here's what it looks like when you want to add a host container: it's a simple API call to the control container that's on the host. You specify the source URL, specify whether you want it enabled (it can be downloaded without being enabled), and then decide whether you want it superpowered. It's similar with bootstrap containers: you specify the URL, decide whether it runs once, runs always, or is off, and then decide whether it's essential. If it's essential and something errors out, it prevents that host from booting up, because something in the provisioning has failed.
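As a sketch of those API calls: the container names and image URIs here are hypothetical, and the apiclient set shorthand is from more recent releases (older ones use raw PATCH requests against /settings):

```sh
# Add and enable an extra host container through the control
# container's API.
apiclient set \
  host-containers.my-agent.source=example.com/my-agent:1.0 \
  host-containers.my-agent.enabled=true \
  host-containers.my-agent.superpowered=false

# Bootstrap containers take a mode (off | once | always) instead of an
# enabled flag, plus the essential flag that halts boot on failure.
apiclient set \
  bootstrap-containers.setup-node.source=example.com/setup:1.0 \
  bootstrap-containers.setup-node.mode=once \
  bootstrap-containers.setup-node.essential=true
```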
And then, if you want to extend capabilities even further, this is honestly the preferred method. We have the way to grant additional access, and we have the way to add containers to the host that can do things that may not be possible in the OS itself; so if you need additional tooling, rather than installing it in the OS, you can put it in a host container. Those should be used sparingly too, though a little less sparingly. But this next approach is the preferred one: if you want to extend at the kernel level, I'd start by looking at kernel modules, and within that I'd look at eBPF first. We're using a version of the Linux kernel that supports eBPF out of the box. From just the containers running in the orchestrated environment, imagine you're running an EKS cluster with a number of these Bottlerocket hosts in it, you can run your agent as a DaemonSet: a specific application with an eBPF connection to the underlying OS, written so that it leverages eBPF to handle any communication with the underlying kernel, using the security mechanisms eBPF provides. This is honestly the preferred way to do this. The only caveat is that the file system is still read-only for those workloads, so anything that needs to write to disk on the underlying host may not be able to use this option. But from a security perspective, it's the preferred method. We have a number of vendors exploring capabilities this way; I'll use Tigera Calico as an example. We recently did a really good blog post with them showing how to use eBPF to accelerate networking in Bottlerocket, using the kernel for routing via eBPF rather than the standard networking stack. What we can do is limit the need for things like kube-proxy and use the kernel to accelerate packet filtering and even routing of packets to the containers on that host, all using eBPF in a way that's easy to set up, straightforward, and secure. So there are lots of opportunities for very powerful integrations with the underlying OS, in a way that still respects its security model.
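A minimal sketch of that DaemonSet pattern follows. The names and image are hypothetical, and the exact privileges an eBPF agent needs (capabilities, host mounts) depend on the agent:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ebpf-agent                # hypothetical
spec:
  selector:
    matchLabels:
      app: ebpf-agent
  template:
    metadata:
      labels:
        app: ebpf-agent
    spec:
      hostNetwork: true
      containers:
        - name: agent
          image: example.com/ebpf-agent:latest   # hypothetical
          securityContext:
            capabilities:
              add: ["SYS_ADMIN", "SYS_RESOURCE"]  # loading BPF programs
```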
Now let me look through some of these questions, because I know a few have been sitting here for a while. There's a question about how the update repository is exposed: whether it's visible from outside, or only to permitted hosts. The idea is that if you have a repo you want to expose, you can expose it to just those particular hosts and set up a specific network path. The point is that if the host needs to reach that repository, it has to be able to communicate with it, whether that's the way we run it for the variants we manage, an S3 bucket with a CloudFront distribution in front of it, or just a private bucket. Store it somewhere those hosts can actually reach; it can be private to just those hosts and doesn't need to be exposed to the world, unless that's the point of the variant you're making. If you wanted to build a variant for, say, GCP or Azure, you'd probably want to expose that repository the way we've done it for AWS, so those hosts can reach it. Next: are these generated images standard ISO images that could be installed on a host offline? I don't have a really good answer for this. I believe we're using the OVF or VMDK format for the disk image, so that's one where I'll have to ask the team for a little more clarity on what that process should look like. At the end of the presentation I've got my Twitter handle, so if there's a question along these lines that I can't quite answer on the call, by all means hit me up on Twitter, and the team and I will be happy to follow up with additional answers and conversation. So that one I'll have to get back to you on; please feel free to hit me up at boring geek on Twitter. Next: when running Bottlerocket OS on EC2, in order to start the admin container for troubleshooting, we connect to the EC2 instance via SSM; is there a way to start the admin container when running Bottlerocket OS outside AWS, without access to SSM? Yes, essentially you can activate it two ways. You can activate it via the control container, using the API itself, or you can use user data: when the host is spun up, the user data can specify whether the admin container is activated. So you don't need SSM outside of AWS; you can activate it as part of the provisioning scripts and user data for the host itself. Regarding eBPF: are the kernel headers included in Bottlerocket, or is the kernel compiled with the headers? I don't have the answer to this one, honestly. I believe the headers are included, but I will validate that; Todor, if you want to hit me up at boring geek on Twitter, I can validate that for you. In fact, I'm going to switch over to the Q&A slide, because we might as well start diving into Q&A; we've got about seven minutes left. Is there a timeline for allowing Bottlerocket via managed node groups, or a timeline for GPU support? I don't have a public timeline for GPU support, and I know managed node groups is on target; it's one of the goals we want to get out there. And to be very clear, it's something you can do today as well: in a managed node group, you can specify a custom AMI, so you can specify the Bottlerocket AMI, and there's some config for that. If you hit me up on Twitter, I can share a GitHub project from our former evangelist that shows how to do this: you basically specify a custom AMI for Bottlerocket that leverages the same EKS variant, and it spins up in a managed node group. We're working to make that much more streamlined and native; it'll essentially be part of the dropdown for managed node groups. That's the goal. I don't have a timeline I can share for that, unfortunately, but it's a very requested feature and something we want to make happen. As for GPU support, I don't have a timeline either, but if you want to be part of this... oh, and Ben is responding on the call. Ben is one of our engineers, a principal engineer on the project. If you want a specific feature or need a specific capability, this is an open source project, so I want to make sure you all understand: if you want to participate, submit issues, submit feature requests, or even submit pull requests, you can go directly to the project page at github.com/bottlerocket-os and start participating. You can plus-one features, so if you need GPU support, definitely go plus-one that feature request.
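On the user-data route from the SSM question a moment ago, here's a minimal sketch of what that provisioning snippet looks like:

```toml
# Instance user data: turn the admin container on at boot, for
# environments where you can't reach the host via SSM.
[settings.host-containers.admin]
enabled = true
```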
Ben has been very clear regarding the kernel headers: the kernel is compiled with the headers, and we also make the kernel-devel files available under /usr/src/kernels for the kmod use case, so that should be there. Next: I'll be the "why not this other tool" guy; why did you make your own build system instead of using an available one like Yocto, for example, where a lot of the caching and RPM packaging is already done? I don't have the full history on that one, Carlos, unfortunately. I know we made selections around Cargo and Rust because of the language support and the security-first capabilities of Rust, and I think simplicity is why they went with it: everything lives in these Makefiles and TOML files, and by using Cargo we could wire together all these disparate capabilities pretty cleanly, in a way that works well. But we are open to suggestions, so if there's a better way of doing things, by all means reach out through the project, and if you want to submit a pull request and offer suggestions, we'd welcome the participation gladly. In fact, we've got a number of companies working with us directly on Bottlerocket and building support for it: technology partners who are providing features and capabilities that people are leveraging today or need. So, going back to this slide again: if you want to extend this capability and participate, by all means, we welcome you in. Like I said at the beginning of the call, we want to create additional variants to support additional workloads, orchestrators, and even cloud environments. That is the long-term goal: we want Bottlerocket to be an OS you can run for all your container workloads, wherever they are. If you want to help with that, we would love to have you be a part of it, and if you want to get started with Bottlerocket, you can go directly to the project page and start getting your hands dirty with it today. With that, are there any other questions, any last-minute thoughts? We've got about two more minutes before we have to call it. One just came in: "Bottlerocket OS would never be a full-blown OS." Well, technically it is a full-blown OS, so we should be clear on that. Bottlerocket is a purpose-built OS, but it is a full Linux distribution: it comes with all the necessary components, binaries, and libraries you need to run your container-based workloads. The difference is that it's not put together the same way as a general-purpose OS. A general-purpose OS includes everything you'd need to run basically any workload; you can compile and build whatever you need for whatever workload you have. The point of Bottlerocket is to tighten up security, minimize the footprint, and focus on running container-based workloads for that specific purpose. If you wanted to make it a desktop OS, I'd imagine a certain degree of work, because you'd have to build in support for something like GNOME or another desktop environment, and I'd imagine a lot of that requires binaries and libraries we've deliberately removed from the system. So it's not something I'd necessarily recommend for that kind of purpose;
for a desktop environment, you typically want something more general-purpose. Next: are kernel versions hard-coded, or can they be chosen per variant? When you saw my code example come up earlier, Carlos, regarding the EKS variant we built, we had a very specific version of Kubernetes and a specific kernel version in that particular variant. If you wanted to build a variant with a different kernel version, it would just be a matter of specifying how that gets built into the variant, and then working through any dependency management required for that specific kernel. It is possible to do. And if you think about how we've built the Kubernetes and EKS variants so far: as we've evolved and upgraded the kernel, we've done that as part of the variant along the way. So there's support for specifying that version; you just have to build it into your specific variant. There are examples in the code repository of these variants and their Makefiles, so you can go through and see how that process works if you want to dive a little deeper. Can you post your LinkedIn and Twitter ID to get more updates on Bottlerocket? I don't have my LinkedIn handy, but it's slash Curtis Ricci, and my Twitter handle is at boring geek. Our project page is at github.com/bottlerocket-os. If you want to participate or get more information, by all means hit me up; the team and I are very happy to engage anybody and answer questions, so feel free to reach out whenever you want. And I think with that, we're actually at time. Right. Thank you so much to Curtis for his time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and we hope you're able to join us for future webinars. Have a wonderful day.