All right, we're going to go ahead and get started. I'd like to thank everyone who's joining us today for the webinar, Immutable Infrastructure in the Age of Kubernetes. I'm Taylor Wagner, the operations analyst here at CNCF, and I will be moderating today's webinar. We would like to welcome our presenter today, Tim Gerla, the CEO of Talos Systems. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in the Q&A and we'll get to them toward the end of the webinar. A reminder that we want to use the Q&A box and not the chat window, so please direct all questions to the Q&A. This is an official webinar of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat or to the questions that would be in violation of the code of conduct; basically, just be respectful of your fellow participants and our presenter today. One last thing: the recording and slides will be posted on the CNCF webinar page at cncf.io/webinars later today. With that, I'll hand it over to Tim to kick off today's presentation.

Thanks, Taylor. I appreciate it. Thanks for joining, everybody. My name is Tim, and I'm going to be talking about immutable infrastructure in Kubernetes: a little bit of the history of what immutable infrastructure is and how it's evolved from the early days. First, a little bit about myself. I've been working in infrastructure software for most of my career. I spent some time early on at a company called rPath, where we were doing some interesting things around automatically generating system images based on applications and their dependencies. I think some of that work paved the way for my interest in customized Linux operating systems, containers, and virtualization. I spent some time at a company called Eucalyptus, where we were building an on-premises implementation of Amazon's web services. I'm sure some of you remember Eucalyptus, and probably OpenStack, the spiritual successor, so to speak, of Eucalyptus. From there I went to Ansible, where I was one of the co-founders, and we ended up selling the company to Red Hat. After that, I co-founded a company called Talos Systems, and I'll talk a little bit more about Talos later on in the slides. I live in North Carolina with my two dogs and my wife and a bunch of trees in the backyard. So that's me.

Enough about me, though. Let's talk about immutable infrastructure. First of all, what is it? Let's take a step back and start with some of the traditional methods of systems management from the 1990s and early 2000s, and what systems management looked like back then, at least in my experience. There were some exceptions, I'm sure, but the general rule was that you had your identically configured systems running in a data center, sitting in a rack: your web servers, your app servers, your load balancers, and so on. These probably would have been configured by hand. A sysadmin would log in, run commands, update packages, and maintain these machines manually, on a per-machine basis. Operators would apply security fixes, change configuration, and do what they could to keep these systems consistent across the fleet.
Over time, though, this manual approach usually led to divergence between systems. You'd end up with slightly different packages on different systems, and slight configuration differences between systems. The operators of these machines were well-meaning, working to solve problems on a case-by-case basis, fixing each system as necessary to keep it running. If you're old enough like me, I'm sure you remember operating some of these machines this way. The end result of this kind of process was a series of so-called snowflakes: artisanally handcrafted systems that were built up over time to solve a particular problem. They were difficult to replicate if something happened. If a disk crashed, or if an operator moved to a different company, it was sometimes rather difficult to get back to a known state, a situation where you felt like you could repeatedly recreate whatever system you had in your data center. What did that give you? A fairly brittle infrastructure. If something changed, if a system went down, if a disk crashed, if an operator left, you didn't really have a path forward to get back to a known good state.

So back to immutable infrastructure: what exactly does that mean? My definition is that it's a system that does not change once it's been deployed. If you need to patch it, upgrade it, or change its on-disk configuration, you shut it down and you launch a new one. You don't make changes to that system midstream, and you don't apply security patches on a per-package, per-file, or per-application basis. Instead, you shut it all the way down, you throw it away, and you start up a new one. What's the benefit of this model? I've identified a few here, and there are probably others. It makes a lot of things related to the deployment and management of the system easier. If you're dealing with immutable infrastructure, you can basically eliminate configuration drift: if nobody is modifying these systems as they're running in place, you have a much greater assurance that the system as it was deployed is the one that's currently running. Along with that, you gain improved consistency between your development stages. If you have a strict workflow and a strict process, then at each step along the way, whether it's the developer on a desktop or laptop, into QA, into production, ideally you'll be dealing with the exact same system image, with the exact same bits, and within reason the same configuration. What you also get from an immutable deployment methodology is much easier access to more sophisticated mechanisms like rollbacks and A/B testing of environments, and it makes staging upgrades easier, because instead of modifying a system in place you're just throwing it away and launching a new one. You do need a lot of infrastructure for this to happen. It wasn't really possible back in the 90s and 2000s, before virtualization came onto the scene and gave you a lot more agility and flexibility around how you deploy systems, how you throw away systems, and how you launch new ones. But of course, nothing is free. Immutable systems do have their downsides, and in the early days of immutable infrastructure there were some problems with the approach, some issues and some things that were difficult. Primarily, maintaining that golden image was a bit of a chore.
You usually had to be dealing with an entire operating system, and you had maybe some rudimentary tools to make that happen. The tools that we were building at rPath were intended to solve some of these problems. But what you ended up with was a system where the OS and the application were tightly coupled; they were inextricably linked. If you were deploying these to a virtualization platform, as opposed to a container platform, your application images had to contain basically the entire operating system and everything else involved.

I wanted to talk a little bit more about the history of immutable environments and immutable systems. The first hint of immutable systems, in my mind, came from Red Hat about 13 years ago, in 2006, when Red Hat announced a project called Stateless Linux. Stateless Linux was intended to solve the so-called snowflake problem by using an immutable, read-only file system for the root partition of each class of machines. By class, I mean web servers, database servers, app servers, and so on. Red Hat developed a whole methodology and put some engineering work into making this happen. When you needed to make a change to a system's configuration or upgrade a package, that change would be applied to the golden image and rolled out via different mechanisms to the appropriate machines. This eliminated the snowflake problem, and it ensured that the configuration and versions of the underlying software did not diverge between systems, between deployments, between dev, test, and production, and so on. For various reasons, and I'm not sure of all of them, Stateless Linux's specific implementation of immutable infrastructure didn't really catch on. Instead, I think the industry moved more toward tools like Puppet, Chef, and Ansible: things that allowed you to automate the process of logging in, applying security updates, and making configuration changes in a declarative way, or even in an imperative way. You ended up with the same result. You had a common state deployed across your infrastructure, and your configuration was consistent across your fleet of systems, just using a different technique: playbooks and manifests and so on to describe the state of your system, and a tool that went out there and made it so. The benefits of either approach here are pretty clear. You can eliminate that configuration drift, you end up with more similarity between test and dev, and so on. So I like to look at the Puppet, Chef, and Ansible style of tools as a diversion that the industry took on the way to getting to real immutable infrastructure.

When Amazon rolled out EC2, their elastic compute platform, and as virtualization gained favor, gained performance, and began to be deployed in people's infrastructure, those advances made the idea of a stateless system more approachable. I think Netflix really helped popularize this approach as they went all in on the Amazon cloud platform. They popularized the concept of the blue-green deployment, a methodology that involves deploying an entirely new group of servers, which in a pre-cloud environment would be very difficult and very expensive: you basically have to double your data center.
But since you're paying by the hour, paying by the machine in the cloud, you could do that. You could stand up this entirely new group of green servers, and instead of logging into the live blue servers and updating them, you'd stand up your green system, test it, make sure it's good to go, and then flip your load balancer over to talk to the new servers. This means you could do a lot of upfront testing and validation against the green stack without impacting users, and once that testing and validation passed, the traffic would be shifted to the green stack and the blue stack would be decommissioned. Using these tools and these concepts, with a series of golden images representing your applications and your components, Netflix and others got quite a bit closer to the goal of having these systems be fully immutable and seeing all the benefits provided by that.

There were still some downsides, as I mentioned on a previous slide. Building the golden images was a difficult process. Depending on the particular cloud you were working on, the process for getting those new images, those AMIs, up to the cloud platform involved quite a bit of work. It was slow: you're pushing hundreds of megabytes or maybe gigabytes up to the cloud over your 2007 internet connection, et cetera. The other thing you had in this situation was that the application and the operating system were still tightly linked. You couldn't really separate those two parts if you needed to move one piece forward or keep one piece back; you had to worry about the dependency closures and so on. So there were still some challenges. A big organization with a lot of resources like Netflix could certainly take advantage of these concepts, but it might have been more challenging for a smaller organization with fewer resources to really see those kinds of benefits.

Moving ahead in the timeline, after Amazon EC2 came out and some of these new concepts were being deployed, tools like Packer from HashiCorp came on the scene, and Docker as well. Both of these systems had ways to generate these golden images based on a descriptor. In the Docker case, of course, it would be a Dockerfile; Packer had a mechanism to assemble these systems and output them in a certain format. So we started to see some tooling emerge that made the image build process a little bit less troublesome. There were improvements in the cloud APIs for receiving these new images, and it became easier to iterate, to update, to track changes, and to update existing platforms. All of this was at the beginning of containers taking over the industry. You still had some challenges: how do you run these containers? Who is responsible for making sure that the right containers are running at the right time? Where are they running in your infrastructure? What happens if they go away? What happens if a node crashes? What happens if you lose access to one of your systems? From that problem, that's where Kubernetes comes in. Kubernetes began as a container orchestration platform to solve some of these problems: again, to shift the burden away from a manual, step-by-step process and move it toward a declarative, desired-state configuration. So Kubernetes really made this container workflow possible.
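To make that last idea concrete, here is a minimal sketch of what the blue-green cutover described earlier looks like once you express it in Kubernetes terms. Assume, purely for illustration (none of these names come from the talk), that both stacks are already running as Deployments labelled track=blue and track=green, and that a Service named "web" routes traffic to whichever label it selects. Flipping the load balancer then becomes one small declarative change, shown here with the official Kubernetes Python client:

# Hedged, illustrative sketch: flip a Service from the blue stack to the
# green stack. Assumes a Service "web" in namespace "default" whose pods
# carry an app=web label plus a track=blue / track=green label; these names
# are examples, not anything defined by the webinar or by Talos.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a cluster
core = client.CoreV1Api()

# Declare the desired routing; Kubernetes applies it. This is the declarative
# equivalent of "pointing the load balancer at the green servers".
core.patch_namespaced_service(
    name="web",
    namespace="default",
    body={"spec": {"selector": {"app": "web", "track": "green"}}},
)

In practice you would check that the green Deployment is fully ready before the flip, and keep the blue Deployment around until you are confident, which is exactly what makes the rollback story so cheap.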
Kubernetes began to decouple the operating system from the applications. Containers allowed you to really pare down what goes into your systems and to separate the application components from the operating system components. Kubernetes took the burden off of the administrator to decide where and how these application containers should run. Things like storage abstractions and network abstractions made it easier to separate the ephemeral storage that you might need for local state for a particular application from persistent storage, which is of course the storage that holds your application data, your user data, your database storage, and so on.

Going back to the Ansible, Puppet, and Chef style of tools: as containers and Kubernetes and Dockerfiles and so on gained prominence in the industry, I think those imperative and declarative configuration management tools became less important, because you're doing less work on the live, running systems and you're pushing the changes earlier and earlier in the development cycle. More of that change, more of that image churn, those cycles and updates and so on, happens earlier in the development cycle. And of course, that also continued the decoupling of app and OS.

So, talking about today: we've got Kubernetes, we've got container image build tools, we've got a lot of interesting componentry out there at the application layer. I'm sure a lot of you, if you're deploying your applications on Kubernetes, have seen and taken advantage of some of the concepts of immutable systems for your applications, and hopefully you've seen some advantages in terms of agility, time to deploy, uptime, and so forth. What I think hasn't happened very much is change at the host level: the machines that run Kubernetes, the machines that run your container orchestration tools. What if you could apply those same immutability concepts to your Kubernetes host environment? I think you would begin to see some of those same advantages. You would see more stability, you'd see fewer handcrafted, hand-configured systems, you'd be able to upgrade faster, and you'd be able to roll back if you needed to.

A lot of this, in terms of running your Kubernetes infrastructure entirely in an immutable way, has been one of the design goals for Talos, the project I'm involved with. It's a new operating system designed specifically to host Kubernetes clusters and to be a container host. I'll talk a little bit about Talos, and then about how these concepts of immutability apply to your Kubernetes environment. One of the design goals of Talos has certainly been for these systems to be immutable and ephemeral, meaning you don't change them once they're launched and you can throw them away at any time. Talos is an open source operating system based on Linux. We've been in development for just about three years, and last year we launched a company to fund its development and to build out a services and support organization around this new method of hosting Kubernetes. Talos is highly inspired by CoreOS and a couple of other platforms that have come before, and I'll talk a little bit about the architecture of Talos.
Again, it's an open source OS based on Linux. You can see that the Linux kernel is at the base of each of these components, and the goal has really been to build an operating system that makes sense in a modern distributed environment. We've made a number of design decisions in that direction, and some of them, I feel, are a little bit radical. We have actually removed console access. There's no SSH. There's no shell in the system. Instead, everything is managed via an API. As you can see in the diagram, we've got the osctl command-line tool, which communicates with the API, and you can also talk to the API directly. We've tried to build a solid, modern API for the OS management tasks that you need to do.

A little bit more about Talos' architecture. Just like Kubernetes, it's split up into control plane nodes and workers, and Linux, Talos, and the kubelet run on all of them. The system has been designed from the beginning to be immutable, so we actually launch into a squashfs root filesystem. It runs out of RAM and it never touches disk. Kubernetes does need some ephemeral storage, so we do have a provision for that. What this means, from a security perspective, is that even a dedicated attacker who manages to access the system can't change that root filesystem. They can't log in, they can't remount it with write access, et cetera. Along with this, the fact that there's no shell, no SSH server, and no other way to obtain console access to these Talos hosts means that the system is very difficult to modify while running. I would imagine that if you have access to the data center itself, if you could roll a crash cart up to these systems, I'm sure you could find a way, but we think we've added a few layers of safety and security there from an OS perspective. So the immutability of the system gives you some security benefits, and as we talked about earlier, you also get the assurance that the configuration is what you tested with. You know that there are going to be no unexpected changes, whether from well-intentioned operators who might make a mistake, or make a change and not tell the rest of the team, and you're also more secure against nefarious intruders who might exploit a security problem in various ways.

I could talk more about Talos, and I'm happy to answer questions; I'll leave some of that for the Q&A afterwards. But I did want to walk through the upgrade process, as an illustration of one of the important workflows related to an immutable system: how do you update it, how do you upgrade it, how do you change it when it does need to be changed? This is just one example. It's a workflow that we have implemented as a Kubernetes controller, an operator under the hood, so this is automated. I think you could take a similar approach to handle your applications; there are a lot of good examples out there on the internet. But let me just walk through this little workflow here and talk about what it takes for Talos to upgrade a node. Again, it's similar to how you'd upgrade your applications, but this is for the host environment that's running Kubernetes. The process is the same no matter where you're running Talos and Kubernetes, whether on cloud, virtualization, or bare metal, and it's all handled automatically by our upgrade controller.
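Before the Talos-specific walkthrough below, here is a rough sketch of the generic Kubernetes-side step that this kind of upgrade flow begins with: cordoning the node and evicting its pods. This is not the actual Talos upgrade controller, just an illustration using the official Kubernetes Python client; the node name and the eviction details are assumptions made for the example.

# Illustrative "cordon and drain" sketch with the Kubernetes Python client
# (pip install kubernetes). A real upgrade controller does considerably more;
# the node name below is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
node_name = "worker-1"

# 1. Cordon: mark the node unschedulable so no new pods land on it.
core.patch_node(node_name, {"spec": {"unschedulable": True}})

# 2. Drain: evict the pods currently running on that node.
pods = core.list_pod_for_all_namespaces(
    field_selector=f"spec.nodeName={node_name}").items
for pod in pods:
    # Skip DaemonSet-managed pods, as kubectl drain does by default.
    owners = pod.metadata.owner_references or []
    if any(o.kind == "DaemonSet" for o in owners):
        continue
    # V1Eviction in recent client versions (older clients use V1beta1Eviction).
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name, namespace=pod.metadata.namespace))
    core.create_namespaced_pod_eviction(
        name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction)

The Eviction API will refuse evictions that would violate a PodDisruptionBudget, so a real controller also has to retry until the node is actually empty before it is safe to stop the kubelet and rewrite the disk.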
The process starts with someone, or something, making an API request to perform the upgrade. That API call comes in, and we cordon and drain the node. If you're familiar with load balancers and how that flow works, this should be familiar: we stop any new requests coming in, and we do what we can to let any existing requests finish. Once the node is cordoned and drained, we stop the kubelet and remove all the pods on that system. We verify the upgrade path, and we do some work within etcd to make sure that all the members are set up and started appropriately. Then, for the particular node we're upgrading, we unmount the ephemeral disk, reset the partition table, and run the installer. In the Talos case, the install is pretty quick; it's a fairly simple setup. I don't have a lot of technical detail in here, but if you'd like to know more, feel free to join us in Slack afterwards, or throw a question into the Q&A and I'll do my best to answer. Once the install is complete, we reboot the node, verify its health, uncordon it, and bring that machine back up, hopefully ready for workloads. So that's the general upgrade path that applies to Talos and to Kubernetes. I think it could also apply to your applications, and it might actually be quite a bit simpler for your apps.

If you look at the history of immutable infrastructure, the styles, technologies, and fashions change throughout the years. I like to look at each of these iterations and evolutions in the way people manage systems as incremental progress. We're all trying to make computing a little bit safer, a little bit faster, a little bit stronger, more resilient to problems, more resilient to attackers, and to improve security. I feel like every little thing you do to get a little bit closer to a better system is worthwhile. So I'm hoping that by using the concepts of immutable infrastructure and immutable systems for your container hosts, for your Kubernetes hosts, we can make computing a little bit safer, a little bit faster, and a little bit more secure.

I'd be happy to move over to Q&A now. If you'd like to learn more about Talos or what we're doing as it relates to Kubernetes, maybe start with the last link on this slide; our documentation page will walk you through it. You can see the source code on GitHub, and we would love to have you join our Slack if you'd like to have a more in-depth discussion. We've got the creators of the system and our engineers there, and we'd be happy to talk. The project has been around for about two and a half years, as I said, and we would love to have more contributors and more users poking at the system and finding ways to improve it. So I hope to see you there. I will stop here and switch over to Q&A, if I can get this panel to show up... and it looks like it's not actually going to show up. So Taylor, I wonder if you wouldn't mind reading me off a question or two, and if I can answer it, I will. If not, I will take a pass.

Of course. Okay, so the first question is from Samuel: is there already a plan to have Talos OS working out of the box with other CNCF projects like kube-router and Rook? That would be awesome.

Sure. Good question.
So I don't know the specific technical details of how to connect those other CNCF projects to Talos, but what I can tell you is that our architectural approach is such that we don't want to reinvent the wheel if we can avoid it. If there's a good storage platform or there are good networking platforms out there, we will work and integrate with those. We do have plans to put together a simple plugin mechanism so you'll be able to have these things out of the box if you so choose, but today you can basically run most of that stuff as containers alongside the rest of your infrastructure, managed by Kubernetes and by Talos.

Okay, next question: how is Talos OS different from CoreOS?

Sure, good question. I think you'll see a lot of similarities. We were heavily inspired by CoreOS; we thought that CoreOS was moving in the right direction. What you'll see in terms of differences is that we've gone further, in a more radical direction, than CoreOS did. We're not based on any Linux distribution out there, and we're quite a bit more minimal than CoreOS. We've removed SSH and we've removed console access. I believe that CoreOS was a little bit more of a general-purpose operating system; it was suitable for application hosting as well. Today, we're very strictly focused on the host environment, and everything we do in the system is oriented around making your Kubernetes host deployment easier and safer and faster.

Okay, the next question is from David: is Talos HA? Can you run more than one control plane node?

Let's see. I believe the answer is yes, but if you want a definitive answer from our engineers, let's see... I've got a live update from Andrew here. Yes, you can run as many as you like. Oh, hey, I see the questions and answers panel now. Excellent. Okay, yes, you can run those in an HA environment, and I believe we've done our very best not to impose any additional architectural restrictions on your Talos-based environment. Each Kubernetes master is a Talos master, and you can have multiple ones.

All right, next question: can you mix Talos hosts and non-Talos hosts in a Kubernetes cluster?

Oh boy, let's see. I'm probably going to have to wait... okay, I'm getting yeses from the engineering team. Yes. My understanding is that as long as your control plane and the non-Talos hosts are compatible version-wise, et cetera, and you've got the right bootstrap tokens and so on, yes you can. Whether you'd want to long-term, I'm not sure, but yes.

Okay, the questions just keep coming in. Next one from Dimitri: first impressions of AWS Bottlerocket OS?

Yeah, good question. We had heard a rumor that Amazon was building something similar to Talos, and sure enough they are. There are definitely some technical similarities. I think the design of Bottlerocket was somewhat inspired by the design of Talos, and we think it's great validation for us. We're glad that Amazon is building something similar to Talos, because it justifies our existence and the work that we're doing. We like a lot of the technical aspects of the system, and I think that, technologically, we're a little bit further along in terms of our development.
And I know that Amazon is naturally focused on the Amazon componentry. We are very strictly cloud agnostic, and we have support for as many cloud platforms, and as many other sorts of deployment infrastructure, as we can. But yeah, we're looking forward to seeing what the Amazon folks are doing, and if there are good ideas there, hopefully we can share.

Okay, next question: what is the release cycle of Talos? How far is it behind Kubernetes upstream?

Yeah, great question. We've made a commitment to track upstream Kubernetes as closely as possible. As opposed to some of the other more complicated platforms like OpenShift, we're able to be more responsive to the Kubernetes releases. Previously, our release cycle mirrored the Kubernetes release cycle: every three months we'd release something, and it would have the latest version of Kubernetes in it. We're in the process of changing that deployment model and moving to a faster cycle, but I think we'll still maintain that commitment to shipping, within reason, the latest version of Kubernetes that we can. It's a fast-moving project, and our approach might change in two years when things are more mature and further along, but for the time being, we're making a commitment to release as quickly as we can after a Kubernetes release.

Okay, next question: when you cordon a node, the Kubernetes Service might still route traffic to that node, and thus the client can experience some percentage of errors. How do you avoid that?

Let's see, that would be a great question to ask in our Slack channel. I see that Andrew is piping in with a bit of a response, so I'll see if we can get something quick, but if you'd like a longer response, please drop into the Slack and ask us. The architecture there is a little bit above my pay grade, so to speak, so I'm sure there's a good answer, but I'll wait for the team to chime in. I think it's probably not going to be specific to Talos, though, so any solution that you can find in the broader Kubernetes ecosystem you could apply to a Talos-based system.

Shall we move on for a second and come back to that one?

Oh, I think, yeah, I think that's the best I can do.

Okay, let's do that. Thank you so much.

If you need further answers, feel free to drop in and ping us on Slack.

Okay. So the next one is clarifying that you have nothing to do with the Oracle Talos.

That's right, that's right. The namespace is crowded. We have not received any cease-and-desists from Oracle or Cisco or any of these other folks, and we're hoping to keep it that way, but there are only so many names out there. So that's where we're at, but no relation.

All right: how mature is Talos for running production services, in particular on bare metal?

Sure, yeah. Talos, the operating system itself, has been around for two and a half, three years. We have a handful of folks, community members, who have been working with us for a while, and they are running in production. The operating system itself is fairly simple.
There aren't that many moving parts and pieces, and the code is all pretty straightforward, relatively speaking. So we feel as though Talos is ready for production. In terms of bare metal, I would say stay tuned: we're actually going to release something related to Talos and bare metal very soon that I think will be interesting, so stick around and watch for that release, and hopefully it'll be useful for you.

Okay: is Talos available through any public cloud providers?

Yeah, definitely. The cloud providers were really our first target, so we have support and some documentation for Amazon, for Azure, and for Google's cloud platform. All of the major cloud platforms are supported, and we publish assets for every provider on every Talos release. So, yes.

Okay: any plans to maintain a Terraform provider for Talos?

Yes, we would love that. It's been on our roadmap. One of the things we know we need to do is build integrations with existing systems management tools, so maybe an Ansible module, maybe a Terraform provider. We have not ourselves had the resources to work on that, but in terms of community contribution, if someone were to come to us and say they wanted to work on it, we would be happy to help, and we can provide some guidance and some assistance. I think that would be really valuable. So, Lucas, yes: if you're interested in this in more detail, if you want to give us some opinions, or maybe sketch something out, we'd be happy to hear from you.

Okay, next from David: can you run Windows container workloads on Talos?

Good question. I am not sure; I don't think we've tested that. If you could run these workloads in an ordinary, vanilla Kubernetes environment, you almost certainly could run them on Talos: whatever Kubernetes and containerd support, we will support. But it's not part of our test matrix, and it's not something we're targeting now. If there's community or customer interest, we'll make it happen.

Okay, this one's from Si: from a resiliency perspective, does Talos provide BOSH as it's defined in Cloud Foundry?

Yeah, if I understand the question correctly, I think BOSH is a deployment methodology for Kubernetes, if I remember right. I'm not sure, Si, if I understand your question fully, but if you'd like us to dig in, please feel free to send us an email or join our Slack and we'd be happy to talk. I haven't done a head-to-head comparison or anything like that, but we'd be happy to take a look.

Right, thanks, Si. And then: what are typical approaches to investigating and responding to a security compromise without shell access?

Yeah, great question. A couple of different things there. This is something we've talked a lot about internally: how do we take advantage of our somewhat unique architecture, in that we don't have SSH or console access? There are a few different angles there.
So basically, when you're troubleshooting a Talos-based environment, whether it's responding to a security compromise or dealing with some other problem in the system, our API has been designed to replace the process of logging in, checking log files, running process lists, and so on. The osctl command-line tool, which talks to the API, handles all of that for you: you can fetch log files, look at process lists, list files, and dump files from the file system. You can do all that analysis work in a slightly different way. You can also stand up an administrative container alongside; I think the Kubernetes world calls these sidecars. So you might stand up an SSH sidecar to really get in there and take a look at things. We think our architecture gives us some interesting opportunities to build tools for this sort of thing. Perhaps, if we detect a security compromise, or if the administrator sees something fishy, in the future we plan on building some tooling around that, so you'll be able to say: stop this container, pause it, freeze it, move it to Amazon S3 or download the container image to your workstation, and immediately take it out of rotation if you think it's compromised. Then you have a disk image that you can analyze out of production, out of band. We've talked a lot about this, and we think there are some really interesting capabilities that would be possible in this kind of architecture, so stay tuned, and if you're interested in that sort of thing, we'd love your input. We do a couple of things out of the box already. We've begun working with Linux IMA, the Integrity Measurement Architecture, which, at a kernel level, watches for changes to files and can send a signal somewhere in case there's suspicious activity. We hope to expand the use of that and build out some tooling for that exact purpose, for that kind of forensics approach.

Okay, next question from Lucas: I've had issues with minimal container-based OSes like Container Linux and RancherOS not supporting some low-level components or drivers for things like NFS mounts. Are you aware of any similar limitations in Talos?

Yeah, great question. I am sure that there are similar limitations in Talos here and there. We build our own kernel, and we've chosen to minimize the number of kernel modules that we build in. We do have some tooling so you can build your own kernel with your own configuration if you need to enable certain drivers or certain kernel features. We do have users who are using NFS, so I think that particular limitation probably doesn't apply to Talos. But you might run into things, especially in a bare-metal environment, where you're missing a particular kernel driver, and we do have a path to build your own custom kernel and include that driver support.

Okay, we don't have any other open questions at the moment. Maybe we can give it a minute and see if anything else comes in before we close up. If you have any questions, please enter them in the Q&A right now.

Yeah, and if you think of something after the fact, feel free to shoot me an email at tim@talos-systems.com.
We'd be happy to have you join our Slack. We host a weekly community meeting that you can join, kind of office hours with the team, and we're happy to have a live discussion either on the video call or just on Slack. We're always there and always eager to talk to new users and people who are interested.

Okay, here's another question from Axl: speaking of those drivers, what about GPU support on bare metal? That might be challenging, right?

Yeah, yeah. We've had a few people ask about this, and we've done a little bit of investigative work to see what it would take. In general, the previous answer to the kernel driver question applies: if you can build your own kernel with the options that you need enabled, and set up the pass-throughs and so on within the Kubernetes and containerd infrastructure, then you're good to go. It's not something that's in our test matrix today, but I think it will be there at some point.

Okay, well, we don't have any new questions. All right, Tim, I think that about wraps things up, unless anything comes in in the next 20 seconds. Thank you for a great presentation; there was great engagement and such good questions from the crowd.

Thanks so much, everybody.

I'd just like to thank everybody for joining us today and let you know that the webinar recording and the slides will be available later today at cncf.io/webinars, so keep an eye out for that. We look forward to seeing you all at a future CNCF webinar. Thanks, everybody. Thanks, Tim.

Thanks, Taylor. Thanks, everyone. Hope to see you around.