How are we doing today? The internet's having problems, so you have to pay attention, I guess. My name's Jimmy, and I'm the co-founder and CTO of a Kubernetes security startup called KSOC. We're here to talk about a project I've been working on with a few folks over the past six months. If you want the slides to follow along, here they are: tinyurl.com/cncf-owasp. Don't click strange links.

So who here has heard of OWASP before? We've heard of this thing, the Open Web Application Security Project. I've been involved for over 10 years, off and on, and the time came for yet another Top 10. What most people don't realize is there's tons and tons of OWASP content out there in the world that isn't really AppSec. It spans the gamut of Docker and serverless and all sorts of different things. So you can see here different projects that you can get involved with, use internally as references, and learn more about the space. We have a Docker security cheat sheet, and I worked on the Kubernetes security cheat sheet as well last summer. And we're going to talk about the Top 10 today.

The project's gained a little bit of momentum. It has around 400 stars, and it's getting translated into other languages currently. It's really exciting to see; this space is growing rapidly. So please check it out, and if you have suggestions, I'm usually reviewing all the issues and PRs. Be gentle, but I will definitely talk to you there.

And this is the Top 10. We have a little under 30 minutes to cover it, so hopefully we don't just get to the top seven. This is what we're going to be going over today. I won't read these one by one, but this is the overview. So again, we know what Kubernetes is; we're all here. We've kind of come to this inflection point in Kubernetes where it's getting complicated, and every day that complication adds more security problems. The Top 10 is a reference guide. It is not gospel. It is not something you have to follow.
It is a work in progress and really used as a reference document.

So, kicking things off with number one: as with most things in cloud, misconfigurations bite us all of the time, and Kubernetes is no different. We have the famous Red Hat security survey basically saying everyone's dealing with these things all of the time, and that's why it makes the very top of the Top 10. What do we mean when we talk about misconfigurations in Kubernetes? Well, it's everything. Kubernetes is one giant collection of configurations. So here you have just a snippet of YAML. This is a DaemonSet with a variety of different configurations set for this particular workload. We've seen some of these things before: privileged, hostPID, hostIPC. All these different configurations construct your security posture inside of your cluster. Multiply this times thousands and thousands across multiple clusters, and you have a serious problem, hence why it's at the top.

So how do we deal with these configurations? Well, in the open source ecosystem, we have tools such as Open Policy Agent Gatekeeper to not only find these problems, but protect us from them upon admission. There are plenty of other tools out there in open source land as well that can help you find misconfigurations. Our friends at Bridgecrew have some tools; KSOC has some tools out there. We can also align ourselves with the CIS Benchmarks and other frameworks, such as the NSA's hardening guidance, to really give us a guiding light.

Number two, get your bingo cards out: supply chain security. We mentioned this earlier. You don't really have a security conference in 2022 without talking about supply chain, and Kubernetes is no different. Containers themselves are comprised of lots and lots of different open source software. Kubernetes itself comes from all over the place. There are lots of contributors and packages, and all of these things add to the threat model that is your Kubernetes cluster.
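The slide's manifest isn't reproduced here, but a minimal sketch of the kind of risky DaemonSet configuration being described might look like this (all names and the image are illustrative):

```yaml
# Hypothetical DaemonSet illustrating the risky settings mentioned above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      hostPID: true              # shares the node's process namespace
      hostIPC: true              # shares the node's IPC namespace
      hostNetwork: true          # shares the node's network namespace
      containers:
        - name: agent
          image: example.com/agent:latest   # illustrative image
          securityContext:
            privileged: true     # near-full access to the host
```

Any one of these settings widens the blast radius of a compromised pod; together they effectively hand the container the node.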
And today we're seeing more and more tooling come out to help us verify the integrity and origin of these particular packages and libraries: things like CVE scanning, which we've been doing for a long time; the infamous SBOM, calculating your software bill of materials; and checking image integrity using cryptographic checks upon admission. These all come into the supply chain story, and we see it in the news a lot. Hence, it's number two.

My favorite, number three: overly permissive RBAC configurations. Has anybody had to deal with role-based access control in Kubernetes a little bit? Yeah. Is it straightforward to get right? Probably not. This is how we enable access for end users, humans, and service accounts inside of Kubernetes. RBAC is totally, 100% configurable. It's very verbose and very granular, which makes it easy to figure out a policy that works, but also makes it easy to grant far more permissions than needed. Role-based access control is really this combination of Roles and ClusterRoles and RoleBindings and ClusterRoleBindings, all tied to subjects. It's really hard to monitor and really hard to get right. So we see a lot of privilege escalation and traversal sorts of attacks, and a lot of problems stemming from overly permissive access. It definitely makes the top three.

So, things like this: if you were put into a situation where you had to audit a cluster, and you had a number of ClusterRoleBindings tied to service accounts and things of that nature, how would you audit this? This one is actually fairly dangerous: you have the default service account bound to cluster-admin, which is built into Kubernetes and really gives you all the access you could possibly need in the cluster. This manifests itself across the board inside of Kubernetes, and we see tons of problems here. So we have tools. kubectl doesn't really give you a nice, pretty output that says, "kubectl, show me what I have access to."
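As a sketch, the kind of dangerous binding described above would look something like this (the binding name is illustrative; `cluster-admin` and the `default` service account are built in):

```yaml
# Binds a namespace's default service account to cluster-admin.
# Every pod running as that service account inherits full cluster access.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: give-default-admin       # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin            # built-in, all-powerful ClusterRole
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
```

Because pods get the `default` service account unless told otherwise, a binding like this quietly grants cluster-admin to every workload in that namespace.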
There are some pieces in there. So we've kind of glued together the API responses to come up with tools such as rakkess and some RBAC visualization tools. Some of them are rough around the edges, but they give us that one place to see who has access to what, which is really the existential question in RBAC, and it's very difficult to discover.

On the Top 10, we had an awesome contribution last week; we're going to actually do this in a workshop later. This is something that goes unnoticed and isn't talked about much, but basically what you see here is granting the list verb via RBAC and getting access to secrets. That isn't really the intended use case of RBAC. And this is not really a bypass; it's a built-in piece of Kubernetes functionality. I'd be happy to explain this more; there's a full write-up in the Top 10 of how it works. It works with the list verb and the watch verb, where you get this items line here and you actually print out secret values, when that really wasn't the intention of this particular RBAC policy.

Number four: lack of centralized policy enforcement. It's one thing to go find issues in your cluster: misconfigurations, CVEs, et cetera. But how do you enforce governance across multiple clusters? Well, it's not that straightforward. And we need to do this, because these clusters become unmanageable. We're trying to give developers what they need to ship quickly and build the software they need to build, and that means we need to put guardrails in place as security practitioners. So detection and prevention are both part of the equation. Again, in open source situations, we have a variety of ways to do this. We have admission control built into Kubernetes. The deprecation of Pod Security Policy happened; now it's Pod Security Admission. And we have Open Policy Agent.
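As a sketch of the list-verb issue described above: a Role that grants only `list` on secrets, with no `get`, still exposes their values, because a list response includes the full objects in its `items` array (the Role name and namespace are illustrative):

```yaml
# This Role grants only "list" on secrets (no "get"), yet
# `kubectl get secrets -o yaml` (a list call under the hood)
# still returns every secret's data in the items array.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secrets-lister           # illustrative name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list"]              # "watch" leaks the same data via events
```

The takeaway is that `list` and `watch` on secrets are effectively as sensitive as `get` and should be restricted just as tightly.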
That's if you want to plug something in and utilize the Rego language. Kubewarden and Kyverno are also policy engines. These all help you take the regulatory controls and policies you need to distribute as a security team and place them in every cluster, stopping bad things from happening or from ever entering the cluster. There will be lots of talks at KubeCon on probably all three of these. Yeah, there's going to be a lot.

Number five, a favorite thing: logs. Kubernetes itself generates lots and lots of logs, and what we do with them matters. If you have a single cluster, great; you could probably handle this through standard out and standard error, dumping the logs into S3 and doing some rudimentary searching. Once you start growing into 50 or 100 clusters and they start to scale, it gets very noisy, very fast, and you need to take the logs generated by the Kubernetes API and the applications running inside of the cluster and do something with them. We see a lot of folks just dumping API audit logs into S3 and hoping for the best. That's one strategy, but hopefully we can build a better centralized logging platform that finds anomalies and detects issues as they're happening.

The Kubernetes API audit log entry itself is kind of hard to see here, but it has a lot of things ready for you as a security practitioner to utilize to start detecting anomalous behavior. The last two lines of this event entry are the decision and reason spit back from the Kubernetes API via RBAC: "I allowed this request for this reason; this particular RBAC policy matched, and I allowed this user or service account to access the thing they wanted to access." You can also see where they came from: the source IP address, the user agent, the timestamp. These things tell a story of what's going on in the cluster at any given moment, and you can make better security decisions later on.
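The slide's audit entry isn't reproduced here, but a trimmed, illustrative example of what a Kubernetes API audit event looks like, including the RBAC decision and reason annotations mentioned above (all values are made up):

```yaml
# Trimmed, illustrative Kubernetes API audit event (normally emitted as JSON).
kind: Event
apiVersion: audit.k8s.io/v1
stage: ResponseComplete
verb: list
user:
  username: system:serviceaccount:default:default
sourceIPs: ["203.0.113.10"]            # where the request came from
userAgent: kubectl/v1.24.0
requestURI: /api/v1/namespaces/default/secrets
requestReceivedTimestamp: "2022-10-24T15:04:05.000000Z"
annotations:
  authorization.k8s.io/decision: allow
  authorization.k8s.io/reason: >-
    RBAC: allowed by RoleBinding "secrets-lister" in namespace "default"
```

The `decision` and `reason` annotations are the two lines referred to in the talk: they tell you not just that a request was allowed, but exactly which RBAC binding allowed it.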
Number six: broken authentication. Kubernetes supports a variety of authentication mechanisms, and it's up to you to choose how to authenticate to Kubernetes. You could use certificates, you can tie into your OIDC platform, and you can use your cloud IAM if you're using AWS. All of these things are viable, but it's still easy to get wrong. If you're sharing credentials, or minting static tokens that are irrevocable, or certificates that you can't rotate, there's a problem. There are actually regulatory bodies out there, namely PCI, looking for things like this: developers have access to clusters, and how do they get it?

This is just from the Kubernetes docs, but it shows the flow. It's not just humans; service accounts need to talk to the Kubernetes API too. Authentication, then authorization, fairly straightforward, and then admission control, which is where we do our policy enforcement. If you have OIDC connectivity and you're enforcing MFA elsewhere, Kubernetes is no different. We don't want to let folks have access just because it's easier. We want to apply the same corporate policies that we have for other infrastructure access.

Number seven, and you'll see more and more of this at KubeCon (there's a bunch of co-located events happening in the networking space): missing network segmentation controls. The network inside of a cluster, or multiple clusters, has a lot of chatter. Pods talk to pods. There are services exposed to the internet via load balancers or NodePorts. There's probably some reason to lock that down; you want to stop a certain namespace from talking to another namespace. And by default, if you just turn Kubernetes on, nothing really happens for you. You have to go build these policies. We have a lot of technology coming up in this space to help us. Built into Kubernetes, depending on how you do networking, you can utilize the NetworkPolicy object, and you can be very granular.
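A minimal sketch of what such a NetworkPolicy looks like, here denying all ingress and egress for every pod in a namespace (the namespace name is illustrative, and enforcement requires a CNI plugin that supports NetworkPolicy):

```yaml
# Default-deny policy: selects every pod in the namespace and
# allows no ingress or egress until other policies open specific holes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a              # illustrative namespace
spec:
  podSelector: {}                # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```

From this baseline, you add narrower policies that allow only the specific pod-to-pod paths your applications actually need.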
It's all YAML, all the time. So you can tell Kubernetes how you want to handle networking. We can also use service mesh technology such as Istio, Cilium, or Linkerd; these are there to help you plug in overlay networks to build and carve out segmented network connectivity. And starting with default deny has been standard practice since I was managing iptables many years ago; it's kind of what you do. We decided not to do that in Kubernetes, so we have to go back now and apply that default deny, and then build out individually what you want to allow. This is from Ahmet Alp Balkan; he has an awesome GitHub repo with a bunch of fancy visualizations on writing network policies. But the TL;DR is really that not everything in a cluster should talk to everything else. It shouldn't even be allowed to, and you need to put those policies in place.

On to eight: secrets management. Secret storage is a highly debated topic, and I don't have a prescription for how everybody ought to manage secrets at their organization, but it's something to consider. You certainly can use the Kubernetes Secret object. We all know at this point that those secrets are base64 encoded, which is the highest form of encryption that we have, and shoved into etcd. We don't do it for security, but that may be fine for your threat model. You then have to rely on RBAC to stop access to those secrets. I get worried about mixing too many very critical things, such as secrets, into the cluster and then failing at your RBAC story, giving the world access to those secrets. You can use other mechanisms, such as a cloud KMS or HashiCorp Vault; whatever it may be, you need a really strong, standard cross-cluster story around it.

And then on to nine: misconfigured cluster components. As we know, a Kubernetes cluster is just a bunch of other services, right?
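One concrete mitigation alluded to above is encrypting secrets at rest in etcd via the API server's encryption configuration. A minimal sketch, with the caveat that the key shown is a placeholder and a cloud KMS provider is generally preferable to a locally stored key:

```yaml
# Passed to kube-apiserver via --encryption-provider-config.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder, not a real key
      - identity: {}       # fallback so existing plaintext data stays readable
```

This addresses the at-rest story in etcd, but note it does nothing about overly broad RBAC access to secrets through the API, which is the failure mode described above.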
Like the kubelet, the controllers, the API server, the scheduler: all of these things are up to you to configure securely, especially if you're DIYing your own clusters. The CIS Benchmark has a bunch of guidance for us. There's also quite a bit taken care of by our cloud provider if we choose to use managed Kubernetes, but these configurations are vast; there are so many of them if you roll your own cluster. Random aside: I was trying to figure out what the correct name is for a non-managed Kubernetes cluster. Out of 65 people, the winner was "free-range artisanal." That's what we call them now. Go figure.

So, things like this: the kubelet itself is a kind of agent that runs on every node. We don't think about it a whole lot when we turn on an EKS cluster, because we don't have to, but for those of us out there doing the hard work of building our own clusters from scratch using kops or kubeadm or something like that, these are up to you: anonymous auth and the variety of other configurations that exist. These flags get deprecated all the time; you have to keep up with them through the Kubernetes versions, and they apply to all the different components. I would refer to CIS, but CIS isn't always at the bleeding edge of the latest version of Kubernetes. So if you're out in the world building your own clusters, these are for you.

And then finally, number ten: outdated and vulnerable Kubernetes components. We're at a point in the Kubernetes journey where we have CVEs. We actually have a whole live feed of CVEs on kubernetes.io. There are CVEs that come out pertaining to the individual components and APIs that make up a Kubernetes cluster, and there are also CVEs in very popular open source packages such as Istio, Envoy, and Argo CD, things that have a lot of access to your cluster and that we rely on. They're not immune to this barrage of CVEs. So, patch management.
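As a sketch of the kubelet hardening mentioned above, these are the relevant fields in a kubelet configuration file; the values reflect common CIS guidance, and the file path varies by installer:

```yaml
# KubeletConfiguration hardening sketch (e.g. /var/lib/kubelet/config.yaml).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false         # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true          # delegate bearer-token auth to the API server
authorization:
  mode: Webhook            # don't use AlwaysAllow
readOnlyPort: 0            # disable the unauthenticated read-only port
```

Managed services set most of this for you; on free-range artisanal clusters, it's on you to set it and to keep up as flags and fields move between versions.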
This stuff matters a lot: asset inventory, knowing what has access to what, what you're granting RBAC policies to inside your cluster. We've probably all dealt with the Helm Tiller situation of a few years ago, where Tiller was the de facto way to use Helm to deploy workloads in your cluster, but Tiller turned out to not be a great piece of code to run inside it because of the privileges it had. We're going to see more and more of that, right? There's a lot of stuff we're running from open source, and some of it's great and secure, but it takes a lot of maintenance. This isn't to pick on any one technology; it's just what's out there in the world. Now we have these feeds, and they're updated daily. Argo CD's been picked on a little bit in the CVE world, but these CD platforms have components that run in your cluster. You've got to watch them like a hawk and make sure you have the latest version and you've given them minimal privileges.

So I'm not going to go through all of these one by one, but I suggest you do a self-audit. Ask yourself the hard questions: Can your containers run as root? Do I have a story around multi-cluster governance? Do I have an inventory? Where are my workloads? How many nodes do I have? What version of Kubernetes am I running? Do I have a standard? Am I bringing third-party tools, packages, and libraries into this ecosystem, and what am I doing to make sure they're safe?

The OWASP Top 10 was really meant to help that conversation. I know some people may leave and say, "We have a new standard to follow," a checkbox sort of thing, and that's not really the intention of the OWASP Top 10. It is to spark a conversation internally and, more importantly, to make sure we don't miss the highest-level issues that are out there today.
So I think we're good on time, but I'm available for questions. We have a booth, G32, at KSOC if you want to learn more. There's a party going on with our friends at Bridgecrew and Palo Alto and things like that, so come see me if you want to partake. Any questions?

Yeah: is it going to be added to the ASVS? The ASVS is a different governing body inside of OWASP. I would say there is a project trying to build more of a Kubernetes, container-based verification standard. The ASVS is probably the most mature OWASP project out there. We won't fold this into that particular project, but we might have a new take on a verification standard for containers.

Yeah, so we're entering an era where we could actually have a Top 10 of the OWASP Top 10s, which is, I don't know, go figure. There is more than one, and some are not as maintained, so we're trying to keep this one alive and spread it across OWASP. And we have a workshop; I'm helping Steve with the workshop after this, and we're going to talk about things like this in very great detail.

Yeah, yep. If you want the slides right now, they're right here: tinyurl.com/cncf-owasp. I guess they'll also be distributed by the conference. Not really, haven't thought that far ahead, but they'll be there somewhere. Or just hit me up. All good? All right, have a great rest of your conference. Yep.