So, yeah, this workshop is going to be Kubernetes security 101: some best practices to secure a cluster. First, a little bit about myself. As Jeff mentioned, I work at Trend Micro as an information security specialist on the cloud and container security research team. I'm also a member of the CNCF Security TAG. It recently changed its name: it used to be called SIG Security, and now it's TAG Security, the Technical Advisory Group for security at the Cloud Native Computing Foundation. It's volunteer work that we do, providing guidance and creating documentation around cloud native tools and supply chain attacks. We published our cloud native security white paper last year and, recently, our supply chain security white paper. Those are all available and free for everyone to check out. And I just want to say before we start that I don't consider myself a Kubernetes security expert. As you're going to see in this workshop, Kubernetes is a very complex technology. I've been studying it for over a year, a year and a half now, and still, sometimes things can be tricky. So feel free to ask questions if you have them. The idea of this workshop is to be very hands-on, so that you can do it yourself and follow along, and I'm going to do things very slowly, step by step. I also have a personal blog at the link there, katanasac.com, where I publish articles at least once a month. There's also a list of all my previous talks, with the slides, or videos if they were recorded, going back to 2011, when I started speaking at conferences about application security and all that stuff. And there's all my contact information: my social media on Twitter, GitHub, and LinkedIn is all Magmalogan, so you can easily find me if you want. Feel free to add me, and we can chat more after this workshop. So, the agenda for today: what is Kubernetes?
I'm going to assume that you've either heard about Kubernetes or have at least seen someone or some organization that uses it, but in this workshop I'm going to assume you've never used Kubernetes. The idea is to start from scratch. The way this workshop works is that we're going to set up an environment on AWS; that's why there was a prerequisite of having an AWS account. We're going to use Cloud9, which is like a virtual developer environment for AWS: we'll deploy a Cloud9 instance and use that to deploy our cluster. We're going to understand what Kubernetes is and the Kubernetes architecture. I'll talk about the two main components, the control plane and the worker nodes, and then each smaller component inside those major ones. As Jeff said, we're also going to do some threat modeling. I don't know if you heard, but in April this year MITRE released the ATT&CK matrix for containers, and we were part of that work. We partnered with MITRE when they called on the community for help, and we provided some data from our honeypots: we run some Docker and Kubernetes honeypots that we use to analyze attack data in those environments. So we're going to talk about that as well. Then, after we set up our environment on AWS and set up a cluster, we're going to deploy a containerized application there, a vulnerable application, and we're going to attack it. So we're going to understand what's going on: how an attacker can compromise the cluster, one of the many ways that's possible. And of course, this is a misconfigured cluster, so that you understand the fundamentals of why it's important to keep your Kubernetes cluster well configured and secured. And if we have time, we're going to talk about defending Kubernetes.
So, what are the main issues, and what are the best practices, right? Whether regarding the API server or the whole process of generating a container from scratch and deploying it into a Kubernetes cluster, there are many things we can talk about here. So yeah, I hope you enjoy it, and feel free to ask questions. I know that people are monitoring those questions and can send them over to me. If you're struggling, if you're trying to follow along and something's not working for you, don't worry: feel free to ask questions, and we can make this very interactive. OK. Before we start, I just want to say that since I started working with Kubernetes last year, I created a GitHub project, the Awesome Kubernetes Security list. This project has a bunch of links and information that I used myself to learn and understand Kubernetes, and it has a lot of material on Kubernetes security. So let me share that with you so you can take a look as well. It started as my own thing when I was studying, saving links in a text file for myself, but I figured other people could benefit from it too, and that's why I made this GitHub repo, which has almost 1,000 stars already. I created it in October last year, so it hasn't even been a year yet, and it has a lot of information. I know there's some stuff I need to update, and I'll update it soon. So feel free to star it and fork it, and if you have any other links related to Kubernetes or Kubernetes security, you can submit a PR to me, and I'll take a look and add them to the list as well. There's a lot of information here. OK, so I'll give an overview of what Kubernetes is so that you can understand it, and then we'll go set up the environment; I'll continue the slides while the environment is setting up, because that takes a few minutes on AWS. So yeah, let's do that first.
So what is Kubernetes? What's your understanding of Kubernetes, if you've seen it or used it? Yes: Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. But what does that mean? If you remember when Docker was released in 2013, there was a lot of hype about using Docker, using containers, and containerizing everything. But we quickly learned that doing that greatly increased the management overhead of those applications, because when Docker was released there was no easy way to manage multiple containers running on multiple servers. So Kubernetes was created to help manage those containerized applications: it's a container orchestrator, as we say. They just released the latest version, which is 1.22. Until last year, new versions were released four times a year, every quarter, and I think now it's three times a year. So it gets updated very quickly; you need to follow along and keep yourself updated so that you can also keep your cluster updated. Some quick facts and fun facts about Kubernetes: it was created by Google in 2014, and it's based on internal Google projects called Borg and Omega, which are closed source. Google has been using containerized applications for many years now, and that's why they worked on this project. Kubernetes is a Greek word meaning helmsman, the person who steers a big ship, an allusion to Docker and its containers. The person at the ship's wheel is the helmsman; sometimes that's the captain, sometimes another person. And that's also why the icon for Kubernetes is a helm, which is what we call that kind of ship's steering wheel. Yeah, another fun fact about Kubernetes: it's also called K8s, K-eight-S.
And the reason for that is just that there are eight letters between the K and the S of Kubernetes: if you count the letters in between, u-b-e-r-n-e-t-e, that's eight. So K8s is the abbreviation, and that's the way they do it. Another thing: Kubernetes is developed in Go, same as Docker and many other cloud native applications. OK, let's move on. Why Kubernetes? Since it's important to understand what Kubernetes is, it's also important to understand why: should you use Kubernetes in your organization? You don't want to go to your company on Monday and say we should move everything to Kubernetes because Kubernetes is the next big thing. There is a lot of management required to run Kubernetes clusters, and as I said, from my perspective it's a very complex system. It's hard to understand, especially if you don't have the background of what Kubernetes does behind the curtain. Once you understand that, it gets easier to know what it's doing, but you need to have that background first. There's a Dilbert comic here about exactly that: not everything needs to be Kubernetes. Very few people really, really understand Kubernetes properly, let alone Kubernetes security. And if you want to start moving your applications to a Kubernetes cluster, you're going to need not just budget and resources but also skilled people to maintain your cluster. So think about that before you decide to move everything to Kubernetes; it might not be the solution, and it might not be what your organization needs. OK, what is the CNCF? The CNCF is the Cloud Native Computing Foundation; it's a sub-foundation of the Linux Foundation.
It helps maintain many different projects considered cloud native, and Kubernetes is just one of them. There are many others, as you can see here, and some of them are even part of Kubernetes, such as etcd and CoreDNS, plus Helm for packaging Kubernetes applications. So when we're talking about the CNCF, the Cloud Native Computing Foundation, and cloud native security, we're not just talking about Kubernetes. The URL there at the bottom, l.cncf.io, will show you a lot of different projects related to cloud native. And a quick note on what it means to be cloud native: it doesn't mean the application only runs in the cloud. We'll probably talk about that in the next few slides, but cloud native applications share certain characteristics. They're created to run in cloud environments, but they can also run on-premises; there are many organizations and companies that run Kubernetes on-premises. OK, yeah, so that's this slide: what it means to be cloud native. The definition here is from the CNCF itself and can be found on their GitHub. Basically, as they say, for an application to be cloud native, it needs to have these characteristics; on the left there are seven of them. And some examples you're probably using or have heard about: containers, service meshes, microservices in general, immutable infrastructure or infrastructure as code, and declarative APIs. These are some examples of cloud native technologies. OK, so I'm going to stop here before I go into the architecture and explain all the details. I'm going to share my screen and try to have everyone follow along to set up their environment, OK?
So let me stop sharing and see if I can share again. One second. OK, can you see my screen? Yeah, good. One of the prerequisites of the workshop, as we mentioned, is having an AWS account, so if you're not logged in to your AWS account, please go ahead and log in. As I said, we're going to deploy a Cloud9 instance, which is a service from AWS, and we're going to find out very soon how it works; it's basically an online version of VS Code, so if you've used VS Code before, you're going to find it very easy to follow along. The idea of why I'm using this is that I want to make sure everyone has the same environment. I didn't want to create a VM and have you download it and everything; let's do it online. Cloud9 is a good way to ensure everyone has the same environment if we use the same configuration here, and from there we can set up the instance to deploy the cluster afterwards. Setting up the Cloud9 instance is very quick, and then we configure the cluster on AWS as well. OK, let's start here. I'm going to create one from scratch; I have some created here already, but I'm going to create a new one. Let's see: CloudVillage Workshop. You can give it any name you want; just remember it if you have different Cloud9 instances. You can add a description as well. Here you have the settings of the environment of your Cloud9 instance. The way Cloud9 works is that it deploys an EC2 instance and configures it to work as an online development platform. It's basically an online VS Code hosted on AWS, so you can have all your code there if you want. So yeah, basically it's going to create a new EC2 environment.
For the instance type, you can use t2.micro, which is free if you have the free tier. Just another note before we move on: there might be some charges, not because of Cloud9, but once we deploy the cluster on EKS and everything, depending on how long you leave the cluster online, there might be some small charges on your account. But at the end of the workshop, I'm going to show how to remove everything so that you don't leave anything running and don't get charged a lot of money, OK? Sounds good. OK, so here, basically, we're going to create a new EC2 instance. I don't need a large instance for Cloud9 because I'm only dealing with code and executing some commands there, so that's fine. The platform can be Amazon Linux 2, and I can leave everything else at the defaults here: after 30 minutes of idle time, if I'm not using the Cloud9 instance, it's going to shut down automatically. So I can leave everything as it is by default. And I'm on the right region: I'm on us-east-1, so I'm going to deploy it on us-east-1. The cluster itself I could deploy anywhere, and we're going to see how to do that, but for the Cloud9 instance, make sure everyone is on the same region and has access to the same features; that's the only reason for that. OK, here's just an overview of the settings I chose: basically everything is default right now. We're going to create a new role for my instance later, but let's leave it at that for now. OK, so it's creating my environment; it might take a couple of minutes to initialize. And if you've worked with coding and programming in VS Code before, you're going to find the interface very similar. There are a few things we need to do here once the environment is set up.
Namely, creating an IAM role for our instance and also disabling some credentials, which I'm going to show once Cloud9 is deployed. Let me see if it's taking a lot of time. Yeah, let me go back. I don't know if we have any questions so far, but feel free to ask them; if you're stuck or missed something up to now, send the questions over through the Google form. OK, no questions, sounds good. Awesome, thank you. So let me wait here. OK, now the Cloud9 instance is set up. As you can see, it's basically an online IDE platform: I have a terminal here, and I can add new files and new terminals as well. OK, is the font good enough for everyone to see? Let's leave it at that. So this is basically the environment we're going to use to deploy our cluster from. There are a few things we need to set up here that I'm going to show you. In the settings, on the top right corner of the interface, there's a gear icon called Preferences. In there, I need to go to AWS Settings, and under AWS Settings I'm going to disable the AWS managed temporary credentials. All I need to do is click here and disable that, and that should be fine, OK? Everyone got that? Preferences, go to AWS Settings, and disable the AWS managed temporary credentials. Good. After that, we're going to go back to the AWS console to create the IAM role that I need to give to this EC2 instance (because Cloud9 runs on an EC2 instance), so that it has permissions to talk to EKS and create clusters. Yes, the workshop is being recorded and will be shared later as well. OK, let me go back to the AWS console here.
I'll open another tab, and I think I need to stop sharing this one and share the other one; sorry about that. OK, so on the AWS console, what we're going to do is create an IAM role. And please don't do this on your production account, because we're going to create an IAM role with administrator permissions, administrator privileges, so that we can give the Cloud9 instance permission to create the cluster and everything, and we don't run into any permission issues. So I go to IAM, which is the Identity and Access Management service from AWS, and basically what we're going to do is create a role. I go to Roles on the left side, click Create role, and here I select AWS service, which is selected by default, then EC2, and then I click Next: Permissions. Under permissions, it's going to load, and I'm going to give this role AdministratorAccess to my account. So be very careful with that: don't use this in a production environment. We don't want that, because it's far too permissive; this is just for the workshop so that we don't have any issues deploying the cluster. You can add tags, but that's not needed. And for the role name, I'm going to use cloud9-instance-role. You can put any name; just remember it, because you need to attach it to your Cloud9 instance. So, repeating what I did: I went to IAM, went to Roles, created a new role, selected EC2, attached the AdministratorAccess policy, and gave it a name. That's it. OK, the role has been created. The next thing I need to do is go to EC2 and attach that role to my EC2 instance, my Cloud9 EC2 instance.
So I'm going to EC2; that's why I typed EC2. Under the running instances, I see that there is an instance running; that's because I have Cloud9 running. Make sure you're on the same region where you deployed your Cloud9 instance, and you'll see this instance running here. Select the EC2 instance, and under Actions, Security, Modify IAM role, we're going to attach the role we just created to this instance. So again: select the instance, go to Actions, Security, and Modify IAM role. Here I have the role I just created, cloud9-instance-role, and I'm going to attach it to the instance and save. OK, if you got the message that the role was successfully attached to the instance, that's great; that's all we need so far. Is anyone behind? Does anyone have any questions? Anyone struggling with something? Feel free to ask, OK? Now that I've set this up, I need to go back to Cloud9 and install a few things on the Cloud9 instance to be able to deploy the cluster. Some of the things I'm going to install are kubectl and some other packages, so let's follow along. Let me share that with you; let's see if I have it here. OK, yes, let me go back to my Cloud9 instance, because I'm not sharing that. Sorry for the switching, but I don't want to share any sensitive information. OK, so I'm here. One thing to double-check on Cloud9: remember, if you haven't changed the AWS settings yet, disable the temporary credentials as well; let me show it again. Yeah, OK. So you can see there's a directory structure here showing your environment, and I can have many terminals and all that stuff. That's great.
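For reference, the same two console steps (create the role, then attach it to the instance) can be scripted with the AWS CLI. This is just a hedged sketch: the role name, the placeholder instance ID, and the extra instance-profile commands are my assumptions based on how IAM works from the CLI (the console creates the instance profile for you automatically). AdministratorAccess is deliberately over-permissive; lab only, never production.

```shell
# Sketch of the console steps as AWS CLI commands (lab only; AdministratorAccess
# is far too permissive for production). Written to a file and syntax-checked
# first so you can review it before running anything against your account.
cat > role-setup.sh <<'SCRIPT'
# Trust policy that lets EC2 instances assume the role
cat > trust-policy.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
JSON
aws iam create-role --role-name cloud9-instance-role \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name cloud9-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Unlike the console, the CLI does not create an instance profile for you:
aws iam create-instance-profile --instance-profile-name cloud9-instance-role
aws iam add-role-to-instance-profile --instance-profile-name cloud9-instance-role \
  --role-name cloud9-instance-role
# Attach it to the Cloud9 EC2 instance (replace the placeholder instance ID)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=cloud9-instance-role
SCRIPT
bash -n role-setup.sh && echo "role-setup.sh: syntax OK"
```

The console route is easier in a workshop; the CLI version is mainly useful if you want to tear down and rebuild the lab repeatably.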
Now we're going to use this project on GitHub to download two files; that's all I need to set up my cluster, basically. I'm going to install some packages and some tools, but those two files are all I need to set up the cluster. So let me show you here: git clone this URL, the Kubernetes Security 101 workshop. If you download it, you'll see there are basically two files there, and I'm going to explain what they are soon. I'm going to increase the font and close this; I don't think we're going to need it. Yeah, so you can see there are two files here, two YAML files. If you haven't played with YAML before, that's something Kubernetes uses a lot to deploy applications and objects: they're usually YAML files that get turned into JSON and sent to the Kubernetes API server, and I'll explain that soon. So if you haven't cloned it yet, the command is git clone; let me post the whole command there, and I'll share it with the people participating. If you have those two files on your Cloud9 instance, you're good. So far so good, let's move along. Now, this Cloud9 instance is brand new, so if I try to use kubectl, which I'll explain in a moment but is basically a CLI to talk to your Kubernetes cluster, I don't have it yet: command not found, it's not installed. So I need to install it. The first command to install kubectl is this one here, downloading it from AWS; let me post that in the chat and share it with you. Let's see if that's done. The second command gives execution permissions to the kubectl binary. And don't worry, I'm going to explain everything.
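The two kubectl commands look roughly like this. A hedged sketch: the exact S3 version path changes over time, so copy the URL from the shared command list rather than from here; the script is only syntax-checked below, not executed.

```shell
cat > install-kubectl.sh <<'SCRIPT'
# Download the kubectl binary that AWS hosts for EKS
# (the version path here is an assumption; use the exact link shared in the doc)
curl -sLO "https://amazon-eks.s3.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl"
chmod +x ./kubectl                          # second command: allow execution
mkdir -p "$HOME/bin" && mv ./kubectl "$HOME/bin/kubectl"
export PATH="$HOME/bin:$PATH"
kubectl version --client                    # sanity check: prints the client version
SCRIPT
bash -n install-kubectl.sh && echo "install-kubectl.sh: syntax OK"
```

Review the script, then run it with `bash install-kubectl.sh` on the Cloud9 instance.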
Once we start deploying the cluster, which is the part that takes longer, we'll go back to the slides and I'll explain the things I'm doing here, if you're not able to follow everything yet. No worries, you will. OK, let's see. Yeah, so kubectl is working now: "kubectl controls the Kubernetes cluster manager". Great. There are other things we can install to help with our interaction with the cluster. Let's install a couple of packages that can help: jq, gettext, and bash-completion. Basically, I'm running sudo yum install for those packages, because they're not on the Cloud9 instance by default. Great. Now I'm going to configure bash completion for kubectl, which helps when I'm managing my cluster and using kubectl commands: kubectl completion bash, plus one more command. Let me post those two commands to the team as well so they can share them with you. Yeah, those are two separate commands that you're running. Let me clear the screen and move this up. OK, that's better. Any questions so far? Anyone struggling? Anything you don't understand? There are just a couple more things to do. Now we're going to install eksctl, which is another binary that helps you deploy EKS clusters. And what is EKS? EKS is the managed Kubernetes service from AWS: it's Kubernetes provided to you by AWS, and AWS manages it. Every major cloud provider has its own managed Kubernetes service: you have EKS on AWS, AKS on Azure, GKE on Google Cloud, and many others. OK, so we're going to install the eksctl binary, because that's what we'll use to deploy the cluster. Once the cluster is deployed, we basically just use kubectl.
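The helper-package and completion steps, again as a sketch. The two completion commands are my best reconstruction of what was pasted in chat, so treat them as assumptions and prefer the shared command list.

```shell
cat > install-helpers.sh <<'SCRIPT'
# Helper packages on the Cloud9 (Amazon Linux 2) instance
sudo yum -y install jq gettext bash-completion
# Command 1: append kubectl's bash completion script to ~/.bash_completion
kubectl completion bash >> ~/.bash_completion
# Command 2: load completion into the current shell session
. /etc/profile.d/bash_completion.sh && . ~/.bash_completion
SCRIPT
bash -n install-helpers.sh && echo "install-helpers.sh: syntax OK"
```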
OK, so let me set that up. This is the command; I'm posting it to the team so they can share it with you too. We're moving the binary to the local bin directory and also enabling bash completion for eksctl. And that's it: eksctl is working too, great. So far, we've only installed a few packages, kubectl, and eksctl; those are the two main binaries we need to deploy our cluster. People may ask: why don't we need to install Docker? Because we're not running our cluster on this instance. We're going to deploy a new instance, actually one new instance, to run our cluster. The way EKS works is that AWS manages the control plane of the Kubernetes cluster, and we only need to deploy our worker nodes. That's what we're going to do: deploy an EC2 instance that will work as our worker node to run our applications. Let's see, no questions so far. Oh, a question about the download link, sure. Let's see; yeah, that's the whole link. Maybe it got truncated, but that's the whole link. Basically, I'm downloading the kubectl binary. If you want, you can also download kubectl directly from the Kubernetes website, kubernetes.io; I'm downloading it from an S3 bucket on AWS that they provide. I don't think you'll have any issues if you download from the kubernetes.io website, but if the link doesn't work for you, please let me know. OK, so far, to give you an overview of what we've done if you just joined the workshop: we deployed a Cloud9 instance, we configured an IAM role for this instance, then we downloaded two files from a GitHub repo that we're going to use to set up the cluster, and then we installed kubectl and eksctl.
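And the eksctl install, as a hedged sketch: at the time of this workshop the releases lived under the weaveworks GitHub org, and the project has since moved, so check eksctl.io for the current URL before running this.

```shell
cat > install-eksctl.sh <<'SCRIPT'
# Download the latest eksctl release tarball and put the binary on the PATH
curl --silent --location \
  "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
  | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
# Bash completion for eksctl too
eksctl completion bash >> ~/.bash_completion
eksctl version                     # sanity check
SCRIPT
bash -n install-eksctl.sh && echo "install-eksctl.sh: syntax OK"
```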
That's all we've done, and don't worry if you don't understand some of those names and tools; I'm going to explain them as soon as we start deploying the cluster and move back to the slides. I just don't want to waste too much time, so I want to do the setup first, in case people haven't been able to do it yet, and then we'll go to the slides while the cluster is being deployed. OK, let's see here. To create this cluster, let me show the files first. The first file we're going to interact with is this eks-cluster.yaml; this is what deploys my cluster on AWS, and that's why we installed eksctl to help us. On this file you can already see the command at the top: that's what we're going to run on the terminal, pointing it at this file. Basically, I'm deploying a cluster configuration: this is the name of my cluster, and this is the region. Here you may want to change the region, if EKS is available in the region you're in. That's actually a good idea, because I don't know how many people are following along and deploying clusters, but if we all deploy our clusters in the same region, we might run out of resources. The way it works on AWS, they have limited resources per availability zone; inside a region there are different availability zones, and we might see errors saying there aren't enough resources to deploy the cluster in that region or availability zone. From this configuration, eksctl is going to choose which availability zone in the us-east-1 region to deploy the cluster in; we don't manage that.
Of course, we could add that to the configuration file, but I don't want to add a lot of overhead to the configuration, so you don't need to worry about it. In this cluster, I'm creating a managed node group, which I'm calling managed node group one, and I'm deploying an instance of size t2.small. Here I can set the minimum size and the maximum size: that means my cluster can have as few as one worker node, one instance, and at most three. So if I need to increase the number of worker nodes in my cluster, I can easily do that; I could even change the maximum to 10, it doesn't matter. What matters here is the desired capacity: that's what the cluster is going to start with. If I change that to three, for example, it's going to deploy three EC2 instances of the same size in the node group. And I don't want that; we're only going to run a small web application, so I just want one, so I don't need to pay a lot of money for this cluster. This is the volume size, the hard drive, and some labels that label the worker role. And basically that's it; that's all that's in my cluster file. So here I'm going to use this command. Let me go into the workshop directory; here is my eks-cluster file. I just entered the directory of the GitHub project I downloaded. Sorry, people are asking to see the commands again? Sure. Some people are struggling with the first commands, so give me a few seconds; I'll put them in a doc and share it. Hang on, I should have done that before, sorry about that. OK, let me share this whole file: I created a Google Doc with all the commands, and I'll share the link.
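Putting the settings just described together, the eks-cluster.yaml looks roughly like this. This is a reconstruction from the walkthrough, not the repo file itself: the cluster name, node group name, and volume size are assumptions, so check the actual file for the exact values.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: k8s-security-101        # cluster name (assumed; see the repo file)
  region: us-east-1             # change this if your region runs out of capacity
managedNodeGroups:
  - name: managed-ng-1          # the "managed node group one" mentioned above
    instanceType: t2.small
    minSize: 1                  # as few as one worker node...
    maxSize: 3                  # ...and at most three
    desiredCapacity: 1          # what the cluster actually starts with
    volumeSize: 20              # worker node disk size in GiB (assumed value)
    labels:
      role: worker
```

You hand this file to eksctl with `eksctl create cluster -f eks-cluster.yaml`; bumping desiredCapacity to 3 would start three identical t2.small worker nodes instead of one.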
Then we can share it with everyone participating. Let me just bring that in: the 101 workshop doc. OK, yeah, I'm sharing the Google Doc, so if you can share that link with everyone, everyone can see it. These are the main commands we ran to set up kubectl and eksctl for the cluster. If you're having any trouble, note that some of those steps are two commands, so follow along step by step and check the commands there. If you're still struggling after following that, please let me know. I'll move on here and deploy my cluster, and while that's going on, as we go back to the slides, I can help anyone who's struggling or answer any questions. So, as I said, I want to create my cluster, and to do that I'm going to use eksctl. The command is eksctl create cluster, and I'm passing a file, this file that I have here. I'll add that to the Google Doc as well, so you can follow along if you get stuck: the git clone, then the second one for the workshop. OK, I'm adding that to the Google Doc so everyone can see it. Let's see if this is going to work; live demos and live workshops are always hard. So here I've started creating my cluster, and as you can see, eksctl is doing its job: it creates subnets, it creates a VPC, and it does everything for me, which makes things easier. This is the part that takes a bit longer. You might face some errors here because of resources if everyone is deploying in the same region, on us-east-1. In that case, either try a different region, or try again but change the name of your cluster. What eksctl is actually doing is using CloudFormation to deploy your cluster.
As you can see here, it's waiting for the CloudFormation stack and all that stuff. And so if you tried to deploy and it didn't work, then if you try to deploy again with the same name, it's going to complain: oh, there's already a CloudFormation stack created with the same name. And let's hope we don't face these issues. I faced that on the previous edition of this workshop that I did, but let's hope not. It seems there are no issues so far. So yeah, I'm gonna leave it running here. I'm going to go back to the slides so that we can continue the explanation of the Kubernetes architecture. And then once the cluster is deployed, we come back here to deploy the applications and play along with attacking the application and all that stuff, okay? So let me stop sharing my screen and share the slides again. Okay. So yeah, we stopped here, right? The Kubernetes architecture. And as we go, feel free to ask questions — I'll stop and answer them while I'm explaining, no problem, don't worry. As we can see, there are two major components here. There is the Kubernetes control plane. It used to be called the master node, but for inclusive language, we don't call it that anymore. Even in the new versions of Kubernetes, the name master node is being deprecated, right? So from now on I'm gonna call it the control plane. And so this is on the left side here with five smaller components. On the right side of the slide, we have three worker nodes, right? And the worker nodes are the ones that we use to deploy our applications. And today in this workshop, we're deploying only one worker node, only one EC2 instance, that's going to function as the main node for deploying our vulnerable web application.
We don't need three, but if we needed them, as I showed there, with the max size and the desired capacity in the YAML file, we could easily change that, right? Okay, awesome. Yeah, I should have thought about the Google Doc earlier. So on this slide, let's look at the left side first. The kube-apiserver, here in the middle, right? We see that it's kind of a big thing. It's one of the main components of the control plane. Every other component is talking to the API server, and you can see that from the arrows there. It's not just talking with the components of the control plane, it's also talking to the worker nodes as well, right? And the kube-apiserver is basically an API server — a REST API server, if you've played with APIs before — that receives and forwards all the communication between all the components of Kubernetes, right? Basically everything that you type with kubectl becomes an API request, an HTTP request that goes to the API server of your cluster, right? On the bottom left, we have etcd. And etcd is the main database, the data storage of your cluster, right? It's a key-value store where the configuration of all the components of the Kubernetes cluster is stored, right? Everything gets saved there as an object, as a key-value pair. And then Kubernetes checks that and tries to create whatever you told it to create — whatever is saved in etcd, it tries to create that on your nodes, okay? And that's something very important here, because the way that Kubernetes works is called desired state, right? Kubernetes is smart enough, right? It's constantly checking: whatever I was told to create — for example, nodes or applications or services, whatever is in etcd — it checks on the nodes and sees, okay, is this application running already? Do I have three replicas of that application, right?
If I do, okay, good, that's the desired state. If whatever is in etcd, whatever configuration is there, doesn't match what's running on the cluster, then, okay, I need to fix it, right? They need to match, right? So let's say I only have one replica of my application and I need three. Kubernetes is going to check that and it's going to tell the cluster to create two more — two more containers, right, actually pods, but it's going to create those to match what's in etcd. So I need to have three replicas running on my cluster — and it doesn't matter on which worker node, depending on your configuration as well — but I need to have that matching, right? So it's constantly checking. If you check the logs of Kubernetes, there are many different components constantly checking the desired state: whatever is in etcd should be reflected on your whole cluster, on your worker nodes, right? So you can see here that the API server and etcd are two major components of your Kubernetes cluster. That's why you need to protect them. And we're going to talk about the security part of it and the best practices as well, but be very careful with those components, right? If an attacker has access to your etcd and they can change objects there, then basically, whatever they change it to, Kubernetes is just going to follow along. Kubernetes goes, okay, if you want me to deploy a malicious container, I'm going to deploy a malicious container, right? It's not checking who put that information there. It's only checking whether the information in etcd matches whatever is created on the cluster, okay? And on the top left here, we have the kube-controller-manager, right? And the kube-controller-manager is this component that has a bunch of different controllers — the pod controller, the service controller, and other controllers — that keep talking to the API server and checking etcd.
Is it matching? Oh, do we have the right number of pods? Oh, we have five here, but we need 10, right? Each specific controller is constantly checking that. The cloud controller manager, on the top right of the control plane, is the one that talks to cloud providers, right? That's the reason why Kubernetes can work with different cloud providers. You have the cloud controller manager if you want to use things like disk volumes or load balancers, right? Kubernetes doesn't have those objects, doesn't have those services, so you need to use the ones provided by your cloud provider, and we're going to do that on EKS today. We're going to deploy a load balancer to expose our application to the internet, okay? Good. And on the bottom right is the kube-scheduler. The kube-scheduler is the one that tells the worker nodes to deploy the pods, right? To start the containers, right? It talks to the kubelet on each worker node to schedule a pod — it can be scheduled almost instantly, right — and the kubelet talks to the container runtime engine to create those pods. Okay, so those are the five main components of the control plane, right? Now let's take a look at the worker nodes, here on the right side. There are three main components on each worker node: I have a kubelet, I have a kube-proxy, and I have the container runtime engine, okay? So basically the kubelet is the agent that runs on each node; it talks to the API server and starts any containers, and also gets some statistics and health information about those containers, right? And I'm saying containers here, but it's actually pods — I'm going to explain in the next slides that the smallest unit of Kubernetes is actually the pod, right? There is also the kube-proxy component that handles all the networking communication inside the cluster, but also externally, with things outside the cluster as well.
And the container runtime engine, which by default was Docker. And there was a lot of discussion when people were saying that Docker was going to be deprecated in Kubernetes and all that stuff. I don't know if you heard about this story, but you can still use your Docker containers on Kubernetes. Nothing is going to change. The reason being that, to be able to run Docker, Kubernetes developed kind of another component called the dockershim, because the Docker engine doesn't implement the CRI, the Container Runtime Interface — the standard interface that Kubernetes was built to use for talking to any compliant container runtime, right? Okay, so you can deploy that. The kubelet is going to talk with the container runtime engine, and that's what's going to really create your containers and run them on your nodes, okay? Any questions about this architecture so far? So, what we're doing here today on this workshop: we're using the Cloud9 instance to talk to the control plane, right? We're using that and we're going to tell the control plane to create some objects for us, right? Like with the eksctl command that we just used that's creating the cluster. Just quickly checking — yeah, it's still creating. With the eksctl command that we used to create the cluster — and the way that it works on EKS, we don't have access to this control plane, right? Because AWS handles that for us, and that's a good thing, because you also don't need to worry about the security aspect of the control plane. All you need to do is talk to the API endpoint of your control plane and tell it to create applications, deploy containers and all that stuff. If you're still having issues deploying the cluster, if it's saying that it doesn't have any resources in that availability zone, try to change regions, right?
If it's still not working under different names, if you're still getting errors — I'm going to show the console output once my cluster is completed, and you can compare if you're getting that — but try different regions if it's not working in yours. Sure — the OCI is the Open Container Initiative, right? It's a standard for container images and runtimes, right? The wrinkle is that the Docker engine doesn't implement the CRI, the Container Runtime Interface that the kubelet uses to talk to runtimes — the images themselves are fine, as far as I know; please correct me if I'm wrong. And so Kubernetes couldn't use the Docker engine directly. They had to create another piece of software, called the dockershim, to be able to do that. So the problem is that this component that allows Kubernetes to run Docker containers is maintained by the Kubernetes team. Yeah, sure, sure, okay, sounds good. So what's going on is they didn't want to maintain that component anymore — that component, as they call it, the dockershim, S-H-I-M. It's not a great situation, right? Because Kubernetes should be able to talk to standard runtimes directly, but unfortunately the Docker engine isn't one of them. So they said, you know, we're not going to maintain the dockershim anymore, and the dockershim was going to be deprecated, right? So that's why this caused a lot of stir on the internet and on Twitter a few months back, because people thought, oh, maybe Kubernetes is deprecating Docker, I'm not going to be able to use my Docker containers on Kubernetes anymore, what's going on, right? Because, I would say, 90 to 95 or 99% of Kubernetes clusters are running Docker-built containers and not the other ones, right? You can have different runtime engines — there's containerd, CRI-O, there are different ones — but since Docker was the default one, people usually use that, right? And because Docker is very well known and people have used Docker before, even before Kubernetes, that's what they use, right?
So I think that's it with the OCI. There is a blog post — let me see, "Docker Kubernetes deprecated" — there was a blog post on the Kubernetes website that talks about that whole issue. Let's see, okay. Yes, let me post that link here. Yeah, the "Dockershim FAQ". Okay, let's move on. If everybody got the overview of the architecture here, let's move on. So as I said before, right? When we deploy a container on Kubernetes, we're not deploying a container itself, we're deploying a pod, right? The pod is the smallest unit in a Kubernetes cluster, right? A pod can have one container inside it or multiple containers, right? As the diagram here shows, you can have one, two, or many, right? And the reason is that the idea of containers is to run only one process per container, right? So if I have my application running in my container and I need something to, let's say, collect the logs and ship those logs to a central location, I don't want to install that application inside my application container, because that's going to break the principle and the idea of containers, right? So I deploy another container, usually in the same pod, and that other container talks to my application container and collects the logs from there to ship them to another location. That container is usually called a sidecar container, right? It can be used to collect metrics, logs — it can be a security container as well; some security tools are deployed as sidecars. So there are different usages for that sidecar container, right? And since they're in the same pod, they share namespaces there. Another object that you can have on Kubernetes is a Deployment, right? You can deploy a pod by itself, but you want to use the features provided by Kubernetes, right?
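To make the sidecar idea concrete, a two-container pod could look like the sketch below — the image names and the shared mount path are illustrative assumptions, not something from the workshop files:

```yaml
# Illustrative two-container pod: the app plus a log-shipping sidecar.
# Both containers mount the same emptyDir volume, so the sidecar can
# read the logs the app writes.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}              # shared scratch space for this pod
  containers:
    - name: app
      image: my-app:1.0         # placeholder: the main application process
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper         # the sidecar: ships logs elsewhere
      image: fluent/fluent-bit:1.8   # placeholder log-forwarder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
```

Because both containers are in the same pod, they also share the network namespace, so the sidecar could just as easily talk to the app over localhost.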
And the way that a Deployment works, it gives your Kubernetes objects scaling, update and rollback abilities, right? So let's say I need to create another pod because there is a lot of traffic on my application — I can easily do that with Deployments. I can scale up or scale down as well, right? If I'm not using all those containers and I can save money by reducing the number of containers that are running, the number of pods, I can do that as well with a Deployment, right? A Deployment is just a Kubernetes API object, right? It's not something that exists on the node itself. It only exists inside my cluster; it's just configuration. There are many different objects that I'm gonna show on the next slide. We're not going to be able to cover all those objects here, but just to give you an overview, okay? And on top of that Deployment, we have something called namespaces. And those are different from the Linux kernel namespaces — these are Kubernetes namespaces, right? And the way that namespaces work here is that they're just a logical separation of the different applications that I have in my cluster, right? They're basically folders inside my cluster, right? There's no real security boundary between those namespaces, right? So I can have a namespace for my development environment, a namespace for my QA environment and a namespace for my production environment. I can also have a namespace for developer team one and a namespace for developer team two if they're both using the same cluster, right? And we're going to talk about namespaces as well in another slide. And on top of the namespaces, there are nodes, right? A node is basically the server that's running my application. So it can be a VM, it can be an EC2 instance, it can also be Fargate on AWS or other kinds of serverless runtimes or workloads.
So it can be anything. It can also be running on-prem, and it can be bare metal as well. There are also some companies that provide Kubernetes running on bare metal. Okay, let me go to the API objects. So as I said, there are different API objects that you can create in your cluster. You have a Pod, as I mentioned, right? There's ReplicaSet, which is used by the Deployment to replicate your pods. There's DaemonSet, StatefulSet, right? The Deployment itself, as I mentioned. There's Service, which is used to expose a port or my application to the world — it can expose that through a load balancer as well. There's Job and CronJob. A Job is basically a pod or a container that runs only once. And a CronJob is basically a container that runs on a schedule, like a crontab, right? I apply the configuration there and it can run periodically — every few minutes, every hour, whatever I choose and whatever I configured in the crontab-style schedule of that container. There are also ConfigMaps and Secrets. ConfigMaps and Secrets are usually used to store information, right? ConfigMaps are for configuration information that's not really sensitive, and Secrets are supposed to store secret information, right, sensitive information. But I'm just going to say here that Secrets are not really secret in Kubernetes, because the way that Secrets work by default is that all the sensitive information is stored in etcd, encoded as base64, right? And you're probably aware that base64 can be easily decoded, so that's not really protecting that information. And also, by default, all the information stored in etcd is stored in plain text, right? It's not encrypted, right? So there is another issue there as well. So there are different ways that you can handle secrets with Kubernetes. You can maybe use a third-party solution like HashiCorp Vault, or use the one from AWS as well.
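Since "encoded, not encrypted" is the whole point here, this is what the entire "protection" on a Secret value amounts to (the value below is made up, not anything from the workshop):

```shell
# Base64 is an encoding, not encryption -- anyone who can read the
# Secret object (or etcd) can reverse it instantly.
echo -n 'S3cr3tP@ss' | base64
# UzNjcjN0UEBzcw==
echo -n 'UzNjcjN0UEBzcw==' | base64 -d
# S3cr3tP@ss
```

An attacker with kubectl access would do the same thing, pulling the encoded value with something like `kubectl get secret <name> -n <namespace> -o jsonpath='{.data.<key>}'` and piping it through `base64 -d` (secret and key names here are placeholders).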
It doesn't matter. So just be careful: if you're using Secrets with Kubernetes, don't use the default settings, right? There are some configurations that you can apply to at least encrypt your Secrets in etcd. And yeah, there's Ingress and many other objects, right? And those objects can all be applied as a YAML file, and I apply that to my cluster using kubectl, and depending on what I'm declaring in that file, it gets created on the cluster — exactly because of the desired state, right? I don't care how Kubernetes is going to do it. I just tell it, okay, I want this object created on my cluster. Once I send that information via kubectl, it goes through the API server and gets stored in etcd, right? And then the controllers are checking: oh, something changed here, and oh, they want me to create this object, okay? Now deploy this object — and everything happens really quickly, okay? Any questions so far? Let me see here. I think my cluster got created. So I'm just gonna talk about kubectl and then we're going to go back to the cluster. If anyone's still struggling or still having issues, please let me know. So, basically, kubectl, as we set it up on the Cloud9 instance, right? This is the CLI tool that allows you to control your Kubernetes cluster, right? There's a config file that can be found in your home directory under ~/.kube, and I can show you where that's stored on our Cloud9 instance, right? It's very similar to the Docker CLI for Docker containers. Let me see another question here. "Is an API object like request parameters to the appropriate API endpoints?" Request parameters... Not sure I understand the question. API objects in Kubernetes are just different objects. They have different, how can I say, functions, right? So basically, when you talk to Kubernetes, when you talk to the API server, there is an endpoint for each object type, right?
So on my Kubernetes cluster, for example, pods are created through an endpoint like /api/v1/namespaces/&lt;namespace&gt;/pods, right? So there are different endpoints. Everything gets converted into an API request and ends up in etcd, right? Once that's stored in etcd, depending on which type of object it is, it gets created on my cluster, right? I'm not sure I understood correctly — please try to rephrase the question so that I can try to answer it better. But yeah, so the syntax of kubectl is very similar to the Docker command, the Docker CLI, and the reason is that they did that so that people who were already using Docker could easily shift and start using Kubernetes without facing too many challenges, right? So yeah, that's it. Before we move on, let's go back to our cluster, and let me go to the Cloud9 instance here. Okay, share, second, okay. So you can see from the command that I ran, create cluster, it did a bunch of things here, creating the CloudFormation stack, and everything should be working, right? So to make sure that everything is working — as I said, if you faced any errors, if there was something going on or there were not enough resources, either change the name of your cluster and try again, because it could be just a temporary thing, or the availability zone that eksctl chose for you didn't have enough resources, and once you try again with a different name, it's going to try a different availability zone. Or you can just change regions and do the same thing and see if that works. Unfortunately, I don't know if there is a way around that — if someone who works for AWS can tell me about it — but as far as I know, there is no way, unless you create your own subnets and VPCs and configure those, so that you know there are resources available. But that can be like a soft limit from AWS that I'm not very aware of. Okay, so let me just clear this. See, okay, good. Do you see my screen?
Or is there another question there? Okay, no questions. So basically, let me do kubectl get nodes. Okay, so I can see here, right? This is the first command. I'm getting all the nodes that are part of my cluster, right? And there's only one node, which is this instance here that's running, and it's showing the Kubernetes version here, 1.20. And it's Ready, right? So the instance is ready. If I go to the AWS console, I can see another instance created there as well, besides my Cloud9 instance now, right? Let's see, okay. Here in the home directory, under ~/.kube, there is some stuff that was created — the config file, right? If I cat that, there is some information about my cluster, how to access it, my keys and all this stuff, right? I'm not gonna show everything, but basically that's it, it's there. You can see that there are some certificates, the endpoint of my cluster and all that stuff. One thing that you need to be aware of on EKS clusters is that by default, once I create my cluster, the API endpoint for my kube-apiserver is public, right? I know that's a long URL and it's hard to guess, but there are attackers, people scanning the internet looking for those API endpoints. So if you don't need that exposed to the internet, be aware of that and make sure that you configure it to be private, and we're going to see that very soon. Okay, so now that my cluster is created, we're going to deploy the objects of my Kubernetes cluster. Right now, my cluster doesn't have anything, right? So if I do kubectl get pods — no resources found. kubectl get pods -A — hey, there are some resources, right? These are the default resources of Kubernetes, of the EKS cluster, right? There is some stuff called CoreDNS, aws-node, kube-proxy running, but there is no application running, right? You can see that they're all running in the kube-system namespace, which is the main namespace for my control plane components, right?
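On that point about keeping the API endpoint private: with eksctl this can be expressed in the cluster config. The fragment below is a sketch based on the eksctl schema as I understand it, not something from the workshop file:

```yaml
# Illustrative eksctl ClusterConfig fragment -- restricts the kube-apiserver
# endpoint so it is only reachable from inside the cluster's VPC.
vpc:
  clusterEndpoints:
    privateAccess: true   # enable the in-VPC endpoint
    publicAccess: false   # turn off the internet-facing endpoint
```

For an already-running cluster, something like `eksctl utils update-cluster-endpoints --cluster=<name> --private-access=true --public-access=false --approve` should apply the same change — verify the exact command against your eksctl version's docs.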
Let me see — kubectl get namespaces, right? It can be like this, namespaces, or ns as the shorter version, right? And as I said, namespaces are basically folders for organizing stuff in your Kubernetes cluster, right? And by default, I usually have those four namespaces when I create my cluster, either the managed or unmanaged version. Okay, so now let's create the objects, right? Let me go to this file here, and I'm going to describe everything that's there before we apply it. So, I'm creating a new namespace in this cluster objects file, as you can see here. And the way that YAML works, right, every three dashes here is basically a separate file, a separate object, but I put everything together so that it's easy to understand. So I have one object being created here that's a namespace called webapp, right? So basically I'm creating a folder to deploy my web application. This is an RBAC configuration — I'm creating a cluster role and a cluster role binding, saying that the service account — and I'm going to talk about RBAC and service accounts later — but basically saying that the service account that I'm going to use to deploy my web application, so my pods running in this webapp namespace, is running as cluster admin. And that's a bad thing; I'm purposely putting a misconfiguration in my cluster, but that's for the exercise, for the workshop, right? By default, you don't want to do that, you don't want to do that. There is also a Deployment, right? I'm using a Deployment to deploy my web application — I'm creating a Drupal deployment here because we're deploying a Drupal web application. And you can see here down below, in the specs of the containers that I'm using, I'm downloading Drupal 8.5.0, and since there is no specification of which container registry I'm downloading from, it's getting pulled from Docker Hub, right?
So once I apply that object to my cluster, Kubernetes knows: okay, I don't have that image here, so I'm going to download it — by default, it uses Docker Hub to look for images, right? Of course, that's not a good practice, that's not something you want to do, because if someone compromises Docker Hub or puts a malicious image there, then you're downloading that image into your cluster, right? So the best practice is to have your private container registry and store your approved images there, for you to pull into your Kubernetes cluster. Okay, sounds good. And there is also a Service — I'm creating a Service here, which is an object that I can use to expose my containers, right? There are different types of Service. The type that I'm creating here is a LoadBalancer, which is going to create an AWS load balancer and expose my Drupal web application, right? Which is running on port 80, and it's going to expose that on port 80 of the load balancer as well. So as I said, there are different types of Services: there's ClusterIP, there's NodePort, there's LoadBalancer, and I think one more. Yeah, I forgot the other one, no problem, I'll remember. And then I'm adding just kind of a flag here — I use this kind of setup to deploy a CTF challenge, right? So I have a flag here: I'm creating a Secret, and I'm storing that Secret as a flag named CTF in the kube-system namespace, and this is the value of the flag here, right? And it's basically base64 encoded. It's just a flag that we're going to practice on and try to get once we compromise the cluster, okay? Sounds good. So I hope you understood all these objects, and we're going to apply that to my cluster, and you're gonna see that that's going to be very quick.
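To recap before we apply it, here's a condensed sketch of what a manifest like the one just described could look like. This is my reconstruction, not the actual file from the workshop repo — names, replica counts, and the Secret value are placeholders; the deliberate RBAC misconfiguration is the ClusterRoleBinding to cluster-admin:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: webapp
---
# The intentional misconfiguration: the namespace's service account
# gets full cluster-admin rights.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: webapp-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: webapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drupal
  namespace: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: drupal
  template:
    metadata:
      labels:
        app: drupal
    spec:
      containers:
        - name: drupal
          image: drupal:8.5.0   # no registry prefix -> pulled from Docker Hub
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-svc
  namespace: webapp
spec:
  type: LoadBalancer    # provisioned as an AWS load balancer via the
  selector:             # cloud controller manager
    app: drupal
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Secret
metadata:
  name: flag-ctf
  namespace: kube-system
type: Opaque
data:
  flag: ZmxhZ3tleGFtcGxlfQ==   # base64 of "flag{example}" -- NOT the real flag
```

Each `---` separates one object, so a single `kubectl apply -f` creates all of them in one go.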
I go back to the environment, and we go back to the project that we downloaded from GitHub, and that's where my cluster_objects file is. Yeah, just keep in mind — I don't know why I did this, but the first file uses a dash and the second one uses an underscore, so let's check that the command is correct. Yeah, it's using an underscore, so this one. So now I do kubectl apply -f, right? The -f means that I'm passing it a file, so it's going to look at whatever is in that file and apply that to the Kubernetes cluster. And you can see that all the objects that were in that file were created, right? The namespace webapp, the service account, the deployment, the service, and even the secret. Everything is created now. So if I now do kubectl get namespaces, I can see that there is a webapp namespace there at the bottom, right? If I do kubectl get pods -n webapp — so now that I just want information from a specific namespace, I'm using the parameter -n to tell Kubernetes I only want the pods from this namespace, right? See, there's one pod running. What happens if I just do kubectl get pods? No resources found — because if I don't give kubectl a specific namespace, it's going to look in the default one. The default one is this one here at the top, and it doesn't have anything there. That's why it's not going to find anything, right? That's something that people who are starting to work with Kubernetes may forget, and they think: oh, I just created this, where is it? If you don't specify anything, then all the objects that you create are going to be created in the default namespace. But as you can see from the objects that we used here, I created this new namespace, and I told it to create all these objects in the webapp namespace, right? Except this one, which is in kube-system, okay? See, okay, let's see another question: "Where are these API objects going to?"
"Do these get sent to etcd? And then some polling services check for new additions, resources, using APIs?" Yes, yes — so the API objects go to etcd, as a key-value store, right? So they're stored in etcd, and the controllers — the kube-controller-manager with its controllers — check etcd and see what changed. Do I need to create this object? Oh yeah, I need to create this service or this load balancer or this pod. And then it talks to other components. If it's a pod, it talks to the kube-scheduler and says, okay, schedule this pod on my cluster. I don't have that on my cluster right now, and etcd — etcd is the source of truth, right? — etcd is telling me that I need these objects there, so create that for me. So some objects in Kubernetes, how can I say, have a physical representation, right? So for example, a pod: when it's created on the cluster, I can see it, I can see that there is a pod and there is something running, there's a container, and all that stuff. Some other objects are more abstract. They exist in Kubernetes, but there is no physical representation of that object besides the configuration in etcd, right? It's just something Kubernetes does to encapsulate some specific configuration, right? Such as a Deployment or a StatefulSet — it's just a way to help you deploy containers with different characteristics, right? Basically that. So yeah, I hope that answered your question. If not, let me know. Okay, so now I have a web application running, and we exposed that web application to the internet via the load balancer, right? So now I can get the service of my web application, right? And you see that there is a service called drupal-svc of type LoadBalancer running here, and this is what we need, right?
The external IP. So this is the address of my web application that should be running there. So if I copy that URL and open it in my browser — I'm gonna show you right away, let me share the screen again. Change here to my other tab. Okay, here. Yeah, so you can see here that there is a Drupal web application set up at this URL, right? And I need to install it, I need to configure it, and that's what we're going to do, right? But yeah, basically, okay, it's running. I'm exposing this Drupal container to the internet — only that container, right, with the LoadBalancer service, of course. If you go to the AWS console where the load balancers are, you're going to see that there is a load balancer created there as well. I'll show that later. So that's what's being used: my container is running inside my Kubernetes cluster, I'm using the load balancer to expose it, and I'm accessing it through that long URL. So don't use the same URL that I'm using, because your URL is probably different from mine, right? Depending on where you deployed your cluster. So let me install here. I hope that nobody messes with my Drupal. What I want to do here is choose the language — use the standard profile, right? We need to configure the web application and get it running so that we can exploit it. I'm going to use SQLite as the database here, so that I don't need to set anything up. And yeah, it's going to install, and there are just a few minor configurations there. But basically, we all need to do that if we want to play along with exploiting the application, right? I can set anything here, this doesn't matter — this configuration is just to complete the setup of my website — cloudvillage.devcon.com. I'm not going to use any of that. So don't worry about remembering the username or password or anything, because we're not going to use that.
We only did that to set up our Drupal web application. I'm going to use a strong password, just in case anybody tries to mess with my application. Okay, so now the Drupal web application is running and you can see it's working: I'm logged in as admin, I can post, I can do anything. It's a regular Drupal application — it's just an outdated version, because that outdated version has a vulnerability. And now we're going to exploit that vulnerability. Okay, no more questions, so let me go back to the slides. I'm going to talk a little bit about how we're going to exploit it and the vulnerabilities involved, and then we'll come back to this web application. So what we're using today, as I mentioned at the beginning, is threat modeling: figuring out what vulnerabilities exist in the application and then PoC-ing them. That's one of the things I do in my daily job — analyzing new technologies, especially cloud and container technologies, looking at hypothetical attacks, creating a threat model, and then trying to create a proof of concept for it. One of the things we like to use is the MITRE ATT&CK framework. As I mentioned, MITRE ATT&CK is very well known in the infosec community: it's a knowledge base of tactics and techniques based on real-world scenarios. There's MITRE ATT&CK for Enterprise, for Linux, for Windows, and even one for cloud — now called IaaS, Infrastructure as a Service — covering the major cloud providers. But we didn't have one for containers and Kubernetes until this year.
So what happened was, there was a matrix released by Microsoft in April 2020 where they used the MITRE ATT&CK structure of tactics and techniques — that matrix approach — to create their own matrix. Based on the data they were seeing on Azure, on their AKS clusters, they said: these are the techniques attackers are exploiting in Kubernetes environments. But that wasn't an official MITRE matrix, an official MITRE project. So in December last year, the MITRE team for cloud and containers published a blog post asking the community for help collecting data about real-world scenarios and what attackers were doing in those environments — either previous real-world attacks that happened to organizations, such as Tesla (I don't recall the others now, but there are many), which they used as a baseline, or honeypot data on what attackers were doing. Since we do this research on a daily basis, we had a lot of data, so we reached out to MITRE to provide it and help build the matrix. The work started around the beginning of this year, and in April they released the official MITRE ATT&CK for Containers — and I'd add "and Kubernetes" as well; that's not what MITRE calls it, but some techniques in it are very specific to Kubernetes, or to orchestrated container environments. So this is the official MITRE ATT&CK for Containers, released in April this year. You can see the common tactics and the techniques there; some of those already existed in the ATT&CK framework but didn't have the context of container environments.
So they added that reference, or the specific technique or actor, and some of the techniques are brand new — they didn't exist before. It was a cool project to help build with MITRE; there's the link to the matrix, and this is the official ATT&CK for Containers matrix. There are different threat actors actively exploiting these environments. On the honeypots we deployed, it took less than 24 hours for attackers to compromise our environment — and we didn't publicize it. We just deployed a Kubernetes environment with a vulnerable web application, and within 24 hours attackers had compromised the cluster, even broken out of it, compromised the cloud environment we'd set the cluster up in, and started deploying big instances to mine cryptocurrency. And that's usually the end goal with Docker and Kubernetes environments: when attackers compromise them, 90 or 95% of the time the goal is to either compromise the containers already running or deploy new containers to mine Monero. That's what happens most of the time. Okay, so for the threat modeling part — this diagram here was done even before the MITRE framework was released, back last year. What I did was create this threat model diagram based on the Microsoft Kubernetes threat matrix. I created a scenario: what are the steps an attacker can take to compromise a vulnerable web application running on a Kubernetes cluster? So here we have the initial access: the web app running in a pod. We have that already — we set it up on AWS.
And the web app has an application vulnerability — you're going to see it really soon; we'll talk about the vulnerability this outdated Drupal version has. The attacker exploits this RCE, gets a shell inside the pod, inside the container, and from there they can do other things. So this is the diagram I created to help me understand what an attacker would be able to do, and then I did the PoC to validate it — and that's where this workshop came from, from that diagram. I don't know if you can see the whole diagram or if it's too small, but I can share the link later. So yeah, attacking Kubernetes: if you have the site — I called it "Dodge Coins" — if you have Drupal set up now, you should see a screen similar to this one. Okay, so what are we going to do today? Exploit this. Yes, I can share the attack scenario diagram — someone is asking if I have it uploaded anywhere. I do; I just don't have a link to a higher-resolution version handy, but I can share it together with the slides, no problem. So, the initial access — that's where we start attacking Kubernetes. I'm sorry it took so long to get here, but I wanted everyone to follow along and be on the same page setting up the environment, so that you understand the basics of Kubernetes. Once we understand the basics, it's just practicing different scenarios and different attacks, and maybe you can do that on your own later. So: two main entry points into a Kubernetes cluster.
One of the entry points here is a vulnerable web application, and we're going to see why. This vulnerable web application has an RCE — a remote code execution. You can see the CVE from 2018, and we're going to use the exploit code from that GitHub repo to exploit the vulnerability. Another way Kubernetes clusters got compromised before was the exposed dashboard. In previous versions of Kubernetes there was a dashboard, created by default, that let you manage your cluster through a graphical user interface. That got deprecated and it's not deployed by default anymore — you can still install it, but it's not there by default. And that dashboard had some vulnerabilities, some issues; that's how one of the cluster attacks a few years ago, against Tesla, happened — through the exposed dashboard. Since that's not the case anymore, if you look back at the Kubernetes threat matrix by Microsoft, they deprecated that technique, because the dashboard isn't deployed by default. There are other options for initial access — using cloud credentials, compromised container registries, and so on — but I'm going to focus on the application vulnerability here. There's also the kube-apiserver: if the API server is exposed and not properly configured, you can even deploy pods through its API endpoints. Same thing with the Docker daemon API: if that's exposed with no authentication, anyone who can reach the endpoint may be able to do some things or get sensitive information. And the kube-apiserver endpoint is public by default
in some managed services like EKS. Someone asked a question here: do you have any issues with AWS, having vulnerabilities in your clusters? With this workshop, we're just deploying a vulnerable cluster and compromising it ourselves — we're not installing any malware, we're not running scans across AWS. We had some issues in the beginning with our honeypots, but we changed the way we deploy them, and also the environments. I don't think we're going to have issues with this scenario today — but if you do, please let me know. We're going to shut everything down later, so it won't be up for long; we don't want your cluster compromised by someone else. Okay, so one of the main issues, as I said, is the API endpoint: reaching it externally can be a problem. You can see two examples here: on an unmanaged cluster, the Kubernetes API server runs on port 6443, and on EKS it runs on port 443. If I have the URL of that API endpoint, I can just make a curl request to it and get some information back. So let's try that first, before we go into exploiting anything. Let me go to the EKS console — let me share my screen here, sorry, keep forgetting that. So I'm in my AWS console and I went to EKS, the Elastic Kubernetes Service. I check my clusters and I can see I have one running — there should be only one if it's a brand-new account — and that's the cluster we created for this workshop. This is basically the console view of your control plane. Oh, it's telling me I'm using an outdated version; there's a new version available. I can see my configuration here — this is my API endpoint, and it's usually exposed by default.
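What does an unauthenticated request to that endpoint get you? Typically a Kubernetes `Status` object, and the shape of that JSON alone fingerprints the cluster. A local sketch — the response body below is a typical example; exact wording varies by version:

```shell
# Typical body an API server returns to an anonymous request
# (example values; real servers vary slightly):
response='{"kind":"Status","apiVersion":"v1","status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/\"","reason":"Forbidden","code":403}'

# On a real cluster this would come from something like:
#   curl -k https://<api-endpoint>/     (port 6443 unmanaged, 443 on EKS)
# The "kind":"Status" plus code 403 combination is the Kubernetes giveaway:
echo "$response" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["kind"], d["code"])'
```

Even a 403 leaks the fact that Kubernetes is there; the `/version` path usually goes further and reveals the exact server version.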
Let's see — we can either do a curl or just open a new tab in the browser. I'll show you here, okay? So you can see I got an error. Let's try /api, or just /version... yeah. Just accessing the URL, I got an error: a JSON error with a 403 Forbidden, telling me user system:anonymous cannot get path "/". Okay, I don't have access to that, and that's fine. But notice that this response format is very specific to Kubernetes — you can already tell from it that there's a Kubernetes cluster running here. And if I go to /version, which is usually open to anyone by default, I can see the Kubernetes version that's running, the Go version it was built with, the platform, and so on. So this is an information leak that's exposed by default on many EKS clusters. If you deploy the cluster like we did, with the default settings, it's going to be exposed once someone knows the endpoint. Of course it's a long URL, but you can see this is running on EKS, and basically what changes is this string here, this sequence. So attackers can scan the web looking for open clusters that may be running a vulnerable version or are exposed by default — that's what they look for. So yeah, be careful about that. Okay, let me go back to the Cloud9 instance. Now we're going to download the exploit we're going to use. Let me grab that — one second. This is the URL of the exploit; let me share it with everyone, and share the screen too. Sorry. Okay, so this is the GitHub repo of the exploit for the CVE.
It affects Drupal 8.5.0, which is the version we're using. Just a couple of things before you clone this: if you git clone and download it to your Cloud9, you need Ruby installed. Cloud9 should have it, but the exploit might require a dependency, which is a gem — and I'll do that together with you; if that happens, we just install the dependency and we can run it. All we need right now is the load balancer URL where your application is running. So let's go back — let me go to Cloud9, sorry for the screen switching. Save that URL in a notepad or something, because you're going to need it for exploiting the cluster. You can do this from any other machine; I'm just doing it through Cloud9 because it has Ruby installed already, so it's easier for me. I'll add that command as well to the Google Doc — git clone, and done. So I downloaded the exploit, which is written in Ruby, from GitHub, and now I can see it here. If I look at the description, I basically just need to run the exploit against a target: the URL of the vulnerable Drupal application, which is the load balancer URL we already have. So let's try to run it and see if it works; if it doesn't, we install the Ruby dependency and it should work as intended. Basically: drupalgeddon2.rb plus the URL of my load balancer. And let's hope that works... yeah, it's missing a dependency: highline. "Cannot load file highline/import." I think gem install highline should fix it. Another command — let me add that to the Google Doc. I did that already there. Okay, so it should work now. Let me just clear this.
Fingers crossed. Oops, not yet. Why didn't it work? Let me open the terminal again, just in case, and try one more time. If it doesn't work, I'll do it from my own machine. Hmm, why is that? Okay, it installed, and it still doesn't work — maybe I need the path, probably. Let me show you from my machine and then I'll figure out how to solve that for everyone. Okay, let me open a terminal here and share my screen in a second. Yeah, that works. This is the problem with live workshops, live demos. I'm sharing my terminal now. So I went into the Drupalgeddon2 folder where our exploit is, and because I have Ruby and the dependencies already installed, it worked here: I ran drupalgeddon2.rb with the URL of my load balancer, where my application is running. I think there's some issue with the Ruby path on Cloud9; we can figure that out soon. So let me show you what's going on. The exploit is running: it's probing the Drupal site and collecting information about my web application, and I get a shell here. Let's see — it's running as the user www-data, the web server user inside this container. And if you've used Drupal before and know its file structure, you can see from the files that there's a Drupal application running. So this is where I am right now with the exploitation of the web application: in a shell, inside the pod that's running the Drupal container. So what else can we do here? What else can we see? How do I know that this is running in a cluster?
So, some things can help you understand what environment you're running in. One is checking the environment variables: I run env and I see a lot of variables — I don't want all of them, so let me pipe it through grep -i kube. Now I see there are a lot of Kubernetes environment variables, all together: I can see the IP address of my API server, the port it's running on — a lot of information. So this is already telling me this is running on a Kubernetes cluster. Let me add that to the Google Doc, sorry: env piped to grep -i kube. What else can I do here? I'm inside a Kubernetes pod — it's not just running inside a container — so what information can I get? Another thing is that Kubernetes mounts a service account token into every pod, and that token is used by the pod to talk back to the API server. Those files are always in the same location, the same directory. So if I go to /var/run/secrets/kubernetes.io/serviceaccount — let me increase the font a little — let's see if it shows me... my shell is not great, I should probably pop a new shell, okay. Yeah, now I can see it: there are three files here. The certificate; the namespace, which is basically where the vulnerable web application is running; and the token. And this can be used to talk to the API server. There's another tool that helps with getting information about a containerized environment, and it's called amicontained. I'm going to post that to the Google Doc too, along with the serviceaccount path.
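Those three files are all you need to authenticate to the API server from inside the pod. Here's a sketch of the pattern using a stand-in directory, since we're not inside a pod right now (the real mount path is always /var/run/secrets/kubernetes.io/serviceaccount, and the token would be a real JWT):

```shell
# Stand-in for the pod's service-account mount (all values are placeholders):
sa=/tmp/demo-serviceaccount
mkdir -p "$sa"
printf 'eyJhbGciOiJSUzI1NiJ9.demo.sig' > "$sa/token"      # fake JWT
printf 'default' > "$sa/namespace"
touch "$sa/ca.crt"

# Same pattern as in the pod: load the files into variables...
TOKEN=$(cat "$sa/token")
NAMESPACE=$(cat "$sa/namespace")

# ...and feed them to curl (shown, not run -- needs a live API server):
#   curl -sk -H "Authorization: Bearer $TOKEN" \
#     "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/$NAMESPACE/pods"
echo "$NAMESPACE"
```

How much that curl returns depends entirely on the RBAC permissions granted to the pod's service account — which is exactly why over-permissive service accounts are dangerous.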
Okay — and step nine: we're going to look at amicontained. What we're going to do is download it into the pod we're in and execute it. Let me go back here — one second, where is it? Okay, let me share that. Let's see if this works... I posted it to the Google Doc so everyone can follow. I'm going to do it through my shell — it works. Basically, I went to the folder, used curl to download the amicontained binary from GitHub, changed its permissions, and executed it. And this is the output of the amicontained tool. It shows the container runtime is kube; it shows the namespaces — PID namespace true, user namespace false; and it shows whether any Linux security modules are active, like AppArmor, SELinux, or seccomp. You can see here that AppArmor is unconfined, the capabilities the container has, and also which syscalls are blocked. So this gives me a lot of information already — it's good for an attacker to do a recon of the internal environment. I can even curl from here — yeah, see? The same thing I did in the browser tab, I'm doing from inside the pod: since those addresses are in the environment variables, it reaches the endpoint and gets the Kubernetes cluster information. That's interesting. Another thing I can do: you've probably heard about the instance metadata API from cloud providers — they all have one; AWS has one, Google has one, and Azure as well. From a compromised pod, if that's enabled, you can reach the instance metadata API too.
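You don't even need amicontained for some of this: part of the same seccomp and capability data it reports comes straight from /proc, which you can read by hand in any Linux shell (the values you see depend entirely on where you run it):

```shell
# Seccomp mode of the current process: 0 = disabled, 1 = strict, 2 = filtering.
grep '^Seccomp:' /proc/self/status

# Effective capability bitmask -- an all-ones style mask suggests a highly
# privileged context; decode it with `capsh --decode=<mask>` if available.
grep '^CapEff:' /proc/self/status
```

Tools like amicontained automate exactly this kind of probing and add checks for AppArmor, namespaces, and blocked syscalls on top.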
And I'm not going to show my API keys, but I can show you some things that work — let's see, where is it? One second. Let me find my cheat sheet, sorry about that. Okay, I'm not finding it, so I'll show the instance metadata soon, don't worry. Another thing you can do from the compromised pod is talk to the API server. As I said, I just did a curl to the API server — just /api — and you can see I got a 403 error again: forbidden, because I'm not sending any kind of authentication, any kind of token, so it's not working. But as I showed, there are credentials — the service account certificate and token — inside the pod I compromised, and I can feed those into my curl request to talk to the API server. So I'm going to create two environment variables here: TOKEN, reading the token from that directory, and NAMESPACE, reading the namespace from the same directory. And now I do a curl request sending the header Authorization: Bearer with that token to the API server. It didn't work — maybe the variable didn't get set; I probably need to copy the token directly. Fine — yeah, I'll show my token, that's fine. I think we're almost at the end now; I don't think we're going to have time to cover everything. Let me run it again with the token pasted in directly... unauthorized, hmm, something changed. It's not working — okay, no problem, it's fine. Let's go back to the slides and I'll figure out what's going on. So let's see where we are right now on the slides — oops, the slides aren't sharing yet.
Can you see the slides on the screen? Okay, thank you. So we did some internal reconnaissance: we inspected the Kubernetes environment, the environment variables, the service account token; we even downloaded a tool from GitHub and executed it. Yes — there's probably some issue with that curl; I'll check it out. I just don't want to get stuck, since we're almost out of time and I want to finish, but I'll take a look if we have more time. So that's an overview — there's a lot you can do from inside the container. Because of the way this cluster is configured, there's also the possibility of post-exploitation, or persistence on the cluster, via privilege escalation. There's a technique posted by Duffie Cooley, who works at Isovalent now — he used to work at Apple — where he posted a kubectl command to deploy a privileged container that has access to the worker node. On the right side here is the structure of the JSON he uses to override the container he's deploying. Basically, he sets hostPID to true and uses a privileged container via the securityContext — several techniques combined to get access to the worker node. That's one way to do it, and it works here because the service account we created gives me a lot of permissions: I can deploy a new pod. Deploying a privileged pod on the cluster, if you have permissions to do so, is one of the ways of escalating privileges in Kubernetes. Of course, I'd need to download and install kubectl on that pod first to run it and do the privilege escalation.
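A hedged reconstruction of that privileged-pod trick — the image, pod name, and command below are placeholders, not the exact one-liner from the slide. We can't run it without a cluster, but we can at least verify the override JSON is well-formed and carries the two key settings:

```shell
# Override spec: hostPID plus privileged lets the container see the node's
# processes; nsenter into PID 1 then drops you into a shell on the worker node.
overrides='{"spec":{"hostPID":true,"containers":[{"name":"pwn","image":"alpine","command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid","--","sh"],"stdin":true,"tty":true,"securityContext":{"privileged":true}}]}}'

# The real invocation would be something like (needs kubectl + permissions):
#   kubectl run pwn --rm -it --image=alpine --overrides="$overrides"

# Sanity-check the JSON and the two fields that make the escape work:
echo "$overrides" | python3 -c 'import json,sys; d=json.load(sys.stdin); c=d["spec"]["containers"][0]; print(d["spec"]["hostPID"], c["securityContext"]["privileged"])'
```

This is precisely the class of pod that Pod Security admission controls and policy engines are meant to block.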
If we have time, we can try that as well — it can be a little tricky on managed clusters because of how they handle those privilege escalations, but let's see. Let me talk about defenses before we do more attacks, otherwise we won't have time. How can I protect my cluster from attackers? Is Kubernetes secure by default, and where do I start? We've seen that Kubernetes is not secure by default, for many reasons, so let's look at some different things you can do to protect your cluster. One of the basics — and this applies only to unmanaged clusters, where you have access to the control plane — is integrity monitoring: understanding what files are created when you install Kubernetes, and their recommended ownership and permissions, which come from the CIS Kubernetes Benchmark. If you have integrity monitoring set up on a node, you should be alerted whenever those files change, either permissions or ownership — something suspicious is happening, because they don't change very often, and those are the recommended permissions. The CIS Kubernetes Benchmark, if you haven't heard of it, is a very extensive document with best practices for setting up and configuring a Kubernetes cluster correctly. There's also a fairly recent document published by the NSA called the Kubernetes Hardening Guidance, I think — released a couple of days ago, so it's pretty new. But from what I've seen, the document has some issues: they talk about Pod Security Policies, which is something we're going to get to soon, but those are deprecated or being deprecated; and they don't say much about encrypting etcd, which is the database of Kubernetes.
They only have a small example at the end, and they use Dockerfiles with the latest tag, which I don't think is a good idea — that's something you learn when using containers. So I think the CIS Benchmark is the better document to use for deploying your cluster securely. It's the same story with the worker nodes: different files are installed there, with different recommended ownerships — those are from CIS Benchmark 1.5.1; there might be a newer version, so check the CIS Benchmarks website. I talked about the kube-system namespace already and how it can be dangerous. As I said, there is no security boundary between namespaces. The kube-system namespace is the main namespace of your Kubernetes cluster, and it's like Inception: Kubernetes uses Kubernetes to run Kubernetes. All the main pods of your control plane live in the kube-system namespace, so if an attacker has access to those, it's likely they can do a lot of damage. Be careful with kube-system — you don't want to give every user on your cluster access to that namespace. The idea is that you can use RBAC, role-based access control, to protect that namespace from being accessed. I already talked about the API server — how it shouldn't be exposed, and how we can query it to learn whether there's a cluster running. There's also information you can get by running some commands if you have access; in this case we don't have shell access to the control plane, so we can't see the API server's configuration, but you can do that on an unmanaged cluster. The kubelet — we talked about it on the architecture diagram — is the agent that runs on each node of your cluster,
making sure the containers are running in their pods. Two main security settings for the kubelet are restricting kubelet permissions and rotating the kubelet certificates. There's also a blog post I published earlier this year about how attackers are using a kubelet exploit to compromise clusters. As a quick overview — and I'll post the link here; I'm not trying to promote anything, it's a technical write-up of what we analyzed from a very well-known threat actor — the way they work is: they compromise your environment, and once inside, they scan your whole internal network for Kubernetes clusters, specifically targeting the kubelet with an exploit that's been known for a while. Take a look at that blog post — this is something happening in the wild. As I said, the CIS Kubernetes Benchmark is guidance for establishing a secure configuration posture for Kubernetes. There are over 120 security checks for your cluster, and it has been developed by cloud native and Kubernetes security professionals with much more experience than I have — Rory McCune from Aqua Security, Liz Rice from Isovalent, and many other contributors. There are also specific benchmarks for EKS and GKE: because of the way managed services work, they're a bit different from standard, unmanaged Kubernetes, so it's better to look at the specific documentation for those. And I think that's the best guidance out there for protecting your Kubernetes clusters.
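One concrete example of the kind of check the benchmark prescribes: control-plane manifest files should be owned by root:root with mode 644 or stricter (on unmanaged clusters where you control the node). Demonstrated on a stand-in file, since we don't have a control-plane node here:

```shell
# Stand-in for /etc/kubernetes/manifests/kube-apiserver.yaml on a real
# control-plane node; the CIS-style check itself is just stat:
f=/tmp/kube-apiserver.yaml
touch "$f"
chmod 644 "$f"

# Permissions should be 644 (or more restrictive):
stat -c '%a' "$f"
# Ownership check on a real node would be:  stat -c '%U:%G' "$f"  -> root:root
```

Pair checks like this with file integrity monitoring and you get an alert the moment an attacker or a misconfiguration touches those files.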
And if you don't want to check the benchmark manually — it's a long document with a lot of checks you need to do. The way the benchmark works is: for each recommended setting, it tells you the setting, then how to check whether that security setting is properly configured, and then, if it isn't, another command to run to change it to the proper configuration. The benchmark is a PDF with a lot of best-practice configurations, but it's very hard to go through manually, and if you have multiple clusters it's probably unfeasible. So Aqua Security, a company that also makes container security products, created a tool called kube-bench, which is open source and free for everyone to use, that checks whether Kubernetes is deployed securely by validating it against the CIS Kubernetes Benchmark. It's written in Go. Let me post the GitHub link for kube-bench here... found it, yeah. Okay, so you can use that. I have an example here — I don't think we'll have time to run kube-bench on our own cluster, but it's very easy to do. You can see an example of the output showing that my cluster failed a lot of the best practices from the CIS Benchmark — a lot of fails and warnings — and it checks everything: the master node, the API server, etcd, and the worker nodes as well. So if you have clusters running in your organization and you want to see their status, whether they were configured properly, it's a good tool for a quick overview of whether they comply with the benchmark's best practices.
Other stuff that you can do is use some image scanning, before you deploy your containers on your cluster, the idea is to scan them for vulnerabilities or outdated dependencies. And there's different image scanning tools out there. I'm not gonna talk about them. There's different open source and enterprise ones. All those here are either free or open source. So you can take a look at those. There's even like a native Docker scan command that you can use now, which is based off a SNCC. But yeah, there's different others. Okay, now you scan the container image and you deploy that image on your cluster, right? But how do I know that someone has compromised in my cluster, some compromised that container, the pod, right? That's when the cloud native runtime protection comes in, right? And probably if we had like a tool such as like Falco, which is an open source to as well that belongs to the CNCF, now it was donated by Sysdig. And it probably would detect some of the attacks and the shell that we got inside the container, right? So what Falco does is Falco parses Linux kernel CS calls at runtime, right? It's using a technology called EBBF, extended Berkeley packet filter, which is kind of a technology that many container security companies are using to create their own tools, their container security tools, right? And it has a lot of visibility on the system. So you don't need to install agents and like mess with the container, sometimes deploy sidecars and all that stuff. So Falco is a rule engine, right? It has a easy and powerful rule engine. You can create your own rules for your vulnerabilities. It generates alerts based on the threat detectives. I'm even using the Falco sidekick teacher today that I got as a contributor to the Falco project. And it detects unexpected behavior on a cluster, right? And the sidekick is another project that runs with Falco and it gives you better kind of visibility on what Falco has seen, right? On the logs and everything. 
Here, we're almost out of time, but yeah, there is other settings that you can apply to your pods, right? Of course, CPU and memory is limiting resources to avoid denial of service. That's something that you can do. You can apply a security context as well. There's a different set of settings that you can apply on a security context. Some of those are here. And as I said, the Linux security modules, right? The kernel features, SACOM, APRMOR and AC Linux can also be used to apply that to your Kubernetes. Be careful because Docker has some default profiles for those for APRMOR, for example, and SACOM. But when you use that on Kubernetes, Kubernetes doesn't inherit that from Docker, so you need to apply that as well. And yeah, pod security policies is something that got deprecated. It's not being used anymore. It was a way to apply the security context as a whole on a cluster level. And there is new features now, new tools that you can use some alternatives such as OPA Gatekeeper or Kyverno, which are open source tools as well, that you can create policies as code to say, okay, only deploy this container if it doesn't have any high or critical vulnerabilities, right? There is a new pod security now. I forgot to update the slide, but there's a new pod security, it's not PSP, but there's a new version that's being implemented by the Kubernetes SIG off and SIG security team, right? It has been approved already. Yeah, I won't have time to talk over our back and at SIG, network policy, and yeah, let me finish with the basics and I'll share those slides with you. And if we wanna talk about those other things, our back, network policies and audit logs, we can chat later, I'm available on social media, Twitter or LinkedIn. But basically what I wanted to give you here is just an overview of Kubernetes, an overview of the security issues, and also an overview of how to secure your cluster, right? 
It doesn't mean to be, it's not an extensive workshop, it's not an in-depth workshop, so keep that in mind. So I just wanna leave you with a few basic rules with Kubernetes, right? First thing is update your environment version earlier and often, as I said, there's new versions being released very frequently, now three times a year, and the version 1.22 is the most up-to-date one recently, that was released recently. Don't use the cluster admin user for your daily work, right? Treat it like root, right? So the reason being why we got to exploit that vulnerability and then get a shell on the pod and then do bad stuff on the cluster was because the service account of that pod had too many permissions, right? You can even deploy new pods and select privileges and all that stuff. So be aware of that. If you can, use managed Kubernetes services such as AKS, EKS or DKE because they have better security defaults and you don't need to worry about the control plane because they take care of that for you. And as I said, check out the CIS Kubernetes benchmark for more security best practices. I hope you enjoyed this workshop. There's some references here and I can share the whole command list that I used and you can try to replicate that on your environment. And I'm sorry if some of the commands didn't work but I'll take a look and make sure that's fixed and before I share with you. So yeah, thank you.