Hey, can everyone hear me? Okay. First off, does anyone not have a piece of paper like this? If you don't, put your hand up and my colleague will come and give you one. While he's doing that: what we're going to try to achieve in the next 70 minutes is to perform an audit of a cluster. It's almost impossible to secure a Kubernetes cluster in 70 minutes, so instead I'm going to take you on a journey of where you should look, what you should be thinking about when you're performing that audit, and finally what defense mechanisms we can put in place. That way, once we have a good understanding of how secure or insecure our cluster is, when new workloads and new resources are being deployed to the cluster, we can stop the bad ones from getting in — you don't want to have to keep repeating this audit over and over again. To start with: my name is Steve Wade, I'm head of engineering at KSOC, and I have my assistant slash boss Jimmy with me. A brief agenda for the mission we're trying to complete. The first ten minutes will be cluster setup: everyone is going to set up their own cluster and then perform an audit of it. Then we're going to perform an asset inventory — we'll look at what's running on the cluster and which things matter most when you're performing an audit. Then we'll talk about workload hardening: how can we make the workloads that are currently deployed more secure, and what are the things we should look out for? Then we're going to move on to RBAC.
As everybody knows, RBAC anywhere is a bit of a mess. So: what are some of the easy attack vectors from an RBAC standpoint that allow you to elevate your privileges, see things you shouldn't be able to see, and do things you shouldn't be able to do? And finally we'll finish up with some defensive guardrails: we'll try to deploy some workloads, and some of them are going to get in and some of them are not. So let's get started. On your laptop, in an incognito or private browser window, browse to this link. It should take you to something that looks a little bit like this — let me do a quick zoom in. Is the internet behaving right now? I guess we're about to find out. Is everyone successfully on the site? Okay. Click on the getting-started link on the left-hand side there. Don't just click the link — open it in a new tab, because we're going to flick back and forward between the website and the cluster. When you get to this blue button, right-click and open it in a new tab as well, because we're going to use this website to drive the workshop we're about to go through. You should receive a login prompt; log in using the credentials on your piece of paper. Using incognito is really important here, so you don't get mixed up with your corporate G Suite account or your personal one. Adam will bring paper to anyone who needs it. When you get here, just click "I understand". If anyone needs any help, please put your hand up and Adam, Andrey, or I will come and assist. Is anybody following along at home? You can do this on your own: click through, and when it asks about Cloud Shell, click "Start Cloud Shell". When you get to this section down here, this is the important piece: there is a repository that we're going to clone into Cloud Shell that's going to make things a lot easier.
So don't just hit confirm — you need to scroll down and click "trust repo". Hopefully you all trust that I'm not going to do something nasty to you. Hit "trust repo", and then finally hit "confirm". I'm going to stop at this point and make sure that everybody has a Cloud Shell running before I move on. Anybody not at this stage? Adam, can you assist the gentleman at the back? Okay. The first thing we're going to do is run `make init`. What `make init` does is ask us to authenticate — when it asks, go through the flow using the same credentials that are on your piece of paper. It then installs some default workloads, plus a couple of binaries and packages that we're going to leverage throughout this workshop, and then we'll get started with the workload and asset configuration. What you'll want to do is flick between these two tabs. This little button here lets you copy to your clipboard, so first, copy and paste this one. It will ask you for an authentication code: take this URL here, open it in a new window, log in again with your credentials, copy the token at the bottom, enter the token here, click "authorize", and then reconnect. Once you've gone through the flow, you should see this little prompt down at the bottom. The key point is what you see in yellow: this should be your project ID. If it is not your project ID, put your hand up and I'll go through the flow with you again so we can make it work.
Okay, so at the top here you want to select the organization: select securekubernetes.com, click through, then run `gcloud auth login` again and repeat the flow. It will ask you to authorize, you'll get a token, and you copy the token back in. So, from the top: select the organization, select securekubernetes.com, select your project. Once you've selected your project, come back to the terminal, do a `gcloud auth login`, go through the flow, copy and paste the token into the terminal, complete the prompts, and then you should see that little yellow section at the bottom. Anyone not at this section? Can I zoom in? All right, are we good to continue? Cool. Okay — once you've performed this, if you do a `kubectl get pods`... oh, hold on. You'll need to re-run init, because we need to go and fetch a kubeconfig, and now that we've finally authenticated we should be able to. So run `./init.sh` again. What I did there: I ran `./init.sh`, it goes through the loop again, and now that you're authenticated it obtains a kubeconfig for this cluster. To validate that your kubeconfig is working, run `kubectl get nodes`; you should get an output like this — just a single node. Anybody not at this point, still stuck on setting the project? Okay, where did you get to? Have you got this link? Go to the tinyurl.com SecCon workshop link, make sure you open it in an incognito browser, and open this button in a new tab, because we're going to flick between the website and the cluster. Once you're in there, select the project — right at the top here.
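The setup flow just described condenses to a few commands. These assume the workshop's `init.sh` from the cloned repo; the project ID placeholder is whatever is printed on your credentials sheet:

```shell
# Authenticate: prints a URL, you log in and paste the token back into the prompt
gcloud auth login

# Point gcloud at the workshop project (placeholder: the ID from your paper)
gcloud config set project <your-project-id>

# Re-run init: now that we're authenticated it can fetch the kubeconfig
./init.sh

# Sanity check: should print a single node
kubectl get nodes
```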
No problem. Okay, so the first thing we're going to perform is an asset inventory. We want to know: what's running? What are we dealing with? What are the things we need to start looking at from an audit perspective? I'm not going to go down this list of things to look for exhaustively; however, some things that are useful, and that I'd recommend you look for: the version of Kubernetes that's currently running. Why is this important? There are CVEs in some Kubernetes versions, and there are also deprecations in some of the API specifications. That's not necessarily a security concern per se, but when it comes to upgrading your Kubernetes clusters it's an important thing to note. If there's a deprecated API and you're still using it, and you upgrade your Kubernetes cluster to a version where that API no longer exists, your existing resources may carry on running — but the next time you try to update one of them, say you upgrade a Deployment or a ReplicaSet through the old API, it will not work. So one thing you really want to focus on, specifically if you're a platform engineer responsible for the Kubernetes clusters at your organization, is to stay ahead of the curve of the application developers, because what you don't want is to upgrade your Kubernetes cluster and then find the application developers can't deploy their applications.
That's not a good look. There are some links here to the official Kubernetes CVE feeds, so you can see all of the CVEs — yes, Kubernetes does have CVEs — and the versions they were fixed in. I'm not going to go through these in detail, but feel free to take a look in your own time. Then, from a managed-provider perspective, there are also a number of things to note. Providers like EKS, GKE, and AKS on Azure all have limits: there are edges and boundaries that you can't go outside of. I've left the links here; they're important, specifically when you're dealing with security groups and your applications need to talk outbound to something running in those cloud providers. Hands up, people running EKS, or on AWS? Google Cloud? Azure? None of these, because you're running on premise — all right. Or no Kubernetes at all. Okay. Some other things to note from a networking perspective: there are limitations there as well. Be aware of the subnets you're running in, and of some of the plugins — specifically the AWS VPC CNI plugin for EKS, which gives you an extension to actually leverage Amazon security groups, but which can sometimes get a little bit mangled. So I'd recommend, specifically if you're running on AWS, that you read through those plugin configurations. The first thing we'll want to do when we perform an audit is find out: what are all of the different API resources currently available?
The first command we're going to run is `kubectl api-resources -o wide`. What that does is print out a load of noise: essentially every API version that's available in your Kubernetes cluster, and the resources each API version provides. This gives us a good initial understanding of what's currently possible to run. This is not what's running — these are the options available to us to deploy. We can see here, for example, that we're able to deploy roles, or pod security policies, or pod disruption budgets, so when we go deeper into our audit we should be looking at these types of resources and trying to find the number of resources currently running for each of these API groups. From there we can go further and look, from a security standpoint, at the configuration of each of those resources. Essentially it's a layered approach: we start right at the top and get a good overview of what resources can possibly run, then we go to the resources themselves, and finally to the configuration of those resources. So that's our initial starting point — again, that was `api-resources -o wide`. The second thing we want to know is: what containers do we currently have running? Again, I'm flicking between the asset-inventory page and the terminal itself, so I'm now down here on "list all container images". When we print this out, we can see the number of containers currently running and what kinds of things they are. We can see here, for example, we have nginx 1.9.3, and we have an unprivileged nginx.
We also have a couple of things running sudo. So from a security standpoint we could think about scanning these images: what kinds of vulnerabilities do these individual images have? As an application developer myself, sometimes security isn't top of mind — releasing that feature to production is more top of mind. The security of the application itself, yes, that's top of mind, but the artifact that's currently running in the production cluster — or even in your development clusters — maybe not. Maybe you've got a generic configuration: you use something like Helm or Kustomize, your SRE team or platform team provide you with the configuration, and they say all you need to do is add your image here, set a few variables, and away you go. But that whole configuration they gave you could be completely insecure, and you're replicating it over tens or hundreds of workloads — so now you've got tens or hundreds of insecurely configured workloads.
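The inventory steps so far can be scripted; the jsonpath pattern for listing images is the standard one from the Kubernetes docs, the rest is plain kubectl:

```shell
# What *can* run: every API group/version and the resources it serves
kubectl api-resources -o wide

# Which Kubernetes version we're auditing (check it against the CVE feeds)
kubectl version

# Every container image in the cluster, de-duplicated, with a usage count
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s '[[:space:]]' '\n' | sort | uniq -c | sort -rn
```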
And there could be many reasons for that; we're going to dig into some of them. Hands up, people who are doing container image scanning. A handful of you. Tools you may want to use: things like Grype, things like Trivy, Clair. Make sure you're running them in CI, and make sure you're running them before you push to the registry. Some people I know push to the registry and then scan. Well, if you push to the registry and then scan, those images are already available — they could be on your Kubernetes cluster before you even get the chance to scan them. The other thing I'd recommend is using the image digest and not the tag itself. People may be familiar with time-of-check to time-of-use; here's what that means. Imagine I tag an image 1.2 and configure that in my Kubernetes deployment, and I'm about to deploy it. I tagged the image 1.2, remember. Then, because all of these tags can be overwritten, someone else pushes an image tagged 1.2. Now, when I deploy my application, I'm deploying theirs and not mine: at the time I use it, it's different from the time I checked it, because tags can be overwritten. An image digest, however, is a specific point in time. If you use the image digest, then when we deploy that workload to Kubernetes it will use exactly that digest — which is the makeup of the image itself, not the tag. So from a security standpoint, I'm not really happy with the images I'm seeing here. I'd much prefer digests: something unique that I can validate. Not for the nginx ones here, but if I had my own registry running at my own company, or I was a rogue employee.
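The digest pinning described above looks like this in a Deployment manifest. The digest shown is a placeholder, not a real one — you would resolve the sha256 yourself at build time, for example from your registry or `docker inspect`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        # A tag (nginx:1.2) can be silently re-pushed; a digest cannot change.
        # <digest> is a placeholder for the sha256 you verified at build time.
        image: nginx@sha256:<digest>
```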
I could easily come along and override someone else's tag, and it could be doing all kinds of nonsense: bitcoin mining, browsing around trying to plot the landscape of the Kubernetes cluster — it could be doing anything, and the application developer has no idea. They just see the tag, they've deployed it, and maybe it's working; maybe I've added another process in there that they don't even know exists. From an application standpoint it works — they can hit an endpoint, they see the website — and I'm over here doing bitcoin mining on the side. The next thing we're going to do is look at all of the resources currently running in our cluster. Here we can do `kubectl get all --all-namespaces` — kube-c-t-l, kube-control, kube-cuddle, depending on how you say it. What this does is give us a good understanding of everything that's running in every namespace in your Kubernetes cluster. Granted, from an RBAC perspective we need the ability to see everything in the cluster — and because we're talking about security, I decided to give everybody cluster-admin, because that's incredibly secure — so we can now see everything. If we run this, Kubernetes displays it quite nicely for us: we have the ReplicaSets that are currently running, the Deployments, DaemonSets, Services, all of the pods, et cetera. However, one thing to note is that this is not giving you absolutely everything — it's giving you everything from a curated `kubectl get all` standpoint. There is a tool called ketall — `kubectl get-all` — which actually provides you all of the resources, and what we're going to do is run it now. You'll see it prints out a lot more than the default `kubectl get all` does. So when you're doing a cluster audit, don't just take what kubectl tells you as gospel.
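The deeper tool being referred to here is, I believe, ketall, which is also distributed as the `get-all` krew plugin. Assuming krew is already set up, it looks like this:

```shell
# Install via krew (assumes krew is already installed and on your PATH)
kubectl krew install get-all

# The "real" get all: walks every listable resource type in the cluster,
# not just the curated handful that plain `kubectl get all` covers
kubectl get-all

# Or run the standalone binary directly
ketall
```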
There are other tools around that give you a much deeper and richer response. Now, in our terminal, if we just run `ketall` — it takes a little bit of time — you can see there's a lot more printed out: we've now got StorageClasses, we've got things about RBAC, we've got PriorityClasses. There's a much richer picture now of what's going on in the cluster. I use this tool more than I do `kubectl get all`, because this is the real "get all"; `kubectl get all` is just the bare minimum. So now we have an understanding of the images currently running and the resources currently there. We've talked about images, why tags are not great, and why we should be using image digests — that's something, from an audit perspective, to take back to your application developers and propose: move to image digests, don't use image tags. We've got a good understanding now of what's running: some Deployments, some DaemonSets, some pods. Now we're going to start looking at workload configuration. If we copy this link here — I'm now on number two, workload configurations — copy and paste the command, click yes, and we'll go through an interactive flow. What I want to talk about is some of the ways that applications and workloads can be insecure. There are many different ways; we're going to talk about how they become insecure, and also, from a defense-mechanism standpoint, what we can do to secure them. So, the classic one: pods running as root. Everybody has at least one pod running as root in their Kubernetes cluster. If you hit enter, we're going to create a deployment called nginx, and we're going to use the well-known nginx container.
We just keep hitting enter. We might think nginx is probably secure. Well, the standard nginx container runs as root, so if I can get into that container I'm going to have all kinds of fun — I could do whatever I wanted. The default nginx container is now running as root. So what options do we have to make sure that can't happen? From a cluster-audit perspective, containers running as root are a problem, especially if I'm a rogue employee who's managed to get into the Kubernetes cluster, or you've given me too much access and I go rogue. We have the ability to use security contexts, so we can set runAsNonRoot to true. What does that look like from a deployment perspective? We're down at the bottom, in the securityContext section, and we're saying that for this image, we want to force it to not run as root. If we click enter here, we'll see that the pod can't run: the pod actually runs as root, but we've set the security context to runAsNonRoot, and therefore Kubernetes is not going to start it. We can confirm that by doing a describe of the pod and looking for the error: Kubernetes is telling us we have a container that's trying to run as root, and we've specifically told it that it can't. So the security context is going to be a key thing to look for when we're auditing workload configurations.
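The runAsNonRoot check just demonstrated, as a manifest fragment. The image is the stock nginx, which does run as root, so this pod will be refused:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        securityContext:
          # The kubelet checks the container's user at start time and
          # refuses to run it as root, so this pod never comes up.
          runAsNonRoot: true
```

`kubectl describe pod` then shows an error along the lines of "container has runAsNonRoot and image will run as root".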
What a lot of people may or may not be doing is getting their deployment configuration set up without knowing, or without worrying, about the security context. And there are things you have to think about here: not everything, but some things are going to have to run as root, or you may be using public images — like the nginx one — that run as root, and there's real work from an application-development perspective to get an application to not run as root. So what I'd recommend, from an application-developer perspective, is the concept of base images. Whether you're running Java or Go or whatever language, construct base images that meet and comply with a specific set of standards in your organization. One of the obvious ones: don't run as root — run as a specific type of user, put the binary in a specific location, put the config for the application in a specific location, set the user and group IDs. All of these things let you define workload configurations that are standardized across your organization, and then you can start to look for anomalies: anything that doesn't have the standard user ID or group ID doesn't conform to your company standards, and you can alert on it. With the ever-changing Kubernetes landscape, and the ease with which you can deploy hundreds of applications at your company, consistency is going to be key. You don't want to go on a long root-cause analysis to figure out what's going on with your application. If there are standards your applications must conform to, it's better that a non-conforming workload cannot be deployed at all than that it gets deployed and is vulnerable. A little friction with the application developer, to make sure we have secure workloads running on our platform, is a lot better than being audited and having to tell everybody that all of your containers run as root and everyone's just copying and pasting configuration around. So what we've seen is a public image that's running as root, and now we want to stop it from running as root — and there are a number of options for doing that. Nginx Inc actually created an unprivileged container image. Newsflash: if you're running nginx, this is the one I'd recommend you use, not the normal public nginx image, which is highly insecure. It's the same deployment, different image; we set runAsNonRoot to true and deploy. Now we see that this pod is actually running, which means this container is clearly not running as root. If we keep clicking through, we can see from the `id` output that we're now running as the nginx user. So this is a more secure image than the well-known nginx image. Sometimes there are images available to you that are more secure than the ones you know and love, so don't just go on Docker Hub, type in the thing you want to leverage — nginx, as an example — and take it verbatim. Do some due diligence into how secure the container you're deploying actually is, or put standards and processes in place so that certain workloads with a certain tag, or from a specific registry, can't even be deployed. Sure — yes, at a per-resource basis. What I'd recommend, again, is to try to create application templates. If you're using something like Helm or Kustomize, the more standardized you can make the application configuration, the easier this becomes to manage. So: we can force workloads in Kubernetes to not run as root.
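The org-standard base image described above might look like this as a Dockerfile sketch — every name, path, and ID here is illustrative, not a real standard:

```dockerfile
# Illustrative base image: non-root by default, fixed high UID/GID,
# and conventional /app, /config, /data locations.
FROM debian:bookworm-slim

# A deliberately high, org-wide UID/GID makes anomalies easy to spot
RUN groupadd -g 40000 app && useradd -u 40000 -g app app

# Standard directories: binary in /app, config in /config, data in /data
RUN mkdir -p /app /config /data && chown -R app:app /app /config /data

# Numeric USER so runAsNonRoot can be verified without inspecting the image
USER 40000:40000
WORKDIR /app
```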
That's great, but we can also set the user ID and group ID in that same configuration. Same securityContext, but now we force the user and group that this application runs as. Why do we want to do this? Why is it useful? Imagine I broke out of this container and got to the underlying host. If I was running as root, I can now do whatever I want on that host. If I set a user ID or group ID that isn't going to exist on the host, then even if I break out, it's highly unlikely I'll be able to do much. That's why these UIDs and GIDs are so high — I deliberately set them high because they're unlikely to clash with anything if anyone manages to get out of the container onto the physical node itself. We set these, again, with the security context. If we hit enter here, we see that the application is able to run, and we can see we've successfully set the user ID and group ID to what we specified. Again, from a standards perspective these things are really important: build a standard within your organization for what to set these to, and roll it out across the board. Consistency is key when it comes to Kubernetes. It's far too easy to deploy a load of rubbish that's insecure — and then if you're the poor person in the platform team who has to perform an audit, you have to go and dump all of this onto the application developers, they'll all do something completely different, and you'll be back to square one, rinse and repeat, over and over again. One caveat: images must be designed to work with runAsUser and runAsGroup.
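The user and group settings just demonstrated, as a pod-template fragment. The IDs are illustrative, deliberately high so they're unlikely to exist on the host:

```yaml
    spec:
      containers:
      - name: nginx
        # Nginx Inc's unprivileged build, which tolerates a non-root UID
        image: nginxinc/nginx-unprivileged
        securityContext:
          runAsNonRoot: true
          runAsUser: 40000    # illustrative; pick an org-wide standard
          runAsGroup: 40000   # high values are unlikely to clash on the node
```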
We can't just set them to anything and expect them to work. Let's try deploying the public nginx image with the user and group configuration set: all we've done here is switch the image specification back to that well-known nginx image. We're in a crash loop — we can't run this image with this specific configuration. And this is actually the safer outcome; this is the position we want to be in: workloads that don't meet our standards should not run. We can see it needs some privileges, because it's owning files and writing files all over the place, and the user and group we've set are unable to do that. This is where, from an audit perspective, you start having conversations with your application developers: "Look, I've reviewed things, I'm trying to set some standards here, and your application isn't working under them — let's go back and forth and move your application configuration to the standard we want to set." Important things to note here: the reason we're choosing such a high user and group ID is that it reduces the risk of that user existing on the underlying host. It's important that these configuration options are set, because we don't want to keep repeating the same audit over and over — going back to the application developers again and again and telling them they have to change things. Some things do have to run as root, and if you need root-like file access you can set the group ID to zero. There are a couple of links there explaining why these configurations are important — all of this material will be available afterwards, so don't worry about trying to write the URLs down. Any questions so far, or shall we keep going? Okay. The next one is privilege escalation.
If I can elevate my privileges, I can switch to sudo and do all kinds of things: run `apt-get update`, for example, start installing all kinds of packages, go rogue on the container, start curling endpoints, et cetera. So again, we're going to create a deployment that uses this highly "secure" docker-sudo image, and when we look at it, it's got a user and group set. From an initial investigation, this container image is looking a heck of a lot better than the nginx one — some user and group configuration is set. Maybe we're good; this looks pretty solid against what we were just checking for in our audit. However, now I run `sudo id`, and I actually have the ability to elevate my privileges to the root user — and now I can do all kinds of things. So from an audit perspective, when you get access inside a container, try to elevate your privileges and see what you're able to do. Just because the initial settings look good from the offset, always test the boundaries. Now what we'll do is set the allowPrivilegeEscalation configuration to false, and we should no longer be able to elevate our privileges. It's exactly the same deployment; under that securityContext we set allowPrivilegeEscalation to false. Now if we deploy this and try to sudo, we don't have the ability to do it. So, from a security standpoint: stop people from being able to switch to sudo. Maybe some of your applications require sudo
This is again another conversation with your application developers start to Put your configuration that your application needs in specific directories allow the user that that That container is running as to be able to read and maybe write to that directory So one of the things that I would recommend from a container perspective is having a you know a slash app Directory which is where your application binary runs Slash config for where your config files live and then slash data for where your data lives make Now we start to have a standard right we can allow the user whatever we've Whatever you wouldn't user IDing and group ID that we've specified we allow them to be able to use those specific three directories and nothing else We're now starting to get to get some secure standards. So enable service links. This is an interesting one This is a very convenient way if I get inside your container To be able to start to plot your Kubernetes landscape from a Kubernetes landscape What I'm talking about is the services that are currently running within Kubernetes So by default Kubernetes will add in environment variables into your pod to make it convenient in inverted Airquotes for you to be able to discover other cute other Kubernetes services. So what we're going to do here is we're going to run a busy box pod And if we go inside that pod and we do an end you'll see that the service that I just created I know everything about it. I know where it is. I know the IP address. I know the port that it's currently running on From a hackers perspective. This is a dream, right? Imagine you've got hundreds of hundreds of these services. All I have to do is get into one I do an end. I get all of your services. I know all the endpoints. 
I know all the ports. I can just start running attacks against them and plot your Kubernetes landscape from the inside. It only takes one pod with this default configuration for me to start mapping your entire Kubernetes landscape. Luckily for us, Kubernetes has a way to stop these variables from being injected — not a well-known configuration option. Under the pod spec, you can set enableServiceLinks to false. Now, when we run the same container image with this set and run `env`, we only get the KUBERNETES_* variables — we keep those so we can talk to the Kubernetes API if we need to interact with Kubernetes — but the service we previously deployed, we cannot see anymore. From an auditor's perspective, think about what someone could do with the information that's there by default: Kubernetes, by default, is giving me everything. The application developers should already know their endpoints — we talked about consistency — so there's no need to display all of this information inside the container. Next: seccomp profiles. By default, workloads deployed into Kubernetes do not get a seccomp profile. Pretty insecure; you probably want to set one. Luckily, Kubernetes gives us the ability to set a seccomp profile. What I've done here is deploy an nginx image and look at the seccomp profile it has applied, and we can see there is none. However, let's keep going: we can set seccomp profiles by configuring an annotation on the workload to set a specific profile type.
So Kubernetes actually provides you with a couple of defaults out of the box that you may want to use, or you can create your own. They are a specific type of resource in Kubernetes, and then at your workload level you set an annotation specifying the seccomp profile you want to leverage. Dropping capabilities. Not every container and workload running in the Kubernetes cluster needs all the capabilities it could possibly have, right? Applications barely need any; some public images will require more. So again, we have the ability to set this within our security context. Or you can use upstream images, like the nginx-unprivileged image we were using before, which set some of this for you by default and reduce the capabilities available to you. So here we're back to our normal nginx image, and we're going to drop all of the capabilities within nginx — we're not going to allow any of them. This is the default position I would go with: drop all of the capabilities and slowly but surely add back the ones you need. Don't allow all of them and then work backwards. This is an iterative process you'd have to go through with your application developers to understand what capabilities their application actually needs. So if we deploy this with all capabilities dropped, we can see that nginx is unable to run, and when we look at the logs we can start to understand what it needs — it's chowning some files here, so we need to grant it the capability to chown things. Again, this is an iterative process, right?
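On the seccomp point just made: since Kubernetes 1.19 there is also a first-class `securityContext.seccompProfile` field, generally preferred over the older annotation approach. A minimal sketch using the container runtime's built-in default profile:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # apply the runtime's default seccomp profile to all containers
  containers:
    - name: nginx
      image: nginx
```

`type: Localhost` (with a `localhostProfile` path) can be used instead if you ship your own custom profile to the nodes.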
You're going to have to keep trying to run the application, dropping and allowing each capability, until you reach the holy grail and it actually runs. Your application developers are going to love you, because this is probably going to be a 10-, 15-, 30-minute exercise for every single application. So how do we reproduce this locally? We run the Docker container itself with the `--cap-drop` flag set to `all`, so it drops all of the capabilities. We're in the same position again — we can see that it needs to chown. So now we can keep the drop-all but add the `CAP_CHOWN` capability back, run it again, and see what happens. I'm not going to go through the whole iterative process — there are four or five capabilities it actually needs — but eventually you'll get to a configuration where you know the exact set of capabilities your application needs in order to run. Again, it's time-consuming; however, it makes sure the container configuration is as secure as it possibly can be. You continue dropping and adding them until you reach the holy grail and the application actually runs. So this is the actual nginx configuration — the very insecure run-as-root image — with the capabilities that needed to be added.
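The end state of that iteration for a root nginx image might look like the sketch below. The exact capability set shown (`CHOWN`, `SETGID`, `SETUID`, `NET_BIND_SERVICE`) is illustrative — treat it as an assumption to verify against your own container logs, since it depends on the image and its configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-minimal-caps
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        capabilities:
          drop: ["ALL"]   # start from zero capabilities...
          # ...then add back only what the iterative testing showed is needed
          add: ["CHOWN", "SETGID", "SETUID", "NET_BIND_SERVICE"]
```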
So again, to reiterate: when you're performing the audit, one of the things we should be looking for is workloads dropping all capabilities and only adding back the ones they need. If this configuration is not set at all, they get all the default capabilities. We can see the nginx image successfully running now with those capabilities finally added. Now, we also have images available where we can drop all of the capabilities because the image doesn't need any — this is the ideal end state. I don't want to have to add capabilities line by line in my Kubernetes configuration. By default, the nginx-unprivileged container doesn't require any capabilities to be added; therefore, we can drop them all. This is the ideal configuration to get to. So again, we can deploy that workload, and we can see it running successfully. One of the things to bear in mind now is: what is the application actually doing? What does it need to do inside the container?
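That ideal end state might look like this — note that the unprivileged image listens on 8080 rather than 80, precisely so it doesn't need `NET_BIND_SERVICE` or root:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-locked-down
spec:
  containers:
    - name: nginx
      image: nginxinc/nginx-unprivileged
      ports:
        - containerPort: 8080   # unprivileged image binds a high port, not 80
      securityContext:
        capabilities:
          drop: ["ALL"]   # nothing added back — the image needs no capabilities
```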
Maybe it's just running a binary that provides an API endpoint, or maybe it actually needs to write some configuration at startup so it knows where the database connection or a third-party application is. By default, all of your pods have a writable root filesystem — I can go in there and write whatever I want. Obviously we don't want that. Same thing again as when I was talking about having that common directory structure in your application configuration — this is why we want it. We want to make the root filesystem read-only by default, and then have specific directories where the application can perform its work. It doesn't need to write to every directory on the root filesystem. So what I'm doing here is running our nice Docker sudo image from earlier and running a `sudo apt update`. Because the root filesystem is writable, I can install all the packages I want and run anything I want. From a hacker's perspective, this is a dream: I can install all kinds of tools, work out what's going on in your cluster, start to sniff traffic between services — the list of what I can do here is endless. We don't want every application to have a writable root filesystem. Luckily for us, Kubernetes provides a security context where we can set the root filesystem to read-only. Again, we're going to run the same image.
We're going to use the security context and set `readOnlyRootFilesystem` to true, and when we try a `sudo apt update`, it's not possible — we no longer have the ability to do that. We can also try this with the public image we have available: set the read-only root filesystem to true, and the container is unable to run, because — if you remember from before — it needed to write to the underlying filesystem. I'm going to skip through this. So what we need to do is find the directories this application needs to write to, and once we have those, we can mount empty directories inside our container at the specific locations the application actually needs. If our application configurations follow a standardized directory structure, this becomes very easy: we have the same structure everywhere, we create some empty volumes, and the application can read and write files in them. Okay, let's cancel out of here.
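The pattern just described might be sketched like this — the image and mount paths are illustrative (stock nginx, for instance, needs a writable cache directory and somewhere to write its pid file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  containers:
    - name: app
      image: nginx   # illustrative; the paths below are what stock nginx writes to
      securityContext:
        readOnlyRootFilesystem: true   # everything else on / is now immutable
      volumeMounts:
        - name: cache
          mountPath: /var/cache/nginx
        - name: run
          mountPath: /var/run
  volumes:
    - name: cache
      emptyDir: {}
    - name: run
      emptyDir: {}
```

With a standardized layout like the `/app`, `/config`, `/data` convention from earlier, the same two or three `emptyDir` mounts work for every application.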
Let's go to RBAC. We've talked about workload configuration and how to secure our workloads, but RBAC is what allows users and workloads to access the resources exposed by the Kubernetes API. What I'm going to demonstrate here is the unnecessary use of a list permission. Kubernetes by default lets you get, list, and watch things. What we're going to do is create a service account inside Kubernetes, provide a role where the only thing we can do is list secrets in that specific namespace, and create a role binding that binds the service account to that role. We create a secret in that namespace, then create a deployment that uses the service account we just created — so remember, it can only list secrets. Now I'm inside this running container and I run this command, which tries to get that specific secret we just deployed. What you'll notice is that I can't get the data from that secret — I'm getting forbidden. However, if I use the Kubernetes API to do a list instead, listing all the secrets currently available, I can now see every secret inside that namespace and the data within each one. So from an audit perspective, be very careful when you see list in use — and watch has the same problem. You really want to make sure the user or application uses get rather than list; list is giving you way too many permissions. I can see every single secret in my entire namespace when I may only want to get one specific secret. The next thing we want to do now, after we've gone and looked at some of the RBAC we have available to us:
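The get-versus-list distinction above can be enforced in the Role itself. A sketch, with hypothetical namespace and secret names — note that `resourceNames` only constrains verbs like get; list and watch cannot be restricted to named objects, which is exactly why granting them is so dangerous:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-one-secret
  namespace: demo                           # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["app-db-credentials"]   # hypothetical secret name
    verbs: ["get"]                          # no list, no watch
```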
We've looked at some of the workload configuration; now we actually want to perform an audit. The things we want to look for are the things we've discussed and wanted to stop. Some high-level ones: containers running as privileged, containers that are allowed to perform privilege escalation, containers that can run as root, and containers that do not have a read-only root filesystem. So what we're doing here is honing in on workloads more than RBAC — RBAC is still very important from an audit perspective, but we're focusing here on the workloads that are currently running. There is a tool called kubeaudit, by Shopify — I don't know if you're familiar with it — that lets you run an audit, look at the configuration of the workloads running in your Kubernetes cluster, and see where the insecurities or vulnerabilities are. So we've deployed kubeaudit, and we're now going to run `kubeaudit privileged` in the namespace we've been provided. What the privileged auditor does is look for any containers that are allowed to run as privileged, and if we scroll up we can see it lists out each individual deployment and gives you warnings about your configuration. We can see here that it's recommending we set privileged to false on all of the applications we previously deployed. There are a load of kubeaudit flags available to you; you can run kubeaudit in cluster mode, which essentially runs it inside the cluster and constantly audits for you, giving you a good understanding of your Kubernetes landscape and a report out the back of it. You can export that as a CSV or JSON file and use it to really dig into your audit. I wanted to show guardrails, but I'm conscious of time, so I'm going to leave it there.
I'm going to open the floor for questions. Thanks very much again, Jimmy. Myself, Adam, and Andrew will be at the expo hall, booth G32 — come along to the booth and see what we're doing. Thank you very much. Does anyone have any questions? Yes — so the URL that I provided, this URL here, is going to remain online. It's the whole workshop end to end, and it will walk you through everything. You can clone the repository — all of the demos and scripts we've run are in there. You can create a GKE cluster and run through this whole thing yourself. This one here — yes, it's all in there: all of the workshop configuration, and the step-by-step is in there too. If you want to run the repository locally, there is a Makefile task — just run `make run` from that directory and it will spin up the website for you locally. And if you don't, you can use this URL; it's going to be up for the whole of KubeCon and beyond. Anyone else have any questions? Perfect. Thank you. Yes — let's give our thanks once again. Thanks. Thanks, Jimmy.