Hi there, welcome everyone! We're here with Dylan and Allison, who presented here at DEF CON Safe Mode on "Lateral Movement and Privilege Escalation in GCP: Compromise Any Organization Without Dropping an Implant." And I have to say, it had some of the highest theatrical production values I think I've seen so far in a video, including the hacker in the hoodie doing the whole "let's just pound on things" bit, but with a real hacker. So it was pretty entertaining, to say the least. Welcome, both of you, to DEF CON Safe Mode.

Thank you. It's great to be here. Thank you.

Yeah, so the first thing I want to ask, just to get things started: has anything changed since you recorded your DEF CON talk and it was published? Any major changes or anything you want to share with the DEF CON community?

Yeah, there are actually a number of changes that Google's been rolling out, all of which have been super positive. Oh, it looks like Allison's video cut out. I guess she'll rejoin; I'll be answering the question, though. Most notably, the roles and permissions that can make use of the default service account without actAs are being end-of-lifed in 2021, and that was a direct response to this talk. What that means is you will need the actAs permission, in addition to the other permission, in order to attach a service account to those services, even for the default service account. So that's going to change things a little bit in terms of the overall security story. I'm not 100% sure how, because it will lead to the proliferation of the actAs permission. Ideally, folks will be applying it at the resource level for these privileged-scope service accounts, but like we showed in the talk, people will often apply it at the project level.
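[Editor's note] The project-level versus resource-level actAs distinction above can be illustrated with a quick policy scan. This is a minimal sketch: the role names are real GCP roles that carry `iam.serviceAccounts.actAs`, the policy shape mirrors `gcloud projects get-iam-policy --format=json`, but the sample policy and member emails are invented.

```python
# Roles whose grant at the project level confers actAs over EVERY
# service account in the project (the risky pattern described above).
ACTAS_ROLES = {"roles/iam.serviceAccountUser", "roles/iam.serviceAccountTokenCreator"}

def broad_actas_grants(project_policy):
    """Return (role, member) pairs granting actAs project-wide."""
    findings = []
    for binding in project_policy.get("bindings", []):
        if binding["role"] in ACTAS_ROLES:
            findings.extend((binding["role"], m) for m in binding.get("members", []))
    return findings

# Invented sample policy for illustration.
sample_policy = {
    "bindings": [
        {"role": "roles/viewer", "members": ["user:alice@example.com"]},
        {"role": "roles/iam.serviceAccountUser", "members": ["user:bob@example.com"]},
    ]
}
print(broad_actas_grants(sample_policy))
```

Applying the same role binding on an individual service-account resource instead of the project would not show up in this scan, which is the safer pattern the speakers recommend.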
So it opens up the opportunity to make yourself more secure, but it also opens up more foot-shooting opportunities. Another interesting change: in the very quick clip where I cut over to GitHub and showed a bunch of people uploading their GCP service account credentials, I noticed that when I ran the same search on GitHub, those keys were no longer there. I assume someone went through a lot of effort to get those removed. And I was just thinking in the back of my mind, they can go through that effort, but they're still in that Arctic vault. So those keys are immortalized in some capacity, but it's good that they took them down. It makes it trickier to just immediately exploit lots of people right out of the gate from the talk, and I'm glad they did that.

Awesome. Yeah, so another thought-provoker: as I watched your video, I liked how at the beginning you laid out how AWS identity and access management works, and then the GCP model. Based on your experience so far, do you think one is better? Do you prefer one model or the other, or do you think they both have their place? What's your take?

That's a good question. There are definitely elements of the GCP platform that are unique and nice to use that AWS doesn't have. But in terms of the IAM story, I would say that, generally speaking, from what I've observed, AWS follows the principle of least privilege more closely than GCP. In that context, I guess it depends on your perspective. If you're a brand new developer and you want to move really quickly and have everything open by default, GCP is an interesting choice. If you're a security engineer and you want to follow least privilege and lock things down, AWS, from my observations, tends to do that more.

So actually there's a question in the chat that kind of goes into that, so I'll go ahead and ask it.
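[Editor's note] As an aside on the leaked-credential search mentioned above: an exported GCP service-account key is a JSON blob with a distinctive shape, which is what makes committed keys easy to find. The sketch below checks for that shape; the "leaked" key content is fabricated for illustration.

```python
import json

def looks_like_sa_key(text):
    """Heuristic check for an exported GCP service-account key blob."""
    try:
        blob = json.loads(text)
    except (ValueError, TypeError):
        return False
    if not isinstance(blob, dict):
        return False
    # Real exported keys carry these two markers.
    return (blob.get("type") == "service_account"
            and "-----BEGIN PRIVATE KEY-----" in blob.get("private_key", ""))

# Fabricated example of the kind of file people commit by accident.
leaked = json.dumps({
    "type": "service_account",
    "project_id": "demo-project",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----\n",
    "client_email": "svc@demo-project.iam.gserviceaccount.com",
})
print(looks_like_sa_key(leaked))                         # True
print(looks_like_sa_key('{"type": "authorized_user"}'))  # False
```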
It was: for those of us with existing and sprawling GCP environments, given that the org policies aren't retroactive, what are some good approaches for tackling remediation of those deployments?

That's a really good question, and I think we wanted to go into a little bit more depth on that subject in the video, but we didn't have the time to really dig into it. Remediation is really tricky. You can enable org policy when you're first starting an organization, or later, and all new things going forward will get the benefits of it. But for all the existing projects that are already using the default service account, it's really hard to wean yourself off of it, for a couple of reasons. First, if you switch service accounts on a live VM, which a lot of services are powered by, that VM needs to actually be stopped and started again to pick up the new service account. Second, the default service account isn't just a set of permissions applied to your services; it's also an identity whose IAM policy is shared across all of your services. So if you have two different services using the same service account and you want to make modifications to that policy, it's going to affect both services, which makes it tricky. And then, the way service accounts work, you can use them both as exported keys and as attached identities, similar to AWS. So you have to enumerate all the different use cases and systematically knock them out one at a time: you have to see how many keys have been exported, you have to see whether it's attached to any instances, you have to check the Stackdriver logs to see whether they're in use. All of that can be really tricky, but some tools do help with it: the Asset Analyzer API, to see where the default service accounts are in use, and the other one that's handy, the role recommender.
With the role recommender, based on the last 90-day lookback period for a given project, you can get some idea of how many permissions the default service account has actually used. Now, that's not the whole story, for a number of reasons. One is that the recommender only makes recommendations at the level of the existing role binding: if the binding is at the project level, it'll make the recommendation at the project level; it won't make the recommendation down at the resource level. It also won't recommend that you split your IAM into two different identities. If multiple services are using the default service account, it'll make a recommendation based on the summation of all those services; it won't say this service should have this scope of permissions and that service should have that scope. But it does go a long way in terms of quick, backward-looking remediation for the things that org policy can't cover.

Cool. Yeah, I had a thought too. As I was watching this, I thought this seems like a cool way for an attacker to gain persistence in one of these GCP environments: maybe, before some of these changes, setting up an account, giving it privileges, and then having it fly under the radar. What's your thought on that? Could you see that as a potential attack vector, or would you see other attack vectors for hiding in the IAM noise of GCP?

Yeah, I think a lot of what we showed is that this lateral movement, privilege escalation, and persistence can all be done without dropping implants on a server. For actAs you might want to spin some things up and introduce them into the environment, but on servers that might have endpoint software monitoring for malware and things like that, it wouldn't set those things off. So I think it is good for persistence in that way.
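[Editor's note] Circling back to the role recommender point above: the core idea is a diff between granted permissions and observed usage, tracked per service, which also shows why a shared default service account resists a clean split. The sketch below illustrates that idea only; the permission names and service names are invented stand-ins, not output of the real recommender API.

```python
from collections import defaultdict

# Permissions granted to the shared default service account (illustrative).
granted = {"compute.instances.list", "storage.objects.get",
           "storage.objects.create", "iam.serviceAccounts.actAs"}

# Permissions actually observed in use, per service, over a lookback window
# (in practice, mined from audit logs).
used_by_service = {
    "frontend-vm": {"storage.objects.get"},
    "batch-job":   {"compute.instances.list"},
}

# The recommender-style finding: permissions granted but never used.
used = set().union(*used_by_service.values())
print("unused:", sorted(granted - used))

# A per-service split that the recommender won't propose on its own.
per_service = defaultdict(set)
for svc, perms in used_by_service.items():
    per_service[svc] = perms & granted
for svc in sorted(per_service):
    print(svc, "needs:", sorted(per_service[svc]))
```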
We'll probably need new tooling to look for these types of attacks. One of the services that Google offers to help with this (I forget exactly what it's called) is a service that will monitor your Stackdriver logs, look for bad things, and alert you if it sees them. It's watching those cloud APIs, which are auditable. Examples of things they've used in the past are known-bad IP addresses for crypto miners: if those ever show up in your Stackdriver logs, it sends you an alert. I expect these types of cloud attacks will become more and more a piece of that story.

We also briefly showed a lot of phishing. I think that's also a great way to get persistence, because if you get somebody to click an allow link granting wide-scope permissions, you get the permissions of the user without having to get a service account credential, and it functions the same way. So that's another way you can use these same techniques and get persistence.

Something to note on this too is service account keys. If someone is able to create a service account and give it access to a resource or a project, the service account keys last for 10 years. So if they're never rotated or deleted by the organization, you have that key for a very long time.

Wow, definitely interesting.

Oh, another quick note on that. The long-lived exported credentials last for 10 years, like Allison mentioned, which is interesting. The short-lived instance credentials last, I think, closer to 10 minutes, but they'll persist after an instance is terminated. So if you're an attacker exploiting something like an SSRF, or even remote code execution, and you grab instance credentials, then even after the defender deletes the instance, you can still persist for another 10 minutes.
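[Editor's note] The two credential lifetimes just mentioned can be contrasted in a few lines. The key's expiry field name matches what `gcloud iam service-accounts keys list` reports (`validBeforeTime`, RFC 3339); the dates, token value, and exact lifetimes below are illustrative.

```python
from datetime import datetime, timedelta, timezone

def key_still_valid(valid_before_time, now):
    """Compare a key's validBeforeTime against a reference time."""
    expiry = datetime.strptime(valid_before_time, "%Y-%m-%dT%H:%M:%SZ")
    return now < expiry.replace(tzinfo=timezone.utc)

now = datetime(2020, 8, 9, tzinfo=timezone.utc)

# Exported key: validity window roughly ten years out (illustrative date).
print(key_still_valid("2030-08-09T00:00:00Z", now))   # True

# Short-lived instance token: expires_in is seconds from issuance, so it
# stays usable briefly even if the VM that minted it is deleted.
token = {"access_token": "ya29.fake", "expires_in": 600}
token_expiry = now + timedelta(seconds=token["expires_in"])
print(token_expiry - now)   # 0:10:00
```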
Okay, so we had another question pop up here in chat, a follow-up to what you were just explaining. It says: what about abandoned service accounts? Say they're deleted. Do you end up with abandoned ACLs that exist on the resources and projects?

That's a good question. Allison, did you want to take that?

So there's actually something kind of interesting about this. We've run into some bugs where a service account was deleted but its IAM bindings were not also removed. So if a service account is deleted, the IAM binding can still exist. The same goes for a user: if you have a user in your org and you remove them from G Suite, that IAM binding will still exist in your GCP projects with user@gmail.com or whatever it is. But there are also interesting things that can happen due to the undelete capability for service accounts. If you delete a service account, the IAM binding, like you asked, still exists where it was made. And if you try to undelete your service account, because maybe you actually needed it, or you try to create a new service account with the same name, the IAM binding will reference the UID of the service account that was deleted. So you can have this weird caching-like behavior where you'll get permission denied errors. If I'm using service account A, I give it an IAM binding in my project, I delete it, and then I recreate a new service account A, it tries to use that existing IAM policy, and it doesn't work, because the policy is referencing the old service account by its unique ID. So yes, those bindings exist, and this can cause some really weird bugs if you're creating new service accounts with the same name, or undeleting service accounts, and you're not updating the IAM policies in parallel.

Cool.
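[Editor's note] Those abandoned bindings are straightforward to hunt for, because GCP renders members of deleted principals with a `deleted:` prefix and a `?uid=` suffix pinning the old unique ID, which is exactly why a recreated account with the same name doesn't match. A minimal sketch, with an invented sample policy:

```python
def abandoned_members(policy):
    """List (role, member) pairs whose members point at deleted principals."""
    hits = []
    for binding in policy.get("bindings", []):
        for member in binding.get("members", []):
            if member.startswith("deleted:"):
                hits.append((binding["role"], member))
    return hits

# Invented policy: one binding left behind after its service account
# was deleted, one healthy binding.
policy = {
    "bindings": [
        {"role": "roles/editor",
         "members": ["deleted:serviceAccount:svc-a@demo.iam.gserviceaccount.com?uid=12345"]},
        {"role": "roles/viewer", "members": ["user:alice@example.com"]},
    ]
}
for role, member in abandoned_members(policy):
    print(role, member)
```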
Yeah, so we talked about this a little bit, and you mentioned it in your talk too, from the defender perspective. They just rolled out the IAM analyzer, and there's the recommender. Is there any other advice you'd give to blue teams looking to secure a GCP environment, or the IAM offerings? Detection opportunities, like the tool you mentioned, or log analysis? Anything like, hey, ingest this log feed and you'll see all your IAM transactions?

That's a good question. A lot of people recommend piping your Stackdriver logs over to BigQuery so that you get a structured query language that's a little easier to work with than the filters in Stackdriver, and easier to write detection rules for. I think the security API that exists at the org level has some very interesting introspection that you can do across your whole fleet, and it provides some suggestions and recommendations around other types of least privilege, like buckets that were left open to the Internet and things like that. There are also a lot of other org policies that we didn't cover that will improve your security story but just aren't necessarily related to the content we talked about. For example, by default your developers can add users outside of your organization to roles inside your organization, so you can add a random Gmail account to a project; you can disable that via org policy. So it's definitely worth checking out those high-level org policies as well.
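[Editor's note] A toy version of the kind of detection rule described above: once audit logs are piped somewhere queryable, flag entries for IAM-sensitive methods. The `methodName` values follow GCP audit-log conventions, but the entries below are made-up stand-ins for real log records.

```python
# Methods worth alerting on: policy changes and key minting.
SUSPICIOUS_METHODS = {
    "SetIamPolicy",
    "google.iam.admin.v1.CreateServiceAccountKey",
}

def flag_entries(entries):
    """Return audit-log entries whose methodName is IAM-sensitive."""
    return [e for e in entries
            if e.get("protoPayload", {}).get("methodName") in SUSPICIOUS_METHODS]

# Made-up, trimmed-down audit-log entries for illustration.
entries = [
    {"protoPayload": {"methodName": "SetIamPolicy",
                      "authenticationInfo": {"principalEmail": "mallory@example.com"}}},
    {"protoPayload": {"methodName": "v1.compute.instances.list"}},
]
for e in flag_entries(entries):
    print(e["protoPayload"]["methodName"],
          e["protoPayload"]["authenticationInfo"]["principalEmail"])
```

In practice this filter would be a BigQuery query or a log-sink rule rather than a Python loop; the logic is the same.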
And I think we mentioned this in the talk as well, but we had generally positive experiences providing feedback to Google, and in some cases they actually put us in touch with PMs so we could explain these issues directly to them. In that regard, we strongly recommend people engage with Google around pain points and tell them what an ideal workflow would look like. In our case, they were very receptive and worked with us on those things.

Yeah, I was actually going to dive a little into that too. Working with Google that closely, it's always an interesting question: when you approach someone with issues at that level, how do they respond? Do you have any other tips for folks who'd be in similar situations, reaching out to a Google or another company? Anything you learned that you'd do better next time?

Well, I think what was interesting is that there are kind of two ways to engage with Google. There's the customer way, and there's the vulnerability way. You can submit to their bug bounty, and you get a very binary answer: this is a problem, or this isn't a problem. If it's a problem, they'll pay you and fix it. If it's not, they'll close the ticket and that's the end of the story. That's the bug bounty way, and you're interacting with a security engineer on the other end of that ticket. When you go the customer route, you're talking with a PM about the products they're building and giving them feedback about existing products. That's a continuous story that doesn't end, and it's not binary. You have to go through in depth how security problems manifest themselves and things like that, which you might not have to do if you're talking to a security engineer, and you get to continue that conversation indefinitely. It's not a binary "this is a problem, this isn't a problem, okay, ticket closed." So we pursued both of those paths concurrently.
And the right path probably depends on your particular use case. There were some things we submitted as bugs through the bug bounty that we thought would just be recognized as bugs and fixed, but they were actually closed out as working as intended. For those things, going the customer route proved to be more effective.

That's great advice for other people. I guess the other question I have is: what's next? You've really gone into a lot of the problems here, and you've driven a lot of improvement on this front with Google. What's next on the research front? Are you going to continue to refine more attacks here, or look at other things? What's next for the both of you?

So recently someone sent me some good resources that Rhino Security wrote up, and they released some proof-of-concept tools. I think it would be cool if the community helped bundle all of those tools into one framework. Of course, I'm selfishly hoping that's GCPloit, so that we can kind of have a Metasploit for cloud. So I thought it might be fun to consolidate some of that stuff. But beyond that, we don't have a lot of plans to do more of this sort of work in GCP. What about you, Allison?

Yeah, public-research-wise, I don't think I'll be elaborating on this topic for a while, but it's definitely something I'm personally interested in: learning more and seeing how different cloud providers, Google and AWS, tackle IAM, because it's something that's really difficult, and I think it's going to be changing very frequently for a long time, because it's such a hard problem to tackle. So no public research in the near future, but I'm definitely interested in seeing what happens.

And I have a question that came up when I was watching your talk as well.
Going back to the tooling, you mentioned that you developed a framework, GCPloit. Is that something you're planning on releasing in the future?

Oh yeah, it's already live. We linked to it in the talk, and you can check it out. It's on GitHub and Docker Hub, so it's just a little Docker container you can run. It was definitely released more as proof-of-concept code than production code, so some people have found some bugs in it already, and I'm expecting more, but it does the things that we videoed. Those demos, you can reproduce with the tool.

Cool. And it looks like we have one more question in the chat. From your talk, I got the idea that in AWS, the account holder owns the permissions, but doesn't that cause a problem for resource owners who need or want to provide permissions when the account is not under their control?

I think this gets at the heart of what we opened the talk with, which is whether your IAM model is resource-centric or user-centric. Is it basically the resource owner that controls who can access the resource, or the identity owner that controls what resources that identity can access? I think that's the main difference between the two IAM stories in AWS versus GCP, at least as we opened with. We talked about some of the challenges that come up with the resource-centric model in GCP; we didn't really touch on the challenges of the inverse of that in AWS. And to be honest, I'm not an expert in AWS IAM by any means, so I'd be curious if someone else wanted to do a deep dive on what those cons are.

Yeah. It also depends on how you're managing your infrastructure. Are you allowing engineers to manage different infrastructure configurations, or do you have a more centralized way of provisioning access within an account? It's something to think about.

Cool. Yeah. I guess one thing I didn't really ask there.
I don't remember if we covered it: why GCP? What drew you to this area for research? Why did you look at this and the weaknesses here?

Well, I think some of the defaults were definitely a little eye-opening. I guess the full story, compressed down: in the back of an Uber ride on my way home, I was sharing a car with someone, and they told me a bunch of crazy things about GCP that I found hard to believe. All these things run as almost full administrator by default. Those things prompted me to poke into it a little more, and then I think my poking got Allison poking, and then we just followed the rabbit hole as far as it would go.

Yeah, there is one other question here that just came in on the chat too, and I don't know if you know the answer: which model does Azure take? Is it more resource- or identity-centric?

Actually, I'm not sure. I haven't played with Azure very much. I also wouldn't be able to speak to Oracle Cloud, IBM Cloud, or Alibaba Cloud, but I'm very curious to know the answer for all of those.

Future research for others to look into. Definitely.

Awesome. Yeah, so we're getting near the end of our time here, but I have a couple of last questions. First, what should people look into if they're interested in learning more about the GCP identity and access model? Where would you recommend someone start if they're trying to figure out how this works, and maybe want to dive in themselves?

Yeah, the documentation, actually, I think is really great to start with. Really, just read it in its entirety. It's a lot of information. But from a hardening perspective, Google also recently released a new blog post that outlines, in one place rather than in each individual service's IAM docs, some of the different steps you can take to harden the IAM of the different services we brought up in our talk. Oh, yeah.
And on that subject, that blog post is also an answer to the earlier question of what they've done since the talk; that blog post was done in response to the talk.

Awesome. Yeah. Someone in chat said GCP is like "Google Cloud Pwning," which seems pretty apt. So really, I'll open this last one up to you: anything else you want to share with anyone? Any other last thoughts, things that you think folks really should know about GCP before we move on?

Well, I'll just share that I am completely new to Twitter. I made a Twitter account last week and I shared the handle in the video. If folks want to connect with me there and make me feel less lonely, because I don't have a lot of followers, feel free.

What Twitter account is that?

Oh, it's InsecureNature, and it's linked in the talk, and I can also link it in the chat.

I guess my final thought would be: don't trust the documentation. I know I just said to read the documentation, but don't trust it. When you're going and assessing things or building things, take kind of a reverse-engineering approach to analyzing things in the cloud. Test and verify the behaviors. Don't just take what's in the documentation, or how you think something should work, as the truth.

Oh, yeah. And there are also a couple more privilege escalations, like I mentioned, that we just didn't have time to go into. We covered most of the ones that compromise a large set of identities, but there are also privilege escalations for getting access to storage and those types of things. They're covered in our earlier talk that we gave at BSides, which I recommend folks check out, and, like I mentioned, Rhino Security has done some pretty good write-ups on it recently.

Awesome. Well, thank you both, Dylan and Allison. One thing I'd suggest: if you have any particular links, or a summarized list of links, you want to throw into the Discord chat room, that'll be helpful for everyone.
There are lots of folks in there thanking you for the talk, so it was definitely well received by everyone here. But yeah, I just want to say thank you again. Thank you for joining us for this DEF CON Safe Mode talk, thanks for presenting in these weird times, and hopefully we'll see you in the future out in Vegas when we return.

Sounds good. Thanks so much for having us. Thank you so much. Yeah, thank you. It's been a pleasure. Yep, take care. Yeah, thanks.