Thank you for coming, on a Friday afternoon no less. My name is Maya Levine, and today's talk is about identity security. The name is "IAM Confused." So I have a question for you all: when it comes to identity, are you confused? If you are, it completely makes sense, because things are complicated and evolving quickly. Identity in the cloud is a relatively new frontier, but investing in it and getting it right is essential and non-negotiable. Identity is like the perimeter of the cloud, and almost every cloud breach we've seen in the past few years has taken advantage of mismanaged permissions, secrets, and identities.

As an industry, we are struggling with identity management, especially with how many permissions we grant versus actually use. Sysdig threat research found that 98% of the permissions that are granted go unused. That's a huge scope for an attacker to take advantage of, especially considering a lot of us use default roles, which generally have a lot of permissions granted to them, instead of creating our own custom, specific roles. And there are quite a few challenges with identity security, including the fact that ownership of identity posture between DevOps and security is often unclear, which can lead to mistakes that stem from going too fast. In addition, human and machine identity management is not easy.

I really like the metaphor of a key under the mat. If I'm a robber scoping out your house and I notice you're gone on vacation, the first place I'm going to look is somewhere like under the mat, where I know people tend to leave their keys. Similarly, attackers are very good at secret harvesting, and they know where to look: everywhere from serverless function code to IaC software and things like Terraform state files. These often contain credentials, secrets, and other sensitive information that gets overlooked because they're a bit obscure; the sketch after this paragraph shows how mechanical that harvesting can be. In addition, SaaS applications are a huge attack surface. Credentials are being left everywhere from repos and GitHub to AD and Slack. And developers can underestimate the power of read-only access; that's all attackers need in order to read a secret. The point here is that attackers are actually better than we are at secrets management, because for them a credential with great access is their golden ticket. It's what enables them to escalate their privileges and their attacks, while for defenders this is usually not the highest-priority item.

And humans make mistakes. As a kid I used to watch a show with my grandmother, hosted by a rather scary British lady, and I now think of it this way: humans are the weakest link in any organization's cybersecurity posture. The statistics back this up. Social engineering is a part of 98% of all cyber attacks. Verizon reported that 82% of breaches involve some kind of human element. And specifically for identities, MFA abuse works really well through social engineering; I'll talk about that in a few of our examples.

Probably the most difficult challenge is just the speed at which attackers can move in the cloud. The average cloud attack takes 10 minutes. So when you think about how long you have to detect, correlate signals, and initiate some kind of response, you need to be moving at that speed, which obviously is challenging.
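To make that key-under-the-mat point concrete, here is a minimal sketch of the kind of automated secret scanning attackers run against harvested file systems, and that defenders can just as easily run against their own repos. It is a toy under stated assumptions: the file extensions and the two regex patterns are illustrative, while real scanners ship hundreds of rules.

```python
import os
import re

# Illustrative patterns only: an AWS access key ID and a generic
# "secret = ..." assignment. Real scanners ship hundreds of rules.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_secret": re.compile(
        r"(?i)(secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

# The places this talk calls out: IaC state, serverless code, config files.
TARGETS = (".tfstate", ".py", ".js", ".env", ".sh", ".yaml", ".yml", ".json")

def scan(root: str) -> None:
    """Walk a directory tree and print every line that looks like a credential."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(TARGETS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        for label, pattern in PATTERNS.items():
                            if pattern.search(line):
                                print(f"{path}:{lineno}: possible {label}")
            except OSError:
                continue  # unreadable file; skip it

if __name__ == "__main__":
    scan(".")  # run from a repo root, a Terraform directory, etc.
```

Pointing something like this at a Terraform directory is usually an eye-opener, because state files regularly contain plaintext credentials.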
For identities in particular, we can really use identity information to help with correlation. If you can track somebody doing a bunch of malicious activities across accounts or detection boundaries, you can paint a picture of an attack much more easily.

For this presentation, I want us to learn from things we've actually seen in the wild. All of the attacks I'm going to go through were either real breaches affecting organizations or were caught on honeypots by security threat research teams. These are techniques attackers are actually using.

This first attack is a really great example of how humans make mistakes, which is very understandable given how easy some things are to miss in the cloud. The attack started with attackers looking at JupyterLab notebook containers deployed in a Kubernetes cluster; they exploited vulnerabilities in those to get initial access. From there, they used scripts to search for AWS credentials in a bunch of locations, including instance metadata, Docker containers, and file systems, and they used tools like the AWS CLI and Pacu to try to get access to more credentials. They took advantage of the weaknesses of IMDSv1 to successfully pull credentials. With those credentials, they moved into the reconnaissance part of the attack: what can I access with the credentials I already have, and what other credentials can I get?

They were able to create new users and access keys because of a small mistake that most CSPM tools would not catch. It can be really hard to keep track of the various rules in cloud service providers; one example is what is case sensitive versus case insensitive. In AWS policies, the Action element is case insensitive, but the Resource element is case sensitive. You can see here that the victim described a restrictive policy referencing "admin" with a lowercase a, but then accidentally created a user with an uppercase A. Maybe it was a typo, maybe they just didn't know the case-sensitivity rules. That was the user the attacker took advantage of to create access keys with admin permissions; the sketch after this paragraph shows how the mismatch plays out. So this is a great example of people really trying to do the right things: they had a restrictive policy to limit the ability to create an access key, but one small typo ruined it for them. With those permissions, the attackers were able to create access keys for a bunch of users, run EC2 instances with crypto miners, and steal data.

Some takeaways from here: least permissive is not just a buzzword. It is imperative that we look at the permissions we grant and narrow the scope to what is actually necessary. It's also really important in your organizations to identify a clear owner for IAM, for CIEM, whichever acronym you want to use. And it doesn't matter who it is, right? It could be your infrastructure team who actually deploys the resources, your IT team, your security team. What's important is that they know they are responsible for catching things like case-sensitivity mistakes and other things that are a little more tricky and require a little more thought to implement correctly.
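Here is a minimal sketch of that case-sensitivity pitfall, with a hypothetical deny statement modeled on the incident. It simulates AWS's documented matching semantics (Action case insensitive, Resource case sensitive) locally with fnmatch rather than calling AWS:

```python
from fnmatch import fnmatchcase

# Hypothetical policy modeled on the incident: deny access-key creation
# on every IAM user whose name starts with "admin". Account ID is made up.
DENY = {
    "Effect": "Deny",
    "Action": "iam:CreateAccessKey",
    "Resource": "arn:aws:iam::123456789012:user/admin*",
}

def action_matches(pattern: str, action: str) -> bool:
    # AWS evaluates the Action element case-insensitively.
    return fnmatchcase(action.lower(), pattern.lower())

def resource_matches(pattern: str, resource: str) -> bool:
    # ...but the Resource element is case-sensitive.
    return fnmatchcase(resource, pattern)

user_ok = "arn:aws:iam::123456789012:user/admin-backup"
user_typo = "arn:aws:iam::123456789012:user/Admin-backup"  # the typo'd user

print(action_matches(DENY["Action"], "iam:createaccesskey"))  # True
print(resource_matches(DENY["Resource"], user_ok))    # True  -> deny applies
print(resource_matches(DENY["Resource"], user_typo))  # False -> deny never applies
```

The deny statement looks restrictive, but it silently never applies to the typo'd Admin-backup user, which is exactly the gap the attacker used.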
Now, the second attack is really a compilation, discovered by the Sysdig Threat Research Team, of different patterns we've seen in attacker playbooks in the cloud. Most attacks begin with some kind of initial compromise, getting credentials from open buckets, GitHub, or public repositories. Then they do enumeration in the cloud, looking for resources and for more credentials. It's important not to overlook things like Terraform state files and serverless function code, because attackers are targeting those and looking for credentials stored there. Whoever the clear owner of IAM is in your organization should be looking at those spots as well. Where possible, attackers will abuse policy misconfigurations to get to their end goal, which is often profit driven: they'll deploy crypto miners. But sometimes it's not profit driven; they can target existing resources, for example connecting directly to an EC2 instance and using it as a jump box to launch more attacks.

When we think of lateral movement, we often think of moving from one cloud account to another, but we witnessed an attacker move laterally from an enterprise cloud account into the compute infrastructure, in this case EC2. And from there, attackers can pivot to on-premise servers if the servers in the cloud are connected to them. Here, the attacker leveraged an API called SendSSHPublicKey to gain access to the EC2 instances: they pushed an attacker-supplied SSH public key, and then anyone holding the matching private key could connect via SSH. This type of lateral movement can cause issues for defenders because it often involves crossing a detection boundary. Once you go from AWS into the EC2 instance itself, CloudTrail no longer tells you what the attacker is doing. Defenders need to monitor both the cloud control plane APIs, via CloudTrail, and their EC2 workloads at runtime in order to understand the full scope of the attack. It's important to understand how your cloud accounts and your on-premise accounts are connected; you can't just assume they're isolated. We've seen more than one attack pivot either from on-premise to cloud or vice versa.

Employing a secrets management system will reduce your likelihood of having a credential leak. You want to keep your credentials and keys in a centralized location, provide an API to dynamically retrieve them, and make sure those keys and credentials won't be inadvertently left in files; the sketch after this paragraph shows the retrieval pattern. A CSPM solution is needed to harden and prevent the misconfigurations attackers take advantage of, but you also need runtime security to detect things in real time. You want detection applied both to cloud logs and to the activity occurring on your compute resources in order to track an attack like the one I just went over.
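As an illustration of that dynamic-retrieval pattern, here is a minimal sketch using AWS Secrets Manager through boto3. The secret name is hypothetical, and in practice the caller's access would come from an IAM role rather than a long-lived key:

```python
import json

import boto3

def get_db_credentials(secret_name: str = "prod/api/db") -> dict:
    """Fetch credentials at runtime instead of baking them into code or state.

    The secret name is hypothetical; the point is that nothing sensitive
    lives in the repo, the AMI, or a Terraform state file.
    """
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    # connect_to_database(user=creds["username"], password=creds["password"])
```

With this pattern, rotation happens in one place and a leaked source tree or state file carries no usable secret.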
In the next attack I'll go over, I think it really highlights the dangers of being overly permissive with things like IaC services. Here, the threat actor used a well-known VPN service called CyberGhost to hide their source IP, and they used the AssumeRole API to obtain additional privileges and continue through the kill chain. With those powers, they began enumerating inside the cloud account looking for interesting information, and they were able to join a group that had special privileges in CloudFormation. They called the CreateStack API and tried to add a CloudFormation template named, and I kid you not, "evil template." So they didn't hide their intention very well. But here was a case where access wasn't overly permissive: the credentials they had did not allow them to actually run the malicious template. The victim here got saved.

The takeaways for me here: first, you can't secure what you don't know about. This is especially true in the cloud, given the dynamic nature of resources. You want an inventory of all of your cloud assets, including Lambda functions and policies, with their security statuses. Second, you need to think about how to implement IaC security, both in terms of detecting drift and in terms of shifting security checks on IaC templates as early into the developer pipeline as possible. This covers misconfigurations of controls and other things. And finally, real-time threat detection across your runtime workloads and cloud. I mentioned this already, but it's important to emphasize that threat detection needs to be real time. If a cloud attack takes 10 minutes to execute, you don't want to learn about an event 10 minutes or more later. And you want visibility both in runtime workloads and in the cloud logs.

The next attack is an example of social engineering at its finest. It began with a criminal group known as Scattered Spider targeting MGM Resorts International, a global hospitality and entertainment company. They looked through LinkedIn for users they thought would have high privileges in Okta and tried to take advantage of password reuse. Once they got credentials, they used them to gain access to MGM's environment and establish a foothold. They successfully tricked the help desk, with voice phishing, into resetting a user's MFA, and that's what granted them deeper access. The kind of access they got is the stuff of nightmares for most of you in this room: they configured an additional identity provider within MGM's Okta tenant using inbound federation, which gave them persistence in the network. If you see the three types of admin they ended up with, this is not access you ever want attackers to get in your environment.

With that kind of access, they very easily pivoted onto MGM's VMware infrastructure and deployed ransomware on the ESXi servers. They actually called in another criminal group, BlackCat, to deploy that ransomware. So as if attackers themselves weren't enough to worry about, know that ransomware-as-a-service is a big part of the criminal industry, and even if attackers don't have the technical chops to implement it themselves, they can always call somebody else in to do it. The impact here was huge. It affected services like hotel room keys, dinner reservation systems, point-of-sale systems, and slot machines, which, if you've ever been to Vegas, you know are a huge driver of profits there. And though the incident response team terminated the Okta sync servers once they realized what was happening, the impact was already enormous, estimated at around $8 million a day in financial losses.

There are a lot of takeaways on this one. The first is that you shouldn't synchronize credentials between your on-premise and your cloud resources; this opens up a lot of attack vectors. The worst part of this breach was that MGM's IDP was configured in a way that allowed Scattered Spider to pivot into their VMware infrastructure, and that's where the ransomware was deployed. IDP admin accounts need to be managed with the highest level of scrutiny possible. Always check a user's device and compliance before allowing them access to your IDP, and enforce dual authorization wherever possible. As I mentioned, ransomware-as-a-service is a big business; it's something to be aware of. And the MFA device reset was a significant choke point in this attack: if MFA resets had been limited to a specific phone number, none of this could have happened. You also want to alert on any MFA device changes and require multiple authorizations for critical actions; the sketch after this paragraph shows one way to watch for those changes.
In the next attack, I think we have a really cool example of how attackers can establish persistence without being noticed, blending in very well. Here, the attacker gained access through a misconfigured Kubernetes API server that allowed unauthenticated requests from anonymous users with privileges. They sent HTTP requests to list secrets and API requests to gather information about the clusters, and they actually attempted to delete deployments in various namespaces to disable competitor campaigns.

Then they established persistence using RBAC. What they did was create a new cluster role with admin privileges, then create a service account named kube-controller in the kube-system namespace, and then create a cluster role binding that binds that cluster role to that service account for persistence. They achieved persistence without setting off any alarms by blending in with the API audit logs; this looked like legitimate cluster role binding activity. At this point, even if the anonymous user access got disabled, they had persistence that allowed them to further exploit the cluster. And that's exactly what they did: they used AWS access keys exposed in the cluster to go further into the cloud service provider account, and they deployed a container using DaemonSets to run Monero crypto miners. They used a container image from Docker Hub made to look like a legitimate kubernetes.io account. Even though the image was eventually detected as a crypto miner, again, they were trying to blend in by naming things the way we're used to seeing them. When inspecting this image, we saw it had been pulled quite a few times, and there's a lot of evidence that this was a very widespread campaign.

What started everything here was a misconfigured Kubernetes API server that allowed unauthenticated requests from anonymous users with privileges. I've mentioned least permissive, and I'm going to mention it again, because overly permissive is one of the biggest gifts you can give to attackers. You can't always control how they get into your environments; there are, for example, zero-day vulnerabilities we don't even know about that they might. But once they're in, if you're giving them admin access, handing out permissions like Oprah back in the day, you get admin and you get admin, then you're going to run into real problems. You're enabling them to do whatever they want in your accounts. If the victims here had Kubernetes audit detection policies in place, they could have detected the new cluster role creation and the cluster role binding. So you should build a system that alerts on things like these, and add exceptions for things like GKE automation; a minimal sketch of such a detection follows this paragraph.
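Here is a toy version of that audit-log detection. It assumes audit events arrive as JSON lines and that your audit policy records request bodies (otherwise roleRef is not visible), and the allow-list entry is a stand-in for whatever automation, like GKE's, you choose to except:

```python
import json
import sys

# Roles that should page a human when referenced in a new binding.
SENSITIVE_ROLES = {"cluster-admin"}
# Hypothetical allow-list for known automation (the "GKE exceptions" above).
ALLOWED_USERS = {"system:addon-manager"}

def suspicious(event: dict) -> bool:
    """Flag audit events that create ClusterRoles or bind sensitive roles."""
    if event.get("verb") != "create":
        return False
    resource = event.get("objectRef", {}).get("resource", "")
    user = event.get("user", {}).get("username", "")
    if user in ALLOWED_USERS:
        return False
    if resource == "clusterroles":
        return True
    if resource == "clusterrolebindings":
        # requestObject is only present at Request/RequestResponse audit level.
        role = event.get("requestObject", {}).get("roleRef", {}).get("name", "")
        return role in SENSITIVE_ROLES
    return False

# Feed this a Kubernetes audit log in JSON-lines form, e.g.
#   python rbac_watch.py < /var/log/kubernetes/audit.log
for line in sys.stdin:
    event = json.loads(line)
    if suspicious(event):
        ref = event["objectRef"]
        print(f"ALERT: {event['user']['username']} created "
              f"{ref['resource']}/{ref.get('name', '?')}")
```

A rule like this would have caught both the kube-controller service account binding and the new cluster role in this campaign.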
And it's worth mentioning that runtime analysis should have some element of machine learning to help you here. The standard way of detecting crypto miners is looking at known mining-pool IPs, or at the file name if it's a known miner. But you should also have malware machine learning that can detect a crypto miner based on its CPU, memory, and network activity.

The next attack is a great example of how hard-coded credentials can be a gold mine for attackers. Here, the attacker gained access to Uber's IT environment by getting credentials to their VPN infrastructure from the dark web. That's another source of credentials: attackers take them from wherever they can find them, and some will sell them to other attackers on the dark web. They then used those compromised VPN credentials, which belonged to a contractor, to repeatedly attempt to log in. Every attempt generated an MFA notification. So they reached out to the contractor on WhatsApp, pretended to be part of Uber's IT support, and encouraged them to accept. When the contractor finally did, the attacker had access to the Uber VPN.

From there, they found a network share containing a PowerShell script with hard-coded credentials to Uber's Privileged Access Management, or PAM, solution. Worst-case scenario, everybody. That is exactly the type of hard-coded credential where I'm sure, when they found it, they did a little dance, because it's a gold mine for what they wanted to do. And those credentials had not been rotated in a long time, which makes exploiting them much easier. Using those credentials, they authenticated to the service and accessed secrets from the PAM service. If you look at that bottom row here, again, this is the type of admin access you don't want anybody outside your organization to ever have. With those elevated permissions, they were able to compromise critical company systems, including SSO, consoles, and cloud management that stored sensitive data. And it wasn't very hard for them to then exfiltrate data: they downloaded internal Slack messages and information from a tool Uber's finance team used, and exfiltrated that data out.

A big takeaway here is that while MFA is definitely required, it might not be enough. Don't think, "I have MFA, identity security complete." And the biggest takeaway is that hard-coded and embedded credentials pose a huge risk to your organizations. It was the harvesting of those hard-coded credentials that gave the attackers access to the PAM solution, which handed them the keys to the kingdom to wreak havoc inside Uber's IT environment. If you don't have the capacity to find all of the hard-coded credentials, or the credentials you inadvertently left in files, you can turn to third-party services; there are bounty hunters whose whole job is to look for these kinds of exposed credentials. And wouldn't you rather have them notify you than have a breach happen in your environment?

Employee education is also so important. If people can be tricked into clicking a phishing link in an email, they can certainly be tricked into accepting a push notification from their own employer's MFA application. The more you educate your employees, the less risk you have of this kind of social engineering working. And identity hygiene is its own branch of CIEM tooling, but a big part of it is rotating your access keys. AWS actually suggests rotating them at least every 90 days; the sketch after this paragraph shows a simple audit for stale keys.
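Here is a minimal sketch of that audit with boto3, flagging active IAM access keys older than the 90-day mark. A real CIEM tool would also look at when each key was last used and disable stale ones:

```python
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90  # the AWS-suggested rotation interval mentioned above

def find_stale_keys() -> None:
    """List every active IAM access key older than MAX_AGE_DAYS."""
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age = (now - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_AGE_DAYS:
                    print(f"{user['UserName']}: key {key['AccessKeyId']} "
                          f"is {age} days old; rotate it")

if __name__ == "__main__":
    find_stale_keys()
```

The Uber PAM credentials would have lit up a report like this long before the breach.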
Now, in this next attack, we're going to look at what makes storage resources vulnerable to things like ransomware attacks and data exfiltration. Like many attacks, this one started with an initial compromise of a long-term credential. Then the attackers did reconnaissance, trying to find other users, buckets, and available keys. Here, the Sysdig Threat Research Team saw them making calls to the AWS Simple Email Service, or SES. We have found that SES is a very interesting attack vector that we're seeing threat actors use more often for phishing. From there, they tried to create persistence by creating additional users with names like Root and Admin. And finally, they moved on to data exfiltration.

The first thing they did was check the versioning on the S3 buckets. If you have versioning enabled, it allows you to easily restore data that was deleted. In this case versioning was enabled, so they disabled it. Then they exfiltrated data out of the S3 bucket, deleted that data, and left a ransom note on the bucket itself. This was actually detected through AWS billing, because the victim happened not to have CloudTrail enabled on this environment. When data leaves an AWS bucket it incurs an egress cost, and that's how this whole attack was confirmed.

The takeaways for this attack: you should be using CloudTrail, or the equivalent in Azure, GCP, or whichever cloud service provider you're on, for storing data for detections. You can store those logs in S3 buckets for longer retention, and consider enabling CloudTrail data events, but be very selective about where you do that: only where your most important data is stored, because it can generate a lot of events, which can obviously get very expensive. Where you can, employ IAM roles instead of long-term access keys to minimize the risk of credential abuse. Audit your access keys, ensure they're not publicly accessible, and rotate them regularly as well. And things like disabling bucket versioning and deleting data are the kinds of actions that should sit behind MFA. This was a case where versioning was enabled, but the attacker was very easily able to just change that setting; a minimal audit sketch follows this paragraph.
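And here is a minimal boto3 sketch of the audit this takeaway implies: flag buckets whose versioning is off or suspended, or whose versioning state can be changed without MFA (the MFA Delete control):

```python
import boto3

def audit_bucket_versioning() -> None:
    """Flag buckets that a ransomware actor could strip of version history."""
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        config = s3.get_bucket_versioning(Bucket=name)
        status = config.get("Status", "Disabled")        # key absent if never enabled
        mfa_delete = config.get("MFADelete", "Disabled")
        if status != "Enabled":
            print(f"{name}: versioning is {status}; deleted data is unrecoverable")
        elif mfa_delete != "Enabled":
            print(f"{name}: versioning can be suspended without MFA")

if __name__ == "__main__":
    audit_bucket_versioning()
```

With MFA Delete enabled, the single API call the attacker used to suspend versioning would have been refused.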
Now, the last attack we're going to talk about I saved for last because I thought it was one of the more creative, out-of-the-box things I've seen attackers do in the past few years. This was reported by Microsoft, which tracks the criminal group behind it as Strawberry Tempest, previously DEV-0537. They employed a ton of different methods to compromise user identities, everything from phone-based social engineering and SIM swapping to insider threats. They went so far as to post ads in Telegram channels saying, we are looking for insiders to provide us credentials for VPNs, or credentials that can access the network, and we will pay you for them, which I think is a little cheeky to post in a public Telegram channel. They bought from the dark web, used password stealers, and looked at exposed credentials in public repos. So they were employing pretty much every possible method of initial identity compromise. And once they did compromise identities, they researched those people to find things they could answer if security questions came up.

After gaining their initial access, the group conducted reconnaissance to get more credentials and more intrusion points. They used tools like AD Explorer for enumeration, and they escalated their privileges by exploiting vulnerabilities in servers like Jira, GitLab, and Confluence. They even convinced help desk personnel to reset credentials by having English speakers call in and answer questions like, what is your mother's maiden name, what street did you live on. So it's clear they had done a lot of research in that initial compromise phase. They exfiltrated data using dedicated infrastructure, choosing VPN egress points in the same locations as their targets to avoid being caught by things like impossible-travel detections. They exploited cloud tenant privileges to create admin accounts, manipulate email transport rules, and delete other admins. And here's what I thought was just very creative: they deleted resources to trigger incident and crisis response processes, and then they joined the crisis communication calls to spy, to understand the organization's incident response workflow and figure out how to initiate their extortion.

So many takeaways on this. First of all, require trusted endpoints. Nowadays we work on our phones and obviously on our laptops; mandate the use of compliant, healthy devices. That eliminates some of the ways they were able to exploit credentials in that first phase. Leverage modern authentication for VPNs: use things like OAuth or SAML connected to Azure AD for VPN authentication, to enable risk-based sign-in detection and a tighter integration for risk detections. And establish operational security processes. Check who's on your incident response calls and make sure they're part of your organization and supposed to be there. And keep your incident response plans closely held and not easily accessible.

I went through a lot in these past 30 minutes, so let me try to summarize some of the key takeaways; there's more than just what's on this slide. Advice to improve your cloud and identity security programs includes certain tools you should be investing in. Obviously CIEM, which is Cloud Infrastructure Entitlements Management. There are two elements I see to this. One is the least-permissive part: your tool should be able to tell you which permissions are excessive and unused, based on things like CloudTrail logs (a toy sketch of that usage analysis follows this paragraph). The other is identity hygiene: do you have MFA enabled, are your access keys rotated, and so on. You should implement network segmentation to make lateral movement harder, and be aware of your connections to on-premise resources here. And invest in some kind of cloud detection and response that can correlate detection signals from both your cloud infrastructure and your identity providers. Your IDPs, like Okta: if you can detect weird things happening there, that's an important signal for understanding the full scope of an attack.
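As a toy sketch of that least-permissive analysis, here is a boto3 pass over recent CloudTrail management events that builds the set of actions each principal actually used. Comparing that set against what the attached policies grant is the part a real CIEM product automates; note that LookupEvents only covers roughly 90 days of management events, so this is an illustration, not a complete entitlement audit:

```python
import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

def actions_used(days: int = 30) -> dict:
    """Aggregate which API actions each principal actually called recently."""
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(days=days)
    used = defaultdict(set)
    for page in cloudtrail.get_paginator("lookup_events").paginate(StartTime=start):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            principal = detail.get("userIdentity", {}).get("arn", "unknown")
            service = detail["eventSource"].split(".")[0]  # e.g. "iam", "s3"
            used[principal].add(f"{service}:{detail['eventName']}")
    return used

if __name__ == "__main__":
    for principal, actions in actions_used().items():
        print(principal, "->", sorted(actions))
```

Anything a policy grants that never shows up in this "used" set is a candidate for removal, which is exactly where that 98%-unused figure comes from.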
Now, on the processes side, education and training are incredibly important. Forbes found that if you have some kind of phishing training in your organization, the share of users who fall victim to social engineering scams drops from 32% to 5% after a year of training. So educating your employees, remembering that humans make mistakes and that it's natural and understandable for them to, means you should be investing in helping them not fall for things. Shorten your session lengths for admin accounts in big SaaS apps, and wherever you can, make it so that admin permissions and the admin actions that are really destructive have extra layers of security, like dual authorization. And improve your MFA and your secrets management overall. For MFA, require registration from a trusted location, limit it to one device where you can, and alert on device changes or sign-ins from unusual locations. With secrets management as well, you should be investing in making it as secure as possible.

So if you're leaving here saying "IAM confused," I want you to know that's a perfectly reasonable way to feel. It's a difficult, tricky thing to get right, but it is necessary. And I hope that, if anything, the takeaway from this talk is that attackers are investing in identity, and so should you. Thank you so much for your time. If you want to provide feedback, there's this QR code.