Hello, everybody. My name is Maya Levine. I'm a product manager here at Sysdig, and today I'm going to talk to you about real cloud security breaches that have occurred and how you can prevent them from happening to you. First, a quick disclaimer: if your company is one of the ones mentioned here, I apologize in advance for the PTSD I'm probably bringing on. In all seriousness, I do feel that as an industry we should be as open as possible about why and how breaches have occurred. I never shame anybody for being the victim of a breach, especially because breaches are only becoming more common. In fact, a 2021 Verizon report showed that cloud assets are now targeted in breaches more often than on-premises assets.

If we think about some of the major attack types in the cloud versus on premises, let's start with ransomware. On premises, it usually starts with a phishing email; in the cloud, it's all about storage and databases, and there are many different paths attackers can take to get there, such as misconfigurations or overly permissive policies. Supply chain compromises are arguably easier to execute in the cloud because of the way modern applications are built out of many services, which makes it harder to tell what is using what. And finally, crypto mining: the cloud is probably the perfect breeding ground for crypto mining attacks, given the effectively unlimited elastic compute most users have access to.

Jumping right into our first attack, the focus here is ransomware, and unlike in Austin Powers, it's never really a laughing matter. In this case, the victim was Onus, one of the biggest cryptocurrency platforms in Vietnam. Attackers exploited a Log4j vulnerability in a Cyclos server belonging to Onus. They left backdoors behind, naming them "kworker" to disguise them as the Linux kernel's kworker processes, and used them as a tunnel to connect back to their command-and-control server via SSH, which is a clever way to avoid detection. The attackers established a remote shell and discovered a configuration file that held AWS credentials. Those credentials had full access permissions, so using them, the attackers were able to access S3 buckets and proceed with an extortion scheme. The next day, Onus discovered that customer data had been deleted from their S3 buckets. At that point they deactivated all of the access keys, but it was already too late: they received a ransom request from the attackers via Telegram for $5 million. They declined, disclosed the breach to their customers, and had to find and remove all of the nodes that had backdoors in them.

Ransomware has been around for years now because cyber criminals know that companies hate seeing their operations grind to a halt; the financial incentive is definitely still there. A few failures allowed this specific breach to happen. The first was that the system was not patched for Log4j. That really wasn't Onus' fault, because they patched as soon as Cyclos released a fix; by that point, they had already been compromised. So I think it's more noteworthy that they had overly permissive access keys. That is really what enabled the attackers to overwrite and delete the information in the S3 buckets.

The impact is that almost two million Onus users had their personal information leaked, including government IDs and password hashes. If we think of GDPR, for example, this could result in massive fines if Onus were within its scope. This chart shows the fines levied under GDPR in the past few years, and you can see the numbers are getting higher and higher. So that's another thing to worry about when it comes to ransomware attacks.

So what can we learn from this attack? First, patch as soon as you can. But as I mentioned, in this case Log4j went from announcement to weaponization faster than vendors could release patches. So if you are waiting for a patch, consider other mitigating controls; something like a web application firewall could have filtered out the Log4j exploit. And finally, overly permissive access is a gift you are giving to attackers. If they manage to find a way into your environment and everything is a free-for-all, they are going to wreak havoc, move laterally, and escalate very quickly.
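To make that last point concrete, here is a minimal sketch of what scoping those keys down could look like, using boto3. The bucket, user, and policy names are hypothetical; a real policy would be tailored to what the workload actually does.

```python
# Minimal sketch of scoping S3 access down with boto3.
# All names below (bucket, user, policy) are hypothetical examples.
import json
import boto3

iam = boto3.client("iam")

# What the leaked keys in this attack effectively had:
#   "Action": "s3:*", "Resource": "*"  -- full access to everything.
# A least-privilege alternative: read-only access to a single bucket.
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-customer-data",    # the bucket itself
            "arn:aws:s3:::example-customer-data/*",  # objects inside it
        ],
    }],
}

iam.put_user_policy(
    UserName="backend-service",              # hypothetical service user
    PolicyName="s3-readonly-customer-data",
    PolicyDocument=json.dumps(scoped_policy),
)
```

With something like this in place, a stolen key can read one bucket; it cannot overwrite or delete everything the company owns.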
For attack number two, the focus is on compromised credentials. Unfortunately, it didn't take me very long to find a real-life example of this on the internet. It turns out people think it's okay to post their credit cards on social media, security codes and all. I'm not saying developers would do anything like this, but there are many ways credentials can be compromised, and there have been known cases of credentials accidentally posted in public code.

The MDR provider Expel discovered this attack on one of their customers. It all started with a root user's access key that was leaked; it was never determined how. The attacker used the access key to generate EC2 SSH key pairs and then spin up EC2 instances with the RunInstances API. They created new security groups that allowed inbound access from the internet, and the instances began communicating with the known cryptocurrency mining pool MoneroHash. So one leaked root access key let the attackers create SSH keys in AWS, which let them install the crypto miner remotely. And I do want to mention that abusing a leaked access key, especially a root access key, can allow an attacker to create a very large number of new instances, likely far more than the victim would ever create on their own, and likely enough to generate a bill that is the stuff of nightmares for most people in this room.

The takeaways: understand that secrets management is a critical part of operating in the cloud, but alongside it you also need real-time monitoring of your environment to know whether malicious activity is occurring with those secrets. The real-time part is key, because it doesn't take attackers very long to run up huge bills if they have the right access. In this specific attack, real-time monitoring could have notified the victim that suspicious SSH key pairs were being generated and that the instances were communicating with known cryptocurrency mining servers.
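As a rough illustration of that kind of monitoring, here is a sketch that polls CloudTrail for the exact API calls seen in this attack. It assumes boto3; a real deployment would use EventBridge rules or a runtime security tool rather than a polling loop.

```python
# Sketch: look back 15 minutes for the API calls this attacker made
# (key pair creation, instance launches, opening security groups) and
# flag any of them performed with root credentials.
from datetime import datetime, timedelta, timezone
import json
import boto3

cloudtrail = boto3.client("cloudtrail")
window_start = datetime.now(timezone.utc) - timedelta(minutes=15)

for event_name in ("CreateKeyPair", "RunInstances",
                   "AuthorizeSecurityGroupIngress"):
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": event_name}],
        StartTime=window_start,
    )
    for event in resp["Events"]:
        detail = json.loads(event.get("CloudTrailEvent", "{}"))
        # Root driving these APIs is a strong signal of a leaked root key.
        if detail.get("userIdentity", {}).get("type") == "Root":
            print(f"ALERT: root called {event_name} at {event['EventTime']}")
```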
For attack number three, I really love the image of a poisoned well. This attack is a supply chain compromise via malicious image distribution, and just like an unsuspecting villager who drinks from a well that has been poisoned, developers unknowingly use images that are actually malicious, planted as traps in public repositories. This happened back in 2020. An attacker utilized an Amazon Machine Image, or AMI, which is basically a pre-packaged EC2 instance; in this case, a Windows Server 2008 instance. They took their malicious AMI and put it in the AWS Marketplace, a very public well. By embedding a crypto miner in the AMI, they could generate passive income from anybody who happened to use it: when an instance starts up, a script placed by the threat actor runs crypto miners in the background. For an attacker, this is definitely a low-effort, high-reward type of attack.

In Sysdig's latest cloud-native threat report, we found that it's not just Amazon images people need to worry about. There are a lot of places where malicious images can be planted, and a lot of malicious image categories beyond crypto mining, which, as you can see, tops all the categories because of how easy it is for attackers to execute. I think it's interesting to look at how global events affect these trends. The most common incentive for attacks is obviously financial gain, but the second most common one we see is espionage or political objectives. The goals of disrupting IT infrastructure or utilities have led to a fourfold increase in DDoS attacks since the start of the conflict between Ukraine and Russia. DDoS agents are often added to botnets, which attackers can offer as a DDoS-as-a-service operation. What's interesting is that containers were actually used in this conflict to quickly crowdsource participation in these attacks: a container image would be set up with all of the tools needed to join a malicious campaign with basically no prior technical knowledge. That's something we hadn't really seen before. So use only trusted sources for the images you run, and even when an image comes from a trusted source, runtime security tools should be installed on those instances to ensure no malicious activity is occurring.

Now, for attack number four, I want everybody to consider what happens when you build something on really faulty foundations. In the case of a home, that's probably the worst-case scenario. In the case of modern applications, open-source software and other dependencies are your building blocks, your foundations, and if you're unknowingly using something malicious, it can lead to really bad effects down the line. PyTorch, a very popular and widely used Python-based machine learning library, fell victim to this kind of supply chain compromise in December of 2022. It turns out that PyTorch pulls some of its dependencies from the PyPI index, and the package resolver assumes the package it finds there under the right name is the one to use. So an attacker uploaded a poisoned PyPI package squatting on the name of a real dependency, torchtriton. This Trojan version of torchtriton behaved exactly like the original, except it also contained extra code to exfiltrate sensitive information to a command-and-control server. The issue persisted for five days but thankfully never made it into the stable version of PyTorch. Really, this happened because of blind trust in the PyPI repository index for a dependency. This kid learned the hard way that blind trust is not a great idea, and I think the same holds in the cyber world. There are a lot of layers of dependencies in modern applications, especially ones using open-source software, and that means there are a lot of vulnerable points. A clever attacker was able to exploit this trust relationship to get their own code into PyTorch, and the malicious version was downloaded over 2,000 times in that five-day period. That is not an insignificant amount, and we'll only learn of subsequent incidents and the true impact as time goes on, because some of the exfiltrated data included credentials and keys, and as I stated earlier, those can be the starting point for another attack.

So keep in mind that any code that isn't totally under your project's control carries some risk, and this isn't limited to open-source software; commercial closed-source software is also vulnerable, with SolarWinds as the most obvious example. You need to operate with a trust-but-verify mentality: trust a package enough to use it only if you can verify that it's behaving as it should. For that, you need security testing. Static analysis can be really good at helping you find unwanted code, but there are ways of fooling it. What's harder to trick is dynamic or runtime analysis, which looks at the actual behavior of an application; in the case of PyTorch, it could have detected the connection to the command-and-control server. It's also worth noting that the best place to do this verification is as close to the developer push as possible: the earlier you find out, the better. I'm sure there are many sessions this week on shift left, so I won't get into it.
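As a sketch of what trust-but-verify can look like for a Python dependency, here's the hash-pinning idea: decide up front which exact artifact you trust and refuse anything else. The URL and digest below are placeholders, not the real torchtriton artifact.

```python
# Sketch: verify a downloaded dependency against a hash you pinned
# after reviewing it, instead of trusting whatever the index serves.
import hashlib
import sys
import urllib.request

ARTIFACT_URL = "https://example.com/packages/some_pkg-1.0.tar.gz"  # placeholder
EXPECTED_SHA256 = "<digest you reviewed and pinned>"               # placeholder

def fetch_and_verify(url: str, expected_sha256: str) -> bytes:
    data = urllib.request.urlopen(url).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != expected_sha256:
        sys.exit(f"refusing artifact: got {digest}, expected {expected_sha256}")
    return data

# pip supports the same idea natively with hash-checking mode:
#   requirements.txt:  some-pkg==1.0 --hash=sha256:<digest>
#   install with:      pip install --require-hashes -r requirements.txt
# A squatted package with the right name but the wrong bytes fails the check.
```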
All right, attack number five. We're focusing on a threat actor group that's a lot like Wile E. Coyote: not too sophisticated, lots of brute force, explosive methods, and a real affinity for TNT, since they put it in their name. A very quick, very summarized history of this group: they started in 2019 by targeting Redis, where there was a high rate of misconfigurations and vulnerabilities and the exploit was part of Metasploit, and they targeted those systems for crypto mining. A few years later, they moved on to the next easiest target: people leave their Docker API endpoints exposed, which allows anyone, anywhere, to run Docker images on your machines, so this attacker would run custom images that deployed miners. The next year, they started an even more advanced campaign and expanded their attack vectors to many different sources, now including Kubernetes. You can see that they're getting comfortable with the cloud at this point; the methods are the same as before, but the scope has really expanded. The final iteration we've seen was interesting because it wasn't just about crypto mining anymore; it was also about credential theft through crypto worms. At a very high level, they talk to the AWS instance metadata endpoint, which serves the credential information for that resource, and look for what else they can access. They're looking for AWS and SSH keys, and they used open-source tools like LaZagne to scrape system memory for credentials and passwords.

What you're seeing here is a screen grab that a researcher from Germany managed to take of TeamTNT's control panel, showing different workers and their uptime. I want to call attention to it because 600 hours is a very long time to mine without getting caught, if you're mining for free. Sysdig has estimated that for every dollar an attacker makes from crypto mining, the victim is billed around $53. So think of that when you think of the financial impact.
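To illustrate that metadata-endpoint step, here's a sketch of the kind of request this sort of malware makes against EC2's instance metadata service. It's written the IMDSv2 way, with a session token; under IMDSv1 the same credentials are a single unauthenticated GET away, which is part of what made this credential theft so easy. Enforcing IMDSv2 on your instances raises the bar.

```python
# Sketch: how code on an EC2 instance retrieves the role credentials
# from the instance metadata service (IMDS) -- the same thing
# TeamTNT-style malware does once it lands on a host.
import urllib.request

IMDS = "http://169.254.169.254/latest"

# IMDSv2 requires a session token first; IMDSv1 skips this step,
# which is why unhardened setups are so easy to loot.
token_req = urllib.request.Request(
    IMDS + "/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def get(path: str) -> str:
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

role = get("/meta-data/iam/security-credentials/").strip()
creds = get(f"/meta-data/iam/security-credentials/{role}")
# 'creds' is JSON containing AccessKeyId, SecretAccessKey, and Token --
# exactly what an attacker uses to pivot into the AWS account.
```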
Another cryptojacking campaign that was discovered recently has infrastructure very similar to TeamTNT's. It got its name from a "kiss a dog" domain it used, hence the very weird matching hoodie photo I found. The domain was used to trigger a shell-script payload on the compromised container via an encoded Python command. The attack chain subsequently escapes the container and moves laterally into the breached network while simultaneously taking steps to terminate cloud monitoring services. The ultimate goal here was crypto mining, but what I want to draw attention to is the fact that they performed container escapes. Those were a huge concern in the early days of container security and were thought to be mitigated by proper configuration. The impact is that we can't discount the possibility that attackers can escape the container, which makes the statistic Sysdig found here all the more alarming. Even if your containers are not running as root, container escapes are still possible; but when they are running as root, it's a lot easier for attackers to get root on the host, which is obviously something we want to avoid. So try to avoid containers running as root. Realistically, some things will need those privileges, but you should be really deliberate about it: is this a system agent that actually needs it? And if it does, you need to actively monitor it to make sure it isn't doing anything weird. Also, consider not letting your instances launch new ones, because that was one way TeamTNT was able to spread: once they accessed an EC2 instance, they would spin up new ones if they could and then mine on each of them.
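Here's a quick audit sketch for that running-as-root point, assuming the Docker SDK for Python (pip install docker) and access to a local Docker socket. In Kubernetes you'd enforce the same thing declaratively, for example with securityContext.runAsNonRoot.

```python
# Sketch: flag locally running containers configured to run as root.
import docker

client = docker.from_env()

for container in client.containers.list():
    # An empty "User" field means the image default, which is root.
    user = container.attrs["Config"].get("User") or "root (image default)"
    if user.startswith("root") or user.split(":")[0] == "0":
        print(f"{container.name}: runs as {user} -- "
              "confirm it truly needs root, and monitor it if so")
```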
For attack number six, the attackers abused free-tier accounts. This is what I imagine their faces looked like when they realized they could make money off of that. This attack was discovered by the Sysdig threat research team: a very extensive, sophisticated, active crypto mining campaign that we called Purple Urchin. The threat actor targeted some of the largest CI/CD service providers, including GitHub, Heroku, and Buddy.Works, to run and scale a massive cloud operation. The activity we observed is called freejacking: abusing the compute allocated to free-tier accounts. The effort Purple Urchin invested here is pretty abnormal, with an extensive list of service providers and open-source tools beyond what we're even showing here. So this is not the low-effort, high-reward attack I described earlier; the initial investment to reach this level of automation was substantial.

It all started with Sysdig's container analysis engine capturing suspicious behavior associated with a Docker image. Everything looked a bit obscure, so we decided to dig in. It turned out that this container acted as the command-and-control container as well as the Stratum relay server, which receives connections from all of the active mining agents. In other words, this container was the central hub of the whole operation.

How did these attackers manage to automate creating so many free-tier accounts? Each GitHub repository they created was typically used within a day or two, and each repository had a GitHub Action to run Docker images. A shell script was responsible for creating the GitHub Actions YAML workflow in each of the threat actor's repositories, and it tried to obfuscate the Actions by naming them with random strings. In order to push this workflow file to each repository, the script also added SSH keys for use with the GitHub CLI. It creates a GitHub repository and pushes the previously created GitHub workflow to the master branch of the new repository. The result of this whole automated workflow was the creation of a GitHub account and repository and the successful execution of many GitHub Actions running mining operations, which is what I'll discuss now.

How did they automate the mining part? They had another script that went through the list of previously created GitHub accounts and used curl to pass a pre-made Docker command to each repository's Action, including the IP address of the Stratum relay server so that it could report back. On the GitHub side, the runner just receives a Docker command and runs it, which starts the mining container. The fact that these attackers ran their own Stratum mining protocol relay helped them avoid network-based detections that look for outbound connections to publicly known mining pools. And to pull off an automated operation at this scale, they employed quite a few techniques to bypass the bot protections that exist to prevent exactly this: OpenVPN was used to make sure the source IP was different for every account, they used programmatic mouse and keyboard inputs as well as speech recognition of audio files to bypass CAPTCHAs, and they ran a container with IMAP and Postfix servers to handle the emails for account creation and verification.

Unless you work for one of these companies, you might be thinking: why do I care about this? How does it affect me? The answer is that it ruins a good thing for everybody, because we can't expect these CI/CD service providers to simply absorb the costs. Sysdig estimated that every GitHub account Purple Urchin created cost GitHub about $15 a month, and at these rates it would burn through more than $100,000 of free-tier compute to mine a single Monero coin. These threat actors aren't even mining Monero; they're currently mining cryptocurrencies with low profit margins. So it's possible that this operation is really just a low-risk, low-reward test before Purple Urchin moves on to higher-valued coins like Bitcoin and Monero. It's also possible, and probably scarier, that Purple Urchin is preparing to attack the underlying blockchains themselves, because proof-of-work algorithms are vulnerable to a 51% attack. As I mentioned, this kind of activity could put the free trials we all know and love at risk: there might not be free tiers for personal use anymore, and it might drive up the cost of enterprise and business accounts. And for you personally, just know that you can't rely solely on malicious IP detection; remember that the Stratum relay made it so we couldn't detect that they were connecting to a known mining pool.
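That relay trick is worth dwelling on: IP blocklists of known mining pools were useless here, but the Stratum protocol itself is still recognizable on the wire as JSON-RPC with characteristic method names. Here's a toy content matcher over captured payloads; a real deployment would sit inside an IDS or a runtime security tool, and the method list is illustrative, not complete.

```python
# Toy detector: does a captured payload look like a Stratum mining
# protocol handshake, regardless of which IP it's going to?
import json

STRATUM_METHODS = {"mining.subscribe", "mining.authorize",
                   "mining.submit", "login"}  # "login" covers XMRig-style miners

def looks_like_stratum(payload: bytes) -> bool:
    try:
        msg = json.loads(payload.decode("utf-8"))
    except (ValueError, UnicodeDecodeError):
        return False
    return isinstance(msg, dict) and msg.get("method") in STRATUM_METHODS

# Example of the kind of handshake a miner sends to its pool or relay:
sample = b'{"id":1,"method":"mining.subscribe","params":["miner/1.0"]}'
assert looks_like_stratum(sample)
```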
Now, for the very last attack, I want everybody to consider that sometimes attacks are much more malicious and deep than they appear on the surface. This attack was also discovered by the Sysdig threat research team, and we call it Scarlet Eel; I guess we like sea animals. The attack began with hackers exploiting a vulnerable public-facing service on a Kubernetes cluster hosted in AWS. Once the attacker gained access to the pod, the malware performed two initial actions. The first was downloading and launching a crypto miner. But during that installation, we also observed a script running simultaneously on the container to enumerate and extract additional information from the environment, looking for credentials. After gaining that initial access to the account, they gathered information about other deployed resources, focusing specifically on the Lambda and S3 services. In the next step, the attacker used the credentials found in the previous step to move laterally and contact the AWS API, trying to enumerate the account further and expand their reach. Here they were able to do three things. First, they disabled the CloudTrail logs to evade detection. Second, they actually stole proprietary software; they managed to exfiltrate data. And third, they found the credentials of an IAM user belonging to a different AWS account by discovering Terraform state files in an S3 bucket. They used the credentials from that Terraform state file to basically repeat the kill chain again, trying to enumerate and find other ways to spread. Fortunately, in this case they were unsuccessful due to a lack of permissions.

So while cloud security is relatively young, we are seeing that cyber attackers are really learning about the cloud and starting to experiment with what is possible. This attacker had great knowledge of cloud-native tools and was able to adeptly navigate and escalate the attack. I'm personally concerned that we'll reach a stage where attackers' knowledge of cloud-native tools and services surpasses that of the average person using the cloud to build applications and host infrastructure. The Terraform state file is a good example: most people don't realize that Terraform can leave a state file in an S3 bucket with credentials in it, and it's exactly that kind of thing where, if you don't know to handle it with care, attackers can use it to their advantage.

I urge everybody here not to dismiss crypto mining in your own environment as normal just because it's so widespread, because the attack might not have stopped there. If the victim here hadn't continued with incident response, they might never have discovered that data was actually exfiltrated. Keep the ability to disable or delete security logs restricted to as few users as possible; the fact that the attacker disabled the CloudTrail logs made incident response harder. Terraform is a really great tool, but it should be handled with care: Terraform access should only be given to those who really need it, and Terraform state files should be stored somewhere secure, not, as here, in what seemed to be a very easily accessible S3 bucket. And finally, you can't assume that just because attackers can't alter anything there won't be any impact to your organization. With read-only access, attackers can still read credentials or Lambda code, and there is still an impact. So consider that when you're thinking about least permissive: you can't just go read-only across the board and think that you're safe.
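Coming back to those Terraform state files: here is a rough sweep for that exact foothold, assuming boto3 and permission to list and read your own buckets. The access-key regex is illustrative; a real secrets scanner checks for far more patterns.

```python
# Sketch: find Terraform state files in S3 that appear to contain
# AWS access keys -- the foothold Scarlet Eel used to pivot accounts.
import re
import boto3

s3 = boto3.client("s3")
ACCESS_KEY_RE = re.compile(r"AKIA[0-9A-Z]{16}")  # AWS access key ID shape

for bucket in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if not obj["Key"].endswith(".tfstate"):
                continue
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            if ACCESS_KEY_RE.search(body.decode("utf-8", errors="ignore")):
                print(f"s3://{bucket}/{obj['Key']}: looks like it holds an access key")
```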
Now, to wrap up all of these different attacks we've seen: where do we think cloud breach trends are going in the future? Crypto mining is probably only going to increase in popularity, which seems a little counterintuitive because a lot of cryptocurrencies are crashing in value, but it doesn't change the fact that crypto mining attacks are low effort and high reward: attackers don't need to do much, and they earn income passively. What we will probably see is the scale of these crypto mining attacks increasing; if each coin is worth less, you have to mine more to make the same amount. And that's not just true of crypto mining. We're seeing threat actor groups behave almost like startups: they're ramping up their operations and executing attacks at a much larger scale. We can also expect the sophistication of cloud infrastructure attacks to increase. As I just mentioned, we don't want to reach the stage where attackers are more knowledgeable about cloud-native tools and services than we are, so we need to keep up and stay aware of the different methods they're using to breach our cloud environments. And this one's a bit obvious, but supply chain compromises can have really devastating effects. When most people hear "supply chain compromise," they think of zero-day exploits, the ones we don't know about that attackers use against us. But even the vulnerabilities we do know about, like Log4j, can go from announcement to weaponization far faster than we can release and roll out patches.

So, all of this to say: how can we cope? I've mentioned it quite a few times, but real-time visibility is key, because even if we can't always control whether attackers breach our environments, we at least know that most attackers perform the same kinds of actions once they're in. We can look for those repeating actions and at the very least be notified when a breach does occur. Our time, effort, and manpower are valuable resources, and there are more things to fix than time or ability to fix them. Sysdig found that 87% of container images have high or critical vulnerabilities. How do you narrow that down and figure out which ones to fix first? The answer is fairly simple: you go for the ones attackers are most likely to go after first, the low-hanging fruit. To do that, you can apply different filters to prioritize. Of all those critical or high vulnerabilities, which ones actually have a fix available but aren't patched yet? Which ones are actually in use at runtime? And which ones are actually exploitable? Stack all of these filters, start with what's left, and work your way down.
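As a minimal sketch of that prioritization funnel: of all the high and critical findings, keep the ones with a fix available, actually in use at runtime, and known to be exploitable. The record fields here are hypothetical; map them to whatever your scanner emits.

```python
# Sketch: filter a vulnerability list down to "fix these first".
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    severity: str            # "critical", "high", ...
    fix_available: bool      # a patched version exists
    in_use_at_runtime: bool  # the vulnerable package is actually loaded
    exploit_known: bool      # e.g. listed in CISA's KEV catalog

def prioritize(vulns: list[Vuln]) -> list[Vuln]:
    funnel = [v for v in vulns
              if v.severity in ("critical", "high")
              and v.fix_available and v.in_use_at_runtime and v.exploit_known]
    # "critical" sorts before "high" alphabetically, so criticals come first.
    return sorted(funnel, key=lambda v: v.severity)

findings = [
    Vuln("CVE-2021-44228", "critical", True, True, True),  # Log4j: top of the list
    Vuln("CVE-2023-00000", "high", False, False, False),   # hypothetical: can wait
]
print([v.cve for v in prioritize(findings)])  # -> ['CVE-2021-44228']
```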
And last but certainly not least is least permissive. Sysdig found that 90% of granted permissions are never used. That is an excessive amount, and I think we can do much better. I've seen that not every organization wants to prioritize least permissive; again, limited manpower, limited time, what do you focus on first? I urge you all to reconsider if least permissive is not a major goal of your organization. As I mentioned, we unfortunately can't always control whether a breach occurs. But if a breach does occur and your environment is basically a wild-west free-for-all in terms of permissions, attackers are going to be able to do so much more and wreak so much more havoc than if you had reduced the scope of your permissions to what is actually needed.

So with that, I'm going to end the session. I appreciate all of you for showing up, and I'll be here if anybody has any questions. Thank you.