My name is Maya Levine. I'm a product manager at Sysdig. My goal for today's session is for all of you to walk out with really clear takeaways on how to avoid becoming a headline-grabbing security breach. So I'm going to talk about actual breaches that occurred in the cloud and real takeaways for you to learn from them. I do want to apologize if your company is mentioned in this presentation, or if I bring back any PTSD moments for you. I do think that there's no shame in being the victim of a security breach. More and more people are realizing that it happens, and I think that as a community, we can only grow from being open about how and why it happened. Given that, before I talk about these specific examples, I want to take a step back and compare cloud and on-premise threats using different types of breaches. The first one is ransomware. On-premise, the typical ransomware execution begins with a phishing email sent to a user. They click on an attachment or on a URL link, malware gets downloaded to their endpoint, and then it spreads throughout the organization's systems. In the cloud, it's a little bit different. The goal is less about endpoints and user identity directories and much more about storage and databases, things like S3 buckets. And there are many more paths to get to these in the cloud. You could have an S3 bucket that's completely locked down in terms of security and permissions, but a compute resource somewhere that's overly permissioned to allow access to that S3 bucket, and that's an entry point. So in the cloud, it's a little bit more complex. Similarly, supply chain compromises are arguably easier to execute in the cloud because of all the open-source software and dependencies. It's just a much more complex and obfuscated architecture, and it's harder to tell what is using what. 
And so that modern way of doing things gives attackers different entry points inside. Additionally, CI/CD pipelines are favorite targets of attackers because of the amount of stored credentials and the access granted to perform automated tasks. That being said, although on-premises services are typically more contained, they're not immune to supply chain compromise attacks. SolarWinds is a very good example of that. But when it comes to crypto mining, the cloud is really just the perfect breeding ground for these types of attacks because of the near-infinite elastic compute resources that users have access to. Attackers can piggyback on an organization's compute resources and basically generate money for themselves without ever having to foot the bill. And this trend is actually increasing yearly, largely through targeting container-based cloud resources. Now, for each specific attack, I like to draw a real-life metaphor or example to drive the point home. For the first attack, which focuses on a supply chain compromise through malicious image distribution, I love the metaphor of a poisoned well. Like an unsuspecting villager who drinks from a well that has poison in it, unsuspecting developers can use images from public sources that they think are totally fine, but end up drawing real problems for themselves. This attack happened in 2020, and it started with an attacker utilizing an Amazon Machine Image, or AMI, which is basically a pre-packaged EC2 instance. In this case, it was a Windows Server 2008 instance. The attacker took the AMI and put it into the AWS Marketplace, which is a very public well indeed. And any time somebody happened to use this image, a script ran in the background that would execute crypto miners. Now, let's talk about mining. Why are we seeing this trending? Oh, wait, this is the wrong kind of miners. We are talking about crypto miners. 
Crypto miners are a low-risk, high-reward way for attackers to make a profit. They basically generate passive income: anyone who happens to use this instance is making money on the attacker's behalf. And it's very easy to change cryptocurrency into other forms of currency. In addition, it's not considered as damaging an attack as ransomware, so it's less likely to gain unwanted attention. And the most important thing for crypto miner attacks is scale. The more miners are executing, the more money the attackers are making. The impact on the victim in this attack is that they're unknowingly running a crypto miner, possibly multiple if they're running more than one instance at a time. And this is pretty simple: as your CPU usage goes up, so does that AWS bill, which you are responsible for most of the time. So the main takeaways for this attack are that you should use images from only trusted sources, and even if you are using a trusted source, you should also have static and runtime security tools on those instances to make sure that there's no malicious activity occurring. Now, attack number two focuses on compromised credentials. When I was looking on the internet for a real-life example of a compromised credential, unfortunately, it did not take me very long to find that many people think it's a great idea to post their credit cards on social media, security code included, as a gift to anyone who wants to use them. Now, I'm not saying that many developers would do something like this, but there have been known cases of developers accidentally putting access keys or credentials in code that is posted publicly and not encrypted. So keep that in mind. There are many different ways that credentials can be leaked. For this attack, the MDR provider Expel discovered an attack on one of their customers that all centered around a root user's access key that was leaked. 
It was never determined how, but as I mentioned, there are many ways that could happen. The attackers got their hands on this access key, and as far as compromised credentials go, a root user's access key is about as bad as it gets. They were able to use it to generate EC2 SSH access keys, and they would spin up EC2 instances with the RunInstances API and alter the security groups to allow inbound access from the internet. So one leaked key made it possible for attackers to set up SSH keys and install crypto miners remotely on these instances. Sooner or later, these instances were found to be communicating with a cryptocurrency mining server, monerohash.com. Now, why attackers execute this type of attack, and the impact on the victims, is largely the same as in the first example. However, I do want to emphasize that a leaked access key can allow an attacker to create a very large number of instances, likely many more than the victim would have spun up on their own. And this makes the potential loss even greater. Think of the type of AWS bill that would be the stuff of nightmares for most people in this room. So the takeaways here are that secrets management is a very critical part of operating in the cloud, but alongside that, you really need real-time monitoring in your environments to notice if these secrets are being abused in some way. And the real-time part is key here, because it doesn't take attackers very long to generate enormous bills if they have the right kind of access. In this specific attack, Expel was able to pick up on a few things in their real-time monitoring. They picked up on the fact that SSH keys were being generated from a suspicious IP address, one that had never been used in the environment before, and on the fact that these instances were communicating with a known cryptocurrency server. Those are just examples of the kinds of monitoring that you should have in your environment. 
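Those two detections can be sketched as simple rules over an audit-event stream. This is a toy illustration with a made-up event schema, not Expel's actual logic:

```python
# Toy detection rules over cloud audit events (hypothetical schema).
KNOWN_MINING_HOSTS = {"monerohash.com"}  # known cryptocurrency pool domains


def flag_events(events, seen_ips):
    """Flag SSH key generation from never-before-seen IPs and
    network traffic to known cryptocurrency mining pools."""
    alerts = []
    for e in events:
        # Rule 1: key material created from an IP new to this environment.
        if e["action"] == "CreateKeyPair" and e["source_ip"] not in seen_ips:
            alerts.append(("new-ip-key-generation", e["source_ip"]))
        # Rule 2: outbound connection to a known mining pool.
        if e["action"] == "NetworkConnect" and e.get("dest_host") in KNOWN_MINING_HOSTS:
            alerts.append(("mining-pool-traffic", e["dest_host"]))
    return alerts
```

In practice, rules like these would run against sources such as CloudTrail and flow logs, with the known-pool list fed by threat intelligence.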
The next attack, attack number three, focuses on ransomware. And unlike in Austin Powers, it's never funny or for a laughable amount of money. This attack focused on ONUS, the biggest cryptocurrency platform in Vietnam. The attackers were able to take advantage of a Log4j vulnerability on one of their Cyclos servers; Cyclos is payment software that they use. So they got in, and they installed backdoors, naming them kworker in order to disguise them as the Linux kernel's kworker processes. Now, I do want to mention that ONUS patched their Cyclos servers as soon as a patch was released for Log4j. But by that point, it was already too late. They were already compromised. The attackers used a tunnel, established a remote shell, and were able to discover a configuration file which held AWS credentials, full-access AWS credentials. So they were easily able to use those to access ONUS's S3 buckets, where they conducted their extortion scheme. ONUS discovered that sensitive customer data had been deleted from these S3 buckets. At this point, they obviously deactivated those access keys, but again, it was too late. The next day, they got a ransom demand for five million US dollars from the attackers via Telegram. ONUS declined this demand, and so they had to both tell their customers about the leak and go through all of their Cyclos servers looking for and deleting backdoors. Now, ransomware attacks have been around for a while. Why are they still being talked about? Why are they still relevant? Mainly, cybercriminals know that companies hate to see their operations grind to a halt. In fact, sometimes it makes more fiscal sense to pay the ransom than to deal with the financial implications of downtime and recovery. Attackers know this. And there were multiple failures which allowed this specific attack to happen. The first is that the system was not patched for Log4j. That's what allowed the initial access. 
But like I mentioned, this wasn't really ONUS's fault. They patched as soon as a patch was available; it just wasn't out in time. So I think the point that is really important here is that they had very overly permissive access keys and credentials. This was the key, the literal key, that gave attackers the ability to easily access, overwrite, and delete S3 buckets. And the impact here is that over two million ONUS users' information was leaked, including their government IDs, other personal information, and password hashes. And if we think of something like GDPR, companies don't just have to worry about the PR fallout. There are actual fines being imposed now. This chart shows GDPR fines over the past few years, and we can see that they are increasing at what looks to be an exponential rate. So in addition to all your other worries, you now also have to worry about potentially being fined for allowing these leaks to happen in the first place. The main takeaway for this one is that for proper vulnerability management, you need to patch as soon as you can. In this case, that wasn't possible, because Log4j went from announcement to being weaponized very quickly. So if you're waiting for a patch, consider other mitigating controls. Something like a web application firewall could have filtered out Log4j exploit attempts. And what I really want you to take away from this is that overly permissive access is a gift that you are giving attackers. If they manage to find their way into your environment by whatever method they can, and you have overly permissive settings throughout your roles, your groups, and your resources, that just enables them to move laterally very easily within your systems. In a report that Sysdig just released on container security and usage, we found that 90% of granted permissions are not used. 90%. That is a bit excessive, right? 
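That 90% figure is just the gap between what's granted and what's actually observed in use, and computing it is mechanical once you collect usage data. A toy sketch, with hypothetical permission names:

```python
def unused_permissions(granted, used):
    """Permissions granted to an identity but never observed in use."""
    return sorted(set(granted) - set(used))


def unused_ratio(granted, used):
    """Fraction of granted permissions that were never exercised."""
    return len(unused_permissions(granted, used)) / len(granted)
```

Everything the first function returns is a candidate for revocation; in a real environment the `used` set would come from audit logs collected over a long enough observation window.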
Tightening permissions is a very easy way to tackle something that happens in almost every cloud breach that we see: lateral movement. Now, for attack number four, I want everybody here to think about what happens when the security controls that are supposed to be in place don't work as intended. And I think misconfigured object storage is a good example of that, mainly because customers are trusting you to store their data in a responsible way. Pfizer was found to have multiple exposed files on a misconfigured Google Cloud Storage bucket. These files contained transcripts between Pfizer customer support and users of various Pfizer drugs, the kinds of transcripts with very sensitive information, including home addresses, phone numbers, emails, and medical and health data, especially data pertaining to the Pfizer drugs they were using. vpnMentor found the exposed files and reached out to Pfizer, who then secured the bucket. And the impact here is that if criminal hackers had been able to access this data, they could very easily have used it to conduct phishing campaigns. Think about it. If I was trying to pretend to be Pfizer customer support, and I knew your personal information as well as your medical history and a record of all the conversations you'd had with us, how hard would it be for me to just pretend to be Pfizer and follow up on the next step in this customer support chain? This is also the kind of information that can be utilized for identity theft, for credit card fraud, or for other schemes. Basically, this is the kind of thing you don't want to be on the wrong side of when it comes to data privacy laws. And one thing you should definitely do is never leave a system that doesn't require authentication open to the internet. In this case, the GCP bucket had incorrect permissions that allowed the data to be publicly readable. 
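A misconfiguration like this is cheap to check for programmatically. Here's a minimal sketch that flags public access in a GCS-style bucket IAM policy; the policy document is hypothetical, and in practice you'd fetch it with the google-cloud-storage client:

```python
# Members that make a GCS bucket effectively public.
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def publicly_readable(policy):
    """Return the roles granted to public members in a bucket IAM
    policy dict, or an empty list if nothing is exposed."""
    exposed = []
    for binding in policy.get("bindings", []):
        if PUBLIC_MEMBERS & set(binding.get("members", [])):
            exposed.append(binding["role"])
    return exposed
```

Running a check like this across every bucket on a schedule is exactly the kind of guardrail that would have caught this before a researcher did.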
Also, I think it's worth mentioning that if you don't need to log certain sensitive customer data, maybe don't. You are making yourself liable if you store this information and it gets misconfigured or leaked somehow. If it's absolutely necessary for you to store it, then at least make sure that the data is encrypted. Now, for attack number five, we're going to talk about an attacker who was able to pivot from a cloud environment to an on-premise resource. And this is easier than you would think. Much easier, in fact, than carrying a normal couch up a flight of stairs in an average Manhattan apartment. This attack all began with a Russian-speaking cybercriminal group called Circus Spider. Back in 2019, they released a ransomware variant called NetWalker. And they did what many cybercriminal groups have begun to do, which is sell it in an as-a-service model, so people can rent or use this malware for a fee or for a percentage of the profits. This kind of as-a-service model is becoming more and more popular. Now, what NetWalker actually does is encrypt the files on the local system, then map network shares and enumerate the network for any additional shares, attempting to access them using the tokens of the users already logged into the system. In this way, it can spread and infect as much of the system as it possibly can. Now, Equinix suffered an attack that utilized this ransomware variant. Officially, they had a configuration management deviation in one of their cloud environments. What this most likely means is that somebody in the company, maybe a developer, spun up a cloud environment that was completely outside of the normal security policies and monitoring. And so it was not configured correctly, and this allowed an attacker to access this environment via RDP. Once in there, the attacker discovered an instance that had access to on-premise resources. 
So in this way, they were able to make that pivot from the cloud environment to Equinix's on-prem environment, where they infected many systems with ransomware. Now, once they made it onto the on-premise systems, the security policies picked up on their presence, and Equinix was actually able to contain the attack. Additionally, they had really good backups, so they were able to restore all of the data from their own backups and not have to pay a ransom or deal with it in any other way. Nobody wants this kind of attack to happen to them, obviously, but if a ransomware attack does happen to you, this is about the best possible outcome: you contain the attack and restore from your own backups. And I really just want to emphasize that you can't protect what you can't see. If you don't know it's there, how are you going to make sure that it's secure? The cloud makes it really easy for us to spin up new resources and entire environments. That's part of why we love it, but it does present real security challenges. If it happens in the cloud, is it invisible to your security operations? In a lot of organizations, cloud environments are managed by DevOps or by a different group that's not IT, and that makes getting them integrated with security policies and a SOC pretty difficult. In this case, the instance that allowed access to the on-premise environment was unknown to security. It wasn't monitored or configured correctly. And the impact here is that approximately 1,800 systems at Equinix ended up having ransomware installed on them. But again, they had good backups, so they didn't lose any data, and no customer data was leaked. That's all pretty good. Now, this incident came down to a lack of visibility, specifically security visibility into what was happening in cloud environments. 
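A toy sketch of the kind of inventory check that closes that gap; the instance IDs are made up, and real discovery would come from your cloud provider's APIs:

```python
def shadow_assets(discovered, managed_inventory):
    """Instances that exist in the cloud but aren't in the
    security-managed inventory, i.e. shadow systems."""
    return sorted(set(discovered) - set(managed_inventory))
```

Anything this returns is a resource that security has never seen, which, as in the Equinix case, is commonly where breaches start.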
And so you should have an inventory of all of your cloud assets and make sure that security policies are being applied to all systems. These unmanaged shadow systems are commonly the start of breaches. Also keep in mind that even if you are in the cloud intentionally, your on-premise assets should be in scope when you discuss how to protect yourself. Cloud and on-premise are not separate, isolated use cases. They are almost certainly connected, and that connection can be abused. And this one's simple, but I think it's worth mentioning: backups. Backups, so you don't pay up. It even kind of rhymes, or at least ends with the same word. I really am a firm believer in preparing for the worst-case scenario. If it does happen, at least you have that already taken care of. Now, attack number six focuses on access without authentication. When we think of how we authenticate into somebody's home, we usually use a key, right? We open the door with a key, or maybe we ring a doorbell and wait for somebody to let us in. But the burglar here completely circumvented all of that and found a different way into the house that didn't require any authentication at all. And that's something we want to avoid: allowing access without authentication. Security researchers discovered that some of Peloton's web-based API endpoints had a flaw which allowed access without any authentication. Unauthenticated users could access Peloton user data, including data for users who had set their profiles to private. Now, once this was reported to them, they got rid of the unauthenticated part and made it so that any authenticated user could access these API endpoints. But that wasn't a real fix, because it's free to create an account and become an authenticated user. As of today, there are over three million authenticated Peloton users. 
So it was reported again, since this didn't actually fix the problem, and then Peloton fixed the actual vulnerability. Now, the impact here is that information could have been leaked, but because this was found by researchers who properly reported it, it doesn't seem like there was any public leak of this data. However, that doesn't mean somebody didn't collect this information for their own private use. This is the kind of information that, again, could be useful for phishing or fraud campaigns in the future. So a takeaway here is that you should have secure coding practices built into the development process. This kind of vulnerability could probably have been discovered by an application penetration test by a third party. You can also consider API security tools in order to detect an attacker abusing the calls or trying to pull down large amounts of data through them. Now, for the very last attack that I'm going to talk about, I love the example of using a recipe that is inherently wrong or flawed. When you're cooking for your friends for Thanksgiving, you're expecting that the recipe you're working from, the base of everything you're doing in the kitchen, is right. Everything you do is built off of this base. And if the base, the recipe itself, is incorrect, you'll end up making a monstrosity that ensures you never get asked to cook again, and maybe don't even get invited back to a Thanksgiving meal. When we think about that in terms of security, I really think it relates to supply chain compromises of open-source software. Because similarly, open-source software is the base that you are building your applications off of, and if there's something wrong with it, that can trickle down and cause problems down the line. So PyTorch, which is a very popular and widely used Python-based machine learning library, suffered an attack in December of 2022. 
It turns out that PyTorch imports some of its dependencies from the Python Package Index, or PyPI, as I was told it was called; somebody might be pulling my chain, but I think that's right. And the problem was that the PyTorch build assumed that the most popular package with a given name was the one it should include as a dependency. So an attacker uploaded a poisoned PyPI package that hid under a real dependency name, torchtriton. This Trojan version behaved exactly the same; the code was all the same, except it had additional code that would exfiltrate sensitive information to a command-and-control server. Now, this issue persisted for about five days, but thankfully never made it into the stable version of PyTorch. And it happened because of blind trust in the repository index for a dependency. There are so many layers of dependencies in modern applications, especially open-source-based ones, that there are a lot of vulnerable points, and the attacker here abused this trust relationship in order to get their own code into PyTorch. So blind trust: maybe not the best. And the impact here was that this malicious version of PyTorch was downloaded over 2,300 times in that five-day period. That is not an insignificant number, and we have yet to learn the true implications, because some of the data that was exfiltrated included credentials and access keys. And as I previously mentioned, those are great starting points for further attacks. So, takeaways. Any code that isn't totally under your project's control carries some kind of risk, and this isn't just limited to open-source software. Commercial closed-source software is also vulnerable to this. Think of SolarWinds again. So you should trust a package enough to use it only if you can verify that it's behaving how it should. And in order to do this, you need stringent security testing. You can use some kind of static analysis to determine if there's unwanted code, but these tools can be fooled. 
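Static analysis can be fooled, but one mechanical control that would have helped here is pinning dependencies by hash, so that a package which "behaves the same" but contains different bytes fails to install. pip supports this with `--require-hashes`; the sketch below just shows the underlying check, with made-up artifact bytes:

```python
import hashlib


def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded package's SHA-256 digest against the
    hash pinned in your requirements."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

A trojaned torchtriton would have produced a different digest than the one pinned at vetting time, and the install would have stopped right there.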
So what is more useful is dynamic or runtime analysis that actually looks at the behavior of the application. In this case, it could have detected the communication to the command-and-control server. And the best place to do this verification, where you want to put the security testing, is as close to the developer push as possible, because the earlier you discover an issue, the easier it is to fix. So, to wrap all of this up, some high-level trends that we can expect. Crypto mining will continue to rise in popularity. This might seem counterintuitive. Lots of cryptocurrencies are decreasing in value as we speak, right? But that doesn't change the fact that these attacks are very low effort to execute and can reap really high rewards. Again, it's a passive-income kind of structure, and I know those are buzzwords now. Everyone's trying to find some way of making passive income, but attackers have found it. So we can only expect crypto mining to get more popular, and we can expect the scale of attacks to rise. If the actual value of a crypto coin is decreasing, you need to mine more in order to make the same amount of money. And this isn't just for cryptojacking or crypto mining attacks. We're seeing that these cybercriminal groups are run almost like businesses. They're at that stage in the startup cycle where they're starting to scale up their operations, and we can expect that for all types of attacks. And I wanted to touch again on the fact that these cybercriminal groups offer an as-a-service model for the malware they create. It makes advanced attacks accessible even to those who aren't that technically savvy, and this is part of that scale I was mentioning. So we can expect these attacks to become more and more commonplace. And this one is pretty obvious, but supply chain compromises can really have devastating effects. 
And often when people talk about supply chain compromises, we hear a lot about zero-day vulnerabilities, ones that attackers know about but that haven't been disclosed publicly yet. And while those are really scary and definitely worth noting, I think it's more terrifying that publicly disclosed vulnerabilities are still being weaponized. Log4j went from announcement to being weaponized in a matter of hours, and that kind of turnaround time is just way faster than most vendors can provide a patch for. I think that's really worth noting. But instead of being all doom and gloom, there are ways to cope. One really important way that I've hammered on a lot is having real-time visibility. The reason this is so important is that you can't always detect the compromise itself, but often you can detect the signs of a compromise in your environment. When attackers have compromised an environment, they typically perform the same kinds of actions, so you should know what to look for and be notified about it in real time. Another thing I keep hammering on is least permissive. For too long, we have been like Oprah to developers: you get a wildcard, you get a wildcard, you get a wildcard. That is not a sustainable security model. You are sacrificing security for operational speed, and that compromise might come back to haunt you in the future. I really urge you to reconsider your organization's security priorities and see if you can make least permissive one of your top goals. Because if you don't, and attackers are able to enter your system through whatever method, you've basically given them a key to access whatever they want. And then this is self-explanatory, but prepare for the worst-case scenario. Part of that includes backups. 
It includes phishing training for all of your employees. Just have that mindset that the worst will happen. It may not make you the most popular person, but it will make you prepared. That's about it. Thank you so much for joining this talk. You can come by the Sysdig booth if you have any questions or want to learn more, and I guess there's a survey here you can go ahead and fill out. At this point, I'll see if there are any questions from the audience. All right, well, thank you again for joining this late session. I hope it was useful and that you all learned something. Have a great rest of your day.