Awesome. Thank you. I'm going to just stop screen sharing for a second and say it's great to be here at the Linux Foundation. I am a Linux user going back to the early 90s when you would, and I'm dating myself here, download a giant pile of floppy disks to install Slackware 0.8 on your machine. And I think one of the awesome things about the last 20 years in computing has been the rise of open source operating systems and open source languages, the ability to build things without being captured by some commercial entity who wants to just maximize revenue from you, but instead as a collaborative effort. And that has really defined, in my view, the last couple of decades of computing. A little on me. I am now the chief architect at Snyk. Snyk, S-N-Y-K, stands for "So Now You Know." We are a security company that started out as an application security company. But with the acquisition of Fugue, the company I founded along with my co-founder who's here today, Andrew Wright, we're extending into the cloud as well. So our perspective is to have a single view of security from the application all the way through the deployment, et cetera. I'm not trying to give you a sales pitch, just trying to tell you a little bit about where we're coming from. Prior to founding Fugue, I was a long time programmer and software architect, and I was the CEO and CTO at Fugue in various phases of its development. But mostly I'm interested in building secure systems from a developer perspective, not as much from the after-the-fact security perspective. We believe that most breaches, and we're going to get into some real breaches in great detail today, we're going to, as it says on the tin, deconstruct them, are very much concerned with the cloud control plane API. And so that'll be the focus today. As the Linux Foundation mentioned, we love it when there are questions along the way.
So we will have some time at the end. This session will be about 50 minutes, give or take. But if there's something you want to explore or challenge, all I've been doing for about a decade is cloud security, so if there's something you want to dig into that I'm not digging into, go ahead and throw it up there. And Drew is going to help me keep track of what you folks are looking for as well. So, all right, I'm going to go ahead and share my deck. I promise there aren't too, too many slides. We're going to spend a lot of time at the whiteboard and looking at things like DOJ filings and such for these breaches. But we do have a few slides. Okay. So the way hackers act and what hackers do is what security needs to be about in a practical world, right? In security, we often think in terms of risk management and so on, and those things are good. We need to do that. But what we're really concerned with is what hackers are actually doing. And so this session reflects our philosophy at Snyk: we look at real breaches, real hacks, and devise ways to protect against them, which is, I think, the only practical way to go about this. So there have been some huge changes due to cloud computing technologies versus the data center that have obsoleted most data center forms of security and protection. So if you look back at the old school data center, it says hardware here. Obviously there's hardware in the cloud, but you're not touching it. In the data center, you're procuring hardware, you're going through some change control board process or some other process where you're deciding which hardware to run for performance, for cost, and for security. You're doing that manually; someone needs to go and slide something into a rack, plug it in, configure it. And that means the environment is relatively static.
Because there's a ton of friction in that process of buying stuff and racking and stacking it, it has typically a three or five year recapitalization cycle. So your server named Frodo is going to be sitting out there for three years, and you're going to be trying to maintain it. And when I say you, it's typically some kind of dedicated operations department more than the developers in the data center. The developers might give requirements to them, but they are not procuring things and bringing them online in the data center. There's this high friction manual process that goes on. All right. And then there's the scale, even in really large environments. So, a little on my background again: I've split my career between national security kinds of environments and high tech companies. I was at AWS before founding Fugue with Drew, and there I was the principal solutions architect for Department of Defense and some intelligence world stuff. So I've seen some very highly secure and very large environments. And they tend to be only in the scale of thousands of components, which is small in the cloud world. That's very small. And they tend to be made of a small number of types of services. And by a service here, I mean a server or a router or a network attached storage device or a security appliance. You can typically count them; they're in the few to low dozens, right? Well, when you go to cloud, things really change. They almost invert. In cloud, the hardware configurations are driven by software. Instead of it being a scenario where you get some hardware and then deploy software, you write software that deploys hardware. And it's all API driven. It's all programmable, and done well, it is highly dynamic, meaning your footprint is going to change a lot as the system runs and evolves. You're not on a three year recap cycle. You're on a 20-second API provisioning cycle.
And what that means is that developers and DevOps are creating the actual infrastructure. And by the way, this is how most cloud breaches happen: not through operating system exploitation, although there's often a component of that, but mostly through the configuration of these cloud resources. So the scale is hundreds of thousands of components. So Fugue is now part of Snyk. Snyk acquired us in February to create this whole security vision of the entire system, this ability to see the entire system. But Fugue on its own was already managing millions of cloud resources for our customers in financial services and a whole lot of high growth, high tech companies. Millions of resources, of hundreds of types. So when you think about any given resource in the cloud, like a container or a managed relational database or a managed video transcoding service endpoint, all of those things have their own APIs. And that means all of them have their own types of exploits. And you cannot base your defense strategy, your security strategy, on TCP/IP network defense, because it's mostly useless in the cloud. There's a little to gain there, but almost nothing. Okay. And hackers have changed how they operate. We have this labeled as pre-cloud and cloud; it's hard to know if that's coincidence or causality, but it is largely coincident in time. The pre-cloud version, kind of the Hollywood hack version, is the hackers pick a target. And this still happens. A famous example is the Sony Pictures hack by the North Koreans. Sony made a movie they didn't like, and so they went in, and what hit the press is all of the executive emails they had breached, but they had gotten into just about everything in that network. So you pick a target and search for vulnerabilities. Often those vulnerabilities are humans, often executives, who are overpermissioned, and you're phishing them or doing some other kind of social engineering. And then your exfil tends to be relatively low and slow.
You're trying to go unnoticed, you're trying to remain resident in the environment. And so you might do things like exfiltrate records from a database in outbound DNS requests or similar. That is not what we see anymore for the big high-profile, CEO-gets-fired kind of breaches in the cloud. That's just not how they're happening. It's radically different. So what the hackers are doing is, first and foremost, searching for vulnerabilities on any public-facing endpoint, IP address, DNS record, et cetera. And they're doing this almost instantaneously. So by the time you put something on the internet, you've probably got just a few minutes before one group of hackers or another has noticed if you have any vulnerabilities. So the preferred methodology now is to run automation against these endpoints. And this really is enabled by the cloud, because the cloud providers, the hyperscalers, Amazon, Microsoft, Google, have networks within their clouds that are so fast and so high efficiency that this is an inexpensive thing to do in time. So the hackers are searching for vulnerabilities. And we know, for example, the Capital One breach happened this way. And I'll go into a fair amount of detail on Capital One in this session. And then the hacker gets kind of a menu of stuff that's vulnerable that they can then go try, right? Stuff that they know they can exploit, because their automation has told them there's something exploitable. And so from that menu, they'll pick a target. This is vastly more efficient than trying to do social engineering; there are no humans in the loop here. This is fully automated hacking. And then the exfiltration is just smash and grab. There is no need or want to stay resident and do slow exfiltration, because you can't tell that it's happening. And what that means is that detection is almost useless in the cloud world. It has to be prevention. All right.
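As a toy illustration of the kind of automated endpoint sweep just described (purely a sketch of the idea, not any real attacker's tooling), a few lines of Python can check a host for open TCP listeners concurrently:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    def check(port):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            return port if s.connect_ex((host, port)) == 0 else None
    with ThreadPoolExecutor(max_workers=64) as pool:
        return sorted(p for p in pool.map(check, ports) if p is not None)
```

Real internet-scale scanners are far more sophisticated, but the point stands: sweeping your public footprint costs an attacker almost nothing, which is why anything you expose gets scrutinized within minutes.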
We're actually going to go through all the named breaches here today. "Twitch suffers massive 125 gigabyte data and source code leak due to server misconfiguration." Boy, is that a misleading headline. And if you pay too much attention to these headlines, you will not be aware of where your real vulnerabilities lie, where the real risk is. "Misconfigured servers contributed to more than 200 cloud breaches." Misconfiguration can mean a lot of things. Very often in the press, the journalists like something pithy and short that lots of folks can understand. And that's understandable; their job is to disseminate what's happening in the world in a way that people can grok. But very often it hides the complexity of what's really going on and therefore how you need to defend against it. Misconfiguration can mean everything from leaving a dangerous port open, which is a very simple misconfiguration, to fundamental design and architecture flaws in the system that allow massive blast radius effects in attacks. Okay. And the industry and the press tend to focus on the former, the simple stuff, whereas the latter, the bad design and bad architecture from a security perspective, is actually how people really get harmed in these things. That next headline, "The Capital One breach is more complicated than it looks." Yes. Yes, it was. We'll go through this in some detail. "Twitch breach highlights dangers of choosing ease of access over security." It's a reasonable headline, but the specificity of what is being spoken about there is where it gets interesting, and we'll get into that. Okay. How far are we in? We're like 18 minutes in. Okay. I'm going to speed up a little bit. So, control plane compromise. Almost all cloud breaches, and I say almost all because there are probably some I'm unaware of, but every cloud breach I'm aware of at scale, follow the same pattern, which is that there's some form of initial penetration.
It might be something like a dangerously open port, or, very often, an orphaned piece of infrastructure, a virtual machine or container that people have forgotten about and that has therefore developed a CVE. API keys are the king of cloud hacks. And sometimes people put those in source code, in places where they should, I would argue, never have them. If your Git repo has API keys, you're inviting disaster. But that initial penetration is really just to get a foot in the door. Nobody cares anymore if they can pop your operating system. Nobody cares anymore if they can root a virtual machine. That doesn't really matter. What matters is getting access to those control plane APIs: the ability to do things like list S3 buckets, or issue GET commands against S3 buckets in a privileged way. And so you get from point A to point B: from "okay, I got into some compromised server," which in the case of Capital One was probably a self-managed web application firewall that had a slight misconfiguration to it, and that's all the press focused on. But the real hack happens after that, which is: from there, I need to go discover what you have that's valuable to me. And this is going to be a theme. Ninety percent of hacking is learning. It is not attacking. It's learning. It's understanding the environment. And so your goal, as anyone building a system, and I think this is at least as much on the developers as on the security team, is to deny that knowledge to the hackers. But once they do that discovery and movement, they'll find something valuable. In the case of Capital One, it was something like 700 S3 buckets. We'll look at the DOJ filing in a second here. And then typically it's smash and grab, sudden exfiltration. All right. Let's talk a little bit about what we think of at Snyk as the five fundamentals of cloud security. The first one is you have to know your environment. Okay. Just as the hackers are trying to learn what you are doing, you must know what you are doing.
And this is a non-trivial problem. It goes back to those hundreds of thousands of components. Knowing what that environment is doesn't just mean knowing a list of them. It means knowing how hackers might approach them and whether or not those things are configured in a way that is vulnerable to those kinds of breaches, particularly around limiting blast radius. I'm going to keep coming back to this. You cannot prevent initial penetration across the board. You can't. It is not possible. It is fool's gold. It will happen eventually. But what you can do is design your systems in such a way that the discovery and movement is highly limited, and that the data extraction or other kinds of exploitation are limited as well. And by the way, we have a whole series of classes we are teaching now on doing this through things like controlling for time, for what actions are available, and for what resources can be reached. But this talk is a little narrower: just deconstructing some cloud breaches. So let's go ahead and deconstruct some cloud breaches. I'm going to stop sharing while I get my screens sorted here. And have we had any questions? None yet. Go ahead and fire away when you have them. Let's see here. I'm trying to find the right screen. Here we go. We're going to switch from the deck over to my browser. Are you seeing a DOJ filing, Drew? Yep, I can see it. Cool, cool. So this is the actual United States District Court for the Western District of Washington at Seattle complaint against Paige Thompson, who was the Capital One hacker. This goes back to 2019, and it's still highly relevant. There have been no huge changes to cloud platforms that can prevent this kind of thing. The beauty of this one, and by the way, I'm going to use real breaches. I'm going to name names and use their actual content. These are some of the best companies at doing cloud security in the world.
I'm going to do Capital One, Twitch, and Imperva, and if we have time, I'll get to Uber. But certainly the first three are among the best companies at doing cloud security. So this is not an attempt at a "ha ha, gotcha" kind of thing at all. These folks are brilliant at what they do. But it's super interesting to look at real world cases versus abstractions, because the abstractions are always overly simplified and therefore nearly useless. We have to look at the real things. All right. So I'm skipping ahead a bit here. What it says to this point is that Capital One got this email telling them, hey, somebody's on social media bragging that they hacked you. And this is an interesting first point. That's how they found out. And it was weeks or even a couple of months after it happened, I believe, when they actually learned of this. And that includes, in this case, AWS, their cloud service provider. No one noticed it when it was happening. And there's a very good reason for that that we'll get into. So they get this email, and then there is a file, a collection of data, that the hacker has shown the world for bragging rights, saying, these are the things I did to do this breach. So there were four commands, according to this DOJ complaint. The first command, when executed, obtained security credentials for an account known as something-WAF-role. Okay. So the press picked up on this as meaning that a WAF, a web application firewall, was exploited in particular and that that was the cause of the breach. AWS came out and said that was a tiny piece of it. But the important point here is that what matters is the security credentials, not residency on the server. I mean, you may need that, depending on how things are configured. But once you have the credentials for this WAF role, the complaint says that in turn enabled access to certain of Capital One's folders (they mean buckets) at the cloud computing company (they mean AWS).
All right, the second command lists buckets. So the WAF role is an IAM set of credentials. If you're new to cloud: IAM, identity and access management, is the principal "network" that you're going to care about regarding access to cloud services, APIs, and data. It's not the TCP/IP network; I've never seen that be the mechanism of exploitation in the cloud. Ultimately, it's always through identity or something similar, like security group access, things like that. So those credentials are the goal, not the server. Who cares about the server? The credentials are the goal. And those credentials had the ability to do a list buckets command. At Fugue, we have a special rule that tells you if you're running compute resources in the cloud that have the ability to list storage locations, because that is incredibly dangerous. Going back to the slide on discovery and movement: if you can just do a list buckets, and in this case get 700 bucket names, imagine if you instead had to guess those or winnow them out of some other resource. It's just a really convenient way for hackers to understand the topology of your system, and you really want to prevent them doing that. So this comes back to app dev. This comes back to system design. You should not have, in your system design, the need for components of the system to list storage locations. They should know them in some other way. Okay. And then the third command here was the sync command. When executed, it used that WAF role to extract or copy data. Okay. Well, sync is part of the AWS CLI. It's not actually an API endpoint for S3, but it is part of the CLI. And what it does is first list, and then call GETs on objects. Okay. S3 is the world's largest web server in the sky. It is probably a third of the internet, something like that, at least a fourth. Trillions of objects are hosted in S3. And S3's job is to extremely efficiently store data and allow access through GETs of that data.
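The kind of rule just mentioned, flagging identities that are allowed to list storage locations, is straightforward to express in code. A minimal sketch over an IAM-style policy document held as a dict; real IAM evaluation also involves Deny statements, conditions, resource policies, and more, so treat this as schematic:

```python
from fnmatch import fnmatch

# Actions that let a principal enumerate storage: the discovery step that
# turned one set of leaked credentials into 700 bucket names.
RISKY_ACTIONS = ["s3:ListAllMyBuckets", "s3:ListBucket"]

def risky_statements(policy):
    """Return the Allow statements in an IAM-style policy document (a dict)
    that grant bucket-listing permissions."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        # IAM action strings may contain wildcards, e.g. "s3:*" or "*"
        if any(fnmatch(risky, pattern) for pattern in actions for risky in RISKY_ACTIONS):
            flagged.append(stmt)
    return flagged
```

A check like this, run against every role attached to a compute instance, catches the listing permission at design time, which matters because once an attacker holds such credentials, the subsequent GET traffic looks exactly like normal S3 usage.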
You're going to do puts and things like that too, but mostly the activity on S3 is GETs. And this explains, I think, why this was very hard to detect. Because S3's job is to host a bazillion GETs a day, however many you have coming in. And therefore, reading each object once out of S3 with GET commands is not going to stand out in a signal-to-noise kind of way. All right. We're at the bottom of the hour. I'm going to switch to the Twitch hack. So if you're not familiar, the Twitch breach was last year. And what happened is a hacker got in through what was likely an orphaned piece of infrastructure. With Imperva, we'll see, it was definitely an orphaned piece of infrastructure. But when that hacker got in, what they did was get 125 gigs out of GitHub source code repos, including source code for literally everything Twitch makes. And Twitch is part of Amazon, right? If anyone should be able to do AWS security right, it should be Twitch. And I'm not saying that to disparage Twitch. I'm saying that to let you know that you are probably more vulnerable than they were. Okay. These are deep, deep experts on this stuff. And the flaw here was really a combination of a minor misconfiguration and a design flaw, just like in Capital One. The design flaw in Capital One was the ability to list buckets. Doing the GET commands, the actual data exfil, was trivial, right? Just a pile of GETs. The important part of that breach was the ability to get the list of things to get. And similarly in the Twitch breach, the problem here is blast radius, not initial penetration. If you think you can prevent initial penetration, you are wrong. There are people who can get into your system somewhere. And so your job is really to limit the damage, to minimize the blast radius. All right. So this quote in particular that I've highlighted is one that kind of drives me nuts: "This is as bad as it could possibly be."
"How on earth did someone exfil 125 gigs of the most sensitive data imaginable without tripping a single alarm?" You cannot have good enough alarms. That is the wrong idea. If you're trying to do that, you will fail. The hackers will win. What you have to do is have a system design and architecture where you're not providing them the ability to do things like this. And what that means is really limiting access to sets of data. So I'm going to switch to a whiteboard here for a sec. I think I've got time for that. Feels like I have time for that. That's not what I want. This is what I want. Okay. Drew, are you seeing a whiteboard? I am. Yep. Okay. Cool. So over here on the left, we've got a hacker. And let's just think through what little we know about Twitch. We know a whole lot more about Capital One because there's a DOJ filing and there's a script and commands. But you've got some server; let's assume for a moment that it's an old school virtual machine. And some vulnerability has been found on it, allowing the hacker to gain access to that compute instance. But what they steal is a bunch of stuff in source code and, by the way, also a bunch of user data. So we don't have all the details on this, but how I imagine this probably happened is that somewhere there was a database that this server connected to that might have had things like user data. Famously, the press focused on, oh my God, super popular Twitch channels make millions of dollars a year. I mean, we all knew that. The nasty stuff was what they breached from the GitHub repos. So you're going to have SCM repositories, Git repos, with all that source code. And I don't believe that it was all one and the same GitHub repo that held some AWS native service source code, which we know was in there from the reporting on this. And I mean, you could just go download this 125 gigabyte archive off of 4chan, where the hacker posted it. This has got to be numerous Git repos.
So what I'm interested in is not really this part over here. That could be anything. That could be some kind of CVE, an unpatched operating system, a Log4Shell. It could be all kinds of things. And again, you have to assume that every compute instance that has seen air, no matter its composition, although VMs and containers are more vulnerable than serverless functions for sure, but anything that has seen air, you have to assume is compromised if you want to prevent these kinds of breaches. What I'm interested in is how in the world all of these different Git repos, as well as likely databases, were then accessed. In the worst case scenario, from the perspective of the Twitch decision making process, there was an IAM role attached to this that actually had access to do all of those things. I cannot think of any reason why the same identity would have access to source code repos and to production databases. It just does not make sense to me. So I'm skeptical. We don't know. But I'm skeptical that the breach was that simple. I think what we probably had here, and again, it would be nice if people would publish. I'm going to show you Imperva, who got hacked a few years ago; they published all the details, and God bless them. That's what you should do, right, to help the rest of us not get breached the same way. But my guess is that, oops, I'm drawing the wrong service, that the ability to switch IAM roles was probably part of this. And it was definitely part of the Capital One breach too. So you might have, for example, this server right here not having access to these Git repos. But if it has the ability to remap its identity to something that does have access to those Git repos, it effectively does, right? And we see this a fair amount. So what you really want to focus on is having the right kind of segmentation in the system.
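The role-switching point just made is really a graph question: what an identity can effectively reach is the transitive closure of who it can assume, not just its own direct grants. A toy sketch (all role names here are hypothetical, invented for illustration):

```python
from collections import deque

def reachable_identities(start, can_assume):
    """Given a trust graph mapping each role to the roles it may assume
    (e.g. via sts:AssumeRole), return every identity `start` can become,
    directly or transitively, via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in can_assume.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

If a hypothetical "web-server-role" can assume a "ci-role" that can read every repo, the server effectively has repo access even though its own policy never mentions repos. Segmentation has to be judged on this closure, not on the direct grants.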
And folks are used to doing that with TCP/IP networks, which, again, are really just not even going to help you here. That's not how this stuff is done. All right, we've got 10 minutes left. Let's do Imperva. So Imperva is a very good cloud security company who got breached and whose CEO was replaced as a result. There, but for luck, go any of us. So we're trying to learn from these things. Again, a good company, solid security practitioners, yet breached. So what happened? "Our investigation identified an unauthorized use of an administrative API key in one of our production AWS accounts." Okay, yet again, it's API keys. You're not going to catch these things in flight. I know I keep saying that, but I can't emphasize it enough. And you'll see in Imperva's case, they understand that. They're not making the goofy mistake of thinking they should have been able to detect this while it was happening. Instead, they understand that the right way to prevent this is building systems that don't allow for these behaviors. Okay. So how did it happen? This is the CTO of Imperva, by the way. And I think this is my case study for how to do this right if you get breached: really share information, explain things, and explain what you're doing. And that's exactly what they did. So kudos to Imperva. Essentially, they were migrating databases from self-managed databases to RDS, which is an AWS managed relational database, an extremely secure service if you do it right. But doing it right looks like nothing you've thought about in the data center, and they got burned by that. "Some key decisions made during the AWS evaluation process taken together allowed information to be exfiltrated from a database snapshot." That's honest. We did wrong things. We made bad decisions on design. And that led to database exfiltration, not from a running database, but from a snapshot of the database. Okay. So the errors, according to them, were: one, we created a database snapshot for testing.
Now, we know from this breach that it was production customer data that was in this snapshot. I've worked in national security environments where you are literally not allowed any production data outside of the production enclave, where your test data is all manufactured data. That is an unpopular thing to do because it's very expensive from both a time and a cost perspective. A lot of folks, according to one survey that Drew found, I don't remember exactly which one, something like 84% of organizations, do have production data in test. And it's not because they're lazy; it's because they're trying to catch bugs before they hit production. But they did that. They created that snapshot. Okay. Mistake number two: an internal compute instance, highlighted here, that we created was accessible from the outside world, and it contained an AWS API key. Okay. Done properly, any compute instance, through the metadata service or through a secrets manager, will have access to API keys that are valid for some period of time, based on the rotation of those things. So that's not one you can fully solve. You can try to prevent the initial penetration, but you should architect assuming that you can't. All right. Three: this compute instance was compromised and the AWS API key was stolen. Yeah. The hacker didn't care about the compute instance. They cared about the API key. Nobody cares about your operating system anymore. Nobody cares about your local disks, not in cloud. They're going after your cloud storage accounts, which you don't even have operating system access to. So yet again. And four: the AWS API key was used to access the snapshot. Not the database, the snapshot. All right. I've got five minutes. So this will probably be the last one we do. Let's switch to the whiteboard real quick. We'll just erase all this stuff except for the hacker. We still have a hacker.
And in this case, our hacker has gotten into, right, we'll call it EC2 again. It could have been a container. They got into a server that was, in this case, we know from what I'll show you in a minute, an orphaned piece of infrastructure, and therefore probably hadn't been patched in a while. This is an extremely common problem in the cloud. And from there, did they access the database, the RDS database? We don't have the chattiest crowd today. No, they didn't. They did not do this. Maybe they didn't even try, because who cares. What they did do is go for the snapshots that were being accumulated behind that database. In the data center world, it would be really weird for hackers to go hard at backup systems, because everyone has different backup systems and so on. In the cloud world, there are very consistent APIs for things like database snapshots. And database snapshots are actually stored in S3. And so you can do out-of-band, AWS back channel duplication of these snapshots into things like other AWS accounts. So again, the problem really isn't over here. This can happen. The problem is in the permissions allowing access to these snapshots. So let's look at what Imperva did to not have this happen again. Their corrective actions are excellent too. I just can't praise them enough on this. All right: "the steps we've taken since this incident to improve our security protocols include," protocols meaning stuff you have to do all the time. One, applying tighter security access controls. That can mean anything. Sounds good. Probably good. All right. Two, increasing audit of snapshot access. This is a form of misconfiguration scanning: looking for access to snapshots that is unneeded, in the test environment here. So, those of you who are thinking, well, if I run a lot of controls in production, I'll be safe:
Hackers actually tend to prefer dev and test if you have real data there, because you probably have fewer controls on it. So don't think that is going to help you too much. All right. But good on them for increasing that. This should be happening 24/7. You should be aware of potential misconfigurations all the time. All right. Three, decommissioning inactive compute instances. This goes back to orphaned infrastructure. Hundreds of thousands of resources; Fugue manages millions and millions of resources for our customers. It's pretty easy to forget things. When you can create a global network in a minute by executing a CloudFormation template or a Terraform template, it's easy to do that. And therefore, it's easy for those things to accumulate: compute instances and databases and whatnot. And hackers love those things, because you've forgotten about them. And remember, again, you can't fully prevent the initial penetration. This is part of why. You won't be able to. But what you can do is make it not that important if they do get in. Okay. So good for them. They were looking for their orphaned infrastructure. We've been doing that for our customers for many years. Okay. Four, rotating credentials and strengthening our credential management process. Your number one friend in preventing hacks using API keys is frequent key rotation. It won't prevent everything. But time is on their side, not yours. You want the shortest lived stuff in the cloud, whether that's compute instances or containers or API keys, that you can come up with. All right. So good for them. Credential rotation is probably the right place to start in terms of really scrutinizing your patterns, your design patterns. We won't get to it today, but in the Uber breach, that stuff was like three-year-old API keys that still had root on production. Don't do that stuff. Okay. Five, they put stuff behind a VPN. All right. That's cool.
And then increasing the frequency of infrastructure scanning. That should be, again, 24-7. So we're up on time now. I'm going to really quickly flip through a few more slides, and then we'll get to questions if we have any. Okay. Drew made all these nice slides. I should use them. So we think that the five fundamentals are to focus on prevention and secure design. That's a theme I've been kind of hammering on through this whole thing. All of these breaches were due to flaws in design, in development, not to lack of security monitoring. Okay. And to do that, you've got to empower ... Oh, sorry. I started on the wrong one. First is know your environment. I already covered that, so I kind of jumped to the next one. So focus on prevention and secure design. This stuff has to be baked in through the entire SDLC. And this is what we do for a living, and have for years, at Fugue, now Snyk: tell you where you are making these kinds of design errors so that you can fix them. But it has to be all the way through the SDLC. The way you do that is by empowering developers with tooling, not with meetings, not with education. You cannot train people out of this problem. You have to provide them automation and tooling that tells them where things have gotten into any kind of trouble. And this is all glued together with policy as code. I mentioned earlier that hackers are going after things in a completely automated way. There is no security by obscurity anymore. If you have an internet-facing endpoint, it's being scrutinized. And therefore, your security policies, which are where you embed the knowledge of these dangerous design patterns, right? That has to be fully automated too. Or they will just beat you on time. So education, it's kind of like... I'm a big gearhead. I love cars. Cars have gotten much, much safer, not because we've told people drive better. That's like a pretty weak sauce, right? Like drive better.
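Policy as code just means the dangerous design pattern is expressed as an executable check rather than as prose in a wiki. In practice this is done with an engine like Open Policy Agent; the Python sketch below is only an illustrative stand-in, run against a made-up, Terraform-style resource list, to show the shape of the idea.

```python
# Illustrative policy-as-code check (a stand-in for a real OPA/Rego policy):
# deny any aws_s3_bucket resource whose ACL grants public access.

PUBLIC_ACLS = {"public-read", "public-read-write"}

def evaluate_policy(resources):
    """Return a list of human-readable policy violations."""
    violations = []
    for r in resources:
        if r["type"] == "aws_s3_bucket" and r.get("acl") in PUBLIC_ACLS:
            violations.append(f"{r['name']}: S3 bucket ACL '{r['acl']}' is public")
    return violations

# Example resources as they might appear in parsed IaC.
resources = [
    {"type": "aws_s3_bucket", "name": "logs", "acl": "private"},
    {"type": "aws_s3_bucket", "name": "assets", "acl": "public-read"},
]
print(evaluate_policy(resources))
```

Because it is code, the same check can run in CI against a template before deployment and against the running infrastructure afterward, which is the whole point.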
No, we've made cars safer through the design of the cars. And that is how this stuff needs to work. And policy as code is really your tool for that. And then you do want measurement. You do want some kind of quantifiable, objective measurement of what matters to your organization, so that you can keep yourself honest and so that you can also explain why the investment in doing these things is worthwhile. So I'm going to throw this screen up. We have some resources. Drew, any questions, comments? No questions yet. By all means, do pop any questions into the chat. I wanted to see if you might take a few minutes to talk about the role of a security architect in the cloud. Because what we see a lot of times are security professionals coming over to the cloud and kind of wearing similar hats that they might have had in the data center, whereas really in the cloud, it's kind of an architect role when it comes to security and being able to prevent this. Can you shed some light on that role? Yeah, I mean, the role of security has to change because developers are controlling what is out there in the infrastructure. So what that means is the security team has to become more design and architecture focused. What constitutes a dangerous application design is what matters most now in security. And what that means is the security practitioner needs to be able to speak that language, and that language is best expressed as code, as policy as code. Because policy as code can run on anything from a Terraform or CloudFormation template pre-deployment to the running infrastructure. If you're a security practitioner, this is a huge opportunity to move out of largely monitoring and trying to create defense in depth, and toward having a real voice in the creation of systems that add value to the organization. And that's the future of successful security practitioners in cloud. Awesome. Yeah, so we have an interesting question here.
What are your top five secure design in cloud recommendations, in order of difficulty to implement? So let's maybe start with the easiest. You know, rotating keys might be that easy, low-hanging fruit. Yeah, exactly. So we're actually in the midst of, we just did the first of a series of, I think, four classes on a taxonomy that breaks this down. Instead of five, there are three kind of layers to this taxonomy. The first one is time. That is, limiting time. So I had the privilege to write one of the earlier, I think the first, O'Reilly book on immutable infrastructure, a little small book. Time is the hacker's friend. And so anywhere you can make things really temporary, that's good. But the other two dimensions you really need to care about. And I know this isn't a direct response to that question. I'll try to put in a couple at the end. But this is how I think you should really try to think about it. The other two big areas to think about are the resources you have and the constraints on those resources, and understanding the graph of identities and actions and resources. It's non-trivial. I can't break it down to like, here's a list of five things everybody does wrong. If you look at the hacks I just described, all of them did something subtly differently wrong. And that's the challenge, right? It is a very complex topic. So sorry, I couldn't boil it down to five. We've got one comment here. The way I see it, security should be the responsibility of all teams in their own context, and the security team's role is doing research and supplying information to others for them to implement. I agree with that, but I would add to it that I think the security team should be supplying code.
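The "graph of identities and actions and resources" can be made concrete with a toy reachability check. This is a hypothetical sketch, with made-up identities and permissions: given grant edges of the form (identity, action, resource), it asks what an identity can ultimately touch, including through roles it can assume.

```python
# Toy identity/action/resource graph: what can an identity ultimately reach?
# Edges are (identity, action, target); an sts:AssumeRole edge lets the
# traversal continue through the assumed role's own grants.

def reachable(edges, identity):
    """Return the set of (action, target) pairs reachable from `identity`."""
    grants = {}
    for ident, action, target in edges:
        grants.setdefault(ident, []).append((action, target))
    seen, frontier, results = set(), [identity], set()
    while frontier:
        ident = frontier.pop()
        if ident in seen:
            continue
        seen.add(ident)
        for action, target in grants.get(ident, []):
            results.add((action, target))
            if action == "sts:AssumeRole":
                frontier.append(target)
    return results

# A dev user who can assume an admin role can therefore copy prod snapshots.
edges = [
    ("dev-user", "sts:AssumeRole", "admin-role"),
    ("admin-role", "rds:CopyDBSnapshot", "prod-snapshots"),
]
print(reachable(edges, "dev-user"))
```

Even this toy version shows why the graph matters: the dangerous grant is two hops away from the identity you would audit first.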
That we now have the ability, with policy as code, and we at Snyk use Open Policy Agent for this, to take that knowledge and, rather than putting it into English or whatever your local language is, putting it into something that's executable, because now that can be fully automated all the way through the software development lifecycle. But I agree. Security needs to be kind of a center of knowledge; the way that knowledge gets disseminated is what's fundamentally changing right now because of all of the automation and the complexity in the cloud. So Simon here asks: in terms of monitoring abuse of API calls, do you have any use cases that enterprises should be thinking about? I have yet to see a scenario, when I was at AWS or since founding Fugue, where it was possible to prevent damage by noticing abusive API calls. That's kind of the fundamental hard thing in cloud computing from a security perspective: once they're hitting the API, it's over, right? It's already done. It's in the past. So this is why I was saying, it's good to monitor your APIs, of course, like have security in whatever places you can insert it, but understand that the 700 buckets worth of data in S3, if you had been looking for abuse of API calls, S3's job is to field GETs all day, and the data would have been gone. So you really have to solve this stuff architecturally. Next question here. You mentioned about getting the architecture right. Is there a community or trove of resources which are publicly available under this topic? So one thing that we publish as full open source, and again, I'm a big fan of having running code as the way to express these ideas, things that you can actually use rather than just descriptions. If you go to the Fugue on GitHub, Andrew, maybe you can post the URL for Regula.
If you're using Terraform or other IaC, we publish a very thorough open source project that will check for hundreds of these kinds of architectural mistakes and misconfigurations and design problems. There you go. GitHub.com slash Fugue slash Regula. So that's a place. Mostly, honestly, what I see out there is a lot of naivety and marketing stuff, like security theater. I think looking at real breaches will really open your eyes to what hackers actually do out there. Okay, we're right at the top of the hour. We're like 10 minutes long. So I need to turn it back over to the Linux Foundation. Thank you all for your time and your questions. And I'm sorry, folks, that I ramble. There's no problem at all. So thank you so much, Josh and Drew, for your time today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you join us for future webinars. Have a wonderful day. Thanks, everybody.