This is Cloud Security Orienteering. Thanks everyone for sticking with us. I know we're right towards the tail end of what's been a really exciting conference, and I'm actually going to end up referencing things from just the last day or two as we go through these slides. I'm Rami McCarthy. I'm a staff security engineer at Cedar, which is a health tech startup. I'm what I like to call a reformed security consultant, formerly with NCC Group. I have a couple certifications in cloud security, I was a core contributor to Scout Suite, which is a multi-cloud auditing tool, and I released sadcloud, an open-source tool that uses Terraform to stand up intentionally vulnerable AWS infrastructure. We're going to dive right in here to the background. What is orienteering? I don't know if anyone googled it after it was in the talk title, but in the general sense it's an outdoor sport where you use a map and a compass to navigate from point to point, at speed, through diverse and normally unfamiliar terrain. In my background as a consultant, I did a lot of AWS security assessments and a lot of GCP and Azure assessments. In some cases these were sprawling multi-account environments, and in other cases it was one account or one project or subscription that had a ton of resources intermingled. I was being paid to do a security assessment, so I really needed to establish this methodology for orienteering: for finding my bearings and finding my way from point to point, at speed, in unfamiliar terrain. Now, understandably, most of you are likely not security consultants. I am not a security consultant anymore. How is this relevant to you? Well, when you start a new job or join a new team as a cloud security engineer, this is going to be exactly the sort of work you have to do to get your bearings. If you are a consultant or decide to do some consulting engagements, similarly this will come into play. A merger or acquisition is going to be an opportunity for you to very quickly understand the current state of an environment. 
Then you're going to want to see your own environment with fresh eyes. So hopefully there's something here that's applicable even if you feel you have a fairly good handle on your own company or organization or account. Before we can talk about security or orienteering at all, we need to look at how so many environments get into the state they're in, and really that comes down to cloud adoption patterns. Securosis, which is a really incredible information security advisory firm, has broken out these four common patterns of cloud adoption, and I recommend their whole blog post. It's really enlightening, but here's a short synthesis. There are the four patterns: developer-led, data center transformation, snap migration, and native new build. There's a trend you can see here, which is that in three of the four cases, security is either late or trailing implementation, and that results in higher, variable risk. Really, because of the nature of how the cloud is adopted in businesses, we're inherently going to have these heterogeneous, risky environments, and that's why it's important to be able to discover what the risks are and also know how to prioritize them appropriately. So you walk into a cloud environment, and unless you're very, very lucky, this is a pretty fair representation of what it's going to be like. There are a lot of reasons for this we'll go into, but basically: the cloud's a new concept in the grand scheme of things, the standards and best practices are elastic and evolving, and security trails behind cloud adoption. So you end up with these environments where there's a lot wrong, and as security practitioners we need to know how to handle that appropriately. So if we're going to talk about the current state of an environment, if we're going to criticize it, we need a clear and well-founded definition of our target state. What does good look like in these cloud environments? 
And this can be hard in cloud architecture for a few reasons. Like we've said, the standards are emergent, there's a high complexity ceiling, and there's a ton of configuration and services. In AWS it's 200-plus services, 150 of which have security documentation at all, and this is true across all the cloud service providers. So, quick programming note: from here on out I'm going to use AWS for all of the examples. We're going to do our best to talk about principles here, not any specific cloud's defaults, and nothing that shouldn't be applicable to other cloud providers, whether you use GCP or Azure, or even for the, like, one person in the audience who has a workload in Oracle. So what does good look like in AWS? What comes to mind when I say AWS architecture security best practices? Maybe for some of you it's the AWS Well-Architected Framework, which has a distinct security pillar. For others, maybe it's the AWS Security Reference Architecture, the prescriptive guidance they've released as part of their solutions architecture that comes with CloudFormation to set up a perfect environment for you. For newer folks and newer teams, maybe it's Control Tower. I haven't used it a lot, but it's a prescriptive offering from AWS that will set up, as it says on the slide, secure, compliant, multi-account AWS environments. What I'm getting at here is that in the last 10 years, all of the cloud providers have been rapidly improving and iterating on their recommendations. They've been building new features and new services for security, for governance, for compliance, for orchestration. And you're gonna see permanent architectural evidence, as you go through a cloud environment, of where an organization started in its cloud security journey. 
Take, for example, account management in AWS: AWS Organizations, which is now a standard recommendation and expectation for most organizations, wasn't even around until 2017. So it's super common, if you walk into an organization that's been around any longer than that, and most have, that there are gonna be accounts that either had to be shifted into AWS Organizations or are orphaned accounts; very commonly they just predate the construct. As another example, we can break down the AWS security pillar whitepaper. This is directly inspired by Scott Piper, who back in 2018 did a diff of the update to the whitepaper. At the time it was super interesting, because AWS recommended GuardDuty comprehensively for the first time, as well as Shield, WAF and Firewall Manager. They used to recommend Elasticsearch, EMR and Athena for searching and analyzing logs, and moved to just Athena. CloudFormation, and therefore infrastructure as code, took a key place. And Macie, this was at the time Macie was being criticized for its pricing model and its utility, was fully removed from their best practices. I've gone and done the same thing for 2018 to 2020, which is the last year the whitepaper was released; following that they moved to HTML documentation. As you can see, a lot changed in just those couple of years. AWS Organizations, as we talked about, had really matured in that time. Federated identity was now called out as a best practice. AWS Security Hub and Config were a core part of the guidance, as were movements towards automated remediation. SCPs were coming in as part of AWS Organizations, and they significantly expanded incident response guidance. So when we talk about best practices, you have to remember that at the time an organization configured its workloads in the cloud, the best practices and even the available tools may have looked very different than they do now. So what does good look like in AWS? 
I said there's not one answer, I said it's been changing, but for this talk, for reference, I'm gonna be mapping roughly to three standards. When we talk about best-practice security configuration, we're gonna be referring roughly to the CIS AWS Benchmark; for architecture, to the Well-Architected Framework; and for cloud security maturity, to Scott Piper's AWS Security Maturity Roadmap, which is phenomenal. So you have the rough guideline here. Let's talk about orienteering, right? We have a few standards we can look at for best practices. We understand that cloud adoption is elastic, that over time these expectations have changed, and that this is why we have these heterogeneous accounts. How do you actually find resources and know what's going on? First, some assumptions. This guidance is gonna work assuming you have cooperative help. You need to be partnering with DevOps, with developers, with operations teams, sysadmins and leadership to gather information. I'm assuming companies had good intentions and are doing some of the basic things right, but that this is the first concerted effort at security architecture. I'm not talking about multi-cloud. This is not an indictment, there are plenty of those floating around, but you're on your own if you wanna figure this out for multi-cloud. Certainly apply it to each cloud, but then you have to worry about the matrices. And we're gonna assume that you have access. I'm not gonna talk about how to break into an account that you know exists and your company owns; there are a lot of good blog posts out there that give you guidelines on how to regain access to an AWS account. And finally, it's important to be aware that when you're doing this sort of discovery work, you often will stumble upon some really scary things, and they can turn into incident response. We're not gonna cover that situation here. 
There's no expectation here of active or prior compromise, and cloud incident response is its own whole genre. So, principles of orienteering: breadth, then depth. This is something I see junior analysts and security practitioners falling into really commonly: they'll see something interesting, and part of the curiosity and discovery of security is folks who really know how to dig in. That'll get you in trouble when your goal here is to orient yourself. Before you can dig in anywhere, you have to figure out where the most important places to do that are. And therefore you need to strive for breadth, which means avoiding rabbit holes. Take a note, come back to it. When you start, you're always gonna wanna get a good idea of the overarching situation. Anomaly detection is the name of the game here, and I just mean pattern recognition. This isn't ML, this isn't automated. Some of this is gut and experience; some of it is getting used to the norms of a specific environment. Make sure you look at every region, every project, every account, and you should start to get a feel for what has drifted furthest from the norms of even that environment. For example, if they have everything running in us-east-1 and you see a resource running off in Bahrain, take a note and follow up on it. Another gap I see: take advantage of your inside-out access, meaning leverage the access you have to query and enumerate efficiently. Don't try and emulate an adversary. If you have authenticated access to APIs, whether that's a security-audit or read-only role, use it. But also remember to go outside-in. This is the only way to catch the unknown unknowns in that quadrant; credentialed access only works for things that you know about. Outside-in, and there are a lot of talks, for example Felipe and Kavisha's talks earlier in this conference, about how to find unknown resources. 
Outside-in discovery is gonna look like DNS brute forcing or certificate transparency logs, using tools like Censys and Shodan. Take advantage of both approaches: get easy coverage of the things you already have access to, but don't forget to look for things that aren't documented or known. And then there's what I like to call corporate archeology. Really, what you're doing is digging into the history and artifacts an organization itself builds and creates in the process of deploying these environments, and using that to understand where risk may lie. For corporate archeology there are a lot of data sources. If you're really lucky, there's a detailed asset inventory, although often it's incomplete or doesn't exist. If you're a little lucky, they've adopted infrastructure or configuration as code with tools like Terraform, or Chef, Ansible, Puppet, and that gives a definitive source for cloud configuration. Most organizations, especially those with a mature compliance or governance function, should have data classification and designation of scope. For example, even if they don't have a good handle on their whole estate, they should know really well where their PCI environment is. And all organizations should have subject matter experts available, as we talked about earlier, hoping for cooperation here, who can pass on tribal knowledge. Much of the knowledge in organizations I've audited is tribal. If you're lucky, there's standardized tagging. If you haven't rolled out tagging in your environment, check out Yor, which automates it. Tagging can let you do things like query for all the resources with a certain data classification level, or group resources by workload, or find business owners, and that's really valuable data to use as we go through this process. And finally, I'm gonna return to this idea of cloud security tools, but just remember: when you're orienting yourself in a new environment, take advantage of what they already have. 
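To make the tagging point concrete, here's a minimal sketch of those tag queries in Python. The inventory shape, tag keys and values here are all made up for illustration; real data would come from your cloud APIs or tagging tooling.

```python
from collections import defaultdict

def by_tag(resources, key, value=None):
    """Return resources carrying a tag, optionally with a specific value."""
    hits = []
    for r in resources:
        tags = r.get("tags", {})
        if key in tags and (value is None or tags[key] == value):
            hits.append(r)
    return hits

def group_by_tag(resources, key, default="untagged"):
    """Bucket resource IDs by a tag value, e.g. business owner."""
    groups = defaultdict(list)
    for r in resources:
        groups[r.get("tags", {}).get(key, default)].append(r["id"])
    return dict(groups)

# Hypothetical flattened inventory with classification and owner tags.
fleet = [
    {"id": "db-1", "tags": {"classification": "phi", "owner": "platform"}},
    {"id": "web-1", "tags": {"classification": "public", "owner": "growth"}},
    {"id": "legacy-9", "tags": {}},
]
phi = by_tag(fleet, "classification", "phi")   # the crown-jewel data stores
owners = group_by_tag(fleet, "owner")          # who to talk to about what
```

Note how the untagged resource surfaces on its own: gaps in tagging are themselves a useful finding while orienteering.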
If the organization is already paying for vendor tools, maybe a CSPM, a cloud security posture management tool, use that, don't ignore it. And we'll talk about how later. Eventually you're gonna need to find, develop, or have someone build for you architecture diagrams and documentation of workloads. There's only so much reverse engineering you can do in this process, so start the ask early and flag that down the line you're gonna have detailed questions, so teams can figure out what their priorities are there. You also need to get to know the crown jewels, and this is very organizationally dependent and people overlook it. In organizations like mine, where we're heavily regulated, we know what our major risks are. Those are around protected health information, PHI, because we're a health tech startup, and PCI data, cardholder data. In organizations that maybe aren't regulated or don't have those compliance requirements, this can be a lot fuzzier. So go and ask. Ask the business: what is the riskiest data? What would be worst if it were compromised? If it's an ad tech startup, advertising technology, is it their ML models? Is it the user data? Is it the data they've bought from vendors, where there are strong contractual requirements? You can't know, so you have to ask, discover that, and use it to guide your prioritization later. You also need to find what the intended authentication and identity approach is. There may not be a standard one, but knowing whether it's normal for folks to have their own IAM users in each account, whether they're using roles and cross-account assumption with an identity account, et cetera, is gonna be really important down the line as we start to draw everything together. And there's a hierarchy in cloud environments that you should use to your benefit in discovery. We start at the bottom level with the broadest grouping, which is collections of accounts or environments. In AWS, this would be AWS Organizations. 
Then at the environment level, you have the point at which there's a default, enforced boundary from the cloud service provider. This would be an AWS account, which by default is completely encapsulated. Next you have workloads, and then regions. Workloads can span multiple regions, and these two can actually be flipped, because of course you can also have multiple workloads in a region. Then services, and then the resources used to implement those services. And this is hierarchical, but it's not an ordered guideline, right? We're not gonna start with collections of environments and then work our way through in order. Instead, once you find, for example, a workload, you're gonna wanna trace it in both directions: you're gonna wanna find out what environment it lives in, and whether that's affiliated with an organization, and you're also gonna wanna trace it down to the individual resources. So this model is important for traversing the estate and building up a knowledge base, but it's not necessarily directly followed for orienteering. So, getting into some of the more technical details: how do we actually do this? Discovering your environments and your accounts is generally the first step, as they're the largest unit you can easily discover. Finding AWS organizations is not necessarily trivial. For finding AWS accounts there are a lot of ways to do it; Scott Piper over at Summit Route has written a whole blog post that I'm just gonna reference directly. You can work with your technical account manager, your finance team, search company emails, DNS logs and network logs, and also put out a request to employees. I actually wanna dig in more on this last note. When we're talking about security, and cloud security, there's a technical aspect, but there's also a human aspect, and security is all about evangelism to your organization. So how do you reach out to employees, right? 
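One cheap trick behind several of those sources — company emails, DNS logs, network logs — is just grepping exports for 12-digit strings, since AWS account IDs are 12-digit numbers. A hedged sketch (expect false positives like phone numbers, so treat hits as candidates to verify):

```python
import re

# AWS account IDs are 12-digit numbers. The lookarounds stop us from
# matching a 12-digit slice out of a longer run of digits.
ACCOUNT_ID = re.compile(r"(?<!\d)\d{12}(?!\d)")

def find_candidate_account_ids(text):
    """Return sorted, de-duplicated candidate account IDs from a text dump."""
    return sorted(set(ACCOUNT_ID.findall(text)))

# Illustrative log excerpt; the IDs and ARN are made up.
log_excerpt = """
2019-03-02 billing alert for 123456789012 (prod)
arn:aws:iam::999988887777:role/deploy assumed from CI
ticket #4521 closed
"""
candidates = find_candidate_account_ids(log_excerpt)
```

Run across mail archives or ticket exports, this surfaces account IDs people mentioned years ago and then forgot about.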
To reach employees, find them where they are. If your company uses Slack, just put out a message and say, can everyone please let me know what cloud accounts they're using? But you also have to incentivize things. There are a lot of ways you can incentivize people to bring shadow-IT accounts under centralized management. That can be opening up a new policy where you allow folks to expense development environments, which will come with some cost, but then guarantees, or almost guarantees, folks will bother telling you once they set one up. You can also offer default profiles for these accounts, offer to stand them up and manage them, offer to be responsible for maintenance and stability. And when I say you offer, I mean as a security team, or as a security team partnering with an IT or DevOps org. Once you go through that process and have a starter set of accounts, you're gonna wanna take a pass at discovering workloads. You can do this from a technical perspective, but I find the best place to start is actually documentation. Work backwards: find where the company likes to keep records. Is it GitHub or GitLab? Is there a Confluence or wiki you can look at? Start reading about what their products are, what their services are, what tools they use, how they're designed and where they live. Inside the accounts, you can also look at the monthly billing report. Any spikes are important to dig into, but the monthly billing report in a cloud account can also be reverse engineered to say a lot about the architecture. For example, lift-and-shift migration models generally result in huge usage of EC2 instances to replicate on-prem servers, and so that's a really valuable piece of data, on how the company works and has designed things, that you can learn just from looking at billing. Similarly, how much adoption there is of various managed services, like container services. 
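Both billing signals — month-over-month spikes, and how EC2-heavy the service mix is — reduce to a few lines once you've exported the numbers. This is an illustrative sketch with made-up figures, not any billing API's real output format:

```python
def flag_spikes(monthly_totals, ratio=1.5):
    """Flag months whose spend jumped more than `ratio`x over the
    previous month -- each spike is worth digging into."""
    spikes = []
    for (_, prev), (month, cur) in zip(monthly_totals, monthly_totals[1:]):
        if prev > 0 and cur / prev > ratio:
            spikes.append(month)
    return spikes

def service_share(bill, service):
    """Fraction of total spend attributable to one service."""
    return bill.get(service, 0) / sum(bill.values())

# Hypothetical exports: (month, total spend) and spend per service.
months = [("2021-01", 1000), ("2021-02", 1100), ("2021-03", 2400)]
bill = {"EC2": 7000, "RDS": 1500, "S3": 500, "ECS": 1000}

spikes = flag_spikes(months)
# A bill dominated by EC2 is one rough smell of a lift-and-shift estate.
ec2_heavy = service_share(bill, "EC2") > 0.5
```

The 1.5x ratio and 50% threshold are arbitrary starting points; the point is to have a repeatable first pass rather than eyeballing a spreadsheet.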
And Corey Quinn, who has a well-known startup doing cloud billing optimization, has a whole blog post discussing how you can tie bills to architecture, which goes into this in much more depth. Infrastructure as code is specifically useful for workload discovery, because best practices for writing Terraform or similar infrastructure as code often have you group the actual code by workload. That mapping helps: if you look at a cloud account and you see an EC2 instance, and an RDS database backing it, and maybe a firewall, maybe a CDN, those resources have to be traced together by hand, whereas in infrastructure as code they'll probably all sit alongside each other. And I wanna flag that when you're looking at the documentation, you get a second chance to find accounts, based on novel workloads. There may be somewhere deep in the documentation from 2017 where someone says, oh, we stood up this service for a single client, or we have a beta running, and you don't recognize the account ID, and that gives you another chance to find these things. So now you have an idea of accounts, you can work backwards to organizations, and you have an idea of workloads, roughly what sort of things the company is using the cloud for. Next, you should discover resources. And I can't overemphasize this: you cannot manually discover all the resources in an organization's cloud estate of any reasonable size. I've done it. I've literally gone through Zoom calls, over the shoulder with clients, trying to navigate their AWS console to find resources and do a security audit. It's slow, it's painful, and you will miss things. It doesn't scale; at best it scales linearly. Automation is key, and there are two paths for automation. Like I said earlier, we'll talk about vendors, and here's where we do it. The company may already have a cloud security vendor tool in place. Cloud security posture management is gaining wide adoption, and there are a lot of really cool tools in this space. 
Maybe they've also just adopted native cloud service provider security services, like AWS Security Hub and AWS Config. You can use these tools when you're orienting yourself. There's no reason you shouldn't look at this existing data, but there are a few caveats. Watch out for exceptions, and actually dig into why certain things are excluded from scans. Oftentimes that's because they were firing too many alerts, which can be false positives, but it also means you could now have false negatives. Also check the configuration. If you've gone through the process to this point, you've probably found some accounts the organization didn't know about, some unknown unknowns, and it's unlikely those are onboarded to their tooling. Your other option here is to run a tool yourself, and it's almost always gonna be open source if the company doesn't already have something they're paying for. Why is this? Well, procurement is hard. Anyone who's worked at a big company knows it can take months to get sign-off to bring in a new tool, and when you're trying to orient yourself you don't wanna wait for those maturity processes to roll through. So we turn to open source, and there are so many people writing so many great open source tools; this is just a subset. My former employer NCC Group (disclosure) has a tool called aws-inventory that tries to identify every resource in a specific account. I also personally really like to use the auditing tools just for resource discovery, because as a predicate to actually running configuration checks they have to discover the resources, and they often have good reporting on vulnerabilities, workloads and how services communicate. Steampipe, Prowler and Scout Suite are all targeted at misconfigurations, but are really strong examples of tools you can use. And native services are really good here too. I wanna specifically call out that GCP has some great asset inventory tools that work out of the box, which you should leverage. 
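Once you have an inventory dump out of any of these tools, simple anomaly checks, like the earlier everything-in-us-east-1-but-one-resource-in-Bahrain example, are a few lines. The flattened resource shape here is an assumption for illustration, not any specific tool's output:

```python
from collections import Counter

def flag_region_outliers(resources, threshold=0.05):
    """Flag resources running in regions that hold less than
    `threshold` of the fleet -- a cheap pattern-recognition heuristic."""
    counts = Counter(r["region"] for r in resources)
    total = sum(counts.values())
    rare = {region for region, n in counts.items() if n / total < threshold}
    return [r for r in resources if r["region"] in rare]

# Illustrative inventory: everything in us-east-1 except one stray instance.
inventory = [{"id": f"i-{n:03}", "region": "us-east-1"} for n in range(30)]
inventory.append({"id": "i-stray", "region": "me-south-1"})  # Bahrain

outliers = flag_region_outliers(inventory)
```

The same shape of check works for other norms of an environment: rare instance types, rare VPCs, rare naming patterns. Anything off the beaten path gets a note for follow-up.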
And for the most part, asset inventory tools are relatively inexpensive, but check before you turn them on so you don't get fired. So that's the really quick way we walk through and find resources, services and workloads, right? But then we have to actually dig in and talk about prioritization. You have more data than you know what to do with, you have a decent mental model of the organization's cloud estate, and the real thrust here is: how do we go from that to actual guidance? How do we decide what's important? If you try and fix everything, you're not gonna get anything done, and you're not prioritizing your time appropriately. Someone in engineering leadership once described this to me as: in most companies everything's on fire all the time, and the important thing is figuring out what can be let burn. If you're not carrying any risk, your company's probably being too conservative, and that can impede speed of growth. So what's important in the cloud? We need to know what's important in order to prioritize. Identity is the new perimeter, but the network perimeter actually hasn't gone away in many cases, especially in these lift-and-shift adoption patterns or other non-native patterns. So we need to look at the realistic threats here. I like the DisruptOps breakdown of cloud kill chains, and I'm just gonna speed through this part, but we can look at these threats based on three features: whether they're actually the root of initial access, whether they're cloud specific, and what the impact is. We're gonna immediately drop anything that's not high impact; anything that's not cloud specific probably isn't solely your job to cover as part of this process; and if it's not the source of initial access, if it requires another vulnerability, exploit or compromise, it's just not a day-one priority. And here's the second set: network attacks, novel cloud attacks, data exposure and exfiltration. 
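That three-feature triage is literally just a filter. The threat list below is illustrative, not the actual DisruptOps taxonomy:

```python
def day_one_priorities(threats):
    """Keep only threats that are high impact, cloud specific, and a
    realistic source of initial access -- everything else gets noted
    and deferred, per the triage above."""
    return [
        t["name"] for t in threats
        if t["impact"] == "high" and t["cloud_specific"] and t["initial_access"]
    ]

# Hypothetical threat entries scored on the three features.
threats = [
    {"name": "exposed S3 bucket", "initial_access": True,
     "cloud_specific": True, "impact": "high"},
    {"name": "leaked access key", "initial_access": True,
     "cloud_specific": True, "impact": "high"},
    {"name": "container escape", "initial_access": False,
     "cloud_specific": False, "impact": "high"},
    {"name": "noisy crypto-mining", "initial_access": False,
     "cloud_specific": True, "impact": "low"},
]
```

The value isn't the code, it's the discipline: writing the features down per threat forces you to justify why something does or doesn't make the day-one list.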
If we flip back to those kill chains, we'll see almost all of them root in either the network perimeter or resource exposure, which is an identity thing in the cloud, like S3 buckets allowing any identity to read them. We can also look at this based on publicly disclosed breaches, and I did a whole talk on this, so I'm self-citing here, but you can see that those risks align to the kill chains we flagged, and align to this idea of the identity and network perimeters: credential abuse, exposed resources and application vulnerabilities all show up here as well, which gives us an indication we're on the right path. So, environments and collections of environments. You need to figure out what your target state is gonna be on that front as you go through this process, so you can make fixes that support the target state. Are there accounts you shouldn't bother digging into and fixing because they're not used and can be deprecated? Who is the business owner you need to talk to before making potentially breaking changes? Is there even a need for multiple organizations? Are you gonna start onboarding things? Then look at your workloads, and check the oldest ones first. They're the most likely to have problems, because no one's touched them in a while, and they often predate certain controls and configurations, which most cloud service providers don't apply backwards by default. And then we get to the identity perimeter. A lot of people have talked about IAM hardening, so I don't wanna spend too much time on it, though we're gonna restate a lot of those things. What's specifically interesting for orienteering is the management plane access model: how are users getting access to roles in this cloud? Is it SSO? Are they using IAM users? Is it tied to directory users? Are they federated? Is there an identity account and cross-account roles? Are they accessing each account directly? Do they have access to all roles with the same permissions? 
You need to start understanding this access model, because it will give you a good idea of what's deviant, and also of the level of maturity. If they're using raw IAM users at this point, you can suspect there may be a need for a maturity push on the identity front. Similarly, the SSH and server access model. Ideally there will be one standard here, but often there are a few. Are they using bastion hosts? Are they directly SSHing into cloud servers, which means those have to be internet exposed? What sort of credentials are they using? Are they using SSM or other cloud-native services to access these? Are they buying some sort of zero-trust tooling? This will tell you what is a misconfiguration and what is an architectural decision. If an organization says, we directly SSH into servers, they therefore need to expose those servers. Maybe there's a VPN, maybe there's a bastion host, but then you know that port 22 open to the world maybe isn't quite as concerning as it would be otherwise. Still something to mature, but not necessarily a major exposure out of the gate. And then we get to IAM. There are a few things that are actually urgent. Cleaning up the root user is essential. Understanding cross-account trusts, with a tool like CloudMapper, will let you know if there's anything potentially nefarious, or presenting a lot of risk, in what sort of vendors or partners have access to your accounts. And the IAM credential report is a great tool to find unused users or unused roles. So, the identity perimeter and how we actually find this stuff. Like I said, the credential report is great. Trusted Advisor, Security Hub and AWS Config all have IAM findings; Trusted Advisor has some included in the free tier. And then open source tools: a couple I wanna call out that will help you prioritize by flagging the highest risks. 
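As a flavor of mining the credential report: its content is CSV (base64-encoded in the raw `aws iam get-credential-report` response), so flagging stale users is a short script. The column names below match real report columns, but the parsing is a simplified sketch — real reports carry more columns and sentinel values than handled here:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def stale_users(report_csv, days=90, now=None):
    """Flag users whose password and first access key haven't been
    used in `days` days (or have never been used at all)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        raw = [
            row.get("password_last_used", "N/A"),
            row.get("access_key_1_last_used_date", "N/A"),
        ]
        dates = [
            datetime.fromisoformat(d.replace("Z", "+00:00"))
            for d in raw if d not in ("N/A", "no_information", "")
        ]
        if not dates or max(dates) < cutoff:
            stale.append(row["user"])
    return stale

# Tiny illustrative report excerpt (real reports have many more columns).
report = """user,password_last_used,access_key_1_last_used_date
alice,2021-07-01T12:00:00+00:00,N/A
bob,2019-01-15T09:30:00+00:00,no_information
"""
found = stale_users(report, days=90,
                    now=datetime(2021, 7, 15, tzinfo=timezone.utc))
```

Unused principals are low-friction wins: disabling them shrinks the identity perimeter without breaking anyone's workflow.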
Cloudsplaining, from Salesforce, is really wonderful, because it'll give you a good idea of the general hygiene from the number of findings, but specifically it calls out privilege escalation, and it calls out which users and roles have the ability to expose data, leak data or modify infrastructure. Erik is actually speaking at Black Hat or DEF CON about PMapper; he just released a new version, and it's awesome. It will do transitive access and find highly privileged users. So it's no longer just "look at who has admin"; it's "look at what possible roles can be assumed for someone to get from their default permission set to admin," as well as privilege escalation vectors. So look at these tools, but you're not gonna fix the universe of IAM permissions, nor should you, while you're just finding your way and killing the most important things. Instead, you should focus on high risks that are immediately exploitable. Too many admins is a bad problem; if you can kill a few of them, great. Privilege escalation vectors on a standard employee role are worth fixing immediately. Overprivileging of your existing admin users is probably not a day-one job; those are already trusted users, and there are more important things to fix. So that's the speed round of the identity perimeter. Then you go into the network perimeter. You need to find public resources. There's a collection on GitHub, aws_exposable_resources, that I didn't link to here, but that is an excellent way to tell what the, like, 42 possible resources that can be exposed are. Here's where you dig into your audit findings, right? We talked about using audit reports to find resources; there are a few key findings to look out for. Anything that's open to the universe is worth a second glance to make sure it's intentional. Anything that's publicly exposed, you need to make sure the workload is intended to be publicly exposed. Scan findings will tell you; Trusted Advisor will tell you. This can look like wildcard security groups. 
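A crude first pass over such findings — world-open listeners on ports that should rarely face the internet — looks something like this. The port shortlist and the (host, port, cidr) data shape are assumptions to tune per environment, not any scanner's real schema:

```python
# Hypothetical shortlist of ports that should rarely be internet-facing.
RISKY_PORTS = {
    22: "SSH",
    3389: "RDP",
    3306: "MySQL",
    5432: "Postgres",
    9200: "Elasticsearch",
    8080: "Jenkins/alt-HTTP",
}

def triage_exposures(listeners):
    """Given (host, port, cidr) tuples from a scan or security-group
    dump, surface world-reachable risky services for follow-up."""
    findings = []
    for host, port, cidr in listeners:
        if cidr == "0.0.0.0/0" and port in RISKY_PORTS:
            findings.append(
                f"{host}:{port} ({RISKY_PORTS[port]}) open to the world")
    return findings

listeners = [
    ("web-1", 443, "0.0.0.0/0"),        # intended: public HTTPS
    ("db-1", 5432, "0.0.0.0/0"),        # almost certainly not intended
    ("bastion", 22, "203.0.113.0/24"),  # SSH, but scoped to one range
]
findings = triage_exposures(listeners)
```

Remember the earlier caveat: whether an open port 22 is a finding or an architectural decision depends on the organization's stated access model, so cross-check before filing it as a misconfiguration.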
Also check all the launch-wizard security groups, because those generally indicate something was made through the AWS console, which is a sign things may not be quite as governed or managed. Like I said, you wanna dig into the exposed resources, and this includes hosted applications and services. This is somewhere you can really easily rabbit-hole, and I wanna be clear, this is no deeper than a vulnerability scan, but it can't be neglected. Find all the exposed ports and all the exposed services, and do a quick check for out-of-date services and known vulnerabilities that might be exploitable, since, as we saw, application vulnerabilities are a source of compromise. Look at any unauthenticated service and make sure it's intended to be public and unauthenticated. And check to make sure you're not putting things like configuration management or Jenkins out on the internet. These are the ways companies get hacked in the real world, and that's why you need to prioritize fixing them. There are other concerns, less actionable or less impactful, which I wanna be clear are considered, but put aside. Exposed secrets: there are a million ways to expose secrets, but if you look at this list, almost all of them are either exposed outside the cloud environment, where you probably don't have visibility, or they require someone to already have authenticated access to exploit, and privilege escalation from authenticated access is an important day-three responsibility, but not day one. Similarly, find the secrets management pattern, and look at the supply chain, but fixing vendor relationships isn't something you can do as part of getting your feet wet, so just make a note and put it aside. And this is my moment for a very short compliance interlude. I'm not talking about compliance here; there are compliance professionals. I just wanna say, if you're working in a regulated industry, I'm very sorry. 
You're not going to be able to follow a lot of this prioritization, because you're going to have to focus on compliance-impacting controls, documented exceptions for drift, and compensating controls where you can't get things enabled, and you're not going to be able to avoid fiddling with encryption. A lot of what we're talking about here is misconfiguration. There's a Gartner report that says misconfiguration is the root of all evil. When you're orienting yourself in an environment, how do you know what's a misconfiguration? How are you supposed to know the difference between an S3 bucket called "application-files" and an S3 bucket called "project-meteor", right? This is where it becomes really difficult to dig through the data, and you should look at documentation. There's a common flag when you're looking at securing AWS environments that says public S3 buckets are the root of a lot of compromise, and that's not false. But in most environments, there are going to be intentionally public S3 buckets for static assets, and you need to be really thoughtful about efficiently figuring out which are intended to be public and which are risky. So that's where you look at tags, that's where you look at documentation, that's where you look at the classes of data and need to know what the crown jewels are. So, misconfigurations. Misconfigurations are really an argument about defense in depth, and I am by no means against it. Defense in depth is a core security principle that you should be applying to your environment. However, in the short term, a fixation on defense in depth leaves gaps in your controls: it's better to have your first line of defense configured before you invest in defense in depth. Encryption is the canonical example here. I'm by no means recommending that encryption is entirely useless or that you shouldn't leverage the encryption that's available.
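The tags-and-documentation step for separating intentionally public buckets from risky ones can be sketched like this. The `public` and `data-classification` tag keys are assumptions for illustration; use whatever convention your environment actually documents.

```python
def classify_buckets(buckets):
    """buckets: list of {"name", "is_public", "tags"} dicts.

    Returns (intended_public, needs_review) bucket-name lists, using
    tags as the signal for documented intent.
    """
    intended, review = [], []
    for bucket in buckets:
        if not bucket["is_public"]:
            continue  # only public buckets need this triage
        tags = {k.lower(): v.lower() for k, v in bucket.get("tags", {}).items()}
        if tags.get("public") == "intended" or \
                tags.get("data-classification") == "public":
            intended.append(bucket["name"])
        else:
            review.append(bucket["name"])
    return intended, review
```

Anything that lands in the review pile is where you go read documentation and ask owners what the data actually is.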
But I am saying that if you can avoid it, if you're not regulated, it's probably not your top priority to fight to get old resources onto EBS encryption, for example. It's still useful, and you definitely should turn on default encryption going forward, but it's probably not your top priority. How do you prioritize misconfigurations? There are a lot of controls and a lot of misconfigurations; AWS's Foundational Security Best Practices standard has grown from 31 to 141 controls. For the ones that matter, we go back to our identity and network perimeters: exposed management ports, exposed resources, and lack of authentication are the things that'll bite you immediately. And I'm going to use my last couple of minutes to talk about where you go from here. Blanket AWS hardening: always turn on GuardDuty, always turn on CloudTrail, back up and centralize CloudTrail, IAM Access Analyzer, security visibility into all accounts, S3 Block Public Access. Other people have mentioned these, and it's because they're a really good idea, and there are very few environments where they're not applicable. What does fixing things look like? It's an exercise in relationships. There's a really kind of canonical book on how you build a security program that lays out seven steps, and they hold true when you're orienteering. Once you have all this data and you know what needs to be fixed, and maybe where the architecture needs to go: cultivate relationships, get alignment, focus on key security domains, get buy-in from other teams through evangelism, invest in convincing and working with other teams, make business owners own security in their areas, build out a team, and measure what matters. As you can see, this is a whole book, but I want to say that you're not going to fix all these things alone. So you need to figure out what your priorities are and how you're going to convince folks to buy in on those fixes. Fixing things otherwise is just going to look like whack-a-mole.
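That blanket-hardening list doubles as a baseline you can report against per account. Here's a minimal sketch: account records are plain dicts with assumed control names; in practice you'd populate them from the GuardDuty, CloudTrail, and S3 APIs or a CSPM export.

```python
# Assumed control names corresponding to the blanket-hardening list.
BASELINE = ("guardduty", "cloudtrail", "s3_block_public_access")

def gaps(accounts):
    """accounts: {account_id: {control_name: bool}}.

    Returns, per account, the baseline controls that are missing,
    treating an absent key as not enabled.
    """
    return {
        acct: [c for c in BASELINE if not controls.get(c, False)]
        for acct, controls in accounts.items()
    }
```

A report like this is also the raw material for the relationship work above: it tells you which account owners you need buy-in from, and for what.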
It's a marathon, not a sprint. You're not going to get everything up to code immediately, or even in the near future. I recommend an approach of governance with exceptions in the long term: set a best-practice standard, track drift against it and report on it, measure what matters, and slowly buy down that security debt. Make sure you're explicitly tracking those legacy resources and drifts. There are a lot of maturity models: the Cloud Security Maturity Model is one option, and Marco Lancini has a roadmap at CloudSecDocs that I recommend. Look at these and pick the one that fits your environment. If you want to learn more, the slides will be up after this talk and you can dig in here. The Cloud Security Roadmap is great, Matt Fuller's "So You Inherited an AWS Account" is a different take on a lot of the same content, and there are other resources here as well. So I don't think we actually have time for questions necessarily, but thank you. Thanks to the organizers. Slides will be up at that link. Come work with me.