We've talked about the rise of infrastructure as code and the need to secure it. IaC is, for all intents and purposes, code. Like other code, it needs to be scanned, and like other code, both static and dynamic methods can be applied. Yoni Leitersdorf of Indeni is going to show you how to level up the security of your cloud-native applications at scale while achieving continuous compliance. His secret is using dynamic methods. His session will help you chart a roadmap to advance your IaC security automation journey. Welcome, Yoni.

Hello, everybody. My name is Yoni Leitersdorf. I'm the CEO and founder of Indeni. Today we're going to talk about a secure GitOps workflow for continuous compliance. This presentation assumes you're familiar with infrastructure as code, like Terraform and CloudFormation; that you're familiar with the concept of using it within CI/CD, for example GitLab pipelines; and that you've done security for infrastructure as code, specifically using Checkov, tfsec, and a variety of other static analysis tools. We're going to take that a step further in today's presentation.

As I said, I'm Yoni Leitersdorf, CEO and founder of a startup called Indeni. We specialize in security automation. We have two products: one is in security infrastructure, and the other is focused specifically on infrastructure-as-code security. That product is called Cloudrail. You can find me at these two handles, so if you'd like to reach out after this presentation, feel free to. I'm always happy to take the conversation forward.

So let's start by looking at static analysis for infrastructure as code for a moment. As you probably know, infrastructure as code is used to build cloud environments. Teams all over the world are now using it to design, define, and actually deploy the cloud environments that they use. And they wrap it in CI/CD. So, for example, they use GitLab pipelines to run their infrastructure as code and actually deploy it.
So whether it's Terraform plan and apply, CloudFormation deploy or create-stack, or anything else, it's done through that. And then the step after that is, of course, adding static analysis — tools like Checkov and tfsec, which are also at this conference. What those tools do is analyze your code and look for potential security misconfigurations: for example, whether you've got an S3 bucket configured incorrectly so that it's public, or your IAM policy is too permissive, or your security group is too open to the world. Those are the kinds of things those tools will find for you, and they integrate into GitLab.

The trouble with those tools, and with static analysis in general, is that it has a lot of false positives. It's very, very noisy, and it's very annoying for developers. For those of you who are familiar with SAST in the application security space: SAST is considered kind of the first generation of code analysis because it was basic — it found a lot of things, but most of what it found was not that useful, and developers didn't like it. What we're seeing in infrastructure-as-code security is that the same thing happens: static analysis generates too much noise and is very frustrating for developers to interact with.

In addition, there are a lot of issues you can't see if you only look at the code. For example, you can't find configuration drift by looking at the code; you need to look at the live environment as well. Configuration drift is when you define a resource in code, but in the actual live environment it's different, because somebody went in and made a change. You can't see that with static analysis. Also, there are a lot of IAM issues you cannot see through static analysis, because they depend on how the environment is constructed and how the workloads are actually executing.
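As a minimal sketch (the resource and bucket names here are hypothetical), these are the kinds of misconfigurations a static analyzer such as Checkov or tfsec can flag from the code alone:

```hcl
# Hypothetical Terraform showing misconfigurations visible in the code itself.

# An S3 bucket made publicly readable via its ACL.
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
  acl    = "public-read" # flagged: bucket contents are world-readable
}

# A security group that opens SSH to the entire internet.
resource "aws_security_group" "web" {
  name = "web-sg"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # flagged: SSH open to the world
  }
}
```

(Note that in newer versions of the Terraform AWS provider, the bucket ACL is set through a separate `aws_s3_bucket_acl` resource rather than an inline argument; the misconfiguration is the same either way.)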
And so there are a lot of IAM issues that are missed by static analysis. Lastly, there's what we call data protection challenges. Essentially, if you look at just a portion of code, you can't fully understand how that code is impacted by other resources in the environment. For example, say you're setting up an RDS database. You can't fully understand how the environment may expose that database just by looking at the code that sets up the database, because that code will not include, say, some EC2 instance or Lambda function or something else in the environment. And by the way, I keep giving AWS examples, but of course the same thing applies to Azure, GCP, and the others as well.

So basically, static analysis is very noisy, and it misses the vast majority of the really interesting things you'd want to catch. We decided to take it a step forward: instead of just doing static analysis, we also drive dynamic analysis, which in turn allows for much more specific policy enforcement.

So what is dynamic analysis, actually? Dynamic analysis means that you don't look at just the code; you look at the code and the live environment. That concept existed in AppSec — it's called DAST, and sometimes RASP — and it exists here in infrastructure-as-code security. The idea is that you look at your Terraform code, let's say, or CloudFormation or anything else. Then you look at the actual live environment — the actual cloud account and what is configured there — and you cross the two together. You actually merge them. So if you have a resource in your code that references, by ID, some resource in the cloud, you actually connect the dots. You say: okay, this EC2 instance is going to be deployed in a subnet; the subnet is configured elsewhere, and the subnet has, let's say, the public IP address allocation flag enabled. You can only see that if you connect the dots.
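To make the subnet example concrete, here is a hypothetical sketch. The instance looks harmless on its own; only by following the reference to the subnet (which may live in another file, module, or repository) can you see that it will receive a public IP:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Defined in one place, possibly a different module or repository.
resource "aws_subnet" "app" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # the flag that makes launched instances public
}

# Defined elsewhere; nothing here says "public".
resource "aws_instance" "worker" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.app.id # inherits the public-IP behavior
}
```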
In addition, this actually allows you to connect the dots between multiple code repositories. Let's say you've got a few Terraform workspaces and a bunch of CloudFormation stacks. They only know each other through IDs; they don't actually know each other at the code level. By doing dynamic analysis, you can connect all of that together and say: okay, this Terraform piece is building some piece of my infrastructure, and this CloudFormation piece is building, let's say, a compute resource — deploying a workload that is using this infrastructure. And of course this gets even more complicated with Helm charts referencing infrastructure that was built by other languages. So in order to bring all of this together and actually be able to do a really consistent and coherent security and compliance analysis, you have to use dynamic analysis, which we're obviously big proponents of.

So let's talk about a few examples of where dynamic analysis can actually help you. The first one is around privilege escalation. What we often see is that IAM policies and roles are defined in one set of code, and then the actual usage of those roles is defined elsewhere. If we think about this example here, maybe I defined this specific role on the screen in, let's say, Terraform, and then I'm using it in another piece of Terraform. If I just look at the role itself, I might not fully understand how it's being used. Doing dynamic analysis connects the dots and actually allows you to understand whether there's going to be some privilege escalation. For example, maybe there's a Lambda function that is using this role, and the role grants the execution of a Lambda function — what's called lambda:InvokeFunction — but also allows setting a role on a Lambda function.
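The escalation path described here can be sketched roughly like this. All names are hypothetical, and the exact set of actions is an assumption for illustration — the point is that the dangerous combination only emerges when the role and the function that uses it are seen together:

```hcl
resource "aws_iam_role" "lambda_role" {
  name = "app-lambda-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Defined in one repository: a policy that, viewed in isolation, may look acceptable.
resource "aws_iam_role_policy" "risky" {
  name = "risky-policy"
  role = aws_iam_role.lambda_role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "lambda:InvokeFunction", # can execute functions
        "lambda:CreateFunction", # can create new functions
        "iam:PassRole"           # can hand a role to those functions
      ]
      Resource = "*"
    }]
  })
}

# Defined in another repository: attaching the role closes the loop.
# The function can now create and invoke a new function with any role it is
# allowed to pass, thereby escalating its own permissions.
resource "aws_lambda_function" "app" {
  function_name = "app-fn"
  role          = aws_iam_role.lambda_role.arn
  handler       = "index.handler"
  runtime       = "python3.9"
  filename      = "app.zip" # placeholder deployment package
}
```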
So now the Lambda function can actually escalate its own permissions: the policy allowed it, the role allowed it, the role was assigned to this Lambda function, and now the function can escalate its permissions. Those kinds of things you can only understand if you're doing true dynamic analysis and understanding how things are connected to one another.

Another example is related to drift. Let's say you have a user, the user is part of a group, and the group has some kind of IAM policy attached to it. Now somebody goes into the console itself and actually attaches another policy to that group. Most infrastructure-as-code languages will not be aware of that. Even though you defined the user in, let's say, Terraform, and the group in Terraform, and even the policy attachment for the group in Terraform, if somebody then goes and attaches another policy to that group, Terraform is unaware of it. Same thing with CloudFormation. And so if you're not analyzing the code together with the live environment, you can't spot something like that. These kinds of drifts are very serious from a security perspective, but they're also serious from an operational perspective, because it means somebody circumvented your process and you're unaware of it. A dynamic analysis tool is actually capable of finding that.

Another example I'm going to give you here is around protecting S3 buckets. S3 buckets are normally defined by one set of code, whereas the compute resources are defined in another set of code, just like the examples we've given earlier. The compute resources may be in some kind of VPC, and they're communicating with the S3 bucket. Now, in a normal setup, the compute resources will go to the S3 bucket and actually access the data through the internet. And you might not want that — you might not want your private resources needing to go out to the internet and pull S3 bucket data from there.
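Returning to the drift example for a moment, here is a hypothetical sketch of the Terraform-managed side. Everything in code looks complete, which is exactly why the console-made change is invisible to a code-only analysis:

```hcl
resource "aws_iam_user" "alice" {
  name = "alice"
}

resource "aws_iam_group" "devs" {
  name = "developers"
}

resource "aws_iam_user_group_membership" "alice_devs" {
  user   = aws_iam_user.alice.name
  groups = [aws_iam_group.devs.name]
}

# Terraform only tracks attachments it created. If someone attaches an
# additional policy to this group through the console, nothing in this code
# will surface it — the drift only becomes visible when the code is compared
# against the live environment.
resource "aws_iam_group_policy_attachment" "devs_readonly" {
  group      = aws_iam_group.devs.name
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}
```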
You may want to keep your subnets private and not give them any access to the internet, and so you would want the S3 traffic to go internally. AWS invented something called VPC Endpoints for that, and it's one of the things that tools like Cloudrail can find through dynamic analysis. Even though your buckets are defined in one place and your subnets and resources are defined elsewhere, dynamic analysis can connect the dots.

Lastly, a very interesting example is the impact between resources that expose one another. Let's say I've deployed an RDS database into a subnet, and I did not allocate a public IP address for it. In that case, it's technically not publicly accessible. But what happens if I have other resources in that same subnet? Say I have an EC2 instance that is public, sitting in the same subnet, and by default there's access from the EC2 instance over to the RDS database. That means a hacker can compromise the EC2 instance and, from there, actually get to the RDS database. Again, that's one of the things you find through dynamic analysis and not any other way.

And so, as an example — and here we are at the GitLab conference — you can use Indeni's Cloudrail, our product that does dynamic analysis, together with GitLab. The integration is very, very simple: you call Cloudrail as part of your pipeline, and the output is in GitLab's SAST report format. So even though we do dynamic analysis, the format itself is called SAST. You can then see the security issues directly in the environment — what the issues were, where they are in code, et cetera — and handle them. So you can use Cloudrail with GitLab to find the most sensitive and most interesting issues by leveraging dynamic analysis.

So if we wrap up the conversation here, the idea is very simple.
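The VPC Endpoint approach mentioned above can be sketched as follows (names and the region are assumptions for illustration). A Gateway endpoint lets private subnets reach S3 over the AWS network rather than the public internet:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}

# A Gateway VPC Endpoint for S3: traffic from subnets associated with the
# private route table reaches S3 internally, so those subnets need no
# internet access at all.
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.us-east-1.s3" # region assumed for illustration
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private.id]
}
```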
You start with infrastructure as code, you wrap it with CI/CD — GitLab — and you operate it through merge requests. You start with static analysis for security, then you advance to dynamic analysis. That allows you to do policy enforcement and actually keep your cloud environment in compliance at all times, which is the goal all of us have from a security perspective.

That's it, thank you very much. Hopefully this was interesting. Again, I'm available on Twitter and LinkedIn — just hit me up with a message. I'd love to take these conversations forward. Thank you very much and have a good day. Enjoy the conference.