Hi everyone, I'm Matt from Bridgecrew, and I'm here to demo end-to-end policy as code: from the first line of infrastructure as code written, all the way through to running workloads in Kubernetes or across other cloud provider resources. So what do we mean by end-to-end policy as code? Put simply, it's the idea of ensuring the same policies and the same checks can be run at any point in the developer's day-to-day lifecycle. Typically that starts in an IDE, then maybe some local tests before a commit, then a merge request, some CI/CD kicked off in that merge request, then the main pipeline once merged, and finally the creation of real resources from that infrastructure as code. The idea is that at each of those points we can find and fix issues as soon as possible, while still having the defense in depth of knowing we'll catch, for example, existing issues already running in that cloud environment, by also running those policies all the way through to runtime.

We start our journey into the developer's IaC workflow by looking at our open source tool, Checkov. Checkov is an open source policy scanning framework for multiple infrastructure-as-code frameworks: we support Terraform, Kubernetes, Helm, and CloudFormation. Let's look first at the pre-commit and local-testing stage: a developer using Checkov as a pre-commit hook, or using it to test an existing infrastructure-as-code repo for any existing issues. While Checkov has a number of command line options, `checkov -d <directory>` will do a recursive scan of a local directory for any of the supported IaC types it can find, outputting a colorized list as we've just seen, or JSON (`-o json`) for each of the different frameworks. We provide hundreds of built-in policies across all of our supported frameworks.
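To make that `-d` flow a little more concrete: conceptually, Checkov first walks the directory recursively and classifies candidate files by framework before running policies against them. Here's a heavily simplified, standalone sketch of that discovery step; this is not Checkov's real detection logic (which also inspects file contents, not just names), and the function names are made up for the example.

```python
import os

# Hypothetical mapping from file name to IaC framework; purely illustrative.
def classify(filename):
    if filename.endswith((".tf", ".tf.json")):
        return "terraform"
    if filename.endswith((".yaml", ".yml")):
        return "kubernetes"  # could equally be CloudFormation or a Helm template
    if filename.endswith(".json"):
        return "cloudformation"
    return None

def discover(root, frameworks=None):
    """Recursively find candidate IaC files, optionally filtered by framework,
    roughly like `checkov -d <root> [--framework <name>]`."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            kind = classify(name)
            if kind and (frameworks is None or kind in frameworks):
                found.append((kind, os.path.join(dirpath, name)))
    return found
```

The `frameworks` filter here mirrors the `--framework` flag we'll use later to restrict a scan to, say, only Kubernetes manifests.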
Now, at the moment we're getting that output because we're running against Kubernetes Goat, which is an excellent resource of purposely vulnerable Kubernetes resources. Bridgecrew also maintains TerraGoat, which is the same for Terraform-based resources, and CfnGoat for CloudFormation. These are all purposely vulnerable: not designed to be deployed, but designed to help understand and explain some common vulnerabilities within infrastructure as code. As you can see here, running against TerraGoat we get another set of output, and for each of the policies you'll find that Checkov provides a link to documentation. Consider this like a virtual security person on your team providing context for the issue: if you don't fully understand why a policy you're seeing in Checkov is a security issue, that's what those guides on each of the policy sections are for.

Jumping into more Kubernetes-centric uses of Checkov: as I mentioned earlier, we support Kubernetes manifests and also Helm charts out of the box. Going back to the Kubernetes Goat repo that I checked out earlier, we can tell Checkov to only discover certain IaC frameworks, if we know what we're looking for, with the `--framework` flag. Here I'll ask it to find only Kubernetes, and within the Kubernetes Goat repo it will find every Kubernetes manifest: you see all the passing policies at the top and any failing policies at the bottom, and we found quite a concerning wildcard ClusterRole there at the bottom, in an aptly named "pwnchart" directory. We can also do the same and scan just Helm. Both of these get run against the same set of policies; you'll see that each policy has a CKV_K8S name and a number. The difference is that if we find a Helm chart, we're basically going to template it out into a temporary templates directory, with a bit of logic to make sure we can tie each scanned resource back to the particular release name and template name, whereas with the Kubernetes framework we just find anything that looks like a Kubernetes manifest and scan it directly. So the same policies get run; it just allows us to scan Helm charts out of the box.

It's also worth noting that all of the power and all of the built-in checks within Checkov are available in a VS Code plugin. Instead of having to run this on a CLI, you can see here we've checked out TerraGoat, and with the Checkov plugin installed from the Visual Studio Code Marketplace, it's underlining and providing tooltips on any issues within a given Terraform block, and also listing them in the Problems pane.

With that, let's jump further into the developer's workflow and look at post-commit, in a pull or merge request. For this we're going to use the Bridgecrew platform, which basically extends the power of Checkov and allows, amongst other things, visualization of policies like this, as well as things that require automated scanning, persistence, and integration with runtime and cloud environments. One of the things we're going to show here is the ability to integrate Bridgecrew with your GitHub or GitLab account. What you get is another step along that pipeline of automation: we take those same policies and annotate any changes in an inbound pull request that break any of those existing Checkov policies. You see this in GitLab and GitHub here; the same rules apply. We have an incoming pull request with some vulnerable infrastructure as code, and it's being annotated directly in the PR to highlight to the team that this probably shouldn't be merged. Further along the pipeline, we can look at the same repo where we were just having a look at
that annotated merge request, and we can see we have a CI/CD pipeline pre-configured. Within this we're just going to kick off a rerun of an existing job, so we can have a look at what's going on. This is a very simple pipeline with Terraform, but it could just as easily be any of our supported IaC, such as Kubernetes manifests or a Helm chart deployment. If we have a look at the pipeline, we'll see Checkov being used as our security validation step. The only difference compared to our local CLI run is that we're now passing a Bridgecrew API key, and that's the same whether you're using the Checkov GitHub Action we provide or GitLab, as you saw in the first example. With that API key you get the same results and the same output, but the data also gets sent into the Bridgecrew platform as code reviews, so we can track and centralize all of our incidents and see those objects moving through our CI/CD pipeline: from, say, an issue scanned in a PR, to being in main and being highlighted, and then, as we'll show later, into runtime as well. Within the platform, security teams or a security advocate on the team can filter by guidelines such as CIS Kubernetes, and get a good view collating issues across multiple resources and multiple environments.

The final piece of the puzzle is runtime. Within the integrations page, where we integrated GitHub or any of our other Git-based annotations, we have the ability to integrate with Kubernetes clusters and also directly with cloud providers such as AWS, Google, and Azure. I'm running just a simple local Docker Kubernetes cluster on my Mac here, and what I can do is generate a manifest that will allow my Kubernetes cluster to send the state of all its current workloads to the Bridgecrew platform, where we then run those against the same set of Kubernetes policies that we also ran against our local YAML or our Helm charts. So we get consistency all the way through into production: say there's been a manual change, or a deployment outside of infrastructure as code, that's breaking some of those policies, or someone's updated an existing template by hand; I obviously want to know about that as well.

Alongside the Kubernetes integration, we can also, as I said, do the same with the cloud providers. Considering that a lot of Kubernetes security actually comes from the configuration of a cloud-provider-hosted Kubernetes cluster, which potentially has defaults that allow access from places you don't want, or embedded security defaults that are not ideal, it's important to catch those things as well, and potentially codify them in something like Terraform or CloudFormation. You'll notice, looking through the built-in policies for Checkov and the Bridgecrew platform, that a lot of the policies also check the secure defaults of things like EKS clusters, as well as the Kubernetes-specific security settings themselves.

While we're in the policy section: if you wanted to add a policy yourself to cover something organization-specific, or just extend one of the checks, we support a UI- and JSON-based policy templating language as well. Within Checkov, the built-in checks are all available to browse in the Checkov repo itself, and are all based on a simple set of Python helpers built into the platform. If you want to extend existing Checkov checks, this is possible either by contributing to Checkov itself or by using Checkov's option to load your own checks from a separate Git repo. Within Checkov, for example under `checkov/kubernetes/checks`, you'll find all the built-in Kubernetes checks and a fairly simple set of Python helpers to find the parts of the configuration you care about, extract the information, and then work out whether you want
the check to pass or fail in that scenario. So it's fairly easy to read through some existing examples to extend, or to write your own, and you'll also find in the test directory some Kubernetes manifests for each of those scenarios, which might help in customizing checks or getting to grips with how checks are tested within Checkov.

With that, I hope you enjoy the rest of KubeCon and CloudNativeCon 2021 virtual. Any questions, feel free to reach out to me at @metahertz on Twitter, or join our Slack at slack.bridgecrew.io. Thanks very much!
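As a footnote to the custom-checks discussion above: a real custom check subclasses Checkov's base check classes and is loaded with `--external-checks-dir` or `--external-checks-git`, but the core shape, find the configuration you care about and decide pass or fail, is easy to sketch without the library. The class and method names below are illustrative, not Checkov's actual API; it flags the kind of wildcard ClusterRole we saw in the Kubernetes Goat scan.

```python
# Illustrative sketch of a Checkov-style Kubernetes check: given a parsed
# manifest (a plain dict), decide PASSED or FAILED. Real Checkov checks
# subclass a base class and carry an ID like CKV_K8S_nn; the names here
# are made up for the example.
PASSED, FAILED = "PASSED", "FAILED"

class WildcardClusterRoleCheck:
    id = "CUSTOM_K8S_1"  # a real custom check would use your own ID prefix
    name = "Role/ClusterRole must not grant wildcard verbs or resources"

    def scan_conf(self, conf):
        # Only applies to RBAC role kinds; everything else passes.
        if conf.get("kind") not in ("Role", "ClusterRole"):
            return PASSED
        for rule in conf.get("rules") or []:
            if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
                return FAILED
        return PASSED
```

The pattern matches what you'll see browsing the built-in checks: a small amount of dictionary navigation over the parsed manifest, then a pass/fail decision, with a paired test manifest exercising each outcome.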