My co-presenter, George Castor, unfortunately could not be here today, but I wanted to go through some of what's happened since we last presented on the state of the project and our roadmap at KubeCon in Detroit in October, present some of our roadmap for 2023, do a short demo of some of our new capabilities around infrastructure-as-code governance directly against source assets, and then walk through the contribution process for those who might be interested. So, out of curiosity, a quick show of hands: who here has not used Cloud Custodian? Okay. This is a quick overview of it. It's an open source rules engine for cloud management with a simple YAML policy DSL. It lets you find interesting resources by applying arbitrary filters to them. So, take a set of EC2 instances in AWS: find those that are open to the public internet, that have IAM roles attached allowing them to create IAM users, that don't have encrypted disks. You can chain these filters together in arbitrary ways, and then take a set of actions on the matches. Actions could be something like stopping an instance, taking a snapshot of it, or changing its roles or security groups, and you can combine those filters and actions to create arbitrary policies. So you might target security, you might target better operations, like taking automated backups, or you might target cost optimization of underutilized resources. It's designed to be a Swiss Army knife for your cloud management. Additionally, it also binds into event-based execution, so as API calls are happening in your cloud provider, you can actually introspect those calls and check whether the resulting resources are compliant with your policies. So, that said, what's happened since last October?
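The filter-and-action chaining described above can be sketched as a policy like the following. This is a hedged illustration: the `ebs` filter and the `snapshot` and `stop` actions are part of Cloud Custodian's documented `aws.ec2` resource, but the specific policy is my own example, not one from the talk.

```yaml
# Illustrative Cloud Custodian policy: find EC2 instances with
# unencrypted EBS volumes, snapshot them, then stop them.
policies:
  - name: stop-unencrypted-instances
    resource: aws.ec2
    filters:
      # match instances whose attached EBS volumes are not encrypted
      - type: ebs
        key: Encrypted
        value: false
    actions:
      # actions run in order: take a snapshot first, then stop
      - type: snapshot
      - type: stop
```

Filters compose as an implicit AND by default, which is what lets you stack arbitrary conditions the way the talk describes.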
We've had about 70 authors contribute about 280 different commits across different resources and providers. We've added two new core maintainers, Darren Dow from Intuit and Petruc Mishra from Capital One. And we've added a few new providers. Providers in Custodian are what you express your policies against. We've added a Terraform provider for evaluating Terraform source code inside your pipeline, plus additional capabilities around that, and I'll actually do a deep-dive demo on it. Then we've had the contribution of the Tencent Cloud provider, for those that are using Tencent Cloud. It does not yet support our event-based execution, but it does support multiple resources, filters, and actions, the same as our other providers. So, looking forward to our 2023 roadmap, we've been looking at how to improve the policy authoring experience. We've added policy testing for our shift-left policies, and we want to bring that back to our regular cloud providers on all the public clouds, as well as policy tracing. We have some built-in tracing support that targets the cloud providers' native tracing experiences, but we want to add tracing as a command-line capability that's independent of the cloud provider, so there's a better baseline experience. As part of that, we're also adding policy debugging, so you can step through your policy execution and understand what it's doing and how it's filtering things. On the release engineering side, we've been doing a bunch more work to simplify our release process and make it more automated. The goal here is to get humans out of the loop. We generally release once a month, but currently it's typically a maintainer going in and actually generating the release, testing the release, et cetera.
And we're trying to get that completely automated, such that it just happens on a clock and it's all system-driven out of GitHub Actions. Additionally, more functional tests in CI: just this year we got access to some additional cloud provider resources from the CNCF, and we've been running our functional tests there. We're hopeful to add additional providers as we get additional resources from the CNCF. As far as the core goes, the core system and packaging is currently hardwired to AWS, and for other providers we don't want to carry those dependencies into those other clouds. So we're looking at extracting that from the core so that the other providers have a minimal dependency set. Additionally, Cloud Custodian, as part of having a native experience on each cloud, has direct integration with each provider's logging, metrics, and tracing tools. We're looking at how to simplify that, potentially using OpenTelemetry support to abstract it out, so we don't have to carry forward individual implementations for every provider, which becomes more relevant as we've been adding new providers. Looking at some of the providers: we have Kubernetes admission controller support, but we currently label it beta and it's fairly synchronous. So we're looking at a new implementation of that admission controller, both validating and mutating, to support better concurrency for production execution. Additionally, we have a separate AWS provider beyond the main one, based on CloudFormation's Cloud Control API, which gives us fairly generic, uniform resource coverage across all the AWS resources. One additional capability we'd like to add to that provider is preventative support.
So in the same way that we're doing shift-left earlier in the pipeline, pre-deployment, we also want to add that same capability for AWS Cloud Control. On the shift-left side, we've started with Terraform: it has fairly good universal coverage and significant usage across the different providers, as well as being available across all of them. But we want to go ahead and add additional languages based on community feedback and what people are interested in, and we've heard clear interest in both CloudFormation and Kubernetes manifests. Then there's one delta from our standard policy sets. Normally in Cloud Custodian, for the different providers, we provide lots of examples and documentation around the types of policies you can write, but we don't provide an out-of-the-box supported set of policies. With shift-left we're going to change that and focus on giving folks an out-of-the-box experience with a set of policies they can use. Looking at potentially new providers, there's been some community interest in Oracle Cloud, as well as, generically, a web-service provider that can be bound to doing policies against any SaaS service. Looking directly at some of the cloud providers: we have a few contributors from larger organizations that have a backlog of contributions for Azure and GCP. When we went through them, they had probably about 200 pull requests in the pipeline, so we expect to see additional Azure and GCP resources. We're also trying to get the Azure SDK slimmed down, and we've been tracking upstream with Microsoft on this, because the current Azure SDK blows up our Docker image; it's typically about a gigabyte. They're currently in progress on slimming down the individual SDKs. It's effectively just a lot of generated code, and the effort is to make it smaller and more lightweight.
On the AWS side, one thing we're trying to add value around is how you support exception management. We currently support S3 and arbitrary URLs. We're looking at supporting DynamoDB, so you can add a query using PartiQL syntax directly in your policy to source exceptions, or valid values and vocabularies, directly from DynamoDB. We're also looking at better fuzzy IAM statement matching across the different resources that have embedded IAM policies, as well as IAM principals. Okay, so I wanted to talk about shift-left. I'm super excited about this. Most of what we do today is applied at runtime: even the event-based real-time remediation capabilities are based on something actually having been provisioned in your cloud, which you then inspect and change. In certain contexts you need to be at runtime, like certain cost optimization policies: you need to look at utilization metrics, and that requires a resource to have lived for some time to be able to evaluate it. But being able to apply policies and governance directly within the developer pipeline generally leads to increased productivity, and in the general sense, all problems are easier to fix the closer you are to their source. So in the same way that static typing helps us write better code, applying policies pre-commit allows us to do the same. The three different contexts here are, let's see, is that readable? Pre-commit, where I'm just on my developer workstation, and then pre-merge, which we'll go into in a second. But first let me show you what these policies actually look like, as well as the Terraform. I've got a simple module here that's just creating a VPC, a security group, and an EC2 instance, and this EC2 instance is attached to a security group that allows wide-open ingress.
So if I switch over to the policy, what does that look like? I basically have a policy here that's looking for a particular resource, EC2 instances, and applying a set of filters to find whether it's attached to a security group that has an ingress permission allowing wide-open access. When I run it, it tells me that I've run a few policies, that this particular resource has failed, and it gives me the exact source line so I know where I can go fix it. The experience is intended to be developer-centric, hence operating directly on Terraform source, and this is inclusive of all the Terraform interpolation, variables, conditions, and includes; we're respectful of all of that so we can track back to the originating source. For this provider, we've also added a dedicated CLI. The typical Custodian CLI operates in an operator mode targeted towards administrators, whereas c7n-left in this context is targeted towards developers, providing a developer-oriented experience and workflow, as well as being directly integrated into CI pipelines. So if we look at a CI pipeline: I've got a pipeline here that's had c7n-left integrated, it's failing in CI, and I can look at that and see the actual change. This is the pre-merge workflow when you go to submit a pull request, and it's really about informing the rest of the code reviewers about what's actually happening. Most of them don't want to dig through CI logs, so we also support doing code annotations directly on the code hosting. Here the c7n-left output from CI directly annotates the code on the pull request to show which part is failing, and that's generally to help the reviewer; I added a comment here, after seeing this, that it should use a different security group.
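A c7n-left policy along the lines of the one demoed might look like this. The `traverse` filter is part of c7n-left's documented feature set, but treat the specific keys and values here as an illustrative sketch rather than the exact policy from the demo.

```yaml
# Illustrative c7n-left policy: flag EC2 instances attached to a
# security group with a wide-open ingress rule.
policies:
  - name: ec2-no-open-ingress
    resource: terraform.aws_instance
    filters:
      # walk the Terraform graph from the instance to its security groups
      - type: traverse
        resources:
          - aws_security_group
        attrs:
          # match any ingress rule whose CIDR blocks include 0.0.0.0/0
          - type: value
            key: ingress[].cidr_blocks[]
            op: contains
            value: "0.0.0.0/0"
```

Because c7n-left evaluates the Terraform graph directly, the traversal resolves through variables and references back to the originating source line, which is how the CLI can point at the exact line to fix.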
And then from a CD perspective, as you go to do a deployment, you typically are providing a different set of variables. You can pass those directly to c7n-left when you run it to provide that additional context. Note that c7n-left doesn't require any cloud credentials. The intent is that the developer doesn't necessarily have access to that production environment; they may just have access to a local development environment, and you can provide the different values that might be enforced in those different environments via the variables. All right, so switching back. Okay. That covers pre-commit; now on to the authoring experience. We've added a few things here. One is that you can write policies that wildcard across resources. Typically in Custodian, because we're operating against the cloud providers, we have to query resources and do additional fetching, so a policy typically targets a single resource type. When evaluating infrastructure-as-code assets in source on disk, we're able to do a couple of different things. Because we have the entire graph in memory, we're able to write policies that target multiple resources: you can use a glob and do a single tag-check or label-check policy against all resources. And then you can also do arbitrary graph traversals; we support going from effectively any node to any node in the graph. In the example we had before, we went from an EC2 instance to a security group to a permission on that security group, and you can do that across any set of connected resources, inclusive of respecting all the Terraform variability and references. Additionally, we've added built-in policy testing. This is currently only for c7n-left, but we're looking at how to bring it to the runtime providers.
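The resource-globbing idea just described can be sketched as a single tag-check policy. The glob capability is what the talk describes; the exact glob syntax shown here is an assumption for illustration.

```yaml
# Illustrative sketch: one policy that checks every Terraform AWS
# resource in the module for a required tag, using a resource glob.
policies:
  - name: require-environment-tag
    # glob across all aws_* resource types in the Terraform graph
    resource: "terraform.aws_*"
    filters:
      # flag any resource missing the Environment tag
      - type: value
        key: tags.Environment
        value: absent
```

This is only practical in the shift-left context: with the whole graph in memory there's no per-resource API fetching, so one policy can span every resource type at once.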
And this basically means providing Terraform that the policy will evaluate against, plus an expectation assertion language that validates that the different checks have succeeded; it fails if there are any findings that were not asserted, as well as if any assertions were not met. George also wanted me to go through what it looks like to contribute to Custodian. Typically on Custodian we run everything out of our Makefile, and that helps us drive across all the different resources and providers that we have. make test is the basic one; we also have make format and make lint. The source tree itself is laid out with most providers in the tools section: you've got Azure, GCP, Kubernetes, c7n-left, as well as various other operational tools that are helpful, like the mailer for notifications and c7n-org for running across many accounts. From a pull-request perspective, we run this all against a CI matrix of different providers, inclusive of building docs and Docker images. Our docs are all built with Sphinx; we support both reStructuredText and Markdown, and welcome contributions in that regard. But, oh, I've got a failing test; I didn't have a module installed. Well, that's an update on where we're looking at going and what we've been up to lately. If there are any questions, I'm happy to take them. Any questions? Go for it. The question is, does c7n-left work with CloudFormation, or just Terraform? The answer is that today it works just with Terraform. It is designed to be multi-IaC-source-format. We're targeting CloudFormation next, hopefully by the end of May, and then also targeting Kubernetes. Kubernetes is a little bit easier, just because the runtime is the same as the source definition. With CloudFormation, it will be parallel; it will be based on AWS Cloud Control.
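The assertion idea described above pairs a policy with a Terraform fixture and a list of expected findings. The talk doesn't spell out the on-disk convention, so the layout and file names below are hypothetical, purely to illustrate the mechanism: the run fails on any finding that wasn't asserted and on any assertion that wasn't met.

```yaml
# Hypothetical test fixture (names are illustrative, not the
# documented c7n-left convention):
#
#   tests/
#     ec2-no-open-ingress/
#       main.tf         <- Terraform the policy is evaluated against
#       expected.yaml   <- the assertions below
#
# Each assertion names a policy and the resource it should flag.
- policy: ec2-no-open-ingress
  resource: aws_instance.app
```

The symmetry of the check (unexpected findings fail, missing findings fail) is what makes these tests useful in both directions: they catch policies that regress and policies that over-match.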
So we're also trying to see if we can mirror the IaC source format to some of the runtime format. For CloudFormation, the runtime provider for AWS Cloud Control that we have is effectively the same syntax as CloudFormation; it actually uses the CloudFormation schema registry. So we're looking at doing CloudFormation support there. Any other questions? All right. Well, thank you all. Hope you have a good KubeCon.