Hi everyone. Today we're going to talk about cloud governance with infrastructure as code, also called IaC, using Kyverno and Crossplane. These are the topics I'll dig a little deeper into. I'll start with why we need IaC and what challenges we see today while using it. Then I'll talk about how Crossplane can address those challenges and how Crossplane and Kyverno fit together. I also have a quick demo and a use case where I can show you how to use Crossplane and Kyverno together for cloud governance with IaC. Before getting into the topic, a little bit about myself. My name is Dholis and I'm a customer success engineer at Nirmata. I'm CKA and CKAD certified, I've done the solutions architect certification, and right now I'm pursuing my master's at John Leopold University. I'm based out of Halifax, in Nova Scotia, Canada. Cool, so let's talk. What is IaC? IaC is nothing but infrastructure as code: automating the creation, provisioning, and management of your infrastructure using code. In a nutshell, it means codifying your infrastructure. That's great, but why do we need IaC? What are the problems with the current approach? With legacy applications, we've seen that administrators have to create and manage their infrastructure manually, which takes a lot of time, costs a good amount of money, and involves a long lifecycle for managing everything. With an IaC approach you can codify everything, and what used to take hours now takes a matter of minutes, or for a small infrastructure maybe even a couple of seconds. Your engineers no longer have to do the process manually; they can just assemble the building blocks of the project.
And IaC does the heavy lifting the right way: right-sizing the infrastructure, managing it, provisioning it, doing everything in an automated fashion. That makes sure the time to production, the time to get your product to market, is reduced tremendously, so deployment speed goes up. Not only that: when you do things manually, human errors creep in. IaC cancels those out, so you don't have to worry about typos, or "oh, I clicked here", or "oh, that went to a different region", all that kind of stuff. And there's a cost reduction too. Now, when you deploy and manage things manually, what happens is you see inconsistency, configuration drift in your infrastructure, all the time. Maybe an ad hoc change creates drift between your dev and your production. That can jeopardize your deployment cycle in the long run, and it can make your project vulnerable. IaC nullifies that: there is no configuration drift, because you automate everything and you have a single source of truth. It's auditable. You automate it, deploy everything with a single click, and there's no drift. It also aligns developers and operations, because developers can use the same YAML and operations can use the same YAML, which is very much a DevOps approach. You have this YAML stored in source control as a single source of truth: auditable and repeatable, which is what you need to scale your platform today. And it's not even only about scaling the platform. Think of a situation where you want to shut everything down and bring it up in a different region; IaC makes that really easy for you.
You get a more structured infrastructure and a lot of transparency with an IaC-style approach, which sounds promising. But what are the current challenges with IaC? When you have your infrastructure as code sitting in source control, if you do not scan it, if you do not have the proper checks, there can be security risks associated with it. Developers are trying to speed up their development cycles and make sure the product is released as soon as possible, while on the other side the security team doesn't have much visibility into the code. They don't know what has been pushed, what the code looks like, or what its security posture is. Getting that visibility and applying those guardrails and rules without widening the gap between DevOps and SecOps becomes a real challenge. And with the current tools in the market, there are a lot of tools evolving that have a coding dependency: different tools, different languages out there. So you need the right people, you need to build a team with a certain skill set; it's not something everybody can do, not something every developer can do. That's the coding-language dependency. And in the long run there can be configuration drift as well. When you reconcile it manually, you need a change window and a timeline, and if it's not done the way it should be, it can create deadlocks. So those are some current challenges. How can Crossplane address them? Crossplane uses the Kubernetes API to provision and manage your infrastructure.
That means with the same kubectl commands you use today to deploy your workloads, you can create and manage infrastructure from the same command line. Crossplane can be deployed into your Kubernetes cluster as an add-on. So think of it like this: the same cluster you use to deploy your applications gets its functionality extended so it can also provision and manage infrastructure. Crossplane uses the same declarative approach as Kubernetes: you just tell Crossplane the desired state and it automatically reconciles. All the manual reconciliation, all those deadlock situations, you don't have to deal with that. Crossplane configuration is version controlled, deployed in a version-controlled fashion, which means you can track it, rolling back becomes easy, and when you troubleshoot, you know exactly what has been running on that platform. That really helps. It's Kubernetes native, so it integrates easily into Kubernetes, and you don't have to learn any coding language; it's simple YAML. If you're using Kubernetes today, your developers already know what YAML looks like, so they can read through it. You don't require any special skills or coding dependency. And using this approach you can give your developers self-service: you give them YAML, and with that YAML they can create their own EC2 instances, their own infrastructure, for testing their applications, with no dependency on operations. Which again leads toward a DevOps approach.
But the challenge here is: how do you provide self-service while still making sure you keep control over that YAML in terms of security? How do you make sure the proper guardrails, rules, and compliance checks are applied, so that whatever gets provisioned is actually good? That's where Kyverno helps. About Kyverno: Kyverno means "govern" in Greek. It's a CNCF incubating project, a policy engine that runs as a Kubernetes admission controller. It's the most popular policy engine by GitHub stars, and we have the largest policy library of any policy engine; if you go to kyverno.io and look at the policies there, you'll see a huge set. Kyverno can validate, mutate, and generate your Kubernetes resources. That means you can put your rules in a validate policy, and whatever request comes in, Kyverno validates it: if it's good it's allowed, otherwise it's blocked. In the same way, if you want Kyverno to mutate a couple of things on some trigger, or generate something, Kyverno can do all those automations for you through policies. So why use Kyverno with Crossplane? Kyverno addresses IaC security risks, which include excessive privilege and compliance violations. Some of the security risks we were discussing: how do you make sure that whatever security group a developer creates doesn't leave ports open? How do you make sure they're restricted to a particular region? How do you make sure they don't have all the privileges and are limited to just what they're supposed to create? Those kinds of checks are what Kyverno applies. It enforces guardrails by deploying policies: you can have validate policies, a couple of policies, that make sure everyone has just the right set of privileges.
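To make the security-group example concrete, a guardrail like that might be sketched as a Kyverno validate policy along these lines. This is a hypothetical sketch: the SecurityGroup kind and the forProvider field paths are assumptions that depend on which Crossplane AWS provider (and version) is installed.

```yaml
# Hypothetical sketch: block Crossplane-managed security groups that open
# an ingress rule to the whole internet. Kind and field paths assumed.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-open-security-groups
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-open-ingress
      match:
        any:
          - resources:
              kinds:
                - SecurityGroup
      validate:
        message: "Security groups must not allow ingress from 0.0.0.0/0."
        pattern:
          spec:
            forProvider:
              ingress:
                - ipRanges:
                    - cidrIp: "!0.0.0.0/0"
```

The pattern entry applies to each element of the ingress list, and the `!` prefix negates the value, so any rule open to 0.0.0.0/0 is rejected at admission.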
You keep the control while still giving the ability to self-serve. Kyverno doesn't only validate, mutate, and generate; it also scans your existing deployments for errors, vulnerabilities, and insecure configuration. Whatever deployments are running today, it scans them and generates a report that tells you: this is failing, this is passing, here's the entire list of violations. That gives you full visibility into your cluster: what's going wrong, what you have to do, what the security posture of your cluster is. When Kyverno blocks a request in a Kubernetes cluster, it returns a message to the developer so the developer gets proper guidance. As a user, if I deploy a YAML and it's blocked, I get a message saying "you're not supposed to do this, you're supposed to do that." Now I have an idea of what to go back and fix in my YAML, what works and what it isn't allowing. So it provides guidance to developers and engineers on remediation and deployment security. And last but not least, writing a policy is easy. It's a simple YAML file. If you're comfortable with Kubernetes and know how YAML works, you can not only deploy Kyverno and start applying policies, you can start writing your own policies from day one. There's no complexity, no coding dependency, nothing like that. Okay, let me jump into a quick demo and bring up my screen. Give me a couple of seconds while I make sure the font size is readable for everyone. Is it visible? Good? Perfect.
So I have a Kubernetes cluster here; let me check it's still up, because I created it yesterday. It's a kind cluster with one node, and I've done some configuration on it: I've deployed Crossplane. Let me quickly show you what that looks like. Here's the Crossplane setup, and my pods are running, so it looks good. I deployed it with a Helm chart; it's available out there, you can check it out. Then there are the providers Crossplane requires so it can talk to AWS. This is the provider I've used: I'm selecting AWS, I've set up my credentials, and it references the secret that holds those credentials. That's how Crossplane actually integrates with AWS. So I have that set up, and I also have Kyverno; let me quickly check. My Kyverno pod is running too. You don't need anything special: just kubectl and Helm, and you can deploy Kyverno. That's it. So let's dive deeper into how you can create infrastructure, with policies around it. I have a couple of manifests; let me bring one up, maybe the instance one. Is the font good? Perfect. So this is a Crossplane YAML that creates an EC2 instance for you. You can see it's simple YAML, and you can understand what it does. It creates a resource of kind Instance, a custom resource, it references a provider configuration, in this case the default one, and under forProvider it takes my subnet ID, image ID, and all the standard configuration.
Whatever you'd normally do in the AWS console when you launch an EC2 instance, all those steps you fill in manually, you can put in this same YAML: the subnet, the security group, the roles. It has a lot of fields, and this is how it looks: a region and a couple more fields. Let me deploy it and show you. Deploying is just applying this YAML, and there you go: my instance is here, and you'll see an instance ID has been generated. It's synced; Ready is false because it's probably still coming up in AWS, but I can check that. So I should see an instance coming up over here, the one ending in 106. Let me verify. There you go. That's how it's created; it's very simple, and it's initializing right now. Crossplane will do its checks, and in your cluster you'll see it marked Ready once those checks pass. So that's how easily you can create an instance with a YAML file, instead of the ten different steps you do manually today. Let me also show you a bucket example I have here; that's a standard piece of infrastructure a developer actually requires from time to time. It's the same kind of YAML. Let me open it in the editor so it's formatted nicely. So this is a bucket, and you'll see the ACLs and the location. If you want server-side encryption, all those settings you use today to create an S3 bucket, you can put them in the YAML and deploy it the same way. You just apply it, and it's created. And if I check: there you go, it's synced.
The bucket name is this one. There are situations where, say, your S3 bucket name isn't unique and you see errors from AWS saying you need this or that. You can simply describe the resource: if I describe this bucket, it shows me the events. So it's not that when something isn't created you're in the dark and have to log in to AWS; you can get all the details right here. Let me refresh, and now I should see an S3 bucket created on the AWS side. There you go, this is the one, created just now. Perfect. So that was instance creation and an S3 bucket. Crossplane also has more advanced use cases, like creating EKS clusters. It has something called composite resources: you can create a YAML that describes an entire EKS cluster, and along with the cluster it creates all the dependencies, the VPC, the subnets, all the prerequisites you'd otherwise have to set up for an EKS cluster. You put it all in YAML, and with just a kubectl apply you can bring up your EKS cluster and offer it as self-service to your developers. Now, this all looks good, but, for example, when I brought up that EC2 instance, or even just the bucket to start with, how do you make sure the bucket is private? There's a private setting in there. How do you make sure a developer isn't making it public, maybe through a typo? How do you make sure the proper guardrails and rules are applied, and that you're giving developers just the amount of control they require? That's where Kyverno comes in, and that's what we can do using policies.
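For reference, the two managed resources from the demo might look roughly like the following. These are sketches, not the exact demo files: the API groups, versions, and field names are assumptions based on the community provider-aws, and a different provider flavor may use different paths; the IDs are placeholders.

```yaml
# Sketch of an EC2 instance as a Crossplane managed resource
# (API group/version assumed; IDs are placeholders).
apiVersion: ec2.aws.crossplane.io/v1alpha1
kind: Instance
metadata:
  name: demo-instance
spec:
  forProvider:
    region: us-west-1
    imageId: ami-0123456789abcdef0        # placeholder AMI
    instanceType: t3.medium
    subnetId: subnet-0123456789abcdef0    # placeholder subnet
  providerConfigRef:
    name: default                         # the ProviderConfig shown earlier
---
# Sketch of an S3 bucket with a private ACL and a fixed region.
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: demo-bucket-unique-name           # S3 names must be globally unique
spec:
  forProvider:
    acl: private
    locationConstraint: us-west-1
  providerConfigRef:
    name: default
```

Both are applied with a plain `kubectl apply -f`, and Crossplane reconciles them against AWS until the Synced and Ready conditions are true.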
Let me quickly show you what a policy looks like before I apply it and show it denying requests. This is the EC2 instance restriction policy, and you'll see it's a Kyverno policy, again just a YAML file. Even with no prior knowledge of Kyverno or Crossplane, if you read through it you'll easily get an idea of what it's doing. It's a ClusterPolicy, a Kyverno policy. There are a couple of annotations here that simply describe the policy, just so it's easy to identify what it does. Then there's the spec, and inside it a rule. The rule matches resources of kind Instance, so it applies only to instances, and it validates the instance type. This policy says you cannot create any instance type other than t3.medium. You see this use case all the time for cost optimization: you don't want some big instance running out there, spending a lot of bucks on a big instance for a small application. You want guardrails, proper checks. So this policy says you can only create t3.medium, and nothing else, period. Let's deploy it and see how it looks. The policy is here; it's ready, and it says Enforce. A Kyverno policy basically has two modes: Enforce and Audit. Audit mode doesn't block any requests; it just generates a report for you. It's a soft limit, where you don't block anything but you still get visibility into the cluster's security posture. Enforce blocks the request and also creates a report for you.
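A sketch of that instance-type restriction policy might look like this. The Instance kind and the `forProvider.instanceType` path are assumptions tied to the installed Crossplane AWS provider; the annotation text is illustrative.

```yaml
# Sketch of the instance-type guardrail described above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-instance-type
  annotations:
    policies.kyverno.io/description: >-
      Only t3.medium EC2 instances may be created, to keep costs in check.
spec:
  validationFailureAction: Enforce   # switch to Audit for report-only mode
  rules:
    - name: only-t3-medium
      match:
        any:
          - resources:
              kinds:
                - Instance
      validate:
        message: "Using an instance type other than t3.medium is not allowed."
        pattern:
          spec:
            forProvider:
              instanceType: t3.medium
```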
So this one is Enforce, which means it's going to block. Now let me delete the instances I created, and then I'll try creating an instance again. Let me quickly check the instance type in my manifest: you'll see it has m1.small, which means Kyverno should block it. I'm just doing a little cleanup; it takes a while to shut down, so I'll give it a moment to delete the instance. Okay, I don't have any instances now. Let's try applying it and see what happens. Now that my instances are cleaned up, I apply it again, and it tells me: error from the server, validation failed, Kyverno denied the request. It also shows you the error message, and you can actually customize this message; it was right there in the message section of my policy. Here it tells you that using an instance type other than t3.medium is not allowed. So as a developer I immediately understand: they only allow t3.medium, they don't allow m1.small, so let me go back, change it, and come back. It gives developers guidance to remediate their YAMLs. I also have one more policy, an S3 policy; let me bring that up as well. This is the S3 restriction policy, again a ClusterPolicy with a couple of descriptions, again in Enforce mode, though you can switch it to Audit. It's a validate policy that applies only to Buckets, and the validate rule says the location constraint must be us-west-1.
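That region rule might be sketched roughly as follows; again, the Bucket kind and the `locationConstraint` path are assumptions based on the community Crossplane AWS provider.

```yaml
# Sketch of the S3 region guardrail: buckets outside us-west-1 are denied.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-bucket-region
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-us-west-1
      match:
        any:
          - resources:
              kinds:
                - Bucket
      validate:
        message: "Buckets may only be created in us-west-1."
        pattern:
          spec:
            forProvider:
              locationConstraint: us-west-1
```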
So there you want your developers to stick to only one region, just to maintain hygiene and make sure there are no stale resources sitting in other regions. With this policy, a developer can only create a bucket in us-west-1; anything created in any other region is blocked. I'm not going to apply this one; it behaves the same way as the instance restriction policy did. But let me show you one more, a generate policy; let me show you what its structure looks like. This is an auto-create-S3-bucket policy, and it's an interesting one. You'll see it's a generate policy: there's a rule, again matched on Instance, and it's going to generate a resource for you, in this case an S3 bucket. So whenever an instance is created, that acts as a trigger and Kyverno generates a bucket for you. Using these kinds of generate policies you can take care of all sorts of automation use cases: you use one event as a trigger, and Kyverno generates the resource for you. So yeah, that was the demo. Let me go back to the slides. These are the couple of policies we just saw; some screenshots of the YAML files. Let's talk about the summary. What we discussed in today's session is that IaC is necessary: we want to codify and automate everything and have version control, all of that. But there are challenges, like security risks, and like wanting a declarative-style approach when using IaC with Kubernetes. Crossplane helps address some of those challenges. Crossplane runs on Kubernetes.
It uses the same API, so it's Kubernetes native and easily integrated into Kubernetes. But what about the security risks? That's where Kyverno helps: it addresses the security risks and provides proper rules and guardrails. So you can use Crossplane to provide self-service, and you can use Kyverno to make sure you keep proper control over that self-service. So yeah, thank you for paying attention and joining me today. Are there any questions? We still have five minutes, and if I know the answer, I'll answer. Question: when policies are defined, and as people start using Crossplane to provision their infrastructure, is there a way, or an existing solution, for reporting on the violations people are bumping into? Something like "people are trying to create S3 buckets this way and we have a hundred violations for it"? Yes, there are; let me quickly find a command. You can get the policy reports, and in fact Kyverno has integrations for Grafana and Prometheus, so you can put all the policy results into a more structured dashboard. Or you can see the policies right here: you can fetch the policy report with this command, and if you describe it, you'll get a report showing what is passing and what is failing. Right now there are no violations, but that's where you'd see them. Next question: what is the advantage of using Kyverno over a composition, where you specify a default value for what you want the dev team to be doing? Over the compositions Crossplane has, defined by a platform team? That's a very good question. With compositions, as in the more complicated use case I was talking about, you create one YAML and you get an EKS cluster. But you may know that AWS publishes best practices for EKS clusters.
They have best practices like: your EKS cluster should be private, your cluster should only have certain security group ports open. So how do you make sure the YAML isn't accidentally public? That's where you create a Kyverno policy that checks the YAML, checks the kind, maybe a security group, an instance, an EKS cluster, and throws an error or blocks the request. Next question: can you add additional policies so that, for example, certain users or groups can't deploy in us-west-1 but other groups can? Can you provide different policies for different groups of people? Yes, definitely. Let me bring something up so I can talk through it. This is what the policy structure looks like: a policy has a rule, and the rule has a match and an exclude condition. In match, I matched on the Instance kind, but if you have exceptions, like this user is an admin and is allowed to do a couple of extra things, you can put that in the exclude section. So yes, that's doable; you can use the match and exclude sections to build your own policy. Next question: I'm interested in the self-service model. If I want to allow my developers to deploy their own infrastructure, how can I communicate what privileges they have, without necessarily broadcasting what privileges everybody else has? So what you can do is take whatever standard compliance rules your organization has and define them in Kyverno, and anything outside them simply gets blocked at admission. Your Kyverno policies are owned by your SecOps team; they administer and manage the policies, whereas developers only have the Crossplane YAML. If their YAML isn't right, it gets blocked. Was I able to answer your question?
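A sketch of a rule with both match and exclude, as described in that answer, could look like this: the instance-type restriction applies to everyone except members of a hypothetical admin group (Kyverno can also exclude by user, service account, or cluster role).

```yaml
# Sketch: same instance-type guardrail, but with an exception for admins.
# The "platform-admins" group name is hypothetical.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-instance-type-with-exception
spec:
  validationFailureAction: Enforce
  rules:
    - name: only-t3-medium
      match:
        any:
          - resources:
              kinds:
                - Instance
      exclude:
        any:
          - subjects:
              - kind: Group
                name: platform-admins   # hypothetical exempt group
      validate:
        message: "Using an instance type other than t3.medium is not allowed."
        pattern:
          spec:
            forProvider:
              instanceType: t3.medium
```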
Okay, so in a Kyverno policy there is something called the message. There's a section here; you saw earlier, when a certain request came in, it told me "this rule is not allowed". That's exactly what your developer is going to see, and it comes from this message field over here. You can customize it, and that's how you communicate what you want to your developers, using that section. Next question: can you use Kyverno with raw Kubernetes manifest files, or is it Crossplane only? Let me repeat what I understood: can we use Kyverno for any Kubernetes resource, or just for Crossplane? Yes, you can use Kyverno for any resource. If you go to kyverno.io and look at the policies section, we have 232 policies covering everything; you can have a policy for any use case, and here's a simple one applied to namespaces. This demo was about integrating Crossplane and Kyverno, and self-service was what we discussed, but you can use Kyverno for any Kubernetes resource in your cluster. We have three more minutes, so one last question if anybody has one. I think there's one here. Question: if I have a resource I'm trying to create that fails multiple policies, do I get all the failure messages, or just the first one it finds? Let me make sure I understand: when your request is blocked, is the failure result for one policy or all policies? Right, say I'm trying to create a bucket that's public and in the wrong region, so I'm failing both because it's public and because it's in the wrong region.
Will I get two messages, or will I get "you can't put it in that region", and then I fix the region, try again, and only then get "it can't be public"? So you'll get the validation errors that apply to the YAML you're submitting: whatever violations your YAML has, you get messages for those. And beyond that, you have the policy report: using the describe command, you can inspect the results. Right now I don't have any policy audit entries here, because I'm not running many resources on this cluster, but with this command you get all the policies, and with describe, specifically the results, what is failing and which policy it was, so you can dig deeper and break down the description. Okay, we have just two minutes, but that was the last question. I think there are a couple of questions here I want to follow up on. One is on the compositions part. I think compositions make it easy for developers to know upfront, at authoring time, what they can and cannot do, because you can have OpenAPI schemas, validations, and so on. Whereas this is more post hoc: you write your YAML, it goes all the way to the deployment step, you deploy, and only then do you find out that your YAML doesn't adhere to the policy. So, as someone asked here, what's the best way to know whether what I'm entering adheres to the policy? If I have to choose between compositions and Kyverno, what are the pros and cons of each? Yeah, so as I said earlier, the messaging really helps you communicate to your developers why a policy is failing.
And you'll see that they don't have to debug or troubleshoot why it's failing; the message makes it very clear that something in their YAML is wrong. Whatever YAML you apply and whatever it violates, you'll get the exact message for that. There might be other policies that apply to other YAMLs, but you won't see those; you only see the restrictions from the rules applied to your own YAML. For more visibility, you can integrate the reports with Grafana and Prometheus and get better visualization. But if the question is how to communicate, the message is the best option; you can do that. Thanks for joining; do drop by the Kyverno booth and say hello if you're passing by. Thank you.