And here we go. Hi, everyone. Thanks so much for joining us today for the CNCF live webinar, "Data Protection Guardrails Using Open Policy Agent." I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Joey Lee with Kasten by Veeam and Anders Eknert with Styra. A few housekeeping items before we get started: during the webinar you are not able to speak as an attendee, but there's a chat box, which I think you've all found, where you can drop your questions, and we'll get to as many as we can at the end, or intermittently if we have time. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under Online Programs. They're also available via your registration link, and the recording will also be available on our Online Programs YouTube playlist. With that, I will hand things over to Joey and Anders to kick off today's presentation. Thank you very much, Libby. Good morning and good evening everybody, depending on where you are. My name is Joey Lee. I'm one of the product managers with Kasten by Veeam. I'm joined here by Anders Eknert, who is a developer advocate from Styra. And we're going to be talking to you about a new concept called data protection guardrails, which is focused on helping you catch misconfigurations in your data protection environment. We're going to talk about how to back up, how to think about disaster recovery solutions, and we're going to talk about how to do that with the Open Policy Agent. And I'll let Anders introduce himself. Yeah, thanks, Joey.
Hey, I'm Anders, and I work as a developer advocate for Styra, which is the creator of the Open Policy Agent project. Today we'll talk about using OPA for the purpose of data protection; I'll be covering the OPA side of things and Joey will talk more on the topic of data protection. So just to get started, we'll see here what's going on. Okay, so before we get started on OPA: what is policy as code, or what is even policy? OPA is an open source, general-purpose policy engine. Before we dive into what that entails, it's good to remind ourselves what policy is and what policy as code is. Basically, policy is a set of rules. These rules could be anything from organizational rules; permissions for authorization; Kubernetes admission control rules governing what can be deployed or not; infrastructure policy and infrastructure rules (what kind of instance types should we allow, what kind of security configuration, and so on). Build and deployment rules are something we're going to cover today as well: we will show how we can use policy as part of our CI/CD pipeline to enforce things that apply to the data protection space. Data filtering is another use case for policy, and there's so much more. Basically, anywhere you have rules, that's pretty much where OPA shines. So that's policy; why would we treat it as code? The simple answer is that treating policy as code provides all the benefits of treating anything as code. We can work with our policy, i.e. our rules, in a collaborative manner: we can work with pull requests, we can test our policies in isolation, we can work with tooling like static analysis, linters, and so on. So no more PDF documents; our policy should be code just like anything else.
And when we talk about policy as code, we often talk about decoupling, meaning that just as we can decouple storage from our application and move it into a dedicated database, we want to treat policy the same way. Ideally our applications and business logic should not need to deal with authorization, or perhaps not even users; that should be handled by a separate system. So that decouples and moves responsibility out of applications and into a dedicated entity like OPA. So that's policy and that's policy as code. What is OPA then? OPA is an implementation of all of these ideas. It's an open source, general-purpose policy engine, and as of February last year it's a graduated CNCF project. It offers a unified tool set and a framework for working with policy across the whole stack. So even if the topic of today is data protection, OPA is general purpose, so it's meant to cover all of the places where we might need rules. The idea is to bring a unified platform and framework for working with rules and policy. OPA decouples policy from your application logic, so your application would commonly query OPA rather than do things like authorization itself. One thing to note is that OPA separates the actual decision from enforcing it. If you query OPA for a decision, OPA is going to give you a response, but it's still up to you to do something with that response; that is the actual enforcement, and that is, of course, highly dependent on the kind of context you're working in. One type of enforcement, which we're going to look into later, is for example in a build pipeline, where you might want to say that you cannot merge a PR unless the manifest of a resource passes all the policy checks. Policies are written in a declarative language called Rego, and that's the glue that binds all these diverse kinds of policies together.
And we'll look into what that looks like later too. So OPA, again, is an open source project. It's got a huge community and a big ecosystem of tools. There are currently over 250 contributors to the project and over 70 integrations listed: everything from Java applications, PHP, databases, Kafka, and so on. Pretty much anywhere there's a need for policy, there's probably going to be an existing integration, and more are added pretty much weekly. OPA is used by over 800 GitHub projects, so it's very common to integrate it for authorization or for any of these other purposes. 6,600 GitHub stars, almost 6,000 Slack users, and more than 130 million downloads. There's an ecosystem which includes not just OPA, the actual policy engine, but also things like Conftest for running policy against files; there's the Gatekeeper project, which is OPA applied to the Kubernetes admission control space; there are editor integrations for VS Code, IntelliJ, and so on. So it's a big ecosystem. This quote from Kelsey Hightower summarizes the benefits OPA brings: "The Open Policy Agent project is super dope. I finally have a framework that helps me translate written security policies into executable code for every layer of the stack." That's kind of what OPA does and what OPA is. So if that is what OPA does, how does it work? There are two aspects I tend to focus on: the policy decision model, and Rego, the policy language. If we start with the policy decision model, this is key to how OPA can integrate with all these projects that really weren't built with OPA in mind, and which are, of course, enormously different from each other. It's a very heterogeneous tech stack, with everything from Linux PAM modules to Kafka or databases. So the way it works is that any service, and by service we have a very broad definition that covers all of these things.
It's basically anything that can serve a request from a user or another service. When that request is received, the service passes it to OPA in order to have OPA make a policy decision. And this policy query is basically just JSON, any JSON value. OPA looks at that query and, based on the policy and any data it has loaded, it makes a policy decision. The policy decision is also just JSON. So pretty much any service that understands JSON can communicate with OPA. The next aspect, which makes OPA work for all these different technologies, is Rego. It's a high-level policy language which is generic enough to work with pretty much any JSON or YAML data, while still being tailor-made for the policy space. It allows you to describe policy across the whole cloud native stack. And again, just like a real-world policy, a Rego policy is just a number of rules. These rules commonly return true or false (should the user be allowed or not), but they could return any type of data available in JSON: strings, lists, objects, and so on. OPA includes a testing framework, so you can run tests directly on your policy and build confidence in your policy just as you would with any other code. It's a well-documented project, so check out the official docs. And there's also a playground, so you can try policy authoring without even having OPA installed. So with that, I'm going to hop over here to an editor just to show you the basics of Rego. I hope you can all see my screen here. I'm going to create a Rego file called policy.rego. The first thing to do is to create a package, which is similar to a namespace in other languages; I'm going to call it policy. And now let's write our first rule. The first thing we do for a rule is provide a name.
In this case, I'm going to name my rule allow. The way rule evaluation works is basically: you have the rule head here, which includes the name, and optionally you provide a return value. So in this case, we might want to say that allow is equal to true if all the conditions, or assertions, provided in the body are also true. So if we do something simple like "one is equal to one", that's obviously true. If we now go and evaluate the allow rule, we should hopefully see that this rule evaluates to true. I'm going to run opa eval here, a simple command line tool to evaluate any value from a policy. I'm just going to say data.policy.allow. We can format that a little nicer with the pretty format. And we can see that, indeed, this is true. If we change this to something that's not true, we can see that the result is now undefined. So when a rule does not evaluate, OPA simply says this is undefined. If we want to ensure that there's always some value returned, we can provide a default one: we can say that by default, allow is equal to false. And now we see that we will always get something back: either true or false, so the user is either allowed or not. What we might want to do here, of course, is something more realistic. Remember that when we query OPA, we can provide any type of JSON document. So I'm going to add an input document here to simulate some data: a user, and that user has some roles, and one of those roles is going to be the admin role. And I'm going to have my policy check whether one of the roles is admin: I'm going to say "admin" in input.user.roles. For this, I'm going to have to import the in keyword, which is part of the future keywords. And that's that.
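Pulling the steps of that live demo together, the finished policy.rego might look roughly like this (a sketch of what was shown on screen):

```rego
package policy

import future.keywords.in

# Deny by default so the rule always returns a value.
default allow = false

# Allow if one of the user's roles is "admin".
allow {
    "admin" in input.user.roles
}
```

With an input.json of `{"user": {"roles": ["admin"]}}`, the evaluation shown in the demo would be something like `opa eval --format pretty --data policy.rego --input input.json "data.policy.allow"`.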
So if the user is in the admin group, or has the admin role, this should be allowed. And we can see... oh right, that's because we did not provide the input file here, so I'm going to pass the input JSON. We can see now that, providing this, OPA evaluates the allow rule and returns the provided return value. But this could be any JSON value: we could return a string that said yes or no, or even a complex nested object. In this case, I'm just going to settle for true. So if we change the roles here, let's say this is a developer, and we evaluate again, we'll see that the decision is now false because the user no longer has the admin role. So that's a super simple policy, the anatomy of a policy and the anatomy of a rule, in a few minutes. Alright, I'm going to head back to the presentation here, and I think it's over to you, Joey. Hey Anders, there's a question in the chat from Patrick: could we integrate OPA with Azure Active Directory? Yeah, for sure. Pretty much anything that exports anything as JSON or YAML, or that can be transformed to do so, is a viable integration, so for Azure AD or any Active Directory, yeah, you can definitely do so. The way you'd commonly do it is to either provide the data from your Active Directory in an access token or something similar, maybe a JSON Web Token that is included as part of the query; but you can also provide data to OPA beforehand, since OPA has an in-memory store, so you could mirror the parts of your Active Directory that might be relevant for policy evaluation. Alright, cool. Hey Anders, can I just share my screen? Yep. Okay. You guys can see that? Okay, so let's talk about guardrails for data protection. Now, the one thing to learn about data protection is that it's all about the application data itself and the backups of application data.
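As a sketch of the token approach mentioned in the answer (the input attribute name and the groups claim are assumptions, and signature verification is omitted for brevity; a real policy would use io.jwt.decode_verify against your Azure AD signing keys):

```rego
package policy

import future.keywords.in

default allow = false

# Decode the JWT passed along with the query;
# io.jwt.decode returns [header, payload, signature].
claims := payload {
    [_, payload, _] := io.jwt.decode(input.token)
}

# Allow if the token's groups claim contains "admins".
allow {
    "admins" in claims.groups
}
```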
Historically, data protection and data management solutions have focused on availability: quick recovery, disaster recovery, the business continuity of application data. But there's more to it than that. The security attacks these days are full-compromise types of attacks: things like exfiltration, things like ransomware. The concerns our customers have been raising to us have largely been full spectrum: confidentiality issues with a set of data, the privacy of data; integrity issues, which really affect whether the data the application is presenting is legitimate or has been manipulated by an adversary or corrupted, in the case of an integrity attack; in addition to the availability requirements. And so we need to start thinking about the data protection objectives much earlier in the process. Historically, it's a day-two operation, right? Let production be deployed, let it be provisioned, and hand off that responsibility to a data protection team or operations engineering staff. But there are definitely techniques to get the business to start thinking about the data protection objectives well in advance, early in the dev cycle, and these are a few of the things that we want to show you. Now historically, there are two ways to control and enforce access to different objects. We have role-based access control: that's more of a granting technique, an authorization technique, to say, hey, this admin can read the PVCs, the stateful data, the services, the secrets. But sometimes we want to actually enforce or deny behaviors, and this is going to be more specific to the data protection objectives.
Think of the recovery point objective, the RPO; think of the RTO, the recovery time objective; think of retention. If you're in a specific industry subject to compliance, like HIPAA or PCI, you have specific retention requirements. And if you have confidential or sensitive data that you know is going to be a target for an adversary, you're going to want something called immutable backups. So it's really important to enforce the existence of these objectives upon your data protection infrastructure as code, and we're going to show a few pseudocode examples of how you would implement those guardrails using OPA itself. The first example applies if you're a developer using a technique called GitOps. GitOps is where we check our infrastructure as code into a Git repository, and that represents the source of truth for how a new production environment should be stood up, how a new auto-scaled node should be stood up. That includes, of course, your production infrastructure as code (your stateful sets, your PVCs, your secrets, your network policies), and it also includes your data protection infrastructure as code. And if this code is peer reviewed, you can also have it checked by the policy engine, the OPA policy engine. It's going to inspect your data protection code to see that it's meeting the right data protection objectives. So you can see in the example we have here on the right, we have some example policies and an example backup target. In that policy, you would want to look for things like 3-2-1. 3-2-1 is where we ensure that we have multiple copies of data, we've exported it to an offsite cloud location for disaster recovery purposes, and we're meeting our compliance requirement for retention, maybe seven years. But it also has an RPO that actually meets the mission-critical objective: it might be minutes, it might be hourly; in some cases, it could be just daily.
But we want the policy to actually look at that; we want to look at it and enforce that when an application developer is deploying a new application, it's also, of course, deploying the right backup policies. In the bottom example here, on the backup target, we've got a bug where the object immutability was accidentally left out. I commented it out here so I can actually show you the bug. In a lot of cases, we might just stand up a backup policy and forget about these advanced configuration settings, and that ultimately is very dangerous, especially in the case of a ransomware attack. If you don't have immutable backups, it means the adversary can obviously steal your data, but they can also delete your backups. And if they can corrupt production and delete your backups, you now have a double failure situation where you can't recover your data. That misconfiguration is particularly dangerous, especially in data protection. So how does this work exactly? Let's say you're using the GitOps workflow; some of you are doing that, and that's great. Let's say you have a developer who wants to create a new tier-zero, mission-critical data protection policy for their app. They write some prod code: their secrets, their policies, their stateful sets, the provisioning of those PVCs; and they write the data protection policy and the data protection backup targets. They commit it all to a new branch in the Git repo, and then they initiate a pull request. That pull request goes off to other developers on the team, maybe the cloud platform ops team, maybe the data protection team, to review it against our standards. And in the background, if you have the OPA engine integrated, as well as the policy code, OPA is going to give you instantaneous feedback about whether it meets your objectives or not.
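The buggy backup target described above might look something like this (a hypothetical manifest; the kind, group, and field names are assumptions modeled on typical backup CRDs, not any specific product's schema):

```yaml
apiVersion: backup.io/v1
kind: BackupTarget
metadata:
  name: prod-s3-target
spec:
  type: aws
  bucket: prod-backups
  region: us-east-1
  # objectLock: true   # accidentally commented out: backups can be deleted
```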
And once all those reviews are signed off, once the OPA engine has actually validated that the code passes the policy, and once you merge it back into the main branch, you're now going to have secure data protection and production infrastructure code to deploy to any region, to any new clusters, from day zero and day one. Now, the second way to leverage the guardrails, maybe for more mainstream audiences, is if you have a dedicated cloud team or a dedicated data protection team who is writing this code to hand off to an application developer. The application developers may not necessarily be seasoned experts in writing data protection policies, but they don't need to be. They can easily consume, for example by name, our gold standard policy, or our bronze or silver level policies. Make it easy for them. That's how I always like to think of it when you're consuming security and data protection: make it easy for your application developers, and you're going to have a great partnership in deploying a secure platform and delivering your applications to your end users. This concept is called admission control, and for OPA you would achieve this through the Gatekeeper project. This is helpful if you want to apply your protections a little bit closer to staging or production: you want the Kubernetes API server to not only do the authentication and authorization according to RBAC, but to have an extra layer that actually ensures that, for example, that backup policy or that backup target is actually meeting the data protection objectives. That's called admission control: the API server sends a request to the OPA policy agent, which, using the Rego language, gives back a decision that says, hey, this API object, this code, meets or does not meet our compliance requirements.
So let's talk about some example policies you should be thinking about if you're responsible for deploying data protection in your environment and you want to leverage infrastructure as code as a way to accelerate and get to market faster. Here are some of the policies that we recommend deploying. A basic primer on data protection: it usually centers around two things, recovery point objectives (RPO) and recovery time objectives (RTO). Put simply, how much data you would be able to tolerate losing in a data loss scenario typically states your RPO requirement. So if your RPO is one hour, your business is saying you can only tolerate about an hour of data loss in the event of a disaster or security incident. Moving to the right is the RTO, the recovery time objective, and this usually states the continuity requirements: how much time should pass before an end user might notice, or before it starts to disrupt your core business activities. Today this is negotiated with a general manager, a business leader, somebody who owns the application. It's very important to get alignment with your business stakeholder, because this ultimately affects revenue, adoption, retention and churn, and the brand loyalty of your customers. So define these data protection policies and make sure they align with the business; that's going to help you when you start to author them directly in code, because you can actually show the code back to your stakeholders and ask, hey, does this meet your business objectives? The first example here is exactly that: it's about the recovery objectives. In this pseudocode example, we have a backup policy that's tuned to do hourly backups, and that's great for mission-critical workloads; a general-purpose workload might accept a daily backup, 24 hours.
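An hourly backup policy like the one described might be expressed as a manifest along these lines (again hypothetical; the kind, group, and field names are assumptions for illustration):

```yaml
apiVersion: backup.io/v1
kind: Policy
metadata:
  name: tier0-hourly
spec:
  actions:
    - hourly        # backup frequency, i.e. an RPO of one hour
  retention: 7y     # compliance-driven retention
```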
In this case, we also want to make sure that there's availability of that data, so that we have a copy of it on a third-party cloud array, object store, or NFS target somewhere, so that if production fails, you have a secondary target, of course. So that's what this is looking for, and depending on the backup software that you use, you might have different objects you'd want to target: there are policy objects, scheduling objects, cron job objects. You're typically going to want to look for that RPO and validate: is it lower than or equal to what you expect from a data loss perspective? So for the Rego code you can see here, maybe Anders can explain a little of what it's actually doing. Go ahead. Yeah, sure. Okay, so we have an allow rule here, pretty much like the one we showed in the example before. In this case, we're just going to check the type of the input: that the resource is of the type Policy. It's a little confusing, because it's actually a backup policy, so this doesn't check a Rego policy; it verifies a backup policy. Yeah, just to check that the kind of the request is Policy and the group is backup.io. And what it does on the third line here is reference another rule. Rules are composable, so one rule can reference any number of other rules, and if they all evaluate to true, the allow rule will evaluate to true. So in this case, allow is only going to be true if the has_backup_policy rule evaluates to true. The has_backup_policy rule below is kind of simple: we just take the spec from the input. And I should say here that the input in this case is a Kubernetes AdmissionReview object, which is what the Kubernetes API server sends to OPA for validation, and that explains the structure of the input, the request object.
So, for the object, we use "some action in", so we're iterating over all the actions provided (and I think we're missing something here), and if there's one that matches, that is equal to hourly, we say that we have an acceptable backup policy. Paired with that, the other most common thing to look for, and this is particularly important if your data is typically a target: adversaries, especially ransomware operators, are looking for sensitive data that they know is valuable, has purpose, has business value, and they're looking to hold it hostage by encrypting that data. So ransomware protection is also known as immutability or air gap, and it really depends on your backup product and whether you're in a private cloud or a public cloud scenario. Typically in a private cloud, you would go for an air-gapped backup, meaning there's no network access to that backup on a regular basis unless authorized by a privileged admin, for example. But if you're working in public cloud, of course, everything is always online, so providers typically have a write once, read many (WORM) object store; S3 calls this an object lock bucket, for example. And the backup software, if it supports object lock, will enable writing those backups to that object lock store so that it can create the property of immutability. So depending on how that code looks, you might have an advanced configuration value. In this example, we have a configuration value, objectLock: true, which might configure the bucket to use object lock and then, of course, send the backups to that same bucket. And I think the Rego code here is also pretty straightforward, right? It's looking for that specific action or advanced configuration and asking: is it true? Is object lock being enabled?
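Reconstructed, the allow rule walked through above might look something like this (a sketch only; the resource kind, API group, and spec layout are assumptions based on the slide):

```rego
package policy

import future.keywords.in

# Allow only backup policies that meet our RPO objective.
allow {
    input.request.kind.kind == "Policy"
    input.request.kind.group == "backup.io"
    has_backup_policy
}

# True if any of the policy's actions is the hourly one.
has_backup_policy {
    spec := input.request.object.spec
    some action in spec.actions
    action == "hourly"
}
```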
If it doesn't find this, then you're putting the backups at risk of deletion or corruption. Some of the resource types you might have here are backup targets, locations, location profiles, PVCs, object targets, NFS in general; it varies, of course, depending on what software you're using. The third example is related to the availability of your backups. A common disaster recovery policy would have your production, your copy of production (which is sometimes a storage snapshot), and then an ability to copy that storage snapshot to an offsite cloud location. That's what we would call 3-2-1: having three copies of your data, two of them on different storage targets (prod and secondary), and one of those, of course, being in another region or another offsite location. You can extend that concept further if you want to include the air gap or immutability property, with something called 3-2-1-1-0. And again, it varies by data protection software: if it supports immutability, you can have a check for that; if it supports data verification, verification that those application backups can actually be restored, you'll want to write your policies to ensure it's able to kick off that test. And again, the Rego here is very straightforward. It's looking for two actions in the actions array: a backup action followed by some kind of copy action or export action to another object storage location. If it doesn't find both of those specific actions, it's going to reject admission; it's going to make a decision that says this doesn't comply with our policy. And the last one I want to leave you with is a little bit less common and a little bit more unique, a scenario that is starting to come up more and more in the ransomware attack pattern. It's not enough to just get your data encrypted.
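The two checks just described, immutability and 3-2-1, might be sketched as deny rules like these (kinds, field names, and action names are assumptions for illustration):

```rego
package policy

import future.keywords.in

# Deny AWS backup targets that enable neither object lock nor an air gap.
deny["AWS backup target must enable object lock or air gap"] {
    input.request.kind.kind == "BackupTarget"
    input.request.object.spec.type == "aws"
    not input.request.object.spec.objectLock == true
    not input.request.object.spec.airGap == true
}

# Deny backup policies whose actions don't include both a backup and a copy.
deny["policy must include both backup and backupCopy actions"] {
    input.request.kind.kind == "Policy"
    not has_backup_and_copy(input.request.object.spec.actions)
}

has_backup_and_copy(actions) {
    "backup" in actions
    "backupCopy" in actions
}
```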
Nowadays the adversaries are also stealing data, and they're using a variety of techniques to do that. They might steal it directly from production, where they've breached a pod and escalated privileges to the node; that's a bit trickier to protect from. The alternative is that they've breached the data protection infrastructure and gotten credential access to a restore admin or a privileged admin. This attack is what we call living off the land. Living off the land attacks are particularly dangerous because they use legitimate software to perform malicious acts, and most detection software is typically not tuned to detect incidents involving trusted software. So if, for example, your data protection software is compromised, or access to your data protection software has been compromised, then the attacker can use the restore functionality to exfiltrate the data itself. Depending on how the data protection software creates backup jobs or restore jobs in Kubernetes, those would typically be another resource, and you can actually track that resource using OPA. You can write a Rego rule, for example, that says: for any restore job that's created, let's look at where it's going. You might want to create an allowlist that says these are approved locations, approved namespaces, approved cloud locations. The beauty of the Rego language here is that it's very flexible in how you want to author it. So in this case, for exfiltration protection, you might want to write a rule that says: if we see a restore job going somewhere we don't expect, shut it down. Don't let that job actually kick off, and that's going to prevent any exfiltration of data through the backup application. So I'm going to double-check to see if there are any questions, because we do have a demo for you that's actually going to show the data protection guardrail concept in a CI/CD pipeline.
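Such a restore allowlist rule might be sketched as follows (the RestoreJob kind, the targetNamespace field, and the namespace names are assumptions for illustration):

```rego
package policy

import future.keywords.in

# Namespaces a restore is allowed to target (example values).
allowed_namespaces := {"prod", "dr-restore"}

# Deny restore jobs headed anywhere we don't expect.
deny[msg] {
    input.request.kind.kind == "RestoreJob"
    ns := input.request.object.spec.targetNamespace
    not ns in allowed_namespaces
    msg := sprintf("restore to namespace %q is not allowed", [ns])
}
```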
So let me just stop sharing for a second here. Any questions at all about policies or anything we spoke about so far? Okay, then we can go ahead and just get to the example, Anders. All right, great. Yeah, so we have some basic knowledge of Rego and what the rules look like, and as Joey showed, we also know what these kinds of backup policies look like. And again, given how OPA is agnostic, a general-purpose framework, we can work with any type of JSON or YAML data. So, in this demo repository we have manifests, which are the YAML files that we want to deploy to Kubernetes, and we have some policies with rules like "any backup action must be accompanied by a backup copy" and so on. So we have a bunch of manifests, and we have a bunch of policies. For example, let's check the ransomware one, where we're checking, as Joey demonstrated, that for any AWS backup target, we should deny the deployment if it does not have either an object lock or an air gap type. Or for the recovery scenario, we're just going to check that for any policy, there needs to be a backup. And for the restore job, we want to ensure that the restore can only be performed into one of our allowed namespaces, so we protect ourselves from the exfiltration scenario. One thing to note here: when we look at all these manifests, their format is going to be a bit different from what we'll actually be working with when the manifests are presented to us by the Kubernetes API server. The way the Kubernetes API server presents them, again, is in the form of AdmissionReview objects. So we could, of course, spin up a Kubernetes cluster, have OPA installed as the admission controller, and test things that way.
But in the context of a CI/CD pipeline, we're going to want some form of quicker feedback. So in this case, I've written a little tool called kube-review, which simply takes any Kubernetes manifest — like a deployment, in this case — and turns it into an AdmissionReview object. In that way, we can shift the testing left: we create AdmissionReview objects, push those into OPA directly in the CI pipeline, and run our tests on those objects. The way we've done it in this demo repository: if we check the PR workflow here, we'll see it runs on pull requests. We're checking out the manifests, checking out the policies, and downloading OPA using the setup-opa project, which is basically just a downloader for OPA in the context of GitHub Actions. Then we download kube-review and run the validate script, and of course, if this fails, the PR is going to fail or show errors. If we check the validate script, there's some scaffolding to check that the kube-review tool is installed and so on. What we do is iterate over all the manifests in the manifest directory, and for each of these, using the kube-review tool, we create an AdmissionReview object, which would be identical to what the Kubernetes API server would present us. So we can run the same policies here as we would in a later deployment step, but we get fast feedback without having to run a Kubernetes cluster in the build step. We'll obviously want that later, in the deployment, but for quick feedback and quick iteration, we just want to run something to verify that the manifests are fine. So, again, you might remember the opa eval command, which is a simple way of just evaluating a policy. What we do here is, for each of these manifests, we pipe it into opa eval and run that against all the policies in the policy directory.
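The workflow and validate loop described above might be sketched roughly like this. The repository layout (`manifests/`, `policies/`), the policy package path `data.dataprotection.deny`, and the exact kube-review and opa eval flags are assumptions — check the demo repository and the tools' own documentation for the real invocations:

```yaml
name: pr-checks
on: pull_request
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: open-policy-agent/setup-opa@v2   # downloads the opa binary
      # (a step that downloads the kube-review binary would go here)
      - name: Validate manifests against policies
        run: |
          for manifest in manifests/*.yaml; do
            # Turn the plain manifest into the AdmissionReview object the
            # Kubernetes API server would send, then evaluate the policies
            result=$(kube-review create "$manifest" \
              | opa eval --stdin-input --data policies/ \
                  --format raw 'data.dataprotection.deny')
            if [ "$result" != "[]" ]; then
              echo "policy violation in $manifest: $result"
              exit 1
            fi
          done
```

The key point of the design is that the same policy files run unchanged both here and in the cluster's admission controller later, so the pipeline check can never drift from the deploy-time check.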
And in this case, we have one rule that aggregates the results from all the rules, and we check whether it's an empty array. If we look at the policies, you can see we're calling all of these "deny", and these rules look a little different: they have a string in the rule head, which means these are partial rules. You can have any number of deny rules, and when evaluated, each rule whose body is true adds its string to a set. So it's basically a way to have your rules return not just true or false — what we want here is to provide a reason, a message back to the user, telling them why this didn't work. So if we change something here — say we remove the backup copy — and run the validate script, we'll see that when validating the recovery YAML file we have a policy violation, because the policy resource must include both the backup and the backup-copy actions. Very fast feedback: I can modify my manifests, try things out, and just run the validate script to have OPA evaluate the manifests. And of course, on this feature branch, where I now have one changed file, I'll add that commit — "remove backup copy action" — push it, and create a pull request. Now, hopefully, the GitHub Action is going to run the PR checks, which run the same validate script we just ran locally, so we should get pretty fast feedback on what went wrong. And we see that, okay, OPA did not validate the resource. If we check the details, we see pretty much the same output as we did locally, and GitHub will now not allow us to merge this pull request. So, a pretty simple way to verify manifests.
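A partial rule of the shape described here might look like the following — the message matches the demo's output, but the `Policy` kind and the `spec.actions` layout are assumed for illustration:

```rego
package dataprotection.recovery

# Partial rule: each matching rule body adds one message string to the
# "deny" set, so evaluation returns reasons rather than just true/false.
# The Policy kind and spec.actions shape below are hypothetical.
deny[msg] {
    input.request.kind.kind == "Policy"
    actions := {a.action | a := input.request.object.spec.actions[_]}
    actions["backup"]
    not actions["backup-copy"]
    msg := "Policy resource must include both backup and backup-copy actions"
}
```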
There are obviously more elaborate tools, like Conftest and so on, but for the purpose of the demo I wanted to keep it basic and show how you can use simple tools like opa eval as part of the build process. Yeah, I think that's pretty much it, if there are any questions. So Anders, in that scenario — say I'm a platform developer writing code for deploying into a new region or a new mission-critical environment — I don't necessarily have to wait for other people to give me feedback, right? OPA is going to give me feedback, and I can fix the bug right there, potentially without any human interaction. Yeah, exactly. GitHub is going to tell you this PR cannot be merged due to the validation errors that OPA returned. Of course, we could do these checks in admission control, but that would mean we wouldn't know, at the point in time when we created the PR, that this isn't going to work. We would have to merge the PR, and then, when we deploy that code, we'd see: oh, this cannot be deployed because it's in violation of this or that policy. So the idea, of course, is to shift left: we want to get feedback as quickly as possible. Right — before we're at the stage where we have to get to production in the next 24 hours and we find all these problems, we can actually find them as we're developing the code, getting ready for staging and production, which gives us a bit more time. Yeah, exactly. The same validation steps you run locally are run in the CI or build pipeline, which I think is ideal, because then you can use things like pre-commit hooks to ensure you probably don't even create that PR if there are violations, and OPA will provide you with detailed messages telling you which resource is in violation and for what reason. Okay.
Anybody have any questions about anything you saw, any clarification that we need to provide? I've got a few questions if not. "Is there a website to learn Rego?" Oh yeah, for sure. Again, the official docs are great, and in addition to that there's also the Styra Academy, which is a learning resource with things like video-based content, tests, and so on. I can definitely recommend that. Okay, what is the URL? All right, yeah — I'll add it in the chat for sure. Okay. And one of the things that I saw on a developer forum — on Reddit, r/devops — was about policy as code. One of the developers commented that we're writing declarative code, like YAML, and policy is declarative. That developer's opinion was that this is a little bit redundant: he would rather see a declarative policy paired with an imperative concept — scripts trying to deploy something more specifically — with the declarative policy validating that the imperative code is actually doing what we wanted it to do. So he had the opinion that declarative-on-declarative was redundant. Do you agree with that? How do you see it? I'm not sure I followed entirely, but are we referring to having the actual restrictions defined as part of the policy, versus as part of the backup manifest itself? Yeah, I think it has to do with this: we're trying to meet an outcome, right? A declarative language defines the outcome you want, and policy as code enforces the outcome you want. So it will protect against typos and things we forget in the code, but it's really just testing the outcome.
In theory, both of those approaches will meet the outcome if written correctly, but the imperative approach is not outcome-based — it's action-based, very much like scripting. You tell it to run this command line, this action, or this API call; it's not objective- or outcome-based. So it is a good pairing: outcome enforcement on top of an imperative language. I think I understand. The benefit that I would emphasize is having the rules be decoupled from the actual specifications. So the manifests in this repository are separate, and the policies could live in an entirely different repository or be managed by a different team. Of course, if it's just a simple check, sometimes you might just want to add an if/else as part of the build script or something like that. But once you start to scale up, and you have hundreds of resources and maybe tens or hundreds of teams — how do you know what policies are applied to this type of resource in a unified way across your whole organization? How do you audit that? So sure, for the simple use case you can definitely add some simple checks as close to the actual data as possible, but once you go past that, you're going to need something more organized. I definitely think this is the kind of technology you would use in a much more distributed environment: you have distributed teams, distributed applications, maybe one central cloud platform team that's responsible for production and also responsible for data protection, and a series of application stakeholders. There aren't enough meetings in the day — and you don't want that many meetings in the day — to coordinate all of that. Yeah. So if you can codify the organization's practices in policy as code, publish it, distribute it to the developers, and make it simple for them to consume...
...you're going to have a very highly scalable organization, and deploying would be the least of your concerns at that point. Yeah. And you can definitely have them in different repositories. Oh yeah, definitely, and OPA provides several ways of doing this. Here we were just running opa eval, which just takes files on disk, but there are more advanced ways of fetching data and policy: there's a concept of remote bundle servers, where OPA can go and fetch policies and so on. So there are many ways of doing this; I just wanted to keep it very basic for the purpose of this demo. Okay. Any other questions, and then I'll close it out? If not — okay, well, I just wanted to thank everyone for attending today. I hope it was helpful, and hopefully we can provide you some guidance on whatever use case you have. Oh, I see a question here: "Are there any use cases for using OPA against serverless?" Is that a question for me? I think so, yeah — using OPA in a serverless environment. We mostly talked about a Kubernetes environment, and you can run serverless on Kubernetes, by the way, but yeah, go for it. Yeah, sure. So the common mode of operation is having OPA run as a service — you'd have an actual HTTP server that services requests. There are also ways to run OPA in a more stateless context, which is suitable for serverless, so yeah, it can definitely be done. Another pretty common way of doing it is to have OPA run somewhere and then have something like the AWS API gateway query OPA before it forwards the request to Lambdas and things like that. So there are many ways of doing it: either having OPA itself run serverless, or having OPA run as a server but still in a serverless context.
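As an aside on the remote bundle servers mentioned above: OPA's bundle feature is driven by its configuration file. A minimal sketch might look like this — the service URL and bundle path are placeholders, not real endpoints:

```yaml
# Minimal OPA configuration that pulls policy bundles from a remote
# server instead of loading local files (URL and paths are placeholders).
services:
  policy-registry:
    url: https://bundles.example.com
bundles:
  dataprotection:
    service: policy-registry
    resource: bundles/dataprotection.tar.gz
    polling:
      min_delay_seconds: 60
      max_delay_seconds: 120
```

With a setup like this, a central platform team can publish updated policies to the bundle server and every OPA instance across the organization picks them up automatically, which is the distribution model discussed in the conversation above.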
I have a similar question here: can we package the Rego and the OPA engine for a functions-as-a-service offering? Maybe it's the same question. I'm actually not sure about that one — I've never used it, so I'd love to know too. I think our contact info might be available somewhere, but I'll drop my email — and Anders, maybe you can drop yours — for any follow-up questions. Of course. All right, thank you everyone. We're just about out of time. Thank you so much, Anders and Joey, and thank you everyone for joining us. You know where to find them, and we will see you next week for another CNCF live webinar. Thank you all so much for coming. Thanks everyone.