My name is Gopir Abala. I'm the CTO of Oxonex. My colleague Bala G. Sivar was supposed to be here today; he wasn't able to come, so I'll be covering his presentation as well. We're going to be talking about automating the enforcement of industry regulations. There are regulations of many types, so we want to cover what we can automate, what makes it interesting to do with Spinnaker as part of our delivery automation, and what the drivers for it actually are. Then we'll see some examples of the kinds of things we can automate and how we can express them as policies in the delivery pipelines. There is a variety of regulations, an alphabet soup to some extent. These standards mostly define the process rather than prescribing exactly what you need to do; they say, these are the checks that need to be done. Some of them are more prescriptive, like FedRAMP or PCI, but others not as much. Generally there are two sections within these standards. One covers things like organizational training, the onboarding process, and so on; the tools for those are mostly manual. The second part says that your software delivery has to have certain security built in, certain approvals and validations built in, and those are processes that we can automate. So we will look at what those kinds of processes are and how we can automate them. What are the drivers for this automation? One of the trends has been developer productivity: the focus on reducing the burden on the developer of being in the middle of the security path or having to know all the security processes.
At the same time, security teams in organizations are usually small. They provide guidelines saying you have to have these security checks done, but it's up to the teams to follow the process. When it comes to a compliance audit, you typically find places where exceptions were taken and never recovered. So it becomes a point of friction between the security teams and the development teams: as we try to improve the speed of delivery, the developers are focused on new features and shipping them, and the security teams become an impediment. Those are some of the drivers for asking how we do this automation during the delivery process. If you have pipelines, putting those checks in the pipeline on the developer side is also a problem, because now the developers need to understand what the security checks are, and if they have to explicitly create pipelines with those checks, that becomes a burden. To some extent this is policy as code: if you can define these policies, keep them in a repository, and have them executed at different stages in the background, providing alerts to the developers about failures, or acting as admission control in the target environment at deployment time, then you get a better response to the security requirements of the organization. These drivers mean that having security integrated into the delivery process really needs an integrated approach to DevSecOps delivery. So what is the complexity here? If you look at the delivery process, it's typically not just one pipeline that goes all the way from code check-in to the production environments. Regulated environments have production environments that are independent or separated out, with limited access, and the approvals that happen in production environments don't allow the same pipeline to go all the way through.
So having all the checks in one pipeline is not possible. You have to have the verification of the policies done at different points: for example, static code analysis on the code, then vulnerability checks at deploy time to make sure there are no critical or high findings, or process requirements like a Sarbanes-Oxley check at deployment time. That becomes fairly complex if you have all these disconnected pipelines and you're trying to do it through Spinnaker itself. These are the main things that make complete security automation difficult, and doing it at scale becomes even harder. So how do we do it with Spinnaker? Spinnaker provides some security features. It has RBAC built in: for applications, you can control who can trigger certain pipelines, and you can have service accounts configured. As a process, you can also lock pipelines from the UI and have them changed only through Git or an automated process through Git. You can have cluster locking, so no changes to the target environments can be made from the UI. And you get significant audit information from Spinnaker: you know when a pipeline is triggered, when it completes, and who triggered it; all of these notifications can be captured, and you can build dashboards from them. But that's not sufficient in most systems. If you look at Spinnaker itself, you can divide the security on top of what you deploy in production into infrastructure security, operational security, and delivery security. Typically, enhancements to the Spinnaker core itself, how we deploy Spinnaker, how we operate it, how we manage the pipelines, secrets management, all fall into one bucket.
The delivery security bucket covers your pipelines, how you integrate application delivery, and the security requirements for those applications. We'll focus more on the delivery security for the application delivery itself. Here we usually also have extensions in the pipeline and custom integrations. Let's say in an enterprise you're using Checkmarx or Snyk; you want those checks to be part of the delivery, which means you need extensions built into Spinnaker to verify them. From the security perspective, the security team is trying to protect the application from various types of attacks; it's a security perspective, not an application developer perspective. They're looking at OSS security: what open-source libraries are we including in the software? What code are we forking and using in our code? What vulnerabilities could be exposed because of that code? And if you're building, are we building on the right server? You don't want developers building on their test systems and then promoting that code to production, because you want to be sure the compiler and the included libraries are all verified and vetted. In the artifact repository, you also want to make sure you pick artifacts up from the right locations; you don't want artifacts present in a location that's not approved by the central team. At the same time, for deployment: who can deploy? What are the processes? You don't want direct access to the target clusters. What automated processes are we using to deploy to those environments? And when you're running load balancers and security groups, what are the possible ways in which they can be attacked?
From the security perspective, these are all the threat vectors they are trying to identify, and they enforce compliance against them; but these concerns are not directly exposed to developers. What that translates to, from an application development and delivery perspective, is all the checks that we need to do: static code analysis and image scanning. When a binary is built, you want to make sure the image is scanned, that any libraries included in it come from the right location, and that no known vulnerabilities exist for them. You also want to make sure Git branch policies are set; this ensures that the changes going into the main branch are vetted and go through unit testing or other analysis before they get in there. At the same time, you can also check licensing for the open source; that's another common theme that comes up. Once the binary is built, in testing you have other processes to go through, such as dynamic analysis, and if you have specific process requirements in your organization, some of them could use JIRA or ServiceNow, so you want the approvals to go through those systems and have that integrated into your automated process. In staging, similarly, you have additional checks like Sarbanes-Oxley, where you need two different individuals to approve going to production. This is just a guideline, and there are some exceptions. Typically you take exceptions when you build the system and you're moving to production: you know there are some vulnerabilities or process exceptions, or maybe you have to push a hotfix immediately. You have to take those exceptions, but you need to be able to record them and then fix them as soon as possible. In production, you want to make sure the blast radius when you're deploying is small.
So you need a deployment structure where you can recover from errors very quickly. You want to be able to verify infrastructure vulnerability checks, ingress/egress rules on the load balancers, and provenance checks confirming that the images being deployed went through the required checks. Given all of these things that need to happen, the way organizations do it today is that some of these checks get built into the pipeline, but some are done outside the pipeline. For example, if you have an image deployed in production and a vulnerability is found for it, there's no way to track that it's been remediated immediately. Usually they look at, say, a Docker registry: when a scan runs on it, it reports that all these images have vulnerabilities. But if a developer pushes a new image into the registry that is not necessarily deployed in the target environment, you may not know. These kinds of exceptions happen because of the way the systems are built today and who checks the security items. So one of the things we recommend is integrating the security checks into the pipeline and using provenance checks as admission control at deployment time. If you have process requirements like JIRA or ServiceNow, those need to be part of your delivery pipeline. And one of the key things you have to ensure, since Spinnaker supports extensions and plugins, is that these extensions themselves are secure: they come from the right locations, and if you're loading them dynamically, it's even more critical that the locations are properly set and the versions are controlled. Admission control is a critical piece. When we are doing admission control, we have to check whether the image has gone through the proper process and exception-approval structures, so you need some place it can go and check.
That's where you need a database where you can verify the provenance of the image coming through. That's one of the things we will talk a little more about after this. It's also not sufficient that you are compliant; you need to be able to prove that you are compliant. That's typically one of the critical requirements, so you need to make sure everything that happens is auditable, with a database where you can present the data and alerts on the policy checks. Multiple groups are involved here, and that's one of the key problems, right? There is a security group involved, there are DevOps groups involved, and the actions taken by different groups depend on what kind of alert comes in. Everyone needs to be aware that there is an action to be taken, so alerting and the ability to share these alerts is critical. One of the things we talked about is the need to check provenance at the end. In Spinnaker, when you run a pipeline, there is pipeline execution data; even if you have multiple pipelines, you have execution data across them. Now the trick becomes: how do you correlate the information across these pipelines and have a policy that can run on them? For example, suppose you are deploying to a production Kubernetes environment from an artifact repository. How do we ensure that the artifact that went into the container was built on a certain date, from a certain commit? That correlation information is one of the pieces that was missing. So what we need to do is build some kind of correlation database, and that's something we built. With that database, you can expose the data at deployment time as admission control: if you want to check the provenance of an image, was it built by a certain build system within the last two days? We should be able to check those kinds of things.
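The provenance check just described can be sketched as a small function over a record from such a correlation database. This is only an illustration: the record fields (`build_system`, `built_at`), the trusted-builder list, and the two-day window are assumed names and values, not the actual schema of the system shown in the talk.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list and freshness window for the admission check.
TRUSTED_BUILDERS = {"jenkins-prod"}
MAX_BUILD_AGE = timedelta(days=2)

def provenance_ok(record, now=None):
    """Admission-control style check: the image must come from a trusted
    build system and have been built within the allowed window."""
    now = now or datetime.now(timezone.utc)
    if record.get("build_system") not in TRUSTED_BUILDERS:
        return False
    built_at = datetime.fromisoformat(record["built_at"])
    return now - built_at <= MAX_BUILD_AGE
```

At deploy time, the admission controller would look the image digest up in the correlation database and refuse the deployment when this returns false.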
That's where we can have policies that apply to the data we are delivering through the pipelines, integrated into the Spinnaker pipeline to check at any given time. When we talk about policies, we are talking about simple things like these: checking Sarbanes-Oxley compliance in a pipeline. If you're deploying to a production environment, can we check that it's been approved by more than one person? Or for vulnerabilities: are there any critical or high findings before we deploy? Or if any changes to the pipelines happen, are they only from authorized users? Some of the pipelines we can lock down, but someone can go change them in Git, and that change gets applied to Spinnaker; so the changes going into Git can be verified as coming from the approved group. These are all simple enough checks, but they can all be critical for compliance, and they can be part of Spinnaker. Some of these checks you want as hidden pieces in Spinnaker, like plugins that check automatically, and some of them are process checks that run as individual stages in Spinnaker. We'll look at an example of each, showing how we can use them. Let's see here. What you're seeing here are some examples of policy pipelines; these are set up just to check the policies and nothing more. The way these systems are set up is that there is an OPA server running in the back end with some policies applied to it. We feed data to the OPA server both through the pipeline execution context and, in some cases, from a back-end database. This one is a simple Sarbanes-Oxley check: say the pipeline has been triggered by one person and the manual judgment is done by somebody else. You can express that policy as a simple check on the pipeline execution context. This one is similar.
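As a concrete illustration, the two-person check on the execution context might look like the following sketch. In the setup shown in the talk this logic lives in a Rego policy evaluated by the OPA server; the field names here (`trigger.user`, stage `type`, `lastModifiedBy`) approximate Spinnaker's execution JSON and should be treated as assumptions.

```python
# Hypothetical sketch of a Sarbanes-Oxley-style separation-of-duties
# check on a Spinnaker pipeline execution context.
def sox_two_person_ok(execution):
    """The user who triggered the pipeline must differ from the user who
    approved the manual-judgment stage; fail closed if there is no approval."""
    trigger_user = execution.get("trigger", {}).get("user")
    approvers = {
        stage.get("lastModifiedBy")
        for stage in execution.get("stages", [])
        if stage.get("type") == "manualJudgment"
        and stage.get("status") == "SUCCEEDED"
    }
    # No successful approval, or self-approval, both fail the policy.
    return bool(approvers) and trigger_user not in approvers
```

The equivalent Rego rule would receive the same execution context as `input` and deny when the trigger user appears among the approvers.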
Here the manual judgment is done by one person but the trigger comes from somebody else, so you can have simple policies based on the pipeline context. In some cases you can have built-in structures where the process requires an automatic verification stage in the deployment; if someone tries to remove that stage and then go to the production environment, the policy should not allow that process change in the pipeline. These are more like plugins built into Front50 and Clouddriver that let you do the checks automatically. The advantage here is that the security group can define these policies, keep them as policy as code deployed on the back-end server, and as developers go through their dev/test cycles and then promote to production, it automatically checks whether the process has been followed and stops them from going forward if not. A further advantage is that the policy as code is visible: if developers need to see which policy is failing, they can read it, and at the same time it doesn't stop them from doing their dev/test cycles; those pipelines remain self-service for them until they start promoting to production. So that was a quick demo. The flow where we get the data from the database for the vulnerability checks or the provenance checks is not included in this one; that's outside Spinnaker, and we can show those demos at the booth. Any questions? That's all I wanted to show for today. Any questions? I'm sorry? Testing. How do you load the Rego policies in the demos you have demonstrated? We have a GitOps model set up. Spinnaker has a connection with Git, so any time a change to a policy is applied in Git, it gets applied to the OPA server running in the Spinnaker environment. So this is happening outside of Spinnaker?
Yeah, that's how those policies are applied. And you have a plug-in or something that connects Spinnaker to OPA? Those policies we also apply through a Spinnaker pipeline, actually. You have the Rego policies in Git, and there is one admin pipeline in Spinnaker; whenever changes happen to the Rego files in Git, it applies them to the OPA server. Okay, makes sense. Thank you. Thank you, everyone.
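The admin-pipeline step described in the answer could be sketched like this: when a change to a `.rego` file lands in Git, push it to the OPA server's policy API. `PUT /v1/policies/<id>` is OPA's documented REST endpoint for uploading a policy module; the server URL and policy id below are placeholders, and in the talk's setup this upload would be driven by the Spinnaker pipeline rather than a standalone script.

```python
import urllib.request

def build_policy_request(opa_url, policy_id, rego_source):
    """Build the HTTP request that uploads one Rego module to OPA
    (PUT /v1/policies/<id> with the raw Rego text as the body)."""
    return urllib.request.Request(
        f"{opa_url}/v1/policies/{policy_id}",
        data=rego_source.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "text/plain"},
    )

def push_policy(opa_url, policy_id, rego_source):
    """Send the upload; OPA returns 200 when the module compiles."""
    req = build_policy_request(opa_url, policy_id, rego_source)
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

OPA compiles the module on upload, so a syntactically broken policy is rejected at this step rather than at evaluation time, which is what makes the GitOps loop safe.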