Next, we'll hear from another Commit sponsor, Accurics. Again, be sure to check out their demo presentation as well. Om Moolchandani, co-founder of Accurics, will share how their solution can help you secure infrastructure as code within the developer workflow. He'll explain how approaching everything as code can break down silos and lead to better development, security, and business outcomes. A key element is understanding the broader risk context of vulnerabilities, including the breach path, to better enable informed security decisions early in the workflow. Please welcome Accurics. Hello, and welcome to my talk on better dev, sec, ops, and business outcomes with everything as code. My name is Om Moolchandani. I'm co-founder, CISO, and CTO at Accurics. A fun fact about me: my favorite attack technique is the watering hole attack. I love it because it lands you directly in the target zone where you want to be and bypasses so many different security controls. I love cybersecurity and mechanical keyboards; my favorites are Holy Panda switches and SA keycaps. A little bit about my company, Accurics: our mission is to self-heal cloud-native infrastructure by codifying security throughout the development lifecycle. We are based in the Bay Area, we are a DevOps-native security company, and we are also a GitLab partner. So let's take a look at today's agenda. We're going to talk about how DevOps is changing everything, and when I say everything, what I mean precisely is that everything as code is now being used to deliver hyper-automated applications as well as delivery pipelines. We're also going to talk about how automated workflows and GitOps workflows are having such a high impact that in the world of secure GitOps, 1 plus 1 is no longer equal to 2 but greater than 2. We're going to show you how some of these things work when it comes to GitOps workflows.
So let's take a look at a traditional DevOps process. This is how you would develop your applications and deliver them to your runtime in a traditional DevOps manner: you would have your application code being written, your code repositories for storing and versioning the code, and then your pipeline for build and deployment processes. Once your cloud applications are built, you deploy them to your runtime environment, which could be AWS, Azure, GCP, or many other types of environments. Now, in this traditional world, many different types of security tools have to be utilized across the DevSecOps lifecycle. For securing your application, you would use SAST tools for static analysis, DAST tools for dynamic application security testing, and software composition analysis tools to discover open source vulnerabilities and package-level vulnerabilities for containers. For runtime, you would use CWPP tools for threat protection, and CSPM and CIEM tools for infrastructure misconfiguration detection and possibly remediation as well. But with the adoption of infrastructure as code, we were able to take this lifecycle to the next level. We can now define our infrastructure declaratively, in a codified manner. Infrastructure as code also opened up the possibility of detecting security misconfigurations from an infrastructure point of view by scanning that code. That means many use cases in the cloud security posture management space could be shifted left: you can now do early detection of infrastructure misconfigurations and possibly remediate them as well. So far, so good.
But there are many other tools being used in the DevSecOps pipeline, particularly SAST, DAST, and SCA, and now you have also shifted your CSPM left. If you do not make meaning out of the results produced by these tools, you will not be able to perform better risk analysis and determine what the impact of the various vulnerabilities and findings will be on your runtime. Now, so far, it's OK: you still get plenty of opportunity to do this analysis in DevOps pipelines. But the new GitOps world has now been embraced by developers, where not only do you have to write your application code and infrastructure as code, you also have to define deployment as code. (Don't worry, we're going to cover all of these "as code" types as we move along in the presentation.) Just to level set: since there is now so much automation driven by code, there is very little time left for DevSecOps teams to make meaning out of the results produced by the various tools embedded in the pipeline. In a GitOps pipeline especially, where the GitOps tools are so fast at syncing the state of your runtime with your infrastructure as code, you have very little time to do this analysis. But if you skip the analysis and simply allow your applications to go into your runtime without determining the risks implied by the results from your various AppSec tools, you will incur a very high mean time to remediation, because you will end up detecting these problems in runtime, assuming you have enough runtime security tools to catch them at all.
Decision-making is also going to become a lengthy and costly process, because you have not contextualized and analyzed the results produced by the various tools. And that's where we see a lot of opportunities opening up. So before we talk about how we can analyze the results of the various AppSec tools available in GitOps pipelines and make meaning out of them, let's understand the terms related to "as code". First, infrastructure as code. Infrastructure as code is a technology that helps you codify your infrastructure declaratively. Whether you are building a control plane in a public cloud environment or simply on Kubernetes, you could use one or another type of infrastructure-as-code technology to declaratively automate your provisioning processes. Infrastructure as code is stored as source code and can be fully versioned. It ensures consistent infrastructure and configuration throughout the development lifecycle. It is used in highly automated workflows, and it is a low-effort provisioning system. But the most important thing it introduces is that infrastructure development is now pretty much like software development. That means you can follow many more software engineering processes in developing your cloud environments, compared to traditional DevOps processes that were more oriented toward a console-based approach. For example, you have various infrastructure-as-code technologies like Terraform, Kubernetes manifests, Helm, and Kustomize, and you can of course use them in a declarative manner and store them in your Git repositories.
Next, when you are developing your infrastructure declaratively using code and software engineering processes, how do you ensure that you are controlling that code from a best-practices point of view and enforcing security on top of it? You may also have operational requirements you want to enforce on this code, so that even before your runtime environments are built, you can ensure they are compliant. That's where policy as code comes in. So what is policy as code? It helps you codify operational as well as security policies. It can be stored as source code and versioned. Again, it becomes a component of your software engineering process for building infrastructure. It helps you consistently enforce policies across your development lifecycle, both on the infrastructure as code itself and throughout the pipelines you might be using for provisioning. It is a low-effort enforcement technology that lets you define your policies programmatically and distribute them throughout your development lifecycle and to your development teams, enforcing policies consistently across all your development and infrastructure environments. For example, say you want to enforce that environment variables are not used for secret storage: you can create a policy and have it enforced across your infrastructure-as-code development processes. Similarly, if you want to prevent your containers from having root privileges, or privileges considered super-user privileges, you can write a policy so that no container with such privileges can go into your runtime. These are some of the advantages of policy as code. So what exactly is GitOps, and how can we use everything as code within GitOps?
So the official definition says that GitOps is a code-based infrastructure and operational development practice that relies on a version control system built on Git, and that it uses infrastructure as code and DevOps for high-velocity CI/CD, for Kubernetes. That's the official definition, but a more refined definition the community is now latching on to is this: GitOps also describes tools that use Git as the single source of truth for storing IaC, and that help you deliver applications and deploy Kubernetes clusters and apps using pull-request-based mechanisms. Some examples are Argo CD, Flux, Jenkins X, and Tekton, among others; GitLab now also has its own GitOps tooling. So essentially, GitOps is a combination of Git, infrastructure as code, and DevOps. What are some of the common automated GitOps workflows? Well, let's take a look at a typical one. You have your local environment as a developer, where you may be developing applications or writing infrastructure as code. You would use a Git-based storage system, such as GitLab, to store your source code and version control it. And then you would use GitOps toolchains to achieve the following outcomes. Number one, you would use the GitOps toolchain to deliver your build pipeline or build lifecycle stage, which results in compiling, interpreting, or just packaging your application's source code into, in this particular example, a container image, though it could be some other package as well. Let's talk about containers, since they make it easier for us to deliver applications. Once your containers are packaged and your images are ready, you then need to write specific infrastructure as code, which we're going to talk about later in the presentation, that declaratively defines where these container images are going to get deployed.
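The sync behavior at the heart of GitOps can be pictured as a reconcile loop. Here is a toy sketch of that idea (illustrative only; real tools such as Argo CD or Flux reconcile against a live cluster): the desired state comes from manifests in Git, the single source of truth, and the loop drives the runtime toward it.

```python
# Toy GitOps reconcile loop. `desired` models the manifests stored in Git;
# `runtime` models the live state. The loop applies anything new or
# drifted and prunes anything removed from Git.

def reconcile(desired, runtime):
    """Mutate `runtime` so it matches `desired`; return the actions taken."""
    actions = []
    for name, spec in desired.items():
        if runtime.get(name) != spec:
            runtime[name] = spec          # create or update a drifted resource
            actions.append(f"apply {name}")
    for name in list(runtime):
        if name not in desired:
            del runtime[name]             # prune resources deleted from Git
            actions.append(f"prune {name}")
    return actions
```

Because this loop runs automatically and quickly, any security decision that depends on human review has to happen before the merge, which is exactly the point made above about how little time is left for analysis.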
Once you have your infrastructure as code written, you can define the rest of the pipeline structure by writing deployment as code. For example, you can write GitLab configuration files that define how this infrastructure as code is going to be used to deliver the application packaged in your container image to your runtime. And that's how a simple workflow like this can bring automation to such a level that the developer is just focused on writing application code or infrastructure as code, while the rest of the pipeline and processes are all controlled by the GitOps toolchain, which directly accomplishes build, provisioning, and deployment for your applications on various control planes like AWS, Azure, GCP, and Kubernetes. So with so much automation available in GitOps pipelines, how do you ensure that you are securing your pipeline and your processes at every stage? That is where you can use infrastructure as code, policy as code, and security as code together. The first thing to note is that all "as code" technologies exist to declaratively define a desired state for a particular target outcome. Infrastructure as code declaratively defines the state of your infrastructure; policy as code declaratively defines your policies; and security as code declaratively defines your desired state for security. So how does security as code work? Basically, when you have to perform automated assessment and decision-making ahead of time, meaning before your runtime environments are built, you combine all these different "as codes" together to achieve that automated assessment and decision-making. So you would use everything as code and GitOps to derive a framework for delivering secure GitOps. And how can we do that?
Take for example: we can perform automated security assessment of IaC artifacts such as Terraform, Helm charts, and Kustomize, so we have those results available in the developer lifecycle. Similarly, we have the various other tools we use today in the pipeline, such as SAST, DAST, and SCA tools for application security assessments, so we have their results available too. Next, we can have results from our dependency scanners and from our license validators or license security scanners. And we can also have results from our container scanning tools. All these results are available before the runtime environments are built. So you can start combining the results from all of these sources and define your security policies accordingly. For example, say you have a security policy that you do not want any container image with at least one vulnerability with a CVSS score of 9.0 or above. You can codify such a policy. That is your security as code, implemented by combining your infrastructure as code, your policy as code, and the results available from these different tools. You can use such a policy to decide whether you want to deploy such applications to your runtime or not; you can even halt the cluster sync process if there are violations of such policies. This is how you achieve secure, automated GitOps workflows. Let's look at some examples. In this particular example, we can enforce these policies at various stages. We can enforce them in the local development environment itself. We can enforce them at the Git platform level, such as GitLab. We can enforce them while the GitLab pipelines are about to use GitOps workflows to build. Or we can enforce them at sync time, or at the last mile, which is your admission controllers in the case of Kubernetes.
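The CVSS-threshold policy just described can be sketched as a simple gate. This is a hypothetical example (the data shape of the scan findings is assumed, not any particular scanner's format):

```python
# Security-as-code gate sketch: block deployment of any container image
# that has at least one vulnerability with a CVSS score of 9.0 or above.
# The findings format here is hypothetical, not a specific scanner's.

CVSS_THRESHOLD = 9.0

def deploy_allowed(image_scan_results):
    """image_scan_results: list of findings, each with 'cve' and 'cvss' keys.

    Returns (allowed, blocking_findings)."""
    blocking = [f for f in image_scan_results if f["cvss"] >= CVSS_THRESHOLD]
    return len(blocking) == 0, blocking

# Example: one critical finding is enough to halt the deployment
# (or the GitOps sync) for this image.
allowed, blocking = deploy_allowed([
    {"cve": "CVE-2021-44228", "cvss": 10.0},
    {"cve": "CVE-2020-1234", "cvss": 5.3},
])
```

The same gate function can be called at any of the enforcement points listed above: locally, at the Git platform, in the pipeline, at sync time, or in an admission controller.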
Of course, we can also utilize policy as code in the runtime too. The point is that we have various stages available at which to enforce everything as code, policy as code, and security as code. So what exactly can we achieve with this, and why should we stop there? Today, we see DevSecOps teams struggling to correlate the results from the various security tools they use in their pipelines. One of the big reasons the needle is not moving is that, despite having all this security information available to us in a codified fashion, we are not piecing it together. If we start piecing together all the information we receive from different security tools at the time of pipeline decision-making, we can enable better automated decision-making, as in the example I gave you, and we will look at some more examples. Another quick example: let's say there is a SAST finding reported by a static analysis tool against a piece of source code. Can we determine which container image that source code is going to be packaged into, and how that container image is going to be exposed at runtime? Which cluster and which namespace will it be deployed to? Will it be exposed to the internet? All this information is now available to us when we scan infrastructure as code. If we start combining it with the results from the SAST tool, we will be able to enforce security and make better automated decisions using security-as-code principles. Similarly: is there a storage bucket about to get deployed that contains sensitive data and is exposed because of overly permissive IAM roles? All this information is already available to us with infrastructure as code and policy as code in place. We need to start making sense of all this different information that is available to us.
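The SAST-to-IaC correlation just described might look like this as a sketch. Everything here is a hypothetical data model (the `build_map` and `deployments` lookups and their field names are assumptions for illustration, not any tool's real API):

```python
# Sketch: attach runtime exposure context, derived from IaC scans, to a
# SAST finding. `build_map` maps source files to the container image that
# packages them; `deployments` maps images to their IaC-declared
# deployment target. Both structures are hypothetical.

def contextualize(sast_finding, build_map, deployments):
    """Return the SAST finding enriched with deployment context."""
    image = build_map.get(sast_finding["file"])      # source file -> container image
    deploy = deployments.get(image, {})              # image -> IaC-declared target
    return {
        **sast_finding,
        "image": image,
        "cluster": deploy.get("cluster"),
        "namespace": deploy.get("namespace"),
        "internet_exposed": deploy.get("internet_exposed", False),
    }
```

A finding that turns out to be internet-exposed in production can then be prioritized ahead of an identical finding buried in an internal namespace, which is the whole point of contextualization.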
Similarly, if we can start detecting which findings represent the greatest and most immediate threat, we will be able to drive better automated security decisions and reduce mean time to remediation as well as the cost of doing security, because we will do all of this in the development phase itself. Take for example an application written in Python, whose code is being scanned by our AppSec tool. It could be GitLab's own AppSec or SAST capability, which reports, say, a CWE-918 vulnerability, an SSRF vulnerability. Great. And the infrastructure-as-code scanning tool detects that other misconfigurations were being introduced, because of weaknesses in the infrastructure as code, into the runtime that would have been built to host this application. Now, if we start combining these two data sets, one reported by the AppSec tool and the other by the IaC scanning tool, we could detect, in the development phase itself, the entire breach path that you see on the screen: you were about to deploy an application with an SSRF vulnerability on a misconfigured EC2 instance, which possibly also had the instance metadata service enabled, and which had an IAM role allowing access to a database that is unencrypted. If the attacker walks this path, there is a direct route from application exploitation to data exfiltration. All these smart detections can be done ahead of time, and decisions can be made: I do not want such an application to be deployed unless these vulnerabilities are fixed. The AppSec tool will continue to report these vulnerabilities, but somebody has to start making meaning out of all these different results. That's why what we are proposing at Accurics is that a developer-first approach to security is now needed for DevSecOps.
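As a toy sketch, detecting that breach path ahead of deployment amounts to checking that every link in the chain is present across the two result sets. The check names and data shapes below are entirely hypothetical; this only illustrates the idea of combining AppSec and IaC findings programmatically:

```python
# Toy breach-path check, combining SAST results with IaC scan results to
# flag the SSRF -> metadata service -> IAM role -> unencrypted database
# path before deployment. All check identifiers are hypothetical.

def breach_path(sast_findings, iac_findings):
    """Return the full attack path if every link in the chain is present,
    otherwise an empty list."""
    has_ssrf = any(f["cwe"] == "CWE-918" for f in sast_findings)
    iac = {f["check"] for f in iac_findings}
    chain = ["metadata_service_enabled",
             "iam_role_allows_db_access",
             "db_unencrypted"]
    if has_ssrf and all(link in iac for link in chain):
        return ["SSRF in app"] + chain   # exploitation through to exfiltration
    return []
```

A non-empty result is the signal to block the deployment and open a remediation pull request, rather than waiting for runtime tools to discover the same path in production.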
And what that means is: number one, start enforcing policies at the time of infrastructure-as-code and deployment-as-code development. That is, secure your cloud development processes. It also means that your cloud development is now being done with software engineering methods, so start using tools that can ensure your cloud development is happening securely. Next, start combining results from everything as code: take the vulnerabilities reported by AppSec tools and the OSS vulnerabilities reported by SCA tools, combine them with the IaC misconfigurations and the other types of security results you receive, build your security-as-code policies, and programmatically detect your breach paths. And not only detect them: you can also generate remediations and provide them back to the developers in the form of pull requests, allowing developers not only to detect security problems but also to fix them programmatically. This is where your mean time to remediation and cost of security are going to come down. Next, do last-mile enforcement: use admission-controller-based enforcement so that if some security vulnerability still manages to bypass steps one, two, and three, your policies will not allow the worst-case scenario to go through and get deployed on your runtime. And of course you always have to do runtime security, but start detecting your runtime problems programmatically too, and respond to them. DevSecOps is not a replacement for runtime security, but you could greatly reduce the load on runtime security by doing steps one, two, three, and four. That's all I had to present today. If you're interested in knowing how Accurics, including our open source initiatives such as Terrascan, is helping the community achieve these four steps, a developer-first approach to DevSecOps, please get in touch.
Don't forget to contact us. You can reach out to us at accurics.com. Thank you so much.