Hello, my name is Jim Garrett. In this short video, I'd like to talk about implementing multi-layer container and Kubernetes security with OpenShift for an automated DevSecOps environment. In my 35 years of developing software, I've been personally involved in dozens of software delivery projects and, on multiple occasions, have been responsible for creating the automated pipelines used to build and deploy code. One thing I've seen countless times is that teams often don't address security requirements until very late in the process, sometimes not until the end. And when that happens, the result, without fail, is a delay in the actual delivery of the software.

In today's discussion, I'd like to talk about including security in the development and operations process, something known as DevSecOps; I like to think of it as the marriage of development, security, and operations. The goal of DevSecOps is to surface security problems early rather than late. To facilitate this, security tools are included directly in the build process, so the delivery pipeline can perform security auditing from the start without jeopardizing the project's release schedule.

In this demonstration, we've created a DevSecOps CI/CD pipeline leveraging multiple technologies that, brought together, provide a reusable pipeline capable of producing rock-solid products faster, with security built in from the beginning. As the foundation of this pipeline, we're using OpenShift Pipelines, which is based on Tekton. Starting at the left, we'll use a Gogs repository to store our source code, and JUnit to perform source code unit testing.
We also have SonarQube, which performs source code quality checks, and Nexus, which stores any build artifacts generated by the pipeline. After we store the generated image, Red Hat Advanced Cluster Security scans and checks the image before it is deployed. On the far right, you see OpenShift GitOps, which is based on Argo CD; this is used to deploy the image in preparation for penetration testing. Finally, after the penetration tests, a tool called Gatling executes performance tests against the deployed image.

The first thing I'd like to show you is the OpenShift console and how it can be used not only to create but also to visualize the build pipeline. To look at the pipeline, we switch to the Developer perspective, and you can see the two pipelines I've built: one for development and one for stage. If we look at the development pipeline, you can see the exact steps that have been defined, which mirror the diagram on the previous slide. The first thing we do is get a copy of our source code. Then, simultaneously, we run the unit tests and a dependency report. We release the application, build the image, and perform the image scan, image check, and deploy check all in one stage, followed by the deployment update, performance tests, penetration testing, and then another performance test. So you can see this pipeline is pretty all-inclusive.

Now, if we want to look at the YAML of the pipeline itself, we can see all of the different steps and exactly how they're executed. For example, looking down the list, we see a task called source-clone.
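Before we walk through its parameters on screen, here is a rough, hypothetical sketch of what a source-clone entry in a Tekton pipeline typically looks like; the pipeline name, parameter names, and values are illustrative, not the demo's exact YAML:

```yaml
# Hypothetical fragment of a Tekton Pipeline: a source-clone task that
# delegates to the catalog git-clone task. Names and values are assumptions.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: petclinic-dev-pipeline        # assumed name
spec:
  params:
    - name: GIT_REPO
      type: string
  workspaces:
    - name: workspace
  tasks:
    - name: source-clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_REPO)   # repository URL passed into the run
        - name: revision
          value: main                 # branch or commit to check out
        - name: subdirectory
          value: spring-petclinic     # where to place the clone in the workspace
      workspaces:
        - name: output
          workspace: workspace
```

Later tasks in the pipeline then share the same workspace to see the checked-out source.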
It has parameters that we pass into it, such as the URL, the revision, and the subdirectory, and it uses a task called git-clone to actually clone the source code from our repository. Now, you'll notice under pipeline runs that there are no runs of this pipeline yet. There are several ways to initiate it. For example, we could open the Actions menu and select Start; the pipeline would then run through everything it needs to do. However, what we'd really like to see is how a development team would use the pipeline. Typically, when a new file is added to the source code repository, or an existing file is changed, the pipeline can be set up to kick off a build automatically.

One thing I'll mention about this build pipeline: again, it's built with Tekton, and when the pipeline is generated, a container is created inside OpenShift that physically runs it. In fact, if we go back to the Topology view, we see not only the products I showed you on the previous screen, such as the Gogs repository, Nexus, and the other pieces of the puzzle, but also a container labeled el-webhook. If we look inside our source code repository, which in this case is Gogs, you can see I have a project set up with some source code. If I open the project's settings, one of the items displayed is Webhooks, and clicking on it shows the URL of that webhook container created inside OpenShift. Again, the scenario we'd like to show is what happens when somebody changes a file or adds a new one. So to show this, I'm going to open the readme file and edit it.
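The el-webhook container described above is the front door of a Tekton Triggers EventListener. A minimal sketch of that wiring, with all resource names assumed for illustration, looks something like this:

```yaml
# Hypothetical Tekton Triggers wiring: the Gogs webhook POSTs to this
# EventListener, which instantiates a PipelineRun from a template.
# Every name below (webhook, bindings, template, service account) is illustrative.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: webhook                 # surfaces in the Topology view as el-webhook
spec:
  serviceAccountName: pipeline  # assumed service account with pipeline permissions
  triggers:
    - name: gogs-push
      bindings:
        - ref: gogs-push-binding          # maps payload fields (repo URL, commit SHA)
      template:
        ref: petclinic-trigger-template   # creates the PipelineRun for each push
```

The URL pasted into the Gogs webhook settings is simply the route exposed in front of this listener's service.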
I'll make a simple change, putting the word "test" at the end of the title, and commit it. Now, if we go back into OpenShift, you can see the pipeline has automatically started running its first task. We can watch the pipeline run, look at its details, and physically see it go through each of the steps we outlined before. Of course, the first step gets a copy of the source code from Git. If we click on that step, we immediately see its output: it performs a checkout of the Spring PetClinic source code and eventually exits. Going back to the details, you can see that step has completed, and the pipeline is now simultaneously running the unit tests and dependency checks; it will continue like this through the entire pipeline. The pipeline takes a few minutes to execute, so I'm going to pause the recording and wait until it finishes.

Now that the pipeline has finished running, we can see that it has actually failed, in the step called image-check. Before looking at that, let's take a look at the other steps that were performed; we can click on each step and examine its log output. We've already seen the step that clones the source code. We also have a dependency report that gets generated, along with the unit tests. At the very end of the unit test step, you can see that it builds a JAR file from the artifacts that make up this application. In the release step, that JAR file gets uploaded to the Nexus repository, which is what's used to store the artifacts that go along with this build. We then see a build-image step.
Now, the build-image step physically creates the container image that will run our application. If you recall from the pipeline, three steps were then executed simultaneously: deploy-check, image-scan, and image-check. However, you can see that the image-check step has failed, with several violations raised when it examined the image that was built. The first set of violations conveys that there are CVEs with a severity high enough to cause a failure: for example, a CVSS 7.5 finding in the jackson-databind component, and another in the Tomcat web server being used. We've also got some packages deployed into this container which, in the long run, could expose security vulnerabilities. So the image check actually looked inside the container and was able to find issues and vulnerabilities with it.

You may ask yourself: how does this happen? The reality is that when developers create things, for example a Dockerfile that defines how a container is constructed, it's very possible the container is built on old technology or old base images, or that its creator included something that violates a security policy. It's important to catch these things sooner rather than later, so you don't get all the way to production and suddenly realize you have a vulnerability in your code. Before we move on, let's also take a look at the image-scan step of the pipeline. When we click on the image-scan log, we see a list of all the vulnerabilities it has found inside this container, some of which are severe, some of which are not.
And then at the very bottom, the log conveniently provides a link into our Advanced Cluster Security, or ACS, module, where we can view all of these vulnerabilities in the web browser. When I copy and paste that link into the browser, it takes me to ACS, specifically the Vulnerability Management view for this image. At a high level, we can see all of the information about the image: its risk priority, and the number of critical versus important versus moderate versus low CVEs it raised. It also lets us drill down into the components it has analyzed. Take, for example, one component it flags as fixable: the Tomcat web server contained inside the image. Drilling into Tomcat, we see, first of all, that the version is 9.0.31, and that all of the CVEs conveyed here are fixed in version 9.0.54. In this particular Tomcat web server, nine of the CVEs are rated important and four moderate. If we minimize this summary, we can see everything that is technically fixable inside the Tomcat component, and if we want, we can also drill down into the CVEs themselves and see the full list at a high level. Going back to the Overview tab and minimizing the image summary, we see all of the image findings, again listing the various things that are fixable inside this image.

Next, I'd like to show you how to fix the build so it gets past the image-check step, which involves a couple of changes. First, looking back at the image-check log, we see that one of the violations is that some package managers, the rpm and yum packages, have been installed into the container image.
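One way to resolve that particular violation is to strip the package managers out of the image during the build. As a rough, hypothetical sketch (the workshop's actual task YAML may differ; the task name, image, and parameter are all assumptions), the rebuilt source-to-image task could end with a step like this:

```yaml
# Hypothetical final step of a rebuilt source-to-image Task: remove the
# yum and rpm package managers so the ACS image-check policy no longer
# flags them. All names here are illustrative, not the demo's exact YAML.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: s2i-java-clean            # assumed task name
spec:
  params:
    - name: IMAGE
      type: string                # image reference produced by earlier steps
  steps:
    # ...the usual source-to-image build steps would precede this...
    - name: strip-package-managers
      image: quay.io/buildah/stable   # illustrative builder image
      securityContext:
        privileged: true
      script: |
        #!/bin/sh
        # Open a working container from the built image, remove the
        # package-manager RPMs, and commit the cleaned image back.
        ctr=$(buildah from "$(params.IMAGE)")
        buildah run "$ctr" -- rpm -e --nodeps yum rpm
        buildah commit "$ctr" "$(params.IMAGE)"
```

The design point is simply that a runtime image rarely needs a package manager at all, and removing it shrinks both the image and its attack surface.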
The other set of violations has to do with the failing CVEs. The first thing I'm going to do is fix the creation of the container image so that it no longer installs those two packages. The pipeline task that takes your code and lays it on top of an image is called source-to-image. So first, I'm going to delete that task, and then recreate it in a way that is correct and does not include the rpm and yum packages. To do that, I've got a command window, and you can see that I'm using a YAML file that redefines the source-to-image task; we apply it with kubectl to the ocp-workshop namespace. After I recreate the task, if I go back into OpenShift, you'll see that the task is back. That's step one. If we look at that YAML file and exactly what it does, we can scroll down and see that it removes the packages in question.

Now, the second step, as we saw when we looked at the pipeline, is to temporarily disable the CVEs that are coming back as failures. In this case there are four CVEs; I'll show you how to disable the first one, and we can do the second, third, and fourth the same way. To disable the CVEs, we go back into Advanced Cluster Security, open the Vulnerability Management view, and view all of the detected vulnerabilities. Let's search for the first one, CVE-2020-25649.
Disabling it is really simple. We can defer it for a day, a week, or even indefinitely if we want. Eventually, of course, you would want to fix it, but for now we're just going to disable it. There, it's disabled. Let's go back and get the next one and disable it for one day as well. Then the third one, and the fourth. And that's the last one to disable; we'll clear the search so we see all of them again. Now, if we go back into our pipeline, let's run it one more time. Again, it takes a few minutes, so we'll pause and come back when it's finished.

You can see that the pipeline has now finished, and this time it went all the way through, past the image-check step to the end, where it performs its performance testing and penetration testing and reports the performance-test results. If we go back into the image-check task, there are no failures; all is good. We can look at the remaining tasks if we want, just to see what their logs present, but in short, the pipeline has finished every single step.

So that concludes this demonstration. It showed a pipeline written in Tekton that incorporated many of the DevOps features you're used to, and then added the DevSecOps piece: the Advanced Cluster Security component, formerly known as StackRox. It was that ACS module that scanned and checked the container images before updating the deployment and pushing it through to the end of the pipeline. I hope you found this demonstration useful. Please feel free to contact me if you have any questions.
If you'd like to learn more about this solution, visit redhat.com/en/partners/devsecops. Thanks for watching this video. I hope you found it useful.