So now in the demo, we're going to move from the inner loop to the outer loop. The outer loop is where the automated processes occur that actually deploy the physical application to the target environments. In order to do that, we're going to use two OpenShift capabilities: OpenShift Pipelines, based on Tekton, and OpenShift GitOps, based on Argo CD. Tekton is a fantastic new technology that defines pipelines as native Kubernetes objects inside the Kubernetes engine, allowing for the control of pipelines, whereas Argo CD is a declarative continuous delivery tool that controls the state of objects on the system itself. What you're going to see in the demo now is a combination of these two in an automated fashion to show the outer loop in action. So now I'll pass over to Gerald. As we can see, the code change Natalie committed to Git has automatically kicked off the pipeline via a webhook. This pipeline will take a few minutes to complete, so while it is running, let's set the stage with an overview of the technologies being used. OpenShift Pipelines, as shown here, is a cloud-native continuous integration and continuous deployment solution. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. It leverages Kubernetes resources for these building blocks, enabling users to compose tasks into a pipeline, which defines the overall orchestration. OpenShift Pipelines ships with a number of tasks out of the box; however, users can use additional community tasks from Tekton Hub or define their own as needed to support their workflows. You can see an example of Tekton Hub here with all the different tasks that are available, and in this particular demo, we're actually going to use one of these tasks from the messaging section. We'll be using this task here to send a message to a Slack channel. OpenShift Pipelines will be handling the CI portion of this demonstration.
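The webhook wiring described above is typically done with Tekton Triggers. As a hedged sketch (the resource names here are illustrative, not taken from the demo), a Git push hits an EventListener, which binds payload fields to parameters and instantiates a PipelineRun from a template:

```yaml
# Hypothetical sketch: a Tekton EventListener that starts a PipelineRun
# when a Git webhook fires. All names are illustrative.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: coolstore-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: git-push
      interceptors:
        - ref:
            name: github           # validates and filters the webhook payload
          params:
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: git-push-binding    # maps payload fields (revision, repo URL) to params
      template:
        ref: coolstore-trigger-template   # template that creates the PipelineRun
```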
OpenShift GitOps will be handling the continuous deployment aspects of this demonstration. OpenShift GitOps provides the capability to deploy and manage configuration and applications using the GitOps methodology. OpenShift itself uses a declarative model for deployments and configuration, which is a very natural fit with the GitOps approach. This entire demo, in fact, was provisioned with OpenShift GitOps, and if we switch over to the OpenShift GitOps view, we can see all the different components that have been deployed to support this demonstration. This includes infrastructure components like OpenShift Security, development tools like Nexus and SonarQube, and OpenShift Pipelines, with the operators being deployed via OpenShift GitOps. And over and above the infrastructure components, we also have the application components. So we're deploying three different environments to support the Coolstore application: the development environment, the stage environment, and the production environment, as well as a CI/CD environment to host the pipelines that we're running, as we saw in the previous example. Right now I'm logged in as the administrator, so I see everything. If I switch my view to that of a regular developer, you can see that I get a much more restricted view in terms of what's available to me. So in the developer view, I can see the development and stage environments for the Coolstore application, as well as the pipelines environment, but I do not see the production environment, nor do I see any of the infrastructure components. In this particular demonstration, we're going to be deploying to production with a manual approval, in the sense that a person has to approve that deployment. As a result, this organization has decided that developers should not have access to the production environment, which is why it doesn't appear here in this particular tile.
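Each tile in the OpenShift GitOps view corresponds to an Argo CD Application. A minimal sketch of what one of these might look like for the dev environment, assuming hypothetical names and a placeholder repository URL:

```yaml
# Hedged sketch of an Argo CD Application managing the dev environment.
# Names, paths, and the repo URL are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: coolstore-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/coolstore-gitops.git
    targetRevision: main
    path: environments/dev        # manifests for this environment
  destination:
    server: https://kubernetes.default.svc
    namespace: coolstore-dev
  syncPolicy:
    automated:
      prune: true      # remove objects that were deleted from Git
      selfHeal: true   # revert manual drift back to the state in Git
```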
Going back to the pipelines, we can see that the pipeline is still going to take a while to run. So rather than wait for it, we're going to go in and look at a previous pipeline run that I did and review that one. In OpenShift Pipelines, as mentioned earlier, a pipeline is really composed of a set of tasks, and these tasks are represented by the bubbles that you see here. So we have a variables task, a git clone task, etc. Within each task, you can have one or more steps. The variables task only has one step, whereas the Maven task has two steps. Hovering the mouse over a particular task will show me how long each step has taken to execute. If I drill into a task, I can see the logs for that particular task, and the developer can then review those logs if they need to identify a particular problem or error condition that happened during the execution of this pipeline. If I go to the task runs (a task run is the running instance of a task; every task, when you run it, becomes a task run) and I look at an individual task run here and its YAML, you can see that each of these task runs is being signed by Tekton Chains. Tekton Chains is an add-on for OpenShift Pipelines; it is currently in tech preview but will GA quite shortly. Tekton Chains is responsible for signing all of the task runs, as well as the artifacts that are generated by the pipeline. This is really about ensuring that organizations can establish provenance for the particular application that they're building, and ensure a secure supply chain as they build that application. So here I can see that it's signed if I scroll down a bit; here is the actual signature that has been attached to this particular task run. You can also go and look in Rekor for the metadata associated with the signing of this particular task run.
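The signing marker shown in the YAML is recorded by Chains as annotations on the TaskRun. An illustrative fragment, with placeholder values (the annotation keys are the ones Chains uses, but everything else here is hypothetical):

```yaml
# Illustrative fragment of a TaskRun after Tekton Chains has processed it.
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: build-run-abc123            # placeholder name
  annotations:
    # set by Chains once the run has been signed
    chains.tekton.dev/signed: "true"
    # when a transparency log is configured, Chains records the Rekor entry
    chains.tekton.dev/transparency: "https://rekor.sigstore.dev/api/v1/log/entries?logIndex=..."
```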
So Tekton Chains is really providing all of that provenance information and ensuring that we have a secure supply chain as we go through things. Going back to the details, let me navigate back to my pipelines, to the pipeline runs, and back to that pipeline, and then we can continue on. So that's a brief overview of the tasks and how they work. They're really set up in a directed graph, one after the other, and executed in sequence from left to right. We can now dig in and look at how the application itself is being built. For the first step of the pipeline, we're using this variables task to aggregate a variety of information about the environment that the downstream tasks will then use to interoperate with a variety of different technologies. For example, for OpenShift Security, we need to know where it's deployed and the token that the pipeline needs in order to authenticate and interact with OpenShift Security. Similarly, we'll do the same with SonarQube and Nexus to allow that interaction as well. Once we have that information collected, we can proceed with building the actual application. The first step is to clone the repository containing the source code for the application, and then we move on to doing a build using Maven. Now, this is a Java application, so we're using Maven to do that build, but if this was a different language or framework, we'd use a different build tool. Within that Maven environment, we're executing two goals. The first goal is the package goal, which will actually build the Java artifact, the JAR file, that will eventually be deployed to the different environments. The second goal takes that JAR file and publishes it into our Nexus artifact repository, which is a third-party repository that we're using, making it available to developers via that mechanism.
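The clone-and-build portion just described can be sketched with the community git-clone and maven tasks from Tekton Hub. This is a hedged sketch: parameter values, workspace names, and the runAfter wiring are assumptions, not the demo's actual pipeline definition:

```yaml
# Sketch of the clone and Maven build tasks inside a Pipeline spec.
# Values are placeholders; the two Maven goals mirror the narration:
# "package" builds the JAR, "deploy" publishes it to Nexus.
tasks:
  - name: git-clone
    taskRef:
      name: git-clone               # Tekton Hub task
    params:
      - name: url
        value: $(params.GIT_REPO)
      - name: revision
        value: $(params.GIT_REVISION)
    workspaces:
      - name: output
        workspace: source
  - name: build-app
    runAfter: ["git-clone"]
    taskRef:
      name: maven                   # Tekton Hub task
    params:
      - name: GOALS
        value: ["package", "deploy"]
    workspaces:
      - name: source
        workspace: source
```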
Once we've got that artifact built, we move on to the static code analysis. Here we integrate with SonarQube, one of our partners, and perform a static code analysis that can be used as part of our downstream gating checks when we move to production; we'll see that a little later on. Once we have our artifact and we've done the code analysis, we can build the container image. So we will build that container image, and then we will publish it to an OpenShift container registry. Once we have that built and published, the pipeline branches into two parallel streams. In the first stream, the upper stream, we take the container image that was published and send it to OpenShift Security to scan that image. That scan will give us a list of vulnerabilities that have been detected in the image, and you can see that here. Each of those vulnerabilities has a severity associated with it, along with a link that a developer or a security team member can use to understand the impact of the vulnerability on this particular application. Now, if you're building a lot of applications and you have a lot of vulnerabilities, which is not uncommon, it can be a lot of information for a security team to digest and handle. So a way to make this more palatable and approachable, and to avoid the forest-for-the-trees issue, is to run that image through a policy check in OpenShift Security. That policy check will use policies that the organization has created and curated for its particular needs in order to validate and verify that the image is in compliance with its requirements. For our particular demonstration here, we've said that we need to have at least a critical vulnerability before we will fail this particular scan.
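The fan-out into two parallel streams comes from Tekton's execution model: tasks that list the same predecessor in runAfter run concurrently. A minimal sketch, with illustrative task names:

```yaml
# Sketch of the pipeline branching after the image push: both tasks
# declare runAfter ["push-image"], so Tekton runs them in parallel.
# All task names here are placeholders.
tasks:
  - name: image-scan                 # upper branch: security scan + policy check
    runAfter: ["push-image"]
    taskRef:
      name: security-image-scan      # placeholder for the scanning task
  - name: update-dev-manifests       # lower branch: start the dev deployment
    runAfter: ["push-image"]
    taskRef:
      name: git-update-deployment    # placeholder name
```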
We only have one policy violation in play right now, which is that the Red Hat Package Manager is in the image. However, that's a low severity, and as a result we're only getting a warning out of the policy check instead of a failure, which allows the pipeline to proceed to the next step. Now, if we did have a failure, we could send a notification through Slack to let the developers and the security team know that there is a problem that they need to identify and rectify. So that is the upper parallel branch, where we're running our security scan and validating compliance. The second part of the branch here, the lower part, is where we're actually going to deploy the application into our environments. We have two environments that we're going to be deploying into: one is the development environment and the other is the stage environment. The third environment, production, is going to be done separately, through a manual approval process. For these lower environments, we're actually going to do continuous deployment, i.e. no gating and no approval required; we can push all changes made to the source code directly into those environments. In order to push these changes into an environment, there's a three-step process represented by these tasks. The first step is to clone the manifests that OpenShift GitOps is using to manage the deployment in the dev environment; then we take the container image digest that was generated in the previous step and update the image reference that we want deployed in the dev environment to the new digest. At that point we commit it into Git, as we see here, and make it available for OpenShift GitOps to deploy. The next step is a very simple step.
It is really just coordinating the deployment of that image in a synchronous fashion with OpenShift GitOps, and ensuring that the image gets deployed successfully in the development environment. Once it's deployed successfully in the development environment, we then run an integration test on the application. Now, the application that we've deployed is an API-based application, so we're going to run an API test against it. To do that, we use a third-party tool called Newman to execute those API tests and ensure that the application is functioning to specification before we move on to the next step. Once we've done that, we continue with deploying to the stage environment, following the same three steps that we did previously (so we won't go into them in detail again): a Git update, a sync, and an integration test. Now, once both of these branches have completed, this task will execute next. What this task does is notify the developers that the build has completed and give them information about the output of that build, and, to allude back to what I mentioned earlier about Tekton Hub, that is done through a Slack message. So if we look at the Slack messages here, you can see (I believe this is actually the notification from the pipeline that we're running) that the pipeline executed successfully, and that we're giving the developers a variety of information about that pipeline execution that they can look into for additional details. There's a link here back to the pipeline run that we're showing. There is a link to the OpenShift Container Registry where the image is being stored.
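The clone-update-commit step of the three-step deployment process can be sketched as a small Tekton task. This is a sketch under stated assumptions: the image name, repo layout, and use of a kustomize-based manifest structure are all hypothetical:

```yaml
# Hedged sketch of updating the image digest in the GitOps repo.
# Names and paths are placeholders.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: git-update-deployment
spec:
  params:
    - name: IMAGE_DIGEST
  workspaces:
    - name: manifests       # the cloned GitOps repository
  steps:
    - name: update-digest
      image: registry.example.com/git-kustomize-tools  # placeholder; needs git + kustomize
      workingDir: $(workspaces.manifests.path)/environments/dev
      script: |
        # point the deployment at the newly built image digest
        kustomize edit set image coolstore-app=quay.example.com/coolstore@$(params.IMAGE_DIGEST)
        git add .
        git commit -m "Update dev image digest"
        git push   # OpenShift GitOps picks up the change from here
```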
There is a link here to OpenShift Security, so if the developers want to review that scan in more detail, along with the policies, they can go and view it there. And finally we have a link to our SonarQube code analysis, where the developer can review that as well. These three items we're going to review in more detail in the next step of the process, which is deploying the application to prod through a gating check. The final step in the pipeline is a finally task, indicated by this white box, and this finally task is executed on a conditional basis. Whenever you see this little diamond, it means there's a condition attached that must be satisfied before the task actually executes, and in this case, for this last task, the condition is: did the pipeline complete successfully, or did it fail? If the pipeline failed, it will send another notification indicating that there's been a failure, drawing the attention of the developers, who will then be able to investigate that failure and rectify the situation. As I mentioned, the pipeline that we executed earlier has now completed, so just to review really quickly: this is the pipeline that was just executing, and we can see that it completed the process we just walked through. Now we're ready for the next step, which is to push this application to production. In the previous flow, we reviewed how OpenShift Pipelines and GitOps are used to deploy applications using continuous deployment in our development and staging environments. However, in most enterprises it is common to have gating requirements in higher environments. In this part of the demo, we will see how we accomplish this using Git workflows. Specifically, we're going to use a pipeline to automate the creation of a pull request, which must be manually reviewed and accepted before the change can be deployed to the production environment.
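The conditional finally task described above maps onto Tekton's finally section with a when expression over the pipeline's aggregate status. A minimal sketch, assuming a community Slack task from Tekton Hub and placeholder names:

```yaml
# Sketch of a finally task that only fires when the pipeline failed.
# $(tasks.status) is Tekton's aggregate execution status variable.
finally:
  - name: notify-failure
    when:
      - input: $(tasks.status)
        operator: in
        values: ["Failed"]
    taskRef:
      name: send-to-webhook-slack    # community task from Tekton Hub
    params:
      - name: webhook-secret
        value: slack-webhook-secret  # placeholder Secret with the webhook URL
      - name: message
        value: "Pipeline $(context.pipelineRun.name) failed"
```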
The first step is identifying a specific release that needs to be deployed to production. Many organizations have an existing change management solution, and these can be integrated with OpenShift Pipelines in a variety of ways, with webhooks being one of the more common mechanisms for doing so. Now, in this particular demo, the pipeline is still executing in our cluster, which has the dev, stage, and CI/CD environments, but we have a separate cluster where we're actually deploying the production environment. So what this pipeline is going to do first is execute a variables task, similar to what we saw in the previous pipelines, which aggregates information that the other tasks need. The next step is to get the image digest in order to update the production environment. The reason we need this task is that the notification we get from the container registry only contains a reference to the tag, i.e. the prod tag, so we need to go back to the container registry and translate that prod tag back into an image digest. Once we have that image digest, we can update the reference in Git for the production environment. We saw this task in our previous examples for the development and staging environments; however, there is one key difference in this particular execution of the task. In this case, we're telling the task to create a new branch, which we did not do before. So we will clone the repository, create a new branch, update the digest in that new branch, and commit that new branch back to our repository. Once that is completed, we will create a pull request in our repository that a human will be able to review and approve. If we go and look at our pull requests here, we can see we have that pull request ready to go.
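The tag-to-digest translation step could be done with a tool like skopeo. A hedged sketch, with a placeholder registry and image name (the demo does not specify how this task is implemented):

```yaml
# Hypothetical task that resolves the prod tag back to an image digest
# and exposes it as a Tekton result for downstream tasks.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: resolve-image-digest
spec:
  results:
    - name: digest      # consumed via $(tasks.resolve-image-digest.results.digest)
  steps:
    - name: inspect
      image: quay.io/skopeo/stable
      script: |
        #!/bin/sh
        # skopeo inspect reads image metadata, including the manifest digest
        skopeo inspect --format '{{.Digest}}' \
          docker://quay.example.com/coolstore:prod \
          | tr -d '\n' > $(results.digest.path)
```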
And if we look at our code, you can see that I've got a couple of branches now: the main one and the new branch that was created, inventory-quarkus-prod, which is where the pull request is being sourced from. If we go back to the pull request itself, we can see that it gives us a nice title telling us what the change is that's being requested, and we can actually see the change itself beforehand. In this case, we're simply updating the image reference, as we discussed. In order to approve this change, we could assign reviewers to it if we wanted to, but in this case I'm just going to go through it and approve it myself directly. We also get a task list here of things that need to be done before this change can be approved and merged into production. The first part of that task list is reviewing the image vulnerabilities in the OpenShift registry. So if I click on that and we go to the registry, we can see that it's highlighting the production tag, and we can see there are two medium vulnerabilities. We can look at the vulnerabilities, get more information about what they're about, and decide whether or not this is something we're okay with proceeding on. In this case, let's say yes, we're fine proceeding with the vulnerabilities that have been identified, and we'll click that box. The next step is to look at the scan and policies in OpenShift Security, so let me go ahead and click that one. This takes us to the OpenShift Security portion, where we can review the image. We can see that this has a risk priority of 20 and a top CVSS score of 7.5, that it's used in two deployments, namely the development and staging deployments, and we can see a list of all of the CVEs, the vulnerabilities, that OpenShift Security has identified for this particular image. Now, in this particular case we don't have any vulnerabilities that are failing policy, but say, for example, this moderate one was a critical vulnerability.
Suppose we investigated it and we know that it's not going to be addressed by the vendor in a timely fashion, and it also does not impact our particular application. We can request that this CVE be deferred, and the security team can then review that request for deferral and approve it, at which point we will start passing our policy checks and can continue on with deploying to production. To review the policies, we can go to the deployments that we've done already, and we can see from the two policy statuses here that we've passed all of our policies. So we're good to go in terms of deploying this particular image into our production environment from an OpenShift Security perspective, and I'm going to go ahead and check that box off. The last step, as I mentioned in the previous flow, is that we integrated with SonarQube, so we can go look at our static code analysis and get an understanding of the overall state of our application in terms of its code. We can see there are no outstanding bugs or vulnerabilities that have been identified by SonarQube, but we do have six code smells. We can drill into those and have a look to see if there's anything here that's of particular concern to us; in our case, there's nothing we're really too concerned about, and we're willing to accept this. So again, we'll go ahead and check that off. Now, at this point in time I'm ready to merge this pull request, but before I do that, I'm going to break this off into a split screen. So we'll take this tab here and split it into one half, and we'll take this tab and split it off into the other half, and go back here. Let's go ahead and merge this and confirm the merge. Once this gets merged, OpenShift GitOps (you can see that's happening here) will go ahead and actually deploy this application. So you can see the old pod is there.
The new pod is now being stood up with the new image, and then the old pod goes away, and we've actually deployed this change automatically through OpenShift GitOps once we merged that change into Git. You can also see there's something else running here. This is a post-sync hook from OpenShift GitOps that is triggering a testing pipeline so that we can test our new deployment in production, just to validate that there's nothing wrong; we can run some automated tests against it and get that warm fuzzy feeling that our deployment was successful. If we look at that in more detail, we go back to Pipelines (let me change back to CI/CD; oops, sorry, wrong cluster, let me go back to this cluster and to Pipelines), and we can see that our production test pipeline has run, so we can go ahead and have a look at that. There are really only two steps here: we run an integration test, similar to what we did before, using Newman to test the API and make sure that nothing is broken in production, and then we send a notification to Slack to let the developers and the team know that we've deployed this to production. So if we go down here, we can see now that we've gotten a message saying that production has been synchronized by OpenShift GitOps, that we've succeeded, and we're good to go with our production deployment. Now that we've run our pipeline a few times, we can go in and see detailed information about the metrics of the pipeline in terms of how it's performing. If we go in and look at the metrics tab of the pipeline, we can see the success ratio for the pipelines, the number of pipeline runs that have happened, and what the duration of the pipeline looks like, so we can see if it's trending up or down over time, and we can also see the duration of the individual tasks of that pipeline.
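The post-sync hook mechanism mentioned above is an Argo CD resource hook: a resource annotated as a PostSync hook is created only after a sync completes successfully. A sketch of how the production test pipeline might be wired up this way, with placeholder names:

```yaml
# Sketch of an Argo CD PostSync hook that kicks off a test PipelineRun
# after production has been synchronized. Names are illustrative.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: prod-test-
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation  # replaced on the next sync
spec:
  pipelineRef:
    name: production-test-pipeline   # placeholder: the Newman test + Slack notify pipeline
```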