From a high-level perspective, the software supply chain consists of four distinct steps. First, we develop our code. Next, we build it. Then we deploy it to our platform of choice, and finally we monitor our code's behavior on that platform. All of these steps need to be as secure as possible. So how do we do that? First, whatever you use as the foundation for your software development efforts should come from a trusted source: a source that provides curated content with attestation and provenance, be it application libraries, runtimes, or even base images. We talk a lot about shifting security left. What does that mean? It means we need to support our developers. We need to give them tools to develop code in a secure fashion and identify possible problems early in the development cycle, but without standing in their way. Developers want to develop the next cool feature, not deal with security tools all the time. It goes without saying that we should automate our build process as much as possible. Additionally, we should augment our build process with security steps. Our pipelines should automatically sign images when they are built, verify that artifacts have been signed, and automatically generate attestations and SBOMs (software bills of materials). The same is true for our deployment process. Whatever we deploy to our systems should have been checked and scanned thoroughly. We should make sure that whatever we deploy adheres to our best practices, standards, and policies, and comes from a trusted build system. Has it been signed? Can we validate its provenance? Does it have an SBOM attached? Is it SLSA compliant? And, in the case of Kubernetes, does it follow deployment best practices? Have limits been properly set?
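As a rough sketch of what such pipeline security steps can look like on the command line, the following command fragment uses the cosign and syft CLIs for keyless signing, attestation, and SBOM generation. The image name is illustrative, and syft is just one possible SBOM generator; this is not the exact tooling shown later in the video.

```shell
# Illustrative pipeline steps for signing and attesting an image.
# Assumes the cosign and syft CLIs and keyless (OIDC-based) signing;
# the image reference is a placeholder.
IMAGE=quay.io/example/accounting-system:1.0.0

# Generate a CycloneDX SBOM for the built image
syft "$IMAGE" -o cyclonedx-json > sbom.json

# Keyless-sign the image; the signature event lands in a transparency log
cosign sign "$IMAGE"

# Attach the SBOM to the image as a signed attestation
cosign attest --type cyclonedx --predicate sbom.json "$IMAGE"
```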
Last but not least, in our efforts to evolve from a software supply chain to a trusted software supply chain, we need to monitor our systems, and we also need to be able to do a risk assessment. If a zero-day exploit hits or a CVE has been identified: where am I using it? Is it affecting me? And if so, what is the blast radius? For that, I need to be able to evaluate SBOMs (software bills of materials), both from our own development efforts and those provided to me by trusted vendors. Now that we know what needs to be done to evolve from a software supply chain to a secure software supply chain, let's take a look at what the Red Hat Trusted Software Supply Chain products can do to help. Well, the first step is pretty straightforward: Red Hat has been providing trusted enterprise open source software for more than 30 years, from Red Hat Enterprise Linux to Application Services, from the Red Hat Maven repositories to OpenShift. So let's see what tools we can give our developers to support them in building secure applications in a secure and trusted software supply chain without getting in their way. We start with Red Hat Developer Hub, the enterprise-ready and Red Hat-supported build of the popular Backstage open source project. At the core of Red Hat Developer Hub is the catalog, where a developer can look up and find all components of a company's development landscape. I have been asked to work on the accounting system, so I look it up. I check the accounting system's system diagram, and I can see what components it consists of. I could drill down into any of these components from right here and start working on them. But let's look at another example. I have been assigned ownership of the DB backend component. I open the component's home page and check the documentation, which, by the way, is documentation as code: it is part of my Git repository and surfaced here in Red Hat Developer Hub.
I can open a browser-based IDE to start working, or I can open the Git repository right from the component home page. As a developer, I know how this works: I clone my code and start working in my preferred IDE. Using OpenShift Dev Spaces, following the link from the component home page, I can start working on my code directly here in my browser. OpenShift Dev Spaces, based on Eclipse Che, provides me with a fully working development environment, natively on OpenShift. All the IDE features, plugins, and tools needed for a specific task are configured via a devfile, which is also stored along with the component's code in the Git repository. No need for complicated VDI setups or developers struggling with locked-down workstations for security reasons; we can work with our code and the tools we need right here in the browser. Maintaining existing code is okay, but developers want to build things. Where do I start? To avoid reinventing the wheel for the hundredth time, give developers a meaningful starting point, and also provide coding and security guardrails, we start building with software templates: templates tailored for specific tools, languages, and requirements. As part of the Red Hat Trusted Software Supply Chain, the example templates are built with security in mind and can be easily adapted and tailored to your needs and existing toolchain. As we can see here, using a template starts with picking the one that suits my needs and filling out the wizard's forms. I just gave my new component a name and, since this is Java, an artifact ID, and selected the container image registry and the Git repository location. And now a lot of things are happening under the hood. Based on the input, a Git repository has been created with meaningful code as a starting point for the developer, based on what the requirements were, be it a REST API in Quarkus or a backend component in .NET C#.
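To make the devfile idea concrete, here is a minimal sketch of what such a file could look like for a Java component. The component name, image, and command are assumptions for illustration, not the actual template contents shown in the video.

```yaml
# Minimal illustrative devfile for a Java component.
# All names and the tools image are placeholders.
schemaVersion: 2.2.0
metadata:
  name: db-backend
components:
  - name: tools
    container:
      image: registry.access.redhat.com/ubi9/openjdk-17
      memoryLimit: 2Gi
commands:
  - id: build
    exec:
      component: tools          # run inside the tools container
      commandLine: mvn package  # standard Maven build
      workingDir: ${PROJECT_SOURCE}
```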
Along with the starter code, which can also include guidance on how to use it, we have been provided with the OpenShift Dev Spaces configuration that is needed for this type of component. Another part of the package are the Argo CD (aka OpenShift GitOps) manifests to deploy this application. But before we can let Argo CD do its magic, we need to generate something that it can deploy. So, following the true GitOps approach and pipelines as code, the secure pipeline needed for this component, along with the webhooks, has also been instantiated and published. The new component that we just created has also been published to the catalog. While we could open the newly created code repository from right here, let's check out the component's home page in the catalog. We can see that Argo CD is now syncing the application's manifests and creating the requested OpenShift namespaces along with them. Let's make some code changes now and see what it means to shift security left. From our Git repository, we copy the repo URL, clone it, go into the directory, and start working with it locally in our IDE of choice. We are following a documentation-as-code paradigm here, so developers can add their documentation right here in their repository while they work on their code, which is much closer to a developer's workflow than regularly (or maybe not so regularly) updating some wiki page elsewhere. We can also see that some other documentation has already been provided as part of the template. Let's just update the start page here quickly. Next, let's modify our new application's start page so we can see our modifications later in the running application. We just modify the welcome message here. We said this is a Java application, since we used the Quarkus Java template, so let's quickly check out the Maven POM file to see the dependencies used and maybe add some others, as this is starter code provided by the template.
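For reference, an Argo CD Application manifest of the kind such a template could generate might look like the sketch below. The repository URL, paths, and namespace names are assumptions for illustration only.

```yaml
# Illustrative Argo CD Application pointing at the generated GitOps repo.
# repoURL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-quarkus-app-dev
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/my-quarkus-app-gitops.git
    targetRevision: main
    path: overlays/development
  destination:
    server: https://kubernetes.default.svc
    namespace: my-quarkus-app-dev   # the namespace Argo CD deploys into
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band cluster changes
```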
Opening the POM file automatically kicked off the Red Hat Dependency Analytics plugin, which checks for known direct or indirect vulnerabilities in your project's dependencies. These findings are shown as individual problems, but we can also open the detailed report to assess them. In the report, below an overall vulnerability status of my project, I can see individual vulnerabilities, their severity, and the impacted dependencies, but also available remediations, along with some background information about what is causing each problem. Let's close the report for now and go back to our POM file. Problems introduced by dependencies are also highlighted accordingly. Let's just say that, as a developer, I have been searching the internet for a solution to a specific challenge I have. Why reinvent the wheel, right? So I want to use some code I found, along with the required dependencies, and I paste them here. Ctrl+C and Ctrl+V are my friends as a developer. I can see now that accepting them verbatim was probably not such a good idea, as some critical vulnerabilities have been highlighted, and I now have 23 direct vulnerabilities in my code base. Unknowingly, I have introduced two of the nastiest vulnerabilities of recent times: Log4Shell, and the prominent Struts vulnerability that was the cause of the Equifax breach. But fear not: as we can see in our detailed report, remediations, along with background information, are available directly from the report. Giving this critical information to the developer enables him or her to fix these issues early in the development cycle. While there are other security measures in place in our trusted software supply chain that would have caught these, the developer would need to fix them anyway. So shifting security left and giving the developer the right tools saves time. We have fixed these issues by removing those dependencies from our POM file; Ctrl+Z is another close friend. Let's move on and commit and push our code.
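For context, dependency entries like the following would trigger exactly these findings: log4j-core 2.14.1 is vulnerable to Log4Shell (CVE-2021-44228), and struts2-core 2.3.31 to the remote code execution flaw behind the Equifax breach (CVE-2017-5638). This is an illustrative POM fragment, not the exact one in the video.

```xml
<!-- Illustrative pom.xml fragment with two famously vulnerable versions. -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.14.1</version> <!-- CVE-2021-44228; remediation: 2.17.1 or later -->
</dependency>
<dependency>
  <groupId>org.apache.struts</groupId>
  <artifactId>struts2-core</artifactId>
  <version>2.3.31</version> <!-- CVE-2017-5638; remediation: 2.3.32 / 2.5.10.1 or later -->
</dependency>
```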
We want to see it in action. We ignore the vulnerability that just popped up for now, give it a commit message, "added docs, changed dependencies", and commit our change. Now a browser window pops up, asking us to log in. We are using Git commit signing with Red Hat Trusted Artifact Signer, Red Hat's build of the Sigstore project for keyless signing. That means there is no need to manage, issue, and distribute key pairs that need to be revoked and rotated, that the developer needs to take care of, and that come with yet another password to remember. The signing process is tied to the OIDC identity: Trusted Artifact Signer issues a short-lived key and stores the event in a transparency log. Back on our new component's home page, we check the CI tab and can see that our commit has triggered a pipeline run. As developers, we might not be overly interested in the details of the pipeline steps, but we do want to see that our code commit made it through. So we want to see everything from the context of our development process, and we can: we see all the details of the pipeline run right from the component page, without having to go to yet another system. Regarding the pipeline, we want to see as much green as possible. Green is good. Without expanding the visualization, we can also check our pipeline status from this progress bar, and we can check the logs of each individual pipeline task right here. In this case, we see that this pipeline verifies our Git commit. No unsigned commits, or commits by the wrong people, will even be built for this application, let alone packaged and deployed. We can see that this commit verification has passed. The developer gave this code commit the commit message "wrong signature" because the previous commit was signed by the wrong person and wouldn't go through. So he tried again with his own identity, ultimately taking responsibility for the commit. So the verification and signature check is okay.
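Wiring Git up for this keyless flow is a small amount of one-time configuration. The following fragment shows the standard gitsign setup (gitsign presents itself to Git as an x509 signing program); in a Trusted Artifact Signer deployment, additional options would point gitsign at the internal Fulcio and Rekor endpoints, which are omitted here as they are installation-specific.

```shell
# One-time local repo configuration for keyless commit signing with gitsign.
git config --local commit.gpgsign true       # sign every commit
git config --local tag.gpgsign true          # sign tags as well
git config --local gpg.format x509           # gitsign uses x509, not OpenPGP
git config --local gpg.x509.program gitsign  # delegate signing to gitsign
```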
Not only has it been signed and the signature verified, it also has a matching entry in the Trusted Artifact Signer transparency log, called Rekor. Another security insight a developer can get directly from here is the pipeline output, in this case a security report from Red Hat Advanced Cluster Security for Kubernetes. During pipeline execution, the build steps generate a container image. This image has been scanned and checked. The output shows image scan results, image checks (which are policy checks taking the scan results into account), and a deployment check, verifying that best practices have been adhered to. Before moving on, let's quickly check what went wrong with the previous pipeline run. Opening the verify-commit log, we can see our original commit with its commit message. The commit had been signed, but none of the expected identities had signed it. Which identities are acceptable will depend on the use case and can be adapted to specific requirements. So we are not only checking that a commit has been signed, but also that it was signed by the right person. As a developer, I want to see my application running. So from the component home page, I go to the Topology tab. I can see that, following the successful pipeline run, Argo CD has deployed my application in the development namespace. As proof, I can see my modified greeting. Additionally, following the documentation-as-code mantra, I can see my updated TechDocs right from the link on the component home page, nicely rendered, as I would expect. Let's close the loop and move one step further to pre-prod, our next stage. I create a tag in our code repository, and as we can see in our CI tab again, another pipeline run has been triggered. This is the promotion pipeline, and it consists of different tasks than the build pipeline. It uses Enterprise Contract together with Trusted Artifact Signer to verify SLSA compliance, among other things.
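The three checks described above map to three subcommands of the ACS CLI. As a hedged sketch, a pipeline task might invoke them like this (the Central endpoint and image reference are placeholders; authentication via a token is assumed):

```shell
# Illustrative roxctl invocations for the three ACS pipeline checks.
# Endpoint and image are placeholders; ROX_API_TOKEN must hold a valid token.
ENDPOINT=central.example.com:443
IMAGE=quay.io/example/app:1.0.0

# 1. Scan the image for vulnerabilities
roxctl --endpoint "$ENDPOINT" image scan --image "$IMAGE"

# 2. Evaluate build/deploy policies against the scan results
roxctl --endpoint "$ENDPOINT" image check --image "$IMAGE"

# 3. Check the deployment manifest against best practices (limits, etc.)
roxctl --endpoint "$ENDPOINT" deployment check --file deployment.yaml
```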
SLSA, pronounced "salsa", stands for Supply chain Levels for Software Artifacts and is a security framework of standards and controls that prevent tampering, improve integrity, and secure packages and infrastructure. Core pillars of SLSA compliance are the existence of build attestations, signatures, and software bills of materials, all of which have been created in our pipeline and attached to the image in the registry. While this can be easily accessed here through the Image Registry tab, a more detailed view is available through the link to the Quay registry. Here we can see that the images we created have been signed via cosign. As an additional security layer, Quay scans images at rest. If your pipeline hasn't run for a while, or if an image is pushed to the registry from a less secure environment, Quay still scans all images in the registry. From here, you have access to CVE details as well as security advisories pertaining to those vulnerabilities. The same vulnerability information, as well as access to the security advisories, is available from the Image Registry tab on our component page as well. Going to the Topology tab, we can now see that Argo CD has synced and deployed our application to the pre-prod namespace. Closing the loop, we create a release from the tag we created earlier, and we see that another promotion pipeline is triggered. While this pipeline is running, let's quickly take a look at the previous run's pipeline output. Here we have a detailed view of the rules and policies that have been checked using Enterprise Contract, validating the SLSA compliance of our artifacts and build system. Going to the Topology view once more, we can now see that our application has been deployed to the production namespace, and the cycle is complete. So far, we have looked at the build, or CI (continuous integration), process from a developer's perspective.
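Verifying those core SLSA pillars on an image can also be done by hand with cosign. The following is a sketch only; the identity, issuer, and image values are placeholders, and a real Trusted Artifact Signer setup would point cosign at its own Fulcio/Rekor instances.

```shell
# Illustrative keyless verification of an image's signature and its
# SLSA provenance attestation. Identity/issuer/image are placeholders.
IMAGE=quay.io/example/app:1.0.0

# Verify the image signature against the expected signer identity
cosign verify \
  --certificate-identity developer@example.com \
  --certificate-oidc-issuer https://oidc.example.com \
  "$IMAGE"

# Verify that a signed SLSA provenance attestation is attached
cosign verify-attestation --type slsaprovenance \
  --certificate-identity developer@example.com \
  --certificate-oidc-issuer https://oidc.example.com \
  "$IMAGE"
```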
We have seen how we can support developers and make their lives easier while stepping up security measures at the same time, effectively shifting security left. Now let's look at our build pipeline in more detail and at which Red Hat products specifically provide these capabilities. To automatically sign a code commit, we use Git configured to use gitsign. Gitsign itself was configured to talk to the Red Hat Trusted Artifact Signer endpoints; this is what challenged the user to sign in and tied his identity to the commit signature. Similarly, verifying a code commit uses gitsign to retrieve and verify commits against the keyless signing infrastructure provided by Red Hat Trusted Artifact Signer. The build-sign-image task uses Red Hat Trusted Artifact Signer and cosign to sign the image that has just been built. As a next step, an SBOM (software bill of materials) of the image is generated and stored in Red Hat Trusted Profile Analyzer, which manages SBOMs, CVEs, and security advisories across your company. Additionally, that SBOM is attached to the image, and both are stored in the Red Hat Quay container registry. Red Hat Quay then starts scanning the image for vulnerabilities. The three ACS tasks in our pipeline use Red Hat Advanced Cluster Security for Kubernetes and its roxctl CLI binary to scan the image, check whether any of the scan results violate policy, and furthermore check for Kubernetes deployment best practices, for example, whether appropriate limits are set on a container deployment. Moving the pipeline visualization a little further to the right, the query-Rekor-signed-provenance task uses another Trusted Artifact Signer capability, the Rekor transparency log. This task checks whether the build provenance, or rather its attestation, has been signed and attached to the image, by querying Rekor for the SHA value of the attached attestation. But how is this attestation of the build process generated?
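Rekor lookups of this kind are keyed by the SHA-256 digest of the signed payload. As a minimal, self-contained sketch of that one detail, the helper below computes the digest a task could feed to a Rekor search (for example via `rekor-cli search --sha <digest>`); the payload shown is illustrative, not a real attestation.

```python
import hashlib

def payload_sha256(payload: bytes) -> str:
    """Return the hex SHA-256 digest used to look up an entry in the
    Rekor transparency log."""
    return hashlib.sha256(payload).hexdigest()

# Illustrative stand-in for an attestation blob attached to an image;
# a real one would be the full signed in-toto statement.
digest = payload_sha256(b'{"predicateType": "https://slsa.dev/provenance/v1"}')
print(digest)  # 64 hex characters identifying the payload
```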
Red Hat OpenShift Pipelines, the enterprise-ready build of Tekton, provides Tekton Chains. Tekton Chains is a Kubernetes controller that watches all TaskRun executions in a cluster. Upon completion, it creates a snapshot of them, signs them, and stores them, in our case as a signed attestation attached to the image. The "signed" check mark next to a completed pipeline run tells the user that Tekton Chains has run and signed the TaskRun snapshots. For the signing process, Tekton Chains also uses the Red Hat Trusted Artifact Signer infrastructure. To enhance the security of our deployment pipeline, we need to verify attestations and signatures: nothing that doesn't provide appropriate attestations and signatures should ever be deployed on our systems. For the deployment pipeline, the Red Hat Trusted Software Supply Chain uses Enterprise Contract, which in turn uses Red Hat Trusted Artifact Signer to verify signatures and attestations. Enterprise Contract applies a number of policies, defined in the Rego policy language, to an image, primarily validating SLSA compliance, but it can easily be extended with additional policies. The last pillar of any secure and trusted software supply chain is monitoring our systems and being able to react to new exploits or security threats in general, managing our overall security posture. In the Red Hat Trusted Software Supply Chain, we use Red Hat Advanced Cluster Security for Kubernetes to monitor the platform. We have already seen it during the build process, where we used its scanning capabilities and policies to assess an image's overall security profile. Let's quickly take a look at these policies. Here, for example, we have a policy that explicitly checks for a specific CVE, in this case the notorious Struts vulnerability. The policy is defined to be active during the build and deploy lifecycle phases, and the response in this case is inform only.
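To give a feel for how Tekton Chains is told what to sign and where to store it, here is a hedged sketch of its configuration ConfigMap. The key names follow the upstream Tekton Chains documentation; the namespace and exact values vary per installation (in OpenShift Pipelines this is typically managed through the operator rather than edited directly).

```yaml
# Illustrative upstream Tekton Chains configuration (values are examples).
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: tekton-chains
data:
  artifacts.taskrun.format: in-toto   # emit SLSA-style in-toto attestations
  artifacts.taskrun.storage: oci      # attach attestations to the image in the registry
  artifacts.oci.storage: oci          # store image signatures alongside the image
  transparency.enabled: "true"        # record signing events in Rekor
```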
When I edit this policy, I can change its behavior to also enforce the policy. If I choose this option, a build would fail, and clusters with the admission controller enabled would not deploy it. To use it in a build pipeline as we did, I can download the CLI binaries here. If runtime were part of the enforcement lifecycle, a pod violating the policy would be deleted. But we want to prevent malicious or vulnerable code from reaching the cluster in the first place. Red Hat Advanced Cluster Security for Kubernetes comes with a large number of policies protecting against different lifecycle threats, for example cryptocurrency mining, should someone try to deploy a pod that contains such code. Another possible attack vector is networking threats. Let's select our production application as an example. It's fairly simple, but we can see it has no network policies attached, so it is unprotected from networking access by pods in other namespaces. Now, let's assume for a moment that one of our GitLab pods has somehow been compromised, or think of a WordPress pod in a WordPress namespace on the same cluster. This pod could then try to traverse the network and infiltrate unprotected namespaces and pods if there are no outgoing or incoming network policies defined, as in this case. Red Hat Advanced Cluster Security for Kubernetes allows us to create network policies via the UI, and these could be made part of our templates, for example, so appropriate network policies would be available from the inception of an application. Red Hat Advanced Cluster Security also provides compliance checks for a number of industry controls, but also allows you to define specific individual controls. One example is the CIS Kubernetes benchmark. We can drill down and check which controls are not compliant. Opening a control allows me to drill down to the node level, should that be required, and check for control compliance at that level.
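A default-deny ingress NetworkPolicy is the simplest remedy for the exposure described above: applied to a namespace, it blocks all incoming pod traffic, including from other namespaces, until explicit allow rules are added. The namespace name below is illustrative.

```yaml
# Default-deny ingress policy for the production namespace (name illustrative).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules are listed, so all ingress is denied
```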
Last but not least, a comprehensive list of all policy violations, from security best practices to detected vulnerabilities or abnormal runtime behavior, allows us to assess the overall risk status of a cluster and search for specific violations. As an example, here we have the violation of a security best practice where a container image contains a package manager. This should be avoided because a package manager could be exploited to load malicious code at runtime. Another very important aspect of the trusted software supply chain is the ability to manage SBOMs (software bills of materials) and consume security advisories from the trusted vendors that provide me with software and services. Given the rising demand for SLSA compliance, more and more vendors will provide me with SBOMs of their products. And if I sell a software product, customers will ask me for SBOMs and SLSA provenance of the artifacts I generate. Red Hat Trusted Profile Analyzer provides me with the capabilities to store SBOMs, security advisories, and CVE publications and easily cross-reference them. If a vulnerability is disclosed: which of my products are affected, internal or vendor-provided? What is the blast radius, and how does it affect my risk profile? Maybe it affects a product, but only in a component that I'm not using. Red Hat Trusted Profile Analyzer can answer all these questions. So, a new CVE has been disclosed; let's check how it affects us. I find the CVE in the search. Opening it provides me with a list of affected products, in this case just one. Opening the dependencies shows the actual component that is affected, as products are built from many components and packages. Opening the component details shows all the vulnerabilities this component is affected by. From here, I can also go back and check which of my products are using this affected package.
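The cross-referencing idea at the heart of this workflow can be reduced to a toy sketch: given an SBOM's component list and an advisory feed, report which components are affected. The data shapes below are heavily simplified stand-ins; real SBOMs would be CycloneDX or SPDX documents, and real advisories carry version ranges rather than exact sets.

```python
# Toy model of SBOM-vs-advisory cross-referencing (data shapes simplified).
def affected_components(sbom_components, advisories):
    """Return {(name, version): [cve, ...]} for components with advisories."""
    hits = {}
    for comp in sbom_components:
        cves = [adv["cve"] for adv in advisories
                if adv["name"] == comp["name"]
                and comp["version"] in adv["versions"]]
        if cves:
            hits[(comp["name"], comp["version"])] = cves
    return hits

# Illustrative inputs: one vulnerable component, one clean one.
sbom = [{"name": "log4j-core", "version": "2.14.1"},
        {"name": "quarkus-core", "version": "3.2.0"}]
feed = [{"cve": "CVE-2021-44228", "name": "log4j-core", "versions": {"2.14.1"}}]
print(affected_components(sbom, feed))
```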
From the specific vulnerability, I can access the advisories that have been published, either for a specific product or for a number of a vendor's products. Checking the Vulnerabilities tab of this security advisory, I can see the available remediations in Red Hat Trusted Profile Analyzer. I can also manually check an SBOM that has been provided to me against the vulnerabilities and advisories database without permanently storing it. From the search menu, I can also access all products stored in the database, download their SBOMs, or assess an overall product security status, as all product SBOMs are continuously checked against vulnerabilities and advisories as they are ingested by Trusted Profile Analyzer. Uploading and storing SBOMs, for example from a pipeline as shown earlier, can be done via the Trusted Profile Analyzer REST API. In this video, we have seen what a secure software supply chain is and how the Red Hat Trusted Software Supply Chain implements it. For ease of consumption, Red Hat Developer Hub, Red Hat Trusted Artifact Signer, and Red Hat Trusted Profile Analyzer can be subscribed to and installed as a single product named Red Hat Trusted Application Pipeline, which contains all three. However, these products can also be subscribed to and installed individually. As we have seen, a trusted application pipeline also benefits from platform security and compliance monitoring, as well as from a container registry that scans container images for vulnerabilities. Red Hat Quay and Red Hat Advanced Cluster Security for Kubernetes are, among other products, part of the Red Hat OpenShift Platform Plus offering, but they can also be subscribed to and installed individually. Thank you very much for watching the Red Hat Trusted Software Supply Chain walkthrough video.