Hello all. Welcome. Good afternoon. I'm Krishna Rajesh, a senior architect with IBM, here along with Brandon Kelly. Today we're going to talk about our proposal to fight back against cyber risk in the software supply chain with a secure and compliant DevSecOps pipeline for regulated environments. Here's the rough agenda for today. What is cyber risk, and how does it affect the software supply chain? What does the current regulated environment landscape look like, and what are the regulatory requirements? What is DevSecOps, and how can it combat cyber risk? We'll talk about continuous integration, continuous delivery and deployment, and something new: continuous compliance. And we'll do a case study on BIAN, where we implemented our DevSecOps practices. Let's get started.

Cyber risks. Cyber risks can bring down the infrastructure of an organization, damage its reputation, and cause financial loss. Software supply chain security is often neglected. We always talk about observability, monitoring, alerting, et cetera. But remember, prevention is far better than cure: identify potential risks before they even reach production. Gartner predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains. That is a three-fold increase from 2021. And $4.35 million is the global average cost of a data breach. How can we make sure our software supply chains, our software suppliers, and the ISVs, the independent software vendors we work with, demonstrate the best security practices to minimize these security risks?

Regulated environments face unprecedented risk from multiple vectors: software supply chain attacks like Okta and SolarWinds, which everybody knows about; open source security vulnerabilities; compromised dependencies; the need for rapid development, with the potential to bypass critical CI/CD controls; and increasing complexity in IT security and automation tools like CI/CD systems. Compliance and audit readiness, with an auditable change log, is really hard to maintain.
Our goal here is to build an automated system that is secure against these risks, with less complexity. We have to prevent new vulnerabilities from reaching production; ensure frequent updates to production with quality and control; and at the same time collect evidence for handling security audits, while still tracking issues, documenting changes, and even reporting.

So what are supply chain risks? Here is an illustration which shows the different stages in a deployment: development, source code management, the build step, packaging, and deploying onto dev and production itself. Let's start with development. What are the risks here? A developer can push vulnerable code into your SCM. The SCM itself, a source code management tool like GitHub, can be compromised. If you remember the case with Okta, their GitHub repository was hacked and malicious code was inserted. In the build step, the build container platform can be compromised. What if hackers get hold of the API keys of your DevOps tools, probably cloud DevOps tools, and can access your source code? The next step is the package repo. Package repos like container registries can be compromised, especially if your container registry is a public registry. Eventually these compromised container images can get onto your development environment and onto production itself. So any stage of this delivery pipeline can be compromised.

Now what if, for example, you tested everything, deployed onto development and then onto production, and then new vulnerabilities are identified? How will you know that? How will you continuously scan your code for continuous security and compliance? These are the questions we are trying to solve. Let me now invite Brandon Kelly to talk about our DevSecOps pipelines in detail. Brandon? Thank you very much, Krishna.
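To make the registry risk just described concrete, here is a minimal sketch of one basic defence: refusing to deploy an image whose content digest no longer matches what was recorded at build time. This is illustrative only, not the pipeline's actual code; the function names and record format are invented.

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Compute the sha256 content digest, the way a registry identifies an image."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def is_trusted(tag: str, image_bytes: bytes, build_records: dict) -> bool:
    """Refuse any image whose digest does not match what the build recorded.

    build_records maps a tag to the digest captured when the image was built,
    so a tampered image pulled from a compromised registry fails the check.
    """
    return build_records.get(tag) == image_digest(image_bytes)
```

A deploy step would look the tag up in its trusted records and re-hash the pulled image before letting it anywhere near a cluster.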
So you have described the problem, and now let me describe the solution we have devised to combat the risk in the software supply chain. Our pipeline, or tool chain, consists of three separate pipelines: continuous integration, continuous deployment, and continuous compliance. I'm going to talk about the principles of this pipeline before getting into its specific elements.

First off, absolutely everything is defined as code. The infrastructure the application runs on is code. The pipeline itself is defined as code. Even the deployment is code, and the new component of the system that makes it so special, continuous compliance, is based on code as well. This is to make it as manageable and reusable as possible. We aim to make this as broad as possible, so we support multiple development languages. We ourselves have tested Java, Node.js, Python, and Terraform infrastructure as code on this pipeline. Somebody else on our team has also tested Golang, but to be honest, it's a custom framework that's extremely extensible, so it can provide for any language whatsoever. We define a consistent approach for any application. For example, Java and Node.js each have their own unit testing frameworks and their own vulnerability scanners, and they are implemented in their own way, but we intend for this to be a consistent approach: they all go through the same types of tests, even if those are defined differently. And we have shared pipeline templates to allow for this.

Now let me explain DevSecOps a little for those who aren't familiar with it. DevOps is a combination of development and operations to make a more consistent, more efficient, faster system for developing and deploying applications. DevSecOps introduces security into the mix.
What we do is we shift security left by finding security problems as soon as possible, before they reach production environments. Ideally, and you'll see this in some of our pipelines, before the code even enters the main branch of a code base, so at the pull request stage we might already be able to find some problems. Now, security in application development is usually seen as an impediment by developers. They find that it slows them down and that the rigorous requirements are very painful. We aim to make this DevSecOps process as painless as possible while maintaining the rapid application development processes you all know and love at this point.

Security, and I'm not going to sugarcoat this, is a challenge. There is a lack of knowledge in application teams about the deep security problems that exist in infrastructure at the moment; the cloud has made things quite difficult in that sense. We also need to be audit-ready, especially as our presentation here is about regulated environments such as banking and finance. Auditing is a massive, massive problem. I've gone through it a couple of times, and I'm sure some of you have as well. It's a massively manual process, there's a huge expense to it, and it's extremely painful. As you'll see in the coming slides, we have developed some solutions to mitigate the auditing problem. And also, I've touched on this already, the continuous compliance pipelines: we aim to detect new vulnerabilities, zero-day bugs, as soon as they're reported to databases such as Snyk's and the CVE database. We also aim to look for potentially troublesome configuration changes or malicious code additions made directly to infrastructure. This is built upon open-source tooling, which you'll be happy to hear is from the Linux Foundation.
It is built upon Tekton as the pipeline runner itself, and plenty of our tools, processes, and formats are open source too, including SonarQube, OWASP ZAP, and the CIS benchmarks. Our software bill of materials, a key component of our pipeline, is based on the open-source OWASP CycloneDX format.

So naturally, I'm going to start with the continuous integration stage. For those unfamiliar with the concept, this is the application build stage, where you build your code, maybe run some unit tests on it, package it into a container, and store it in a registry for use later on. The general flow of our pipeline is quite similar to most continuous integration systems: we unit test, we build, we do some dynamic scans. So I'm going to focus more on the specific features of our pipeline system that make it so special and make it what it is.

It has robust code review and branch protection checks. Your code must be reviewed by another member of your team, or someone higher up, depending on what you want, and you are not allowed to push your code directly to a main branch without going through our CI pipeline. The pipeline itself can detect whether branch protection is enabled. We have a series of vulnerability scans, such as static code scans through SonarQube, and dynamic code and application scans through OWASP ZAP to determine whether there are any application vulnerabilities while it is running. We have dependency checks to see if your open source packages, or even your private packages, have any problems reported against them. We have base image and built image scanners as well, and as mentioned, we also run the CIS benchmarks against our application. Secret detection determines whether someone has accidentally or deliberately put credentials or passwords into their code, where they could be easily found by anyone; we don't allow that. But the more interesting parts are coming up. In our continuous integration pipeline, we have signed build artifacts.
This is enforced in the continuous deployment stage, which I will show in a few minutes, to verify the provenance and origin of the components and microservices of an application. This provenance includes a signed hash based on a GPG key. We have Skopeo integration for containers, and we also provide for third-party signing services. The signed build artifact also links back to the ID of the specific pipeline run that created the artifact, which further verifies the provenance of where the artifact came from, both for auditing purposes and for general security.

I've mentioned the software bill of materials. This is extremely important for securing a software supply chain. What it is, is just a document that lists all of your application packages, all of your dependencies, and, for say a container, all of the operating system packages that have to be installed for your application to run. The software bill of materials includes open source licenses too, and regulated environments can be quite strict about which licenses you can use, such as Apache. So the scanner can be configured to check for those if necessary.

Now, I'm going to stress audit readiness again, because it's very, very important to how this pipeline works; it's one of the most powerful parts of the tool chain. In order to make audits as easy and painless as possible, we constantly gather evidence in the pipeline run that certain tasks have been run; all of the tasks, in fact: all of the unit tests, all of the vulnerability scans, and any vulnerabilities that may be detected. This evidence links straight back to the artifact and the pipeline run, and to any Git issues that may be automatically created by the pipeline along the way, in case any vulnerabilities are detected.
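As a hedged illustration of what one such evidence entry might look like, linking a task result back to the artifact and the pipeline run, here is a small sketch. The field names are invented for this example, not the pipeline's real schema.

```python
import hashlib

def make_evidence(artifact: bytes, pipeline_run_id: str, task: str,
                  result: str, issues: list) -> dict:
    """Build one evidence entry tying a task result to a specific artifact
    (identified by digest) and to the pipeline run that produced it."""
    return {
        "task": task,                      # e.g. "unit-test", "cve-scan"
        "result": result,                  # "passed" or "failed"
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "pipeline_run": pipeline_run_id,   # links evidence to the exact run
        "issues": issues,                  # IDs of issues auto-created on failure
    }
```

At audit time, every claim ("this scan ran, on this artifact, in this run") can then be checked against the stored entries rather than reconstructed by hand.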
We then store that evidence both in Git, for the developers' sake, so that they can read and use it, and, because that alone is not very secure and we want to maintain the integrity of this evidence, in immutable cloud object storage buckets. And finally, we build the release inventory. I'm going to explain this in detail in just one minute, but for now, just think of it as a list of the components of an application: all of the microservices that make up the application, deployment files for Kubernetes, Helm charts, any configuration for the application, et cetera. This is all referenced in the inventory, which links back to the signatures, back to where the artifact is stored, et cetera.

Now we're going to move on to the continuous delivery, or deployment, stage. Again, a brief explanation of what that is: this is where we deploy the application onto, for example, a Kubernetes cluster. So back to that release inventory. It has the list of the applications, all of the microservices, and the provenance of each of those services to be deployed. But it's actually quite a bit more than that. First off, it can detect whether there are changes; if those applications are already deployed, it will not go any further. There's no need. It's also our implementation of a full GitOps-based release system. GitOps is the management of both infrastructure and application configuration using Git repos as the source of truth. Git branches in that inventory represent each environment you may be using, such as QA, staging, and finally production. By using this inventory, we can quite easily track which application versions are deployed in each environment, along with all of their configuration. Sometimes we use deployment files; sometimes we use Helm charts, which are referenced from that inventory back to a Helm chart registry.
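The change detection just mentioned can be sketched in a few lines. This is a toy model, not the real inventory format: each entry records the artifact digest alongside its signature and version, and only components whose digest differs from what is already deployed need a rollout.

```python
# Hypothetical inventory entries; the field names are illustrative.
INVENTORY = {
    "loan-service":  {"version": "2.3.1", "digest": "sha256:aaa", "signature": "gpg:sig-a"},
    "offer-service": {"version": "1.0.7", "digest": "sha256:bbb", "signature": "gpg:sig-b"},
}

def changed_components(deployed: dict, inventory: dict) -> list:
    """Return components whose recorded digest differs from what is deployed.

    Components already running the recorded digest are skipped, so an
    unchanged service never triggers a redundant deployment.
    """
    return [name for name, entry in inventory.items()
            if deployed.get(name, {}).get("digest") != entry["digest"]]
```

In a full GitOps setup, the `deployed` state would itself come from the environment's branch of the inventory repo rather than from the cluster directly.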
Regulated environments almost always require a robust change management system to be in use when you're making any changes to a production environment in a banking or financial system. So what we have done is implement this using Git issues. It is either completely automated, which is continuous deployment, or it can require manual oversight, which would be continuous delivery. That depends on your application and your system; it's completely configurable. For this talk, we're going to focus on an automated deployment.

What happens is the change management system performs a deployment readiness calculation based on the evidence gathered by the continuous integration and continuous deployment pipelines. Readiness is deemed false if there is a vulnerability in your dependencies or in your code itself that hasn't been exempted by the security team or by your manager, however you decide to configure that; or if a scan failed to complete; or if the evidence doesn't match expected values, which would be evidence of tampering. We secure our evidence using SHA-256 hashes, so if it were to change, the hash wouldn't match in the deployment pipeline and it would fail. When readiness is false, manual approval is required through an emergency deployment mechanism if you really do want to deploy the application with a vulnerability intact. Otherwise, the deployment proceeds automatically to completion.

The change management record also includes an aggregate of all the evidence gathered so far, plus the software bill of materials and, as mentioned, the signed build artifacts from the continuous integration pipeline. We also have the GPG key in the deployment pipeline, so it's able to determine the veracity of that signature and further ensure the provenance of the artifact itself. This is a very flexible continuous deployment system.
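The readiness calculation described above can be sketched as follows. This is a simplified model under stated assumptions: evidence items carry a payload, a recorded SHA-256 hash, a status, and an optional exemption flag; the real pipeline's logic and schema are richer.

```python
import hashlib

def evidence_intact(payload: bytes, recorded_sha256: str) -> bool:
    """Re-hash the stored evidence; a mismatch suggests tampering."""
    return hashlib.sha256(payload).hexdigest() == recorded_sha256

def deployment_ready(evidence: list) -> bool:
    """Ready only if every scan passed (or carries an approved exemption)
    and every evidence payload still matches its recorded hash."""
    for item in evidence:
        if not evidence_intact(item["payload"], item["sha256"]):
            return False  # hash mismatch: treat as tampering, block the deploy
        if item["status"] != "passed" and not item.get("exempted", False):
            return False  # unexempted failure: require emergency approval
    return True
```

If this returns False, the flow above falls back to the manual emergency deployment mechanism instead of deploying automatically.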
We ourselves have deployed to Kubernetes, to Red Hat OpenShift, and to satellite locations, and it is capable of deploying to mainframes and virtual server instances, with integration with other CD tools like Argo CD.

So this is a diagram of the process so far. Typically, a developer creates a pull request with their code changes, and this is where shift-left comes in for us. Before the code even enters the main branch, we do some of our tests: secret detection, unit tests, and vulnerability scans on your dependencies and your packages. If all of that goes well, it enters the build stage of the pipeline on the main branch, where we run some of the same tests again, just to make sure, and expand on them with software bill of materials generation, dependency vulnerability scans, static code scans, license checks, and branch protection checks. Then, still in the CI stage, we deploy the application to a development cluster. It might just be your own microservice; it probably isn't going to be your entire application at this point. And we perform scans like OWASP ZAP to see if there are any port vulnerabilities or network problems; it's very powerful at that. All going well, all of that goes into the inventory, at which point continuous deployment automatically picks up the change and starts deploying. We then check those artifact signatures and go into the change management system that I mentioned on the last slide. Assuming all is well, we deploy the artifacts, run some acceptance tests to ensure everything is running properly, and then update the change request to say that everything has deployed successfully, and the change request is closed. As you can see, both the CI pipeline and the CD pipeline are constantly feeding evidence into that evidence locker, both Git and cloud object storage, for audit use. Now, this next diagram is a little bit different.
I've replaced the CD pipeline with a continuous compliance pipeline. This is the piece that, in my opinion, delivers the most value in our system, and it is the most unique part of it. So I'll show it on the diagram. We have very, very similar stages to the CI pipeline. We don't need to do absolutely everything again, because this pipeline doesn't have access to source code. But it does recompute the bill of materials to ensure that it is exactly the same as expected, in case any packages or code have been surreptitiously, potentially maliciously, added to the system. We do the vulnerability scans again. Why again? Because you may have committed your code and deployed your application when there were no vulnerabilities, but zero-day bugs exist, and new Snyk and CVE vulnerabilities are added every day. So when we run the continuous compliance pipeline, it will actually detect those vulnerabilities and report them back as soon as they are known to the wider community. This ensures that no new vulnerability has appeared since deployment, and it covers all dependencies, all packages, and the native code that you have written.

We also have a solid vulnerability exemption system. Maintaining compliance with the various regulations and rules in the financial sector means staying on top of these exemptions. I know it's not feasible to always fix vulnerabilities as soon as they are detected; the finance industry does not work that fast, and the applications are far too complicated for that. So we do allow for exemptions, assuming they go through the proper chain of command and a security team has looked at them. And we have designed this pipeline so as not to allow developers to add their own exemptions, if you so desire, because they don't always know the implications. In a financial environment, it's probably best that the security team looks over these vulnerabilities and determines the risk of letting them into the system or not.
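Both compliance checks just described, bill of materials drift and exemption-aware vulnerability filtering, fit in a few lines. A minimal sketch, assuming package names as sets and findings as simple dicts with a CVE ID; the real pipeline works on full CycloneDX documents.

```python
def sbom_drift(build_time: set, recomputed: set) -> set:
    """Packages present now but absent from the build-time bill of materials:
    possible surreptitious additions that should fail the compliance run."""
    return recomputed - build_time

def actionable_vulns(findings: list, exemptions: set) -> list:
    """Drop findings the security team has already exempted; anything left
    is a new, unhandled vulnerability to report."""
    return [f for f in findings if f["cve"] not in exemptions]
```

Because exemptions live in their own reviewed store rather than in the developer's repo, developers cannot quietly exempt their own findings.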
So this guarantees that all artifacts and their dependencies are periodically scanned for vulnerabilities and failures, and it can be set up to run as often as you want, typically daily. And just going back to this bit here: this is a work in progress, but we have made some initial headway and it works a little. We have an automated remediation system. For certain vulnerabilities, if the system can fix them, it will actually create an automated pull request back to the code base fixing those vulnerabilities. For example, looking at a CVE, there is generally a recommended fix for a package, so it will update that package and any required packages in, for example, an npm package.json, and push that back as a pull request. The developer will then test that code, and it will go through the same system again, just like that. So at this point, I'm going to hand back to Krishna, and he's going to explain a case study on how we applied this tool chain.

Thank you, Brandon. As Brandon mentioned, this is not just another DevSecOps tool chain; the key differentiators are the immutable evidence locker and the central inventory repository. On top of that, we have continuous compliance, which continuously scans your code and your deployments on a regular basis and reports if new risks are identified. Security was the top priority for us while designing this DevSecOps process.

Now let's talk about the case study on BIAN, where we implemented this DevSecOps process. Let me introduce BIAN, if you do not know about it. BIAN is the Banking Industry Architecture Network; you can go to bian.org to learn more. BIAN is a collaborative not-for-profit ecosystem formed by leading banks, technology providers, consultants, and academics from all over the globe. It was created to establish, promote, and provide a common framework for banking interoperability.
And BIAN's Coreless initiative aims to solve the challenges presented by legacy core infrastructure, hence the name Coreless, initially by developing API-based microservices covering consumer payments, customer offers, and consumer loans. The project in this case study is the Coreless V2 scenario; we are also working on V3 at the moment. In this V2 scenario, a consumer procures a loan tailored to her needs through a safe and secure online channel offered by a bank. The bank application employs an ecosystem of partner applications interoperating on the BIAN architecture to deliver this service.

Here's the BIAN architectural diagram. You can see that there are multiple SDs, service domains, where multiple ISVs, independent software vendors, interoperate. The different service domains, for example consumer loan, customer credit rating, and customer offer process, are each developed by an independent software vendor, and sometimes even run elsewhere: in this case, SAP is running on Azure and Thought Machine is running on AWS, but both are still integrated with IBM Cloud.

And here's the BIAN pipeline flow using our CI/CD process. Coreless V2 was deployed on IBM Cloud for Financial Services using this DevSecOps pipeline. Application code is kept in a central Git repository, as you can see on the left-hand side of the screen, and the CI, CD, and CC tool chains pull and deploy it accordingly. Each application has its own CI pipeline, which is triggered by a code change in its repo. All these pipelines update the central inventory repo, which records details of the artifacts built by the CI tool chain. A single CD tool chain, which you can see in the center, is triggered for production deployments after a successful promotion pipeline pull request. This promotion pull request merges the inventory main branch into a specific environment branch, for example the dev branch, the production branch, et cetera.
This can be entirely automated, or, for production environments in particular, it can allow for manual intervention. Continuous compliance, which you can see on the right-hand side, ensures that the deployed artifacts and their source repositories are always secure and compliant, and it can be scheduled to run on any time frame.

So what are the lessons learned from BIAN? Leverage continuous compliance to assess your compliance level, detect vulnerabilities, and track them. Look to eliminate vulnerabilities as much as you can; you can of course add exemptions for non-critical issues if you have to. Combine deployment tool chains, and use the inventory to deploy each microservice. Most importantly, you have to expect your software vendors, the ISVs, to use some kind of continuous compliance, so that every package getting deployed onto your production environment is vulnerability free. Reuse shared libraries where possible, and combine the CI tool chains of similar applications. In our case, we have multiple Java applications, and each of them refers to one central config repository where we keep all our scripts: one for Java, one for Python, et cetera. Our DevSecOps CLI, the cocoa CLI, is used by our pipelines to create inventory entries, change requests, evidence, et cetera. You can also use the same cocoa CLI with other systems like Jenkins, GitLab, et cetera, which allows incremental adoption of DevSecOps pipelines for legacy applications.

So, you can reduce your cyber risk by improving your pipelines with DevSecOps capabilities: have reliable and reputable automation; define everything as code; shift left and mitigate security risks as early as possible in your pipeline; and start with continuous compliance, which is very important. Maintain compliance throughout by continually scanning your code and your deployments, and make sure to gather and store your evidence in an evidence locker.
Improve your tool chain further: add new functionality, and learn more about DevSecOps capabilities and tools such as cocoa, open-source scanning tools like OWASP's, vulnerability detection, remediation, compliance mechanisms, et cetera. Join us for a detailed session at the IBM booth tomorrow, where you can get more details about our CI/CD process. Thank you very much, everyone.

Yeah, thank you, everyone. I don't know if we have time for questions. I think that's up to, yeah. So yeah, sure. I'll bring my mic down to you, it's fine.

You talk a lot about vulnerabilities and about code, but compliance is not only about code; it's also about following the regulations and rules you have to comply with. And you have to gather evidence that you follow the rules and provide it to governments or other companies. How do you achieve that with continuous compliance?

Okay, I can answer that. We are currently developing policy-as-code integrations for this, with tools such as Pulumi and HashiCorp Sentinel, and there are other options as well. You mentioned governance, and we need to maintain compliance with some of those policies, for example. What you can do in that case is produce a custom piece of evidence: if you have a tool that can determine that you are following governance policies, such as, again, Pulumi or HashiCorp Sentinel for your infrastructure, and there are other systems for application code as well, you can write a custom piece of evidence into your evidence locker. Then, when the regulator comes around to audit your system and your code, it will be in your evidence locker, ready for them to see. I hope that answers your question. Thank you.

Sorry, let me get my microphone to you first. Is this an official project that we can use?
Well, our specific implementation is based on IBM Cloud, but I do believe the cocoa command line tool is open source, and you can use that to start gathering your evidence in your own CI systems and also building your inventory. So all of the building blocks are available there. And at the actual IBM booth you can see a demo, which is very interesting. Oh, sorry, I think this gentleman was first, yeah. Yeah, for the... Ah, okay, we actually do have a system for this; it's unfortunately not open source. It's called the IBM Security and Compliance Center. What we do with that is set up profiles to check the pipelines themselves. I think the profile is called IBM Cloud Security Best Practices. What it does is check the pipelines for evidence: whether they can and do run unit tests, whether they are locked down from open access so that they are properly access controlled, that all the scans are completed, and that branch protection is enabled in all your Git repos. So that manages the pipelines themselves. And the product he's talking about, as I already mentioned, has a dashboard where you can view all the vulnerabilities of all the pipelines in one single place. Any more questions? Or I think we can wrap this up. Okay, I think we'll wrap this up. Thank you very much. Thanks, everyone. Thank you.