We've been enjoying the sessions this morning, or afternoon depending on your location. We're going to kick off our next session with Hugo Guerrero, hunting down the monsters hidden in your software supply chain. As a quick logistical reminder, push any questions you have during the session into the comments section and we'll try to field them at the end. With that, I'll turn it over to Hugo. Thanks a lot.

Thank you very much, Mike. I really appreciate the opportunity to talk with all of you, wherever you are and whatever time it is. I hope you're enjoying your day, the session, and the event overall. In the next few minutes we're going to talk about the software supply chain and the different challenges and attacks, the monsters that are stalking your process. My name is Hugo, and I'm part of the Red Hat team we used to call middleware, now the application developer group, which focuses on developer productivity and how we can enhance the developer experience within your organization, whether through tooling, platform engineering solutions, frameworks, or software development tools. In this session I'm going to focus on the software supply chain.

Let's get started with some of the data we have on this. When we talk about the monsters in the software supply chain, the question is not whether they exist or whether they'll manage to attack. The real question is when it's going to happen. We know they exist, we know they're out there hunting, and we know they're going to try to stop you. That's why you need to be prepared.
The thing is, as the software development process grows in every organization, where we keep adding more software resources and processes, managing how that software is provisioned, created, and delivered to your web, desktop, or even mobile applications means you need to pay special attention to how you procure it. This is what we call the software supply chain: the awareness that not all the software that ends up on our sites, our phones, or our desktops comes from our own developers. Yes, some of it is written by them. But in reality, for developers to be productive they need to build on the foundations of more software. Much of the time that will be libraries from other teams; in a big organization you'll have other teams working on different parts of the software. But you'll also rely on third-party software: software provided by one of your vendors, software that comes out of the box, or libraries to access a specific type of middleware or resource available on your development platform. And every year there's a growing presence of open source solutions available through Git repos and package managers, also provided by third-party developers, sometimes with no direct contract governing their use. This is something we need to take into account, because when there's no strict relationship, or depending on how the licenses work, our software can become a point of entry for attacks, for these monsters to show up in our backyard.
So that's why we need to take measures against these potential threats and attacks that can appear at any time. The first thing we usually try to do is get into the monster hunter mindset. What that means is: we know monsters exist, they're walking around, just waiting for the moment to attack. So you need to become the person who has the capability and the willingness to be aware of their environment, to gather tools and information about potential threats, and then go chase these monsters down and hunt them. If we think about the software supply chain, and building software in general, DevSecOps is the practice that has been helping us deal with this. It's a change of culture, a change in the paradigm of how we build software. This is where things like shift left become critical: detecting threats and stopping potential vulnerabilities before they move forward in the chain. We start with building software, then deploying, then running in production, and every time we catch and kill one of these monsters in the early stages, the overall process becomes more secure. So yes, a lot of company initiatives are about building teams with this monster hunter mindset: taking all the teams and everything we've built around DevOps and adding the security teams, getting them aligned on the same objectives so the software is protected at every stage.
From cloud to microservices to any kind of development, it's important to make this change. Even though security used to be a late-stage concern, every time we bring it further to the left, our development becomes more secure. Then the next question is: OK, I have this monster hunter mindset, I know I need to be aware of my surroundings and have some tooling, but where do I look for these monsters? Where should I expect them to appear, so I can fight them back? As I was saying, our supply chain has different stages and phases, and different types of monsters live in each of those steps. Most of the time, when we see potential threats and vulnerabilities, they appear in one of three phases of the software development lifecycle. The first is when we're coding our application: the early stages of the process, perhaps doing some architecture, writing the first lines of code, bringing in the first dependencies and libraries from repos. The second phase is when we're actually building and checking our systems: running our CI and integration pipelines, building our software, pulling together all the dependencies, and packaging the artifacts that will become our applications and be deployed into our target environments. Finally, the third ecosystem where we usually see monsters stalking is at runtime.
That's when we deploy those artifacts and applications and start to see misconfigurations, or drift between configurations in our environments, that become exploitable by malicious actors. Another important thing is that sometimes we simply forget about these threats. We feel very secure because during the process we cleaned up most of the monsters we found, but then we forget, and we know that software that isn't being updated also becomes a risk. So runtime is another area where you'll find these kinds of monsters threatening your software supply chain.

Let's go a little deeper on these monsters. Much like the cards you'd share to identify a particular monster, we can lay out information on each one. The first one I want to present is the monster we call unverified changes to a Git repo. This monster's main attack is spoofing a person's identity: he impersonates someone and commits code to a Git repo. Most of the time, when I see that Mike has already committed work on the tickets he was handling, I trust Mike, because we know he's one of the lead architects. Even if the commit looks a little off, I know Mike always sends good code. So sometimes we neglect the security precautions or reviews on that kind of code. That's how this monster attacks: the threat is that they're able to commit code that isn't reviewed, skipping the checks in your pull request process.
If you're a monster hunter, one of the actions you can take is as simple as signing your Git commits. If you use something like gitsign, you can use cryptographic keys that help verify the identity of the person signing those commits. As long as the key is well protected, or we're using things like tokens, we can be sure the identity is genuine and fight back against this monster.

Another monster I want you to be aware of is unreliable dependencies. This is a very common one, found everywhere, and if you remember the Log4j issues from the past you'll certainly recognize it. This monster attacks through dependencies that have security vulnerabilities, and he mainly hides in the complexity of the dependency tree, especially among the transitive dependencies. When we explicitly declare dependencies in our pom.xml or package.json files, we know exactly which versions they are, and we can run simple queries to get that information. However, those dependencies have other dependencies, which become transitive, and then we can't easily check which of them are on versions and releases with known vulnerabilities. To protect yourself and hunt down this monster, I recommend using trusted, curated content: make sure the dependencies you pull are provided and verified by your procurement process. Also automate software composition analysis and dependency analytics, so you can unroll that whole dependency tree and review each node in it.
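To make the transitive-dependency problem concrete, here is a minimal Python sketch; the dependency graph and the "known vulnerable" list are invented for the example (a real SCA tool would resolve them from lockfiles and vulnerability databases), but it shows why a package that never appears in your own manifest can still put you at risk:

```python
# Illustrative dependency graph: my-app declares two direct dependencies,
# but web-framework drags in logging-lib transitively.
DEPENDENCIES = {
    ("my-app", "1.0.0"): [("web-framework", "2.3.1"), ("json-lib", "0.9.2")],
    ("web-framework", "2.3.1"): [("logging-lib", "2.14.1")],  # transitive!
    ("json-lib", "0.9.2"): [],
    ("logging-lib", "2.14.1"): [],
}

# Invented advisory data; think of Log4j 2.14.x here.
KNOWN_VULNERABLE = {("logging-lib", "2.14.1")}

def transitive_deps(root):
    """Return every (name, version) reachable from root, depth-first."""
    seen = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in DEPENDENCIES.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def vulnerable_deps(root):
    """Flag any reachable dependency that appears in the advisory set."""
    return sorted(transitive_deps(root) & KNOWN_VULNERABLE)

# logging-lib never appears in my-app's own manifest, yet it is flagged.
print(vulnerable_deps(("my-app", "1.0.0")))
```

This is the work software composition analysis automates for you: the tooling walks the whole resolved tree, not just the dependencies you wrote down.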
It's also useful, in order to kill this kind of monster, to have signed and verified artifacts that you know haven't been tampered with, contaminated, or turned into a place where the monster can hide. And you might ask: what are the minimum requirements? Most of the time there are processes and practices for compliance, but there are now also requirements in specifications and regulatory guidelines for being able to say, "I trust this software" or "I trust this source code." This is where things like the software bill of materials, what we usually call SBOMs, become part of the tooling that helps us verify that the origin and source of our code are secure, or at least more secure. We also have documents for handling vulnerability management, and we can do software composition analysis, as we were saying, to see which parts of the software are being pulled in. One of the tools I really like for dependency analytics is the Red Hat Dependency Analytics plugin for VS Code, where you can open your pom file if you're doing Java, for example, and get on-demand information about your dependencies, showing how they're ranked: high risk, lower risk, with information provided by authorities like Red Hat. We're obviously building open source software that needs to comply with many of these requirements, so we have a very large catalog of vulnerabilities, CVEs, and information about where certain libraries stand with regard to security. It's a very useful tool, available from your own IDE, and you can also go to cloud.redhat.com and query it there if you're coming from, say, the security team.
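To show what an SBOM gives you in practice, here is a small Python sketch that reads a CycloneDX-style SBOM and lists its components. The JSON fragment is hand-written for illustration (a real SBOM would be generated by your build pipeline), but the `bomFormat`/`components` shape follows the CycloneDX JSON layout:

```python
import json

# Hand-written fragment shaped like a CycloneDX SBOM. Real SBOMs are
# generated by tooling and carry far more metadata (licenses, hashes, ...).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"name": "logging-lib", "version": "2.14.1",
     "purl": "pkg:maven/com.example/logging-lib@2.14.1"},
    {"name": "json-lib", "version": "0.9.2",
     "purl": "pkg:maven/com.example/json-lib@0.9.2"}
  ]
}
"""

def list_components(sbom_text):
    """Return (name, version) pairs for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

# This inventory is what you cross-check against vulnerability databases
# when the next Log4j-style advisory lands.
for name, version in list_components(SBOM_JSON):
    print(f"{name} {version}")
```

The point of the manifest is exactly this: when a new CVE is announced, you query your SBOMs instead of rebuilding every application to find out what's inside it.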
More monsters to be aware of if you're doing containers, if you're doing OpenShift: one of the most common you'll find is tampered containers. Sometimes this is a malicious actor trying to pollute your containers, but actually, most of the time it's just outdated versions. When you're building on top of certain container images pulled straight from the internet, they could have been corrupted by a malicious agent, but most of the time they're simply outdated. The problem with outdated versions is that the container now carries much more than our artifact: it holds information about the whole stack the application runs on. This is where we find things like operating system package vulnerabilities, very old versions of OpenSSL or Node.js, or the wrong Java runtime being used as part of the container's runtime environment. Perhaps we built our artifacts from trusted software on secure machines, but if we then run them on outdated containers, or containers that still have vulnerabilities, that becomes a threat we need to be aware of. So how can you take action against this monster? Always do image scanning: validate that your container images are healthy. Quay.io does scanning, and Docker Hub also has its own scanner with Docker Scout. Also check that your images have signatures; you can sign these containers too, so there's assurance that they've been checked and that information is recorded. And always follow compliance and regulatory requirements. We've been talking a lot about signatures, and this is why they're important: they're a way to have an attestation of your software packages.
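One simple idea behind both scanning and signing is content addressing: the bytes you are about to run must hash to the digest you pinned at build time. Here is a minimal Python sketch of that check; the "layer" is just a byte string standing in for a real image blob, and real tools verify cryptographic signatures on top of the digest:

```python
import hashlib

def sha256_digest(blob: bytes) -> str:
    """Return the digest in the 'sha256:<hex>' form used by registries."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify_blob(blob: bytes, pinned_digest: str) -> bool:
    """True only if the content matches the digest pinned at build time."""
    return sha256_digest(blob) == pinned_digest

# Stand-in for a real image layer; real tooling hashes the actual blobs.
layer = b"FROM ubi9\nRUN microdnf install -y openssl\n"
pinned = sha256_digest(layer)  # recorded when the image was built

assert verify_blob(layer, pinned)                   # untouched: passes
assert not verify_blob(layer + b"#tamper", pinned)  # modified: rejected
```

Pinning images by digest rather than by tag gives you this property for free: a tag like `latest` can silently move, but a digest can only ever name one set of bytes.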
A signature is basically a way to say: yes, this is the information, and this is the key we've been using to sign our content, so nobody can later claim that this is not the version, or not the artifact, that I created. So signatures are important. And, as I was saying, Vulnerability Exploitability eXchange (VEX) documents are another way to get information about known CVEs and other requirements that, for example, Red Hat can provide to help you track and check what level of potential weakness exists in your software. Having that information around is important, but the most important thing is to have it organized in a single place.

The last place where monsters hide is basically the runtime side: when we're already deploying and we want to be sure we're securing our software supply chain. One part of the software that sometimes gets a little neglected is the APIs. Here you have a monster like the typical broken authentication and authorization in APIs. This monster attacks APIs when we build something we thought we'd only handle internally, and then suddenly it's shared with partners, or with third parties and end users. Then we realize the API is exposing a wide attack surface over our information: we're exposing internal database IDs for our data, or it simply has incorrect authentication mechanisms protecting some of the information. The OWASP API Security Top 10 has an updated list of all the different API risks.
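To illustrate what a VEX document buys you, here is a Python sketch that narrows a raw CVE list down to the ones that actually need action. The records are invented for the example; real VEX formats (CSAF VEX, OpenVEX) carry much more detail, but the core idea is a per-product status attached to each vulnerability:

```python
# Invented VEX-style statements. The statuses mirror the ones VEX defines:
# "affected", "not_affected", "fixed", "under_investigation".
VEX_STATEMENTS = [
    {"cve": "CVE-2021-44228", "product": "my-app", "status": "affected"},
    {"cve": "CVE-2023-0001", "product": "my-app", "status": "not_affected"},
    {"cve": "CVE-2023-0002", "product": "my-app", "status": "fixed"},
]

def actionable_cves(statements, product):
    """Keep only the CVEs that still require action for this product."""
    return [s["cve"] for s in statements
            if s["product"] == product and s["status"] == "affected"]

# A scanner might report all three CVEs against my-app's components;
# the VEX data says only one of them is actually exploitable here.
print(actionable_cves(VEX_STATEMENTS, "my-app"))
```

This is why the speaker stresses having the information organized in one place: the raw CVE feed tells you what could be wrong, while VEX tells you what is actually wrong for your product.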
The actions you can take to try to kill this monster: first, implement proper access control, using roles and permissions, and check that nothing is publicly available unless a role explicitly allows it. Obviously, also check for unpatched vulnerabilities in the dependencies and runtimes behind your APIs, as well as their configurations. That's the way to avoid and kill this kind of monster.

Finally, one more monster I need to make you aware of and prepare you to confront: untracked and scattered security information. This one is very common because we're sleeping soundly, not knowing there's something hiding under the bed. Some of the products we deploy, or software we run, has known vulnerabilities, but nobody is taking care of them: no remediation, no identification, mostly because we have a lot of unpatched software. The best way to take action here is to compile everything together and have a single source of truth for this information: track which applications have exploitable vulnerabilities, know exactly who owns them, who runs them, and who builds them, so you can go back to those teams and help them correct the issues. And obviously, map the relations between your apps and open source code. If you're using open source, you should see exactly where your developers are gathering those resources, so you can help them identify potential threats that need to be patched and updated based on the information you're now getting directly.
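The deny-by-default role check described above can be sketched in a few lines of Python; the role and permission names are invented for illustration (real systems would use a policy engine or framework rather than a dictionary):

```python
# Minimal role-based access control sketch, deny-by-default.
# Roles, permissions, and the "resource:action" naming are illustrative.
ROLE_PERMISSIONS = {
    "viewer": {"orders:read"},
    "admin": {"orders:read", "orders:write", "users:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Refuse anything not explicitly granted to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("admin", "orders:write")
assert not is_allowed("viewer", "orders:write")    # viewers can't write
assert not is_allowed("anonymous", "orders:read")  # unknown role: denied
```

The key property is in the last line: an unknown or missing role gets an empty permission set, so nothing is "publicly available" by accident, which is exactly the check the talk recommends.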
And this is how we envision the overall process: a trusted software supply chain monitored from the beginning, across code, build, and run. You start with your applications, your language runtimes, and your container base images like the Universal Base Image. Once you secure the source of those components, using trusted content and signed images, you can start building your capabilities to kill these kinds of monsters. While coding, there are tools like software composition analysis and signatures. While building artifacts, the Tekton or Jenkins pipelines you use can benefit from checking signatures and from signing and generating SBOMs, which provide a manifest with the metadata on how each artifact was built. Then, when running on your cluster, you can feed all of that into policies, for example with Red Hat Advanced Cluster Security: if an artifact's signature is invalid or missing, you can apply policies that avoid the risk of deploying that kind of artifact to your cluster. All of this interacts with the rest of your ecosystem: your Git repos and GitHub accounts, your registry like Quay.io, on top of whatever workload deployments you have.

In the interest of time, I just want to invite you to try our Developer Sandbox for Red Hat OpenShift. There's a new experience there where you get 30 days of self-service free access to an OpenShift Kubernetes cluster. It's really easy: just go to that link and it will take you to the Red Hat Cloud Console, where you can try OpenShift, OpenShift Dev Spaces, or even our OpenShift AI clusters. So thank you very much for your time. I really appreciate that you stayed with me, and if there are any questions, we're happy to take any of them. If not, remember you can put them in the comments.
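The deploy-time gate described above can be sketched in miniature. The data shapes here are invented (real enforcement lives in an admission controller, such as the one Advanced Cluster Security provides, and verifies cryptographic signatures rather than comparing strings), but the policy logic is the same: no trusted signature, no deployment:

```python
# Sketch of a deploy-time policy: refuse any image that lacks a signature
# from a trusted signer. Signer identities here are illustrative strings;
# real admission controllers verify actual cryptographic signatures.
TRUSTED_SIGNERS = {"release-pipeline@example.com"}

def admit(image: dict) -> bool:
    """Allow deployment only when a trusted signer has signed the image."""
    return any(sig in TRUSTED_SIGNERS
               for sig in image.get("signatures", []))

signed = {"ref": "quay.io/example/app:1.2",
          "signatures": ["release-pipeline@example.com"]}
unsigned = {"ref": "quay.io/example/app:dev"}  # no signatures at all

assert admit(signed)
assert not admit(unsigned)  # blocked: missing or untrusted signature
```

Notice that the check is against a specific trusted identity, not merely "is there a signature": an attacker can always sign a tampered image with their own key, so the policy must pin who is allowed to sign.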
We're going to be hanging around, so we can try to answer those offline if you don't want to ask any right now.

Great, thanks. No questions in the Q&A here, but as you said, this will eventually be available on the YouTube channel, and if there are any questions we'll be hanging around the chat during the later sessions. So please join us over on the main stage at the top of the hour for our keynote with Red Hat and Intel, where we'll also be announcing the winner of the recent Hackathon. They had a really great presentation to go along with the discussion of the winning entry. With that, we'll say thanks: pop over to the main stage, and then rejoin us after that for some more great presentations later today. Thanks a lot.