Good morning, everyone. Welcome to the OpenShift Coffee Break show here at OpenShift TV — and I will make sure to stop any echo. I'm Roberto Carratalá. I'm based in Madrid and I'm part of the EMEA Specialist Solution Architects, focusing on OpenShift and also on ACS. And I'm Rodrigo Alvarez. Hello there. I'm based in Dubai, and right now it's 41 degrees, so as you can imagine, it's freaking hot. I'm part of the same team as Roberto, and today I'm here to talk about ACS. Yeah, ACS is based on StackRox, which we acquired a couple of months ago, and we are now celebrating the GA of the ACS operator — and we would like to celebrate it with a live demo, because, you know, the old rule: always demo live. That is very cool. So we have this agenda: Roberto and Rodrigo are going to show a demo of a DevSecOps pipeline. Roberto, would you like to introduce the concept? I don't know if you have any diagram to share. Yeah, I have a couple of slides about DevSecOps in the hybrid cloud with Red Hat — why DevSecOps is important, what ACS is, and the integrations with other very nice products like OpenShift GitOps, OpenShift Pipelines and so on — and then we can go through the demo. I will share my screen, if everything goes well. Can you see my screen? Yes, I can. Okay, perfect. So the first question we need to answer is: why is DevSecOps important? How can this affect my business and my applications? DevSecOps allows IT and security teams to tackle challenges across people, processes and technologies, improving, for example, speed and efficiency, or consistency — making things repeatable — and also improving collaboration.
And the thing is that security can no longer be the concern of only the security teams; it needs to be included in earlier conversations — for example, with the development and DevOps teams, among others. So the security team needs to be part of the conversation with our DevOps teams earlier, and also have the possibility to add security in the different phases of development, deployment and runtime of our workloads and our business. On the other hand, DevOps teams need the appropriate tools that help with security management — tools that improve the different layers of security in our pipelines and processes as well. The thing is that most organizations focus only on the application itself, on the application pipeline, but it's also important to add security processes across the entire lifecycle — for example, checking the integrity of the libraries used, or code scanning. We will show in the live demo the whole lifecycle, using security steps in this DevSecOps pipeline, plus other possibilities we can discuss. And we need to think that security must be continuous and realistic. DevSecOps allows you to approach security continuously and realistically across the application and the infrastructure, divided into phases like build, run, manage, adapt — but this needs to be a continuous loop. And why Red Hat for DevSecOps? How can Red Hat help? Red Hat OpenShift's vision is to be a hybrid cloud platform for enterprises to build, deploy and run applications securely at scale. So, for example, Red Hat delivers continuous security for containers and Kubernetes with the OpenShift platform.
It does that by providing trusted content, the lifecycle of the platform, strong role-based access control, network isolation and container isolation, and so on. And ACS — and this is the good part — extends the security to the application layer as well: for example, vulnerability analysis or application configuration analysis, and also compliance assessment, risk profiling, or even checking at runtime whether there is any threat and handling the incident response. So ACS, Red Hat Advanced Cluster Security for Kubernetes, focuses on three different parts. The secure supply chain: giving developers tools to integrate and scan in the different pipelines. Securing the infrastructure: security posture management to, for example, identify threats or remediate any wrong configuration. And, on the other hand, the runtime — and this is very, very important — securing the workload, maintaining trusted execution, and doing workload protection as well. And we need to remember that Red Hat Advanced Cluster Security for Kubernetes is the first Kubernetes-native security platform: it's built for Kubernetes and runs on Kubernetes. For this reason it has very nice integrations — for example, with the image scanners used in the industry, like Clair or Anchore, and with different registries, whether SaaS registries like Quay.io or others like Docker Hub. Also integration with CI/CD: in this demo we will show how well ACS integrates with OpenShift Pipelines, but it could also be integrated with other CI/CD tools. And obviously you need to know what's going on in your platform, so you can connect different notification integrations.
We can see the Slack connection, or Microsoft Teams, for alerts that notify you if anything is wrong. And also SIEM integrations — for example, to send the different logs and activity threads to Splunk or Sumo Logic — or interaction with other SaaS tools like AWS Security Hub. So, with all of that, we can secure the containers and shift security left. Shifting security left means giving the different developers and teams tools to bring security into the early stages — for example, when the developer is building the code, giving them tools to identify in their pipelines whether there is any vulnerability or anything wrong with the configuration. It also means securing the Kubernetes platform: having, for example, strong admission control, checking that no privileged pods can run, not allowing a pod to deploy if it has critical vulnerabilities, and having compliance and risk assessment across the different Kubernetes and OpenShift clusters that we manage with ACS. And finally, securing the container runtime, because this is very, very important: when our applications are running, detect if there is anything wrong, any attack, and try to isolate it — having, for example, microsegmentation with network policies, or very strict policies so that no crypto miners or anomalous behavior can run. And if there is any threat, we can kill and enforce on the pod that commits the violation. And then the pipeline: we also want to show in the demo the integration with ACS and other open source tools that we bring in.
It's more or less a CI/CD pipeline that introduces security steps: not only building and deploying our source code, building the images and deploying on OpenShift, but also having, for example, unit tests, code analysis, and security scanning for detecting vulnerabilities and so on. That is very, very important in order to add more security steps to our DevOps pipeline itself. And this is our DevSecOps pipeline demo. We will bring together three things that, from my perspective, are awesome: OpenShift Pipelines, based on Tekton; OpenShift GitOps, based on Argo CD; and ACS, Advanced Cluster Security for Kubernetes. We will also use other tools — a Git server based on Gogs, JUnit, SonarQube, Nexus, the OWASP ZAP proxy and Gatling — all of them open source, so you can build your DevSecOps pipeline very easily, adding more and more to the scenario. So what do you think, guys — shall we check the pipelines? Go ahead. It's nice to say as well that today's demo is going to be end-to-end, from the developer point of view to delivering the application itself; we will not focus only on ACS — it's going to be the entire lifecycle. So imagine that you are a developer and you want to run your pipeline and also integrate with ACS. The first thing we need to know is how to install the different components. For example, I already installed OpenShift GitOps, which brings Argo CD into our scenario, and we can use GitOps in a very nice way for continuous deployment.
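Operators like these are normally installed from OperatorHub in the console, but the same thing can be done declaratively with an OLM Subscription. A minimal sketch for the ACS operator, where the package name and channel are assumptions to verify against your own operator catalog; the snippet only writes the manifest (the `oc apply` is left commented out), so it runs anywhere:

```shell
# Generate an OLM Subscription for the ACS operator (package name and channel
# are assumptions -- verify them against your operator catalog).
cat > rhacs-subscription.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic   # auto-upgrade when a new release pops up
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
# oc apply -f rhacs-subscription.yaml
echo "manifest ready: rhacs-subscription.yaml"
```

The `installPlanApproval: Automatic` line is what Roberto mentions later: new releases roll out without manual intervention.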
Also Red Hat OpenShift Pipelines, which is based on Tekton — we will use these Tekton pipelines — and also this guy, Advanced Cluster Security: this is the operator that went GA, I think on Friday or Saturday. The good thing is that the operator is a very nice way to install it. You have other possibilities to install ACS as well — in the past you could install it with roxctl itself, or using Helm charts — but this is a very nice way. And the most impressive thing is that you have channels and automatic approval, so when a new ACS release pops up, your entire ACS installation can be upgraded automatically without you doing anything. It's awesome. We can check, for example, in the stackrox namespace, the Central that we deployed by creating the Central resource. Imagine that Central is the brain of ACS: everything that needs to be analyzed goes to Central. So you can install it in a very nice way: first you deploy Central, and afterwards the secured-cluster pieces for the different clusters you want to manage, with the components we can explain now. This is the first thing you do after installing the ACS operator. Here, for example, you can control the admin password and also the exposure: you can expose it as an OpenShift route, which I did in this demo, but you also have the possibility to expose it with a LoadBalancer or NodePort. You can also use your own certificate, or choose whether you want the StackRox scanner or not — imagine that you are using the Quay scanner and you don't need it, so you can disable it as well.
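As a rough sketch of what that Central custom resource looks like — route exposure enabled, and the bundled scanner toggleable — with field names assumed from the RHACS operator CRD (double-check them against your operator version):

```shell
# Write a minimal Central CR (field names assumed from the RHACS operator CRD).
cat > central.yaml <<'EOF'
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    exposure:
      route:
        enabled: true          # expose Central as an OpenShift Route
  scanner:
    scannerComponent: Enabled  # set to Disabled if you rely on Quay/Clair
EOF
# oc apply -f central.yaml
echo "manifest ready: central.yaml"
```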
And the other good thing is that you can install ACS in a fully disconnected way: you can control whether your ACS connects to the Internet or not, deploying the operator and your cluster fully air-gapped. So it's very, very easy, and we installed it this way. After installing it, I will go to the developer view, and in the developer view we have the different pieces: Central, which is the brain; the Scanner and the Scanner DB, which is the vulnerability scanner that comes from StackRox; and also the pieces for each specific managed cluster. When you want to manage a cluster, the operator can automatically deploy the components that connect to Central — in this case the admission control, the sensor and the collector for the Kubernetes cluster — in a very easy way, either directly or using the operator itself. In the operator you can see, for example — let me go back to this — the secured clusters, and you can add whatever secured clusters you want; you just need to connect them to the Central endpoint and so on. So we have installed OpenShift GitOps, OpenShift Pipelines and ACS, which are the three building blocks here. But we also need other things that are very good stuff for our developer DevSecOps pipeline — in this case the Gogs server, the trigger and so on, which we will explore as we launch it. The first thing is to simulate that we are a developer who wants to launch the pipeline. So what's the first thing we do? Go to our Git server. This Git server is a basic Gogs, but you can use whatever you want. In this case we have the source code, which is Spring Pet Clinic, a very nice Spring application. Yeah, that is a popular Spring Boot app. Yeah, that's right.
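The per-cluster pieces (sensor, collector, admission control) come from a SecuredCluster resource pointed at the Central endpoint. A hedged sketch, where the cluster name and endpoint are placeholders and the field names are assumed from the RHACS operator CRD:

```shell
# Minimal SecuredCluster CR (endpoint and cluster name are placeholders).
cat > secured-cluster.yaml <<'EOF'
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: my-cluster                                  # placeholder
  centralEndpoint: central-stackrox.apps.example.com:443   # placeholder route
  admissionControl:
    listenOnCreates: true   # let the admission controller gate deployments
EOF
# oc apply -f secured-cluster.yaml
echo "manifest ready: secured-cluster.yaml"
```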
Here we have our repo, and we also have the other repo. This second repo represents the GitOps way: it holds the Deployment, the Kustomization, the Route, the Service and everything we need. Nice. Sorry to interrupt — there is a question in the chat, because you mentioned it. A viewer is asking: what's the difference between the Quay scanner and the one included in StackRox? Are they complementary in some way? They scan against different vulnerability sources and different CVE feeds, so you may receive slightly different results, but basically it's more or less the same: scanning the image for vulnerabilities, going through different sources. There could be a slight difference between scanning one image with the StackRox scanner and with Quay, but basically they do the same thing — scan your image and check for vulnerabilities. And it's also nice to highlight that you can integrate ACS with Clair, for example, so you basically have that Quay integration. Yeah, can you show that? Exactly. You can integrate with Clair; you can basically integrate with Red Hat Quay. It's quite simple, so you don't need to rely only on the StackRox scanner — you can integrate with existing ones. Yeah, this is very nice. Awesome. My pleasure. Also, here we have the different manifests, for example the Deployment and so on, that we will use in a very GitOps way. So we have our Argo CD here — let's open it — and it has the Spring Pet Clinic in dev and stage environments, using the Kustomization to bring in the GitOps and sync the different things.
But in this case it's syncing everything except the Deployment, because we haven't built our image yet and the image is not available. So first of all we need to introduce a change in the Pet Clinic. For example, we will introduce a small change in the README — adding "demo time", for instance. There is a webhook, and this webhook triggers our pipeline, which goes through the different steps. First, we fetch and clone the code of our application; then it goes through code analysis, unit testing and the dependency report. We can check that it is effectively cloning our application, and afterwards, in parallel to save time, we run the code analysis based on SonarQube — we have our SonarQube as well — plus the dependency report, listing the different dependencies of our Spring application, and the unit tests, based on JUnit. In the details you can see the different things it's going through. After that, we can check the code analysis and dependency reports — a very nice way to show them to the developers. So we are introducing static analysis of our code and providing tools to our developers: before building the image and the artifact, we are giving them a static analysis of their code, the unit tests as well, and the dependency report for checking whether there is anything wrong. The dependency report can also be checked once it's built, in an nginx server called reports repo that I've used, because we rely on the different parallel steps and this spins up different pods.
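The parallel fan-out Roberto describes — code analysis, dependency report and unit tests all running after the clone — falls out of Tekton's `runAfter` field: tasks that declare the same `runAfter` run concurrently. A schematic sketch (the task and taskRef names here are illustrative, not necessarily the demo's exact ones):

```shell
# Sketch of a Tekton Pipeline fanning out three tasks after the clone.
cat > pipeline-sketch.yaml <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: devsecops-sketch
spec:
  tasks:
  - name: source-clone
    taskRef: {name: git-clone}
  # these three share the same runAfter, so Tekton runs them in parallel
  - name: code-analysis
    taskRef: {name: sonarqube-scan}
    runAfter: [source-clone]
  - name: dependency-report
    taskRef: {name: dependency-report}
    runAfter: [source-clone]
  - name: unit-tests
    taskRef: {name: junit-tests}
    runAfter: [source-clone]
EOF
echo "manifest ready: pipeline-sketch.yaml"
```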
The nice thing about going with OpenShift Pipelines is that you can parallelize, and you get that automation with Kubernetes-native pipelines. You could have the exact same pipeline using Jenkins, for example, but you would need to rely on a central Jenkins server — Tekton does this in a very nice way. So when the unit tests finish — it takes like 30 seconds more — we will have our result. The thing I want to show is nice: we have the build success, we ran a lot of tests, and everything is okay. Afterwards, we are using Nexus for two things. First, once we build our JAR, our artifact, we push it directly to Nexus — why? Because we want to control where the different JARs are located. But we will also use Nexus as a Maven mirror. Meanwhile, we can check the SonarQube results. If we go to SonarQube, we can check the different projects — in this case we have the Pet Clinic itself — and we can check the code and also the vulnerabilities. This is important in order to give the proper tools to the developers, because this way you give them the possibility to check the analysis of their own code. Also, in the Nexus we showed — I will close some tabs, because if not, my computer will die — after the release of this app, it pushes directly to Nexus. So we are using Nexus for storing our JAR, and if we go to the snapshots, we can see that we effectively have our JAR here, with our artifact ID and the version that we built. So once we have that, and the developer knows that the code is okay — or at least has the static analysis of the different phases, the dependency report and the unit tests — we can build the image.
We built the image based on Java 11, and we know this is a trusted source, because it came from registry.redhat.io, which is a secure registry, and we know these images have specific analysis and more information behind them. We also use Nexus as our Maven mirror: we build our image pulling the dependencies through the Nexus proxy instead of going straight to Maven public. And these three pieces that run in parallel are the awesome part — the integrations from our OpenShift Pipelines to ACS. We will run an image scan using ACS, and afterwards we will analyze the different vulnerabilities and check whether everything is okay or not. — This stuff here, right back. Okay, now it should be good. I'm sorry about this little technical issue; we have some issue on the OBS side that we are using for streaming. And Roberto, I think we just lost the one step, the image scan. Yeah, the image scan — this is the interesting one, because we are using OpenShift Pipelines with roxctl. roxctl is a command line tool for integrating ACS into any CI/CD tool, and in this case we are scanning the image that we built. After the build, we can see directly a very nice report of our image, with a lot of information about our application. You can check here that it's Spring Pet Clinic, and we have the different CVEs and can get more information about each of them — good stuff. But we can also check other things about the image that we built.
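The two ACS steps boil down to a couple of roxctl calls. A hedged sketch: the Central endpoint and image name are placeholders, and the `DRY_RUN` guard just prints the commands so the snippet runs without a Central (in a real pipeline you would also pass an API token, e.g. via `ROX_API_TOKEN`):

```shell
DRY_RUN=1
CENTRAL="central-stackrox.apps.example.com:443"   # placeholder Central route
IMAGE="quay.io/example/spring-petclinic:latest"   # placeholder image
# print the command instead of running it when DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1) scan the freshly built image and report its CVEs
run roxctl image scan --endpoint "$CENTRAL" --image "$IMAGE"
# 2) check the image against build-time system policies; an enforced
#    policy makes this exit non-zero, which fails the pipeline step
run roxctl image check --endpoint "$CENTRAL" --image "$IMAGE"
```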
For example, we don't want anyone to have a package manager in the image, and for this reason this system policy failed: it detected that one layer of the image we built has RPM or YUM. Or, on the other hand, vulnerabilities that are fixable with a CVSS score of seven or higher. These policies are defined in ACS and fully managed there. And this one is at build time, so we can stop our build. Imagine that you don't want to build that image at all: you can enforce this policy so that, if a certain CVE is detected in your image build, the build stops and fails. This way you prevent your developers from building images containing CVEs, or containing things you don't want. Imagine you detect a Shellshock or a Heartbleed in your image — you need to prevent that image from being deployed. And with the deploy check, you can check not only the build but also the deployment of your application: in this case we are checking the Kubernetes deployment against the different system policies, the different Kubernetes checks. The system policies of ACS are divided into three lifecycle stages: build, deploy and runtime. For example, the thing we noticed before: I, as the security team, don't want my developers to build any image that has CVEs with a score higher than seven. We can check and enforce that, or just give them a heads-up: guys, you are building images that have this type of CVE, please fix them, because otherwise you could be introducing risk. But you also have the possibility to enforce — and what does enforcing mean? You have the possibility to kill the pipeline itself.
So when the pipeline runs, if you are not complying with my policy, this check fails the build and prevents anyone from building or deploying the image. In this case, StackRox ACS will fail when the image matches the condition, and if we rerun this, we will see that it effectively fails the pipeline. But if we go to the last pipeline run, we can also check other things, because the magic of GitOps happened. Why did the GitOps magic happen? Because we have here one step, "update deployment": in this pipeline we automatically pushed a change to the GitOps repo, updating the image tag in one specific manifest. So now the Deployment is pointing to the image built in our process — the image that went through the different security stages as well. It's good stuff, because you are pushing and changing the code automatically, without manual intervention, and Argo CD will notice and spin up the application. If we go to Argo, we now have a very nice deployment that is already live, and in this deployment, if you check the pod itself, the pod has the new image tag — the exact same application that we built. So if we go to our application, to our namespace — let me handle this in the devsecops namespace — we have here our Spring Pet Clinic application, which we built in a very secure way. Also in the topology, and in the pipeline, we introduced two more steps. One is the performance test, using Gatling: Gatling throws a lot of requests at our application — it's like a load test — and it produces a report. So we have these reports here, and if we pick the reports, we can check our application.
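The "update deployment" step is essentially a sed-and-push against the GitOps repo. A self-contained sketch under illustrative assumptions — the file layout and names are made up for the example, and the git push is left commented out:

```shell
# Fake a kustomize file like the one the GitOps repo would hold
mkdir -p gitops-sketch
cat > gitops-sketch/kustomization.yaml <<'EOF'
images:
- name: spring-petclinic
  newTag: previous-sha
EOF

NEW_TAG="abc1234"   # in the pipeline, the tag of the image we just built
sed -i.bak "s/newTag: .*/newTag: ${NEW_TAG}/" gitops-sketch/kustomization.yaml
grep "newTag" gitops-sketch/kustomization.yaml
# git -C gitops-sketch commit -am "update image to ${NEW_TAG}" && git push
# ...and Argo CD notices the commit and syncs the new image automatically
```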
If we check this, we can see, first of all, the different dependencies — the dependency report we built in the first place — but also the Gatling report: the different requests we replayed in order to know whether our application is okay or not. And afterwards, finally, the ZAP pen testing. Pen testing for what? Because once we have built our application, we need to check it from outside, creating different attacks and trying to pen test our application in order to know, for example, whether there is any vulnerability, whether we are using any weak authentication method, or anything else, retrieving more information. Returning to the pipeline itself — let me rerun that — we can see the different system policies and the topology; I will shut this down. The good thing is that the system policies can also be integrated with different notifiers. In this case I used Slack: I integrated ACS with Slack in a very easy way, and now we can check if anything happens during the build. For example, I can check whether one application, or one build, violates the different system policies — in this case, that this specific build of this container does not have the specified requests or limits. The thing is that every system policy we deploy, we can integrate into our systems — in this case it's very easy, because we can integrate this policy automatically by enabling notification with Slack. But there are different options: for example, if the build fails, you can open a Jira ticket, or integrate with Splunk for seeing the different violations, or integrate with Microsoft Teams as well, notifying a channel.
So this is a very nice way to get only the information that matters for you and your team, in order not to receive a thousand notifications a second — only the notifications, only the things that actually matter to you. For example, here it detected that your deployment includes, or could include, DNF or RPM or YUM. It also gives you the remediation, and this is very nice because you can give the developer the tools and the information: hey, Mr. Developer, you are building something that is not okay, that is not complying with the different policies we have defined. It's a compliance system where you can keep everything in line with your definitions, system policies and integrations. I imagine one cool integration is going to be the ticketing system, for enabling your process. Very cool. To be honest, I think when it comes to educating the developers to think more from a security point of view, they're going to start thinking about the base image, right? They need to be aware, they need to start asking questions like: is my base image updated? What type of tools exist in that image? Do I have curl, or do I have wget? You don't need wget in those images, right? How many vulnerabilities exist within that image? ACS allows the developers to be aware of exactly what type of problems they could have in production, because you don't want to ship and introduce new problems into your production environment, right? So ACS allows you to educate your developers even before having problems, during the build phase. Yeah, and also preventing and enforcing. Imagine that in production you don't want anything that was built with a CVE, or with RPM, or that doesn't use requests or limits — you can prevent that.
You can check that your own deployment, in your devsecops namespace, carries the information about the missing requests or limits — plus the rationale and the remediation — giving this information directly to the developer: shifting left and adding more security layers, more information and more power for developers to solve the problems themselves, instead of the security teams having to stick around and give a heads-up to the development teams every time. That is very, very good. And you can also enforce: if these checks fail, you can prevent the deployment of the application, or fail the CI itself. So while it's building the image, in a couple of minutes we will see that, when it reaches the image check, it fails one specific system policy that we defined — a specific CVSS policy, because we defined it here and we enforce it. With this enforcement, if the policy doesn't pass — because the image check noticed that we have a CVE scored higher than seven in scope — it will automatically fail the CI and prevent anybody from continuing. This is during the build phase, right? So if you want to protect production, you should set it on deploy as well — that's going to basically block any creation of a Deployment with that problematic image. We have a question from the chat: someone is asking, can you create custom violations? Is there any integration with Open Policy Agent? This is a very good question. You can define whatever you want, and it's very, very easy to define new policies — the severity and also the lifecycle stage. For example, you want to prevent a specific CVE from popping up: you can prevent it at the deploy stage, add the description, and after that, for example, restrict the scope and also enable the notification.
You can define whatever you want — in this case there are a lot of criteria that apply in build, in deploy and in runtime. Imagine that you don't want developers to be able to deploy an image that has a certain CVE: you can define the specific CVE — in this case, for example, we can use an example CVE. Well, the nice thing, as you can see here, is that it's very user friendly — it's basically drag and drop. Yeah, you can build your policy criteria by dragging and dropping. And you can, for example, prevent anyone from deploying a privileged container, or one adding certain capabilities: I can define that no one can deploy a pod or a deployment with certain capabilities — or at least enforce on it. And here we can define the enforcement: if anyone tries, in production or wherever you want, to deploy a specific deployment with a pod with a certain capability, or with a certain CVE, ACS can prevent it automatically through an admission controller; the policy admission control will fail it. That is more or less the same thing you would do with OPA, but in a very, very nice way, because here you can drag and drop, as was perfectly said, and define a lot of situations — seccomp, privileged containers and so on. It's a very nice way to define and control your different policies; you can define your own policies. And at this stage I don't think there is any integration between OPA and ACS. I think it's going to be something similar to ACM, right — checking with the policies — and in the future we'll have different things. Okay. And finally, we have here that we effectively defined the image check, and the image check has a violated policy, because it has an enforcement.
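Under the drag-and-drop builder, each policy is just a JSON object you can export from one Central and import into another (through the UI or the policies API). A skeletal sketch of a build/deploy policy with enforcement — the field names follow the exported-policy format as an assumption, so export a real policy from your own Central to confirm them:

```shell
# Skeletal ACS policy JSON (field names assumed from exported policies).
cat > cvss-policy.json <<'EOF'
{
  "name": "Fixable CVSS >= 7 (demo copy)",
  "severity": "HIGH_SEVERITY",
  "lifecycleStages": ["BUILD", "DEPLOY"],
  "enforcementActions": ["FAIL_BUILD_ENFORCEMENT"],
  "policySections": [
    {
      "policyGroups": [
        { "fieldName": "CVSS", "values": [{ "value": ">= 7" }] }
      ]
    }
  ]
}
EOF
echo "policy ready: cvss-policy.json"
```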
And if we check the fixable CVSS criterion, we see that this policy enforcement caused the failure of the whole CI/CD pipeline, automatically, because the image doesn't pass our compliance: I don't want anyone to put any CVE into my image build. And you have more information, because in the image check the developer has the possibility to check: okay, this image fails, but for what reason? I need more information. We can bring up the CVE and automatically know which components it affects and which deployments it affects as well, and get more detail by going directly to the information source. This is a very nice way to do it: you control things by giving developers more tools across the full pipeline, and you even have the possibility to fully control your developers' pipeline and enforce your compliance and security policies. And that's all from our side.

Wow, that was a lot of stuff. You know what, I think this deserves a second session where we go into more detail. I'm sorry for our attendees: we had a little issue with the streaming, so the stream got split in two, but we will make sure to join the two recordings. Sorry about that. But today we were able to see a complete DevSecOps demo. Really, congratulations Roberto and Rodrigo, that is awesome. And actually, I know there's a repository if people want to try it; I think it's the DevSecOps demo one, the one you shared with me. Yeah, that's the same one. I was expecting to have time to show how to deploy ACS on CRC, for example; there are some tweaks you can do to the resources so that you can have ACS running on your laptop using CRC. But maybe next time. Interesting. Let's do this.
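The CVE drill-down described earlier, bringing up a CVE and seeing which components and which deployments it affects, amounts to an index lookup over the scan inventory. The sketch below models that lookup with invented sample data standing in for ACS's vulnerability database.

```python
# Sketch of the CVE drill-down: given a CVE id, list the affected
# components and deployments. The inventory is invented sample data.

INVENTORY = [
    {"deployment": "frontend", "component": "openssl", "cves": ["CVE-2021-0001"]},
    {"deployment": "frontend", "component": "glibc",   "cves": []},
    {"deployment": "backend",  "component": "openssl", "cves": ["CVE-2021-0001", "CVE-2021-0002"]},
]

def affected_by(cve_id, inventory=INVENTORY):
    """Return (components, deployments) that carry the given CVE."""
    components = sorted({r["component"] for r in inventory if cve_id in r["cves"]})
    deployments = sorted({r["deployment"] for r in inventory if cve_id in r["cves"]})
    return components, deployments

comps, deps = affected_by("CVE-2021-0001")
print(f"components: {comps}, deployments: {deps}")
```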
It was so interesting that I think it deserves a proper DevSecOps series. We did this introduction; next time we can do a series where we go into CRC, or local development with ACS. And I would also like to see the penetration testing and the vulnerability assessment. We have seen the whole pipeline today; maybe we can focus more on vulnerability assessment and live violations, like: I have my application running, and this application is producing a live violation. So we have seen the CI part, and we talked about the CD part with Argo. I was wondering if we have something to say also about running processes.

Yes, of course, there's a lot around runtime: detecting violations, for example preventing anybody from running netcat or Nmap inside one of our containers, and, if enforcement is on, even killing the pod. So we can have this series for sure.

Fantastic, I'm looking forward to it, because it's so interesting. I think people really enjoyed it; there was lots of interaction in the chat. I just shared the demo link; the demo was made by Roberto and Rodrigo, who worked on this fantastic demo, and I really recommend trying it out. You can try it on CRC, for instance, which is the local development option for OpenShift, or on any OpenShift cluster you have available. I'm putting in the chat the links to start trying OpenShift and to start trying ACS, which is now GA, and you can deploy the demo that Roberto and Rodrigo made for us today. So I would really like to thank you for this awesome demo. The recording will be available on the OpenShift YouTube channel, and we will see each other again here at the OpenShift Coffee Break with a new DevSecOps series, because it's so cool and so interesting. I really enjoyed it. So thanks, Roberto and Rodrigo, for joining us. You're welcome. See you next time. See you next time.
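The runtime enforcement mentioned above, flagging netcat or Nmap running inside a container and optionally killing the pod, can be sketched as a simple decision over observed process names. This models only the decision step; real ACS collects process activity at the node level, and the process list here is invented.

```python
# Sketch of the runtime enforcement decision: flag suspicious processes
# observed in a container and, when enforcement is on, kill the pod.
# Observed process names are invented sample data.

SUSPICIOUS = {"nc", "ncat", "netcat", "nmap"}  # example runtime baseline violations

def check_runtime(observed_processes, enforce=True):
    """Return (violations, action): 'kill-pod' when enforcing,
    'alert' when only notifying, 'none' when clean."""
    violations = sorted(set(observed_processes) & SUSPICIOUS)
    if not violations:
        action = "none"
    else:
        action = "kill-pod" if enforce else "alert"
    return violations, action

violations, action = check_runtime(["nginx", "sh", "nmap"])
print(f"violations={violations}, action={action}")
```

With enforcement off, the same violation would only raise an alert, which matches the alert-versus-enforce choice available per policy.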
And for the rest of you, folks, you can stay on OpenShift TV today; we have our regular schedule, if you would like to stay, which runs from EMEA afternoon time and then goes into the normal schedule. If you go to OpenShift TV you can see the whole schedule we have; let me put the link so you can follow the next shows today. We have lots of episodes and recurring series. And we will see each other at our next appointment, on July the 28th, when we will come back with a Pipelines as Code topic, together with Jafar and some people from engineering, talking about Tekton Pipelines as Code. So thank you very much for joining today. Thank you for attending. I look forward to seeing you on the next OpenShift Coffee Break episode. Ciao. Thank you.