Good morning, good morning, everyone. Welcome to the OpenShift Coffee Break. My name is Natalia Vienta. I'm a product marketing manager for OpenShift, and today I'm presenting this show, our OpenShift Coffee Break episode about hybrid cloud automation with Ansible and Red Hat ACM. I'm co-presenting this show together with Thero. Hey, Thero, good to see you. Your mic is on mute. I think you are still on mute. Maybe you didn't have your coffee shot yet, so you should unmute your mic, Thero. In the meanwhile, I would like to present our super guests today. So Fass, do you want to start? Hey, sure. Good morning, everyone. I'm Fass Sviggi. I'm one of the Ansible specialist solution architects in EMEA at Red Hat, so I focus mainly on the Ansible Automation Platform. I'm based in London, but covering EMEA, and I work in the same team as Natalia, Thero, and Andres. You'll be meeting them in a minute. Great to be here. Great, thank you. And Andres, do you want to introduce yourself? Of course, good morning, everyone, and thanks for joining. My name is Andres Valero, and as you can notice by my beautiful accent, I'm Spanish. I work in the same team as Fass, but focused around the OpenShift technology and, more concretely, around ACM. I'm based in Spain, covering EMEA, and I'm happy to be here with you. Awesome, awesome. Now my speakers work. Here we go. Yeah, I had some problems with the sound. Hi, I'm Thero Honen, I would say co-host with Natalia on this awesome show, and I'm an OpenShift specialist in the EMEA Tiger team. I just do everything around OpenShift, whatever you can, so basically everything, because everything is OpenShift now. Thanks for joining. Of course, thanks everyone for joining. So let me give a little bit of context about what this show is. Thero is from Finland, Fass from England, Andres from Spain, and I'm from Italy.
So this is kind of a virtual coffee break, the coffee machine in the virtual worldwide office, right? Did you get your coffee shot? That's the way to go. The idea is that we take our coffee shot in the morning just to wake ourselves up and talk about cool things, new cool technology around OpenShift. And today I'm really thrilled, Fass and Andres, we have a fantastic talk. I think that people are thrilled about the possibility to combine automation with Ansible and the multi-cluster capability of OpenShift. So please, do you want to start showing us this fantastic use case? Sure, I think I can go first if that's okay, Andres. Yeah, so when Andres and I were talking about how to prepare for this show, we thought about having an introduction to automation, but then we decided we don't really need that, because we don't need to get into the details of what automation is nowadays or why we need it. It's established by now that the only way we're able to keep up with all these rapid changes we're experiencing in our environments is to automate. We know how crucial it is to be easily adaptable to changing business and market conditions while we also need to deal with enhanced security and compliance demands. So we're left with no choice: we need to be scalable, consistent and reliable, all in a secure way. And we cannot possibly do this manually. We have to automate. Yeah, sure. And in fact today we will hear a lot about the desired state, because the automation behind Ansible, behind Kubernetes, behind ACM, is all about getting your infrastructure, or the different pieces of your IT, into a desired state. It can be done in different ways, and we will see more about that, but the idea I want you to keep in mind is that we want to define the state, and we will use ACM, OpenShift, and Ansible to do that. So that's the idea. Do you want, Fass, to tell us a little bit more about Ansible? I actually have one question for Fass.
You've been working with Ansible for several years; how do you see customers currently using Ansible? Because most of the audience knows Ansible, but they only know their own use cases. So just in general, how do customers use Ansible? Well, Ansible is just extremely flexible and powerful, so it fits a large variety of use cases. You can use Ansible for anything from provisioning, configuration management, application deployment, networking, security, cloud, on-premise and so forth. So pretty much anything you can think of, you can use Ansible to automate. It can be used for something as simple as running a single playbook — the automation pieces are called playbooks in Ansible, for people who may not be familiar with that. You can start by running a playbook on the command line. That could be as simple as adding a line to a file, manipulating some configuration files, or adding and deleting users. Or it could be more complex scenarios, like complicated orchestration use cases. An example I can think of: if you want to update your web servers, you will need to stop whatever is monitoring them, then stop the services on your servers, take them out of your load balancer, do your updates, then add them back to the load balancer and to monitoring. All of this can be done as simple automation pieces and then chained together in a workflow to give you that end-to-end automation. So the answer is: it's very flexible, there are so many different use cases, and it really depends on what you need to automate at that point in time. Okay. Sounds like a hammer. It does sound like a hammer, yes. You know, there are lots of questions about how to mix this automation with Kubernetes. Now, Kubernetes also does its own orchestration and automation. So how do we mix the automation world, let's say the agnostic one, with Kubernetes?
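To make the "simple playbook" idea Fass mentions concrete, here is a minimal sketch of one. The host group, file path, and values are invented for illustration; they are not from the demo.

```yaml
---
# Hypothetical minimal playbook: the kind of simple task Fass describes --
# ensure a line exists in a config file and a user account is present.
- name: Baseline configuration example
  hosts: webservers        # assumed inventory group
  become: true
  tasks:
    - name: Ensure a configuration line is present
      ansible.builtin.lineinfile:
        path: /etc/example.conf          # illustrative path
        line: "max_connections = 100"

    - name: Ensure a service account exists
      ansible.builtin.user:
        name: appuser
        state: present
```

Run it from the command line with `ansible-playbook baseline.yml`; chaining several such playbooks into a workflow gives the end-to-end orchestration described above.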
So this is the key question, and I'm glad to have you both today to try to answer it. Sure, do you want us to get into that now or should we just do the demo? Do you want me to answer that now? So yeah, this is a really open format, so let's have this discussion and then we can go to the demo. Yeah. Okay, sure. So I'd like to talk a little bit about how Ansible fits into the Kubernetes world in general. There are so many similarities between Ansible and Kubernetes. First of all, both of them are really widely used open source projects, and they both have wonderful, active communities behind them. They both use YAML to model the declarative desired states of our targets, and they both make automation and orchestration easy for us. But to go back to your question, it pops up a lot: why would I need Ansible if I'm already using Kubernetes, or how can I fit Ansible in with my Kubernetes clusters and cloud native environment? There are a few things. First of all, even if you're using an enterprise solution like OpenShift for your Kubernetes installation that takes care of everything, such as networking, firewalls, DNS, NTP, LDAP and so forth, sometimes there's some additional configuration very specific to your environment that the installer may not necessarily take care of. So you need something specific to your needs, you need an automation tool for that, and that's where you can use Ansible. Another thing is that not all infrastructure can be hosted on or replaced by a Kubernetes cluster. You may have invested significantly in your current IT infrastructure, and all the skills and operations that you currently have are working for you. You don't necessarily want to replace any of these things, but you want to be able to connect and integrate your current infrastructure with your Kubernetes and cloud native environment. You need a tool, and again, that's where Ansible comes in.
Also, there are some use cases that we will see in the demo later on where you are deploying an application cloud natively on your Kubernetes but you also need to rely on another tool, like sending a notification to a messaging app, as we'll see in the demo. You need a tool like Ansible for that. These use cases could be extended to things like updating configurations on your networking and load balancing, or creating a service ticket, and things like that. So you do still need an automation tool to take care of those kinds of last-mile operations that go with your application deployments. Yeah, in fact, we will see later that we have some integration between ACM and Ansible. And I always like to say that ACM speaks Kubernetes, but ACM or OpenShift are not able, out of the box, to interact with the load balancer, with the firewall, with many pieces that currently exist in your infrastructure. We need something that allows us to establish that communication, and that basically is Ansible. We will show an integration with Ansible for sending a notification to Slack. It's nothing super advanced, but, for instance, when you deploy applications using ACM, you can set pre and post tasks, and we will explain more later on. Maybe you need to open a ticket in ServiceNow, deploy an application, and then configure a load balancer. That's something that you can now do all in one with the integration of ACM and Ansible. So definitely there are two worlds out there, there are different ways to automate, but they complement each other, really. I like what you say, Fass and Andres. So they complement each other. The power of Ansible's automation is empowering Kubernetes' orchestration and automation features. This is one key message I would like to send today. And with your help and your super cool live demo — I know you prepared something really cool for us.
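A last-mile notification playbook of the kind just described might look roughly like this. The channel name and the way the token is supplied are assumptions; in practice the token would live in a vault or credential store.

```yaml
---
# Hypothetical "last mile" playbook: post to Slack after a deployment,
# using the community.general.slack module.
- name: Post-deployment notification
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Send a message to a Slack channel
      community.general.slack:
        token: "{{ slack_token }}"    # supplied from a vault, not hard-coded
        channel: "#deployments"       # illustrative channel name
        msg: "Pac-Man application deployed by ACM"
```

The same pattern extends to the other last-mile tasks mentioned — a ServiceNow ticket or a load balancer change would simply use a different module in the task list.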
So you know that Thero is the person who tells us to do only live demos. So we followed his wise advice and we do only live demos. No jokes. There will be live demos, don't worry. Cool. Yeah, if you would like to proceed with that, in the meanwhile I will take care of any questions in the chat. If you have any question for Andres and Fass about Ansible and Kubernetes while we are doing the show, please write it down in the chat and we will take care of answering them during the show. Yeah, so let's start this. Okay, so let's kick off the demo. Let me try to share my screen. Okay, let's see how this thing works. Okay, this one. So hopefully now you're seeing my screen. Yes, we can see your screen. Okay, we spoke about Ansible and about ACM, but now I'd like to speak a little bit — and I will count here also on the collaboration of Fass — about operators, because operators are a basic piece for automation in Kubernetes, and OpenShift mainly. Operators are also part of Advanced Cluster Management for Kubernetes, and operators can be created with Ansible, and Fass will tell us a little bit about this. So basically operators are based on controllers, on Kubernetes controllers. And a Kubernetes controller basically is a control loop that watches the state of a cluster. More precisely, it watches a resource and it reacts to force that resource into a desired state. And again, we are going to hear a lot today about the desired state. So basically the idea behind all of Kubernetes is defining as code what you want and using these controllers to make sure that this desired state is matched in our cluster. And it's also the idea behind Ansible. So I'm going to show you a simple slide that I like because it is pretty self-explanatory in itself. Basically, when we speak about operators, they have three pieces, right?
We need a custom resource definition, because the operator will look at that concrete resource. Then we need a custom controller that will watch this definition, and then certain specific domain knowledge. And we are going to see it in the demo with ACM — with the installation of ACM, to be more concrete. So basically we're going to watch what is happening with a custom resource definition using a custom controller, and when something happens around these kinds of objects, we will trigger an action based on the domain knowledge. So we watch something, we detect an event, and we trigger a reaction. This is basically the pattern for an operator. And Fass, I think you have something to tell us about operators and Ansible. Sure. So like Andres said, an operator is designed to watch and respond to resources in your cluster, to help you run your application as you like, bringing that domain knowledge into a form that you can apply in a scalable fashion. Rather than living in someone's head, we are codifying it using an operator. Using Go for developing Kubernetes operators is a common approach. It's very powerful and it gives you fine-grained control. But with that power comes complexity, and you will need to invest the time to develop expertise to become proficient in Go. As we keep reiterating about the simplicity of Ansible, a simpler approach to developing operators could be using the Ansible Operator SDK, which allows you to deal with Ansible code instead of Go. So you can use the power of Ansible and its ecosystem and therefore have a lower barrier to entry, because the chances are it's very likely that somewhere in your organization you're already using Ansible, and you can leverage that power and that existing knowledge. You can develop full-featured operators using the Ansible Operator SDK.
And one advantage is that it provides you the scaffolding that you need for your operator development, because a lot of generic functions would otherwise need to be implemented and managed in Go; when you use the Ansible Operator SDK, you have all of these implemented and taken care of for you, so you don't need to worry about that. So you can see straight away that you get a lot of efficiency if you use Ansible instead of Go to develop your operators. Your developers just need to know Ansible. The way that Ansible operators work is by reading a watches file — as you said, it's looking into what's happening in your cluster. So it reads that watches file, that YAML file, and it monitors the events on the cluster based on what's specified in there. When it finds a matching event, the operator SDK runs the Ansible automation associated with it. It does everything it needs to do, and when it's completed, the SDK binary takes the results of that specific Ansible automation run and updates the status of the custom resource that's associated with it. So as I just mentioned, as an operator developer, when you're working with the Ansible Operator SDK, all you need to do is provide the watches file and the Ansible content that manages your application lifecycle. So I don't need to spend the time building up expertise with Go. With the Ansible Operator SDK, you can introduce a lot of efficiency and start adding value straight away. So, very cool. So we can also use Ansible for creating the operator that we're going to use for our automation. This is another cool point to focus on. And, as we know, writing an operator is a way to deploy software in Kubernetes. But I was wondering how much easier it would be for people who already know Ansible to write an operator like the one you told us about. It's pretty easy, to be honest.
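The watches file Fass describes is a short YAML mapping from a custom resource kind to the Ansible content that should run when it changes. A sketch, with an invented group, kind, and role name:

```yaml
---
# Sketch of a watches.yaml for an Ansible-based operator.
# The group/version/kind and the role are illustrative placeholders.
- group: cache.example.com
  version: v1alpha1
  kind: Memcached
  role: memcached    # Ansible role run whenever a Memcached CR is created or changed
```

When the operator sees an event on a `Memcached` resource, it runs the `memcached` role (a `playbook:` key can be used instead of `role:`) and then updates the custom resource's status with the result, exactly as described above.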
And around this, I usually get questions from customers, like: do my staff need to learn Go to do this? And no — I mean, you already have playbooks and roles that do something you need to automate, and you want to bring that knowledge and experience to the Kubernetes world. Just use the operator framework, which will ease that task for you. You will recycle the role or the knowledge that you already implemented in Ansible, and you will be able to reuse it. So the idea here is offering you solutions. Like Fass said previously, you already have knowledge: you trained your people in Ansible, and now you are starting, or onboarding, on Kubernetes, and you don't want to throw everything away and start from scratch. You want to recycle what you have, take advantage of your knowledge and expertise, and onboard onto Kubernetes. With the operator framework and Ansible, you can do that. So very cool, very cool. Meanwhile, I shared a link in the chat: if you would like to download a free book about how to write an operator in either Ansible or Go, we put the link in the chat. You can get this link and download this free e-book. And yes, let's continue with our demo. Andres, one simple question. If I have an Ansible playbook that creates, let's say, an S3 bucket, and you need that when you deploy an application, how much work would it require to move that playbook to an operator and run it in OpenShift? Not really much. I mean, if it's a simple playbook that creates an S3 bucket and you need to consume that using the operator framework, maybe the first time it will take a little bit longer, but probably somewhere between 20 minutes and an hour you can have it running, because you have to start using it, installing, configuring. But it's something that is going to be pretty fast once you know how to do it.
So it won't really take a lot of time, or require, again, a lot of time investment to do that. The operator framework will ease that for you and will help you recycle what you have. Okay, that sounds really easy. Guys, I don't have here the recording, but we delivered a meetup around operators, and the first time we ran that meetup I didn't have much experience with operators, and I created, live, an operator with Ansible. That's great. Yeah, so you're telling us that even me and Thero could write an operator right now. Of course you could. It's not that complicated. Shall we try? Maybe you should try, Natalia. Yeah, yeah, let's do it next time. Maybe for the next call. So now I'd like to show you a little bit about operators and then... Only one thing, Andres, can you increase the font so we can see the screen better? Let me check. Yeah, it looks much better. Thanks. Okay, so basically I pre-installed the ACM operator, but ACM is not running; the application is not showing up here. So what we're going to do is: we have an operator, the ACM operator, and it is waiting for us to create something — a CRD called MultiClusterHub that the operator itself created. It's waiting for that creation, and this creation will trigger the installation of ACM. I'm going to show you that now: if we go to the developer view in OpenShift, we see a few pods. That's cool. And now you'll see that when we create this new custom resource, it will trigger the action on the operator side that will deploy ACM. So basically I need to create a MultiClusterHub object, and what I'm going to do here is select a pull secret that I previously created, a multi-cluster pull secret. I see some people are joining now, so I just wanted to say again that we are doing a demo using Ansible and ACM, which is a tool for multi-cluster, multi-cloud Kubernetes orchestration.
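The custom resource whose creation kicks off the ACM install looks roughly like this; the namespace and pull secret name follow the usual conventions but may differ from the exact ones in the demo cluster.

```yaml
# Sketch of the MultiClusterHub custom resource. Creating it is the
# "concrete action" the pre-installed ACM operator is waiting for.
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management   # assumed install namespace
spec:
  imagePullSecret: multiclusterhub-operator-pull-secret   # the pre-created pull secret
```

Applying this single object is enough: the operator's controller notices it, compares the desired state with the (empty) actual state, and starts creating all the ACM pods.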
So just to give the context again for people who joined now: Andres is installing ACM right now via an operator. Yeah, so the operator is installed and is waiting for a concrete action, and this concrete action is the creation of a MultiClusterHub CR. So we're going to create it, and now you see that the phase says it's installing, and if we come back to the developer view we will see that pods are starting to be created. So basically we had our operator installed, but we didn't have ACM installed. ACM was waiting on this object, the MultiClusterHub object, and now that we created it, it's starting to actually install ACM. We will come back later, because this takes six or seven minutes to finish and it is not worth waiting here, so let's move on to an actual ACM. And ACM, as Natalia said, is a multi-cluster management tool, and it can manage not only OpenShift but also the public cloud Kubernetes services like AKS, EKS, GKE, ROKS. So we can manage different Kubernetes platforms, not just OpenShift. In here we see four boxes, and these are the four areas where ACM can help you in multi-cluster management. Today, more concretely, we will be focusing on these two areas: how the automation of ACM can help us with governance, risk and compliance — to configure and secure our clusters — and with the application lifecycle. To do this, again, we are using controllers, and this is the reason that Fass and I introduced controllers and the concept of an operator. We will see here how ACM can help us automate this stuff, and we will also see, in the second part of the demo, how we can integrate the application lifecycle with Ansible to achieve a more complex solution. So let's start, for instance, with the application lifecycle, and we will see here that I have a Pac-Man application running, and it is deployed on two clusters. We can see it here; I'm going to make it bigger.
Yeah, the bigger you make it, the better, because we have a little window. Okay, cool. Thanks. So we can see here these pieces, which are the pieces that ACM uses for managing the application and for automating, and I will introduce them a little bit later. — Learning a little Finnish is okay, it's not that hard. We can have a next episode about that. — Okay, you can help me with that, because I'm pretty sure it will take me some time. So we have here two clusters where we deployed this Pac-Man application, and we can go to the route, just click here, and go to the Pac-Man application. And in fact, we can obviously play. Andres, can you explain from the topology view which are ACM entities and which are actual Kubernetes entities? That was what I was planning to do right now. So this top part is the ACM part, let's say, okay? I'm going to explain a little bit more about these pieces. And down there we have the Kubernetes pieces, or the OpenShift pieces, as Thero said. Basically, to deploy this we are using a deployment, we are using a PV to store information, and we are exposing a service and a route, so we can actually access the application. Now I'm going to show you a little bit in here — and of course I'm going to make it bigger, sorry. So we have this application lifecycle, okay? And you see here that we have two different folders. This is because ACM uses GitOps as its backing mechanism. For applications and for policies, we need to define things in a Git repo, or in a Helm chart, in a way that ACM can consume, and any change that gets propagated to our clusters will be done using GitOps. So here, to keep it simple, I put this application and these resources in two folders, in a way ACM understands: in here I'm defining the pieces that ACM needs to manage this Pac-Man repo.
So the piece that is going to be managed via GitOps is this folder, this Pac-Man folder, and this other folder creates all the resources that we need. Basically there are three pieces. This is the channel. With the channel we're basically saying: okay, I need a source of information — and this is the same for applications and for the policies that we will see later on. So we're saying: okay, this is a channel, I'm defining a GitHub repo. That's cool, I have my Git repo. Now I need what is called a placement rule, and a placement rule basically defines where I want to deploy. Using placement rules, applications, subscriptions, and channels, we are able to manage at scale, and you will see how that works later on. Basically, in the placement rule I'm mainly saying: okay, I want you to deploy this on the clusters with the environment: dev label. That's cool. Then we have some extra information here: we are specifying that I want the applications deployed on clusters that have this label, but also on clusters that are actually online and reporting. And there is another automation piece here, which is clusterReplicas. We will see it in the demo: basically I'm telling it, okay, I may have five clusters that can be matched, or three, or 25, but I just want to deploy this on two clusters. And if at any moment one of these two clusters stops fulfilling the condition — so it's not online and reporting, because something happened — ACM itself will automatically move or redeploy one of these two replicas to one of the clusters that are available and have this label. So in case something happens — disaster recovery — ACM will automatically move or change that for you. And last but not least, there is the subscription, which is in fact the most important piece. The subscription basically joins everything: it's using the channel — okay, this is the source of my information — and it's using the placement rule.
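The three pieces Andres walks through can be sketched as custom resources like the following. The repo URL, names, and namespace are placeholders, and field details can vary between ACM versions.

```yaml
---
# Channel: the source of information (a Git repo, in this demo)
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: pacman-channel
  namespace: pacman
spec:
  type: Git
  pathname: https://github.com/example/pacman-repo.git   # placeholder URL
---
# PlacementRule: where to deploy -- labeled, online clusters, at most two
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: pacman-placement
  namespace: pacman
spec:
  clusterReplicas: 2                  # deploy to at most two matching clusters
  clusterSelector:
    matchLabels:
      environment: dev
  clusterConditions:
    - type: ManagedClusterConditionAvailable   # only clusters online and reporting
      status: "True"
---
# Subscription: joins the channel and the placement rule
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: pacman-subscription
  namespace: pacman
  annotations:
    apps.open-cluster-management.io/git-path: pacman   # folder inside the repo
spec:
  channel: pacman/pacman-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: pacman-placement
```

If a cluster drops out of the placement rule's selection, the subscription controller re-satisfies `clusterReplicas: 2` by deploying to another matching cluster, which is exactly the failover shown next in the demo.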
I want it deployed here, with certain conditions, and now I want you to deploy whatever is inside this channel, in this folder, in this branch — I want you to deploy it on the matched clusters. So let's see how it works here. At this moment, we saw that we are deploying on the dev clusters. If we move to the clusters view, we'll see that we have four clusters. We have environment: dev on three clusters: on Amazon, on bare metal, and on the local cluster, which is also on Amazon. And there's this other one on Google Cloud that is not matched by the dev label. But if we come back to the applications, we see that even though we have three clusters with the dev label, we just deployed it on two: the bare metal one and the Amazon cluster. And again, it is small, sorry. Thanks. We can see that it's deployed. Yeah, please. Just a quick question: you must have a global load balancer somewhere in front of all those clusters. Not on this occasion, but if you had one, you could use Ansible to configure that load balancer when you redeploy and move the application from one cluster to another. Not in this case; in this case it's just deployed on two different clusters, but we could configure a global load balancer in front of them. In this case, we have two routes with two Pac-Mans. Okay. So at this moment, what I'm going to do is go back to the clusters area. We see that we have — I made it too big — three clusters with the right label, and it's deployed on this bare metal one and on this one. So I'm going to remove the environment: dev label from here, and this is kind of a cheap simulation of this cluster going down. What is going to happen now is that ACM is going to react: it's going to detect that this cluster is no longer matched by the placement rule and is going to redeploy the application. And in fact, it now says that it's one remote and one local. It is deploying the application on the local cluster now.
And we can see here that it is in fact not yet fully deployed; it is working, it is in progress, and here it changed from the AWS managed one and bare metal to the AWS managed one and the local cluster. So now it's redeploying the Pac-Man application on the local cluster. This is how we can automate in case of problems, or automate, just by changing labels, where to deploy our application. So you define a Git repo and your application, you create the pieces, and you deploy. And here we also have a cool editor, if we want to use it, to make it easier to deploy an application. If you have a repo with an application and you never used ACM and never created these subscription, placement rule and channel pieces, it will create them for you. And in fact, all this Pac-Man demo is coming from open-cluster-management, in here. This is the upstream project for ACM, and here you can find application samples, demos, and policies — and this is something we are going to use now, the policy collection. You have a lot of information about ACM and how to use it, so this is a repo that you should bookmark. Okay. Yeah, I will share this in the chat, and I take the opportunity to ask you: is open-cluster-management the upstream version of ACM? Can I take open-cluster-management and use it on my own Kubernetes? It's not yet fully open source. It's almost there, but not yet; the team is working on it and it will be as soon as possible. But even though it's not yet fully open source, you can find the policies, you can find demos, and of course you can always ask for a trial subscription, so you can start using it and testing it in your environment if you want. That's something you can definitely do. And now that we've seen this — you can see, if I make it bigger, that the applications are now deployed, and I can go to this route on my local cluster, and I have Pac-Man running here also.
And regarding the GitOps I introduced, I want to make a change now so you can see it. So basically, you see that we are using a blue background Pac-Man, okay? Now I'm going to change that using GitOps. I'm going to go to the repo — is it big enough? You can read it. If you increase it a little bit, that's cool. And also, can you share that link in the chat, for live playing? Yeah, I can try. Let me see, is this the repo? There, I already shared the repo. Okay, it's the Pac-Man one. Okay, yeah, sure, that would be great. Of course. Everyone wants to play Pac-Man. You can copy it into the Zoom chat and I'll take care of sharing it in our chat. I don't know where the Zoom chat is now, to be honest — everything got minimized when I started sharing my screen, so I'm going to share it with you on Telegram, okay? Okay, cool. So, basically, let's be frank: what I'm going to do is, in this repo, I have this definition of my Pac-Man inside, and there is, somewhere, a deployment — the Pac-Man deployment, this one. And this is using a Quay image. This is a demo that the ACM team set up, so I'm just using it, and we are using this later. So I'm going to change this to green. And now we hit commit, and hit push. Okay, it looks like I have something pending to go. Meanwhile, while Andres is fixing the demo effect, I have a question for Fass. Earlier automation tools — Ansible, Puppet, SaltStack, whatever — were created to automate, let's say, stateful servers: VMs, bare metal. And now we can see that, like, containers are Linux, so you are deploying a new kind of host, and there are multiple deployments, GitOps, and everything. Everything around Kubernetes is moving so fast. Do you think that Ansible is keeping pace with how fast the industry is moving forward?
Do you like stateless and GitOps and DevSecOps and DevOps and everything? So, I've been asked to refrain from using the word GitOps, so I'm going to refer to GitOps as DevOps, just to keep my dear colleague happy. So definitely, when we're talking about DevOps, we need to be able to codify the state of our IT infrastructure, and Ansible is one of the best, most simple and most powerful solutions to do that. So absolutely, it fits in really well with your DevOps strategy, because then you'll be able to keep it as your single source of truth and have your infrastructure modelled in a declarative way using Ansible. So definitely on that front. Then, you mentioned DevSecOps: yes, Ansible is moving forward in the area of security operations and automating security operations. Specifically, we have a lot of development into getting security vendors and security solutions on board as our certified partners, extending into SOC, or security operations centre, automation as well. So yeah, we're keeping up. The architecture of Ansible is changing: the way that you run your Ansible is all going to be containerized with execution environments. So yeah, we are moving on. Okay, super cool. Yeah. As you can see now, the Pac-Man background is green. And of course, I prepared something that is maybe very simple: in this case there is no pipeline, I just pushed the code and the code went to the proper clusters. But you could also involve pipelines, with Tekton or with any pipeline tool you're using, and push a new image to the registry — in this case it's in Quay. And basically, with this subscription, channel and placement rule, what is going to happen is that the clusters themselves get the new definition, the YAML definition, of this application, which you would push, let's say, as the last step in your pipeline. In this case, we changed our deployment file.
So the clusters will be checking that repo, that source repo. And when something happens, they get the update automatically. And as you could see, I didn't change anything from here, from ACM; I just pushed a change into Git, and the clusters got the change automatically. So again, this is the automation background, sorry, my English is failing me, the automation background for ACM. I'm using Git. In Git, I store the definition of what I want. This definition changed, and it got propagated automatically to the clusters. Easy-peasy, not really anything super complicated. And now I'm going to show you, for instance, how policies work. And this is pretty similar. In this case, we are going to use this policy that upgrades OpenShift clusters to this release. So if we come back to the cluster area in ACM, we will see that this managed cluster is already on 4.6.19, and this local cluster is on 4.6.8. And at the moment, I'm not going to push the change yet, because this is the cluster where ACM is running. I'm going to make just one change: I'm going to remove this from here. Those are labels that get applied, plain Kubernetes labels, nothing fancy, nothing complicated. I like to say that we are not reinventing the wheel. So what we do now is change that label from here to here, and we move to the policy area. And we will see that we have a policy that is now checking these clusters, but not acting on them. So it's still getting the status from this new one. But basically, if we go to the status, this policy is telling us: this one is compliant, it is in the specific version it needs to be. Sorry about that. But this one is not in the version that we need. And what we are going to do is check the policy in the repo, which is now in inform mode. This means that it's checking but not acting. It is not behaving the way you would expect from a controller.
So the controller is realizing that it's not in the right state, but it's not acting. And now what we are going to do is change this remediation action, let me make it bigger, from inform to enforce. And what is going to happen is that the policy controller is going to change the ClusterVersion object on the cluster that is not matching. So we change this to enforce and we commit the changes. So you're committing in the repo we shared in the chat; you are live-demoing this commit, and something's going to change in your clusters controlled by ACM. Now, as you can see, this says enforce, and it's saying it's not matching. Oh wait, it already changed: it's saying it's compliant. And now, if we go back to the cluster area, we'll see that our cluster is upgrading. And I didn't do anything super complicated. I just said: okay, I want the clusters on this cluster version. And again, this is automation, this is OpenShift; this is how it works. And we're running out of time and I'm taking too long, so let me show you the integration with Ansible. I'm sorry for taking too long. So let's move on to the Ansible demo. And sorry, but this repo is private, because I have a token in there and I'm too lazy to encrypt it. So let's see. Yeah, tell me. But everyone has your GitHub handle, so you will push it publicly, so it's going to be available as soon as possible. Of course it will. And now we are going to deploy an Apache application. This application is basically going to be deployed, and we're also going to send a message to Slack. So we are now deploying the application. If we move to the application area, we will see that we have the application deployed here, or deploying, to be more specific. And let's move to another repo. In this one, we are going to see some differences. In here, we have this Apache application.
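The upgrade policy Andres toggled could look roughly like the sketch below. The names, namespace and exact template layout are assumptions for illustration; only the inform/enforce switch, the ClusterVersion object and the 4.6.19 target come from the demo:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: upgrade-cluster
  namespace: policies
spec:
  # "inform" only reports compliance; "enforce" lets the policy
  # controller change the object on the managed cluster
  remediationAction: enforce
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: upgrade-clusterversion
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: config.openshift.io/v1
                kind: ClusterVersion
                metadata:
                  name: version
                spec:
                  desiredUpdate:
                    version: 4.6.19   # target release from the demo
```

Committing a one-word change from `inform` to `enforce` is exactly what kicked off the upgrade: the policy controller reconciled the ClusterVersion object on the non-compliant cluster.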
And inside the definition of the application, we have this post-hook folder. So at the end, we are just sending a notification when we deploy the application. But we could also set a pre-hook, so we could do something before deploying the application as well as after it's deployed. And if we go in here, we have another CR, called AnsibleJob. It's using a secret to access Tower, and we are deploying the Apache application. So nothing super fancy. And I'm going to activate Slack just to get the notification. And it looks like something is happening. Okay, it's deploying now. And it looks like our Slack job already ran. So let me show you. And hopefully, as we can see, we got the message: the application Apache has been deployed successfully. So basically what we just did is use Ansible. And you can see here another piece, the AnsibleJob, and it's saying: okay, slack-notification. We sent a notification in Slack that says: okay, we deployed this. And the only thing we did was install, on the same cluster where ACM is running, the Ansible Automation Platform Resource Operator, which is itself created with Ansible and uses Ansible in the background. And through this, ACM calls the Tower API to run a playbook, in this case. And I think Fass wants to show you a little bit of the workflow in Tower. Sure, thank you, Andres. Since we are sharing your screen, yeah, what you see here is Ansible Tower. And this is how we are making that integration between Advanced Cluster Management, our cloud-native way of deploying that application, and our existing IT systems, which in this case was Slack. So this is the interface to Ansible Tower. When we finally log in, you'll be able to see that. But the point is, this is the interface to Tower; the power comes from connecting to the APIs of Tower, in this case or in any case, really.
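The post-hook CR Andres describes, the one the Ansible Automation Platform Resource Operator picks up, could be sketched like this. The secret name, namespace and job template name are placeholders, not the demo's actual values:

```yaml
# AnsibleJob CR: reconciled by the AAP Resource Operator, which calls
# the Tower API to launch the named job template
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: slack-notification
  namespace: apache-app
spec:
  # Secret holding the Tower hostname and OAuth token
  tower_auth_secret: toweraccess
  # Job template defined in Tower; here, the Slack-notify playbook
  job_template_name: slack-notify
  extra_vars:
    message: "The application Apache has been deployed successfully"
```

Placed in a `posthook` folder of the subscribed repo, a CR like this runs after the application resources are applied; the same shape in a `prehook` folder runs before.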
When you are using Ansible Automation Platform, with Ansible Tower as one of the main components driving your automation, the real power you get is through this ability to reach the APIs of Tower. Because we write and organize our automation pieces, our playbooks, and then we connect them together to achieve end-to-end automation. And what API access to Tower does for us is let us call all of this in a totally automated way, if we want to. So it just opens up a whole new world for us. We can do anything and everything we like now that we've got access to Ansible Tower: anything that may be our existing automation, or anything new we want to create in our existing IT, and then connect it to our Kubernetes clusters and so forth. So in here, the only thing we have is a simple playbook that sends a notification to Slack, but the possibilities are endless. This could be anything and everything: provisioning your VMs, configuring them, and if you've got a set of applications already running in your non-cloud, even off-cluster, setup, even your application deployment, everything can be automated with Ansible this way. And because of that connection, that's where you get the real power. Throughout the years we've always talked about Ansible as the automation glue, because it's so flexible and works with just about anything. And yet again, this connection, as you've seen today, proves to be a great solution for bringing the traditional, off-cluster world and your cloud-native, on-cluster infrastructure together in a simple, effective way. So in effect you have a single workflow to manage your complex hybrid cloud environment without having to choose between one or the other. You have a single workflow that achieves everything and lets us wrap it all into one automation. One simple question. Super cool.
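One way to drive the Tower API that Fass is talking about, from Ansible itself, is a small playbook that posts to the job template launch endpoint. The host, token source and template ID below are placeholders:

```yaml
---
- name: Launch a Tower job template over the REST API
  hosts: localhost
  gather_facts: false
  vars:
    tower_host: https://tower.example.com            # placeholder host
    tower_token: "{{ lookup('env', 'TOWER_TOKEN') }}" # OAuth token from env
    template_id: 42                                   # placeholder template ID
  tasks:
    - name: POST to the job template launch endpoint
      ansible.builtin.uri:
        url: "{{ tower_host }}/api/v2/job_templates/{{ template_id }}/launch/"
        method: POST
        headers:
          Authorization: "Bearer {{ tower_token }}"
        status_code: 201          # Tower answers 201 Created on launch
      register: launch

    - name: Show the ID of the job that was started
      ansible.builtin.debug:
        var: launch.json.id
```

This is the same call the AAP Resource Operator makes on ACM's behalf; anything that can hit that endpoint, a pipeline, a webhook, another playbook, can trigger the same automation.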
So actually, you just created one YAML file, because DevOps is now YAML files, and that linked ACM to a playbook in Tower. You didn't do anything else. You had the operator running, but that was all. Yeah, the operator connects Tower with ACM. Okay, that's easy. Yeah, it is. As Fass said, the cool thing is, if you already have Tower, you can reuse it. The only thing you have to take into account, and this comes from the Kubernetes world, is that there are reconciliation loops that can run at any time. And when that happens, the playbook will run again; you saw that the notification arrived again. So you have to take into account that the playbooks you use alongside ACM need to be idempotent. It is important. Otherwise you are probably going to break something sooner or later. So it's important. Oh yeah, that's important. So we said the magic words: idempotent, hybrid, DevOps. Looks like a good episode today. Tero, what do you think about it? Yeah, one question, but I don't have time to ask it and it's too hard, so we won't ask it today. Maybe in a next episode on advanced Ansible automation with ACM. Yeah, I think we've got to keep it for next time, because here it's 11 o'clock, your time it's 12, so it's your lunchtime. So let's start the closing. Andres, if you stop sharing the screen, we can start the closing. I'm trying, but it looks like it's not possible. No, it's not working at all, I don't know why. Yeah, okay, in the meanwhile, I would like to thank Fass and Andres for preparing this super cool live demo. We have seen today that with ACM and Ansible on top of OpenShift, we can also connect to tools like Tower, bringing automation to non-container workloads as well as container workloads with Kubernetes. So this is really cool. Thank you for joining us today, Fass and Andres. Our next episode will be on March 31.
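Andres's warning about reconciliation loops re-running hooks can be illustrated with a minimal, assumed example: the commented-out task would append a line on every re-run, while the `lineinfile` task converges to the same state no matter how many times ACM triggers it:

```yaml
---
- name: Idempotency matters when ACM may re-run a hook
  hosts: localhost
  gather_facts: false
  tasks:
    # NOT idempotent: every reconciliation would append another line
    # - ansible.builtin.shell: echo "apache deployed" >> /tmp/deploy.log

    # Idempotent: running this twice leaves the file unchanged
    - name: Record the deployment exactly once
      ansible.builtin.lineinfile:
        path: /tmp/deploy.log
        line: "apache deployed"
        create: true
```

A notification playbook like the demo's is mostly harmless to re-run, but anything that mutates state, creates tickets, or provisions resources has to be written this way before it is wired into an ACM hook.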
We're gonna have a topic about the inner and outer loop with Java, with other super cool guests. Today on the OpenShift.tv schedule we have Ask an OpenShift Admin, and then Scalable Multiplayer Game Design with OpenShift. So take a look at the calendar on OpenShift.tv. I would really like to thank you, Andres and Fass, for joining, and Tero, thank you for another cool episode here. I just need to say hello to my friends Bidelli who are following us in the stream. And yes, see you on Wednesday the 31st. Thank you. Thank you. Bye bye. See you. Bye.