Good morning, everyone. Welcome to this talk. First, a quick introduction: I'm Ricardo Carrillo Cruz, but everybody just calls me Ricky, so you can call me Ricky. I'm a principal software engineer at Red Hat, where I've been for six and a half years. I'm a member of the Ansible product delivery team; we're responsible for building and delivering the various components of the Ansible Automation Platform, we're involved in some of our cloud and SaaS offerings, and we're also responsible for the installers and the operators for Ansible. Prior to that, at my previous company, HP, I was working heavily on OpenStack; I was part of the team both upstream and downstream. Then I joined Red Hat engineering and did networking, CI/CD, cloud stuff, then moved on to OpenShift engineering, then moved on to do some telco stuff, and now I'm back at my mother ship, Ansible, and I'm very happy about it. So I've shuffled around quite a bit.

So first, a super brief introduction to Kubernetes. I assume you know what it is, but this is the mandatory slide. What's Kubernetes? It's an open source platform focused on orchestrating containerized applications. You have containerized applications and you declaratively write a manifest: I want to run this application with this many replicas, and I want to expose this port. Kubernetes does its magic and makes sure it all happens. It was developed by Google and is now a CNCF project, so it has a lot of contributors, Red Hat included. We have a lot of people working on it, especially because we have the OpenShift product, which is a Kubernetes distribution. There's a wide ecosystem of tools and projects around it: if you go through the CNCF landscape, you'll see a massive picture of observability, monitoring, provisioning, all sorts of different use cases around Kubernetes. So it's a big thing. You can either run it on-prem or have it hosted in the cloud. You have Kubernetes in AWS, Azure, Google, Linode; we have Red Hat OpenShift in all those clouds; and you can also run it on-prem. At its core, Kubernetes allows you to self-heal your applications, making sure they're always running: if a workload dies for whatever reason, it will bring it back up. It allows you to do scaling, including autoscaling. And it also allows you to do rolling updates: if you update your application, it will do a zero-downtime update, and it can also do a rollback.
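To make the declarative idea concrete, this is roughly the kind of manifest I mean; a minimal sketch, where the name, image and port are placeholders rather than anything from the slides:

```yaml
# Minimal Deployment manifest: "run this application with this many
# replicas and expose this port", and Kubernetes makes it so.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app                 # placeholder name
spec:
  replicas: 3                     # how many copies I want running
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: registry.example.com/hello-app:1.0   # placeholder image
          ports:
            - containerPort: 8080                     # the port to expose
```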
So when you adopt Kubernetes, there's a known pattern that architects typically use: they talk about things in day phases. In Kubernetes day zero, you gather the requirements for the workloads you're going to run in Kubernetes. How are they going to run? Where is your data going to be? Is it going to be on-prem? Is it going to be in the cloud? Will you have hybrid cloud Kubernetes, on-prem and in the cloud? How are you going to connect that to your environment? In this phase you do all that requirements gathering, the design, and the architecture. Once you have that, you move on to day one, which is installing the cluster. You probably have a bunch of playbooks, or Terraform, or CloudFormation, or ARM templates, whatever you use for installing your cluster; you install it and then you configure it. You put in the initial users, you probably hook it up to your infrastructure-as-code repositories with your CI/CD, and that's the initial bootstrapping of the environment, so to speak.

And then in day two, that's where you deploy the workloads and do the ongoing maintenance of the whole set of applications and the underlying infrastructure. This is where you take care of observability: you monitor your applications and your infrastructure. Is it sane? Is it performing well or not? That kind of thing. Then you take care of scaling: maybe your workloads grow over time and you need to scale the nodes, so how are you going to do that, right? Security: RBAC, new users coming in, users going out, maybe contractors. How are you going to handle certificate management, secret rotation, that kind of thing? Storage, same thing: you will probably run out of space at some point. If you're in the cloud, maybe it's taken care of by the underlying cloud provider, but maybe you're on-prem, so how can you expand the storage so you can keep running your business applications? Networking: imagine you have an initial CIDR for your cluster and you run out of IPs. How can you expand that? Or maybe you need to interconnect with some other internal site; that's another networking thing you handle. Update management, both for the applications and for the cluster: Kubernetes is inevitably a fast-paced project, new versions come out, and you may want to keep up and update. Disaster recovery: backup and restore of etcd, volumes, the databases of the applications. Do you have a strategy for that? How can you back up and restore in case of a total disaster? Can you bring up a mirror cluster environment, or maybe a pilot light environment that you can quickly bring up when there's a recovery? That's the kind of thing you do.

So, for Kubernetes day two, you have tools and practices. Typically, for workloads within Kubernetes, you use Kubernetes operators. They're Kubernetes-native applications that contain the operational knowledge to manage the life cycle of applications. If you have a PostgreSQL in your environment, you will typically have a PostgreSQL operator that does the installation but also takes care of backups, restores, upgrades, that kind of thing. You probably want to use GitOps to manage and deploy applications and infrastructure: keeping the configuration for both your underlying infrastructure and your applications declaratively in Git, and having a system apply it automatically, so no human intervention, hopefully. And for the things that don't fall into the two previous buckets, you probably have runbooks that define day-2 processes. Best case scenario, you have Ansible or Bash scripts to automate them. OK scenario, you have a doc or some wiki page somewhere. Worst case scenario, you hit an issue and you're just trying to figure out how to fix it.
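To make the operator pattern concrete, here's a hedged sketch of the kind of custom resource a PostgreSQL operator might watch; the kind and the fields are invented for illustration, they don't come from any specific operator:

```yaml
# Hypothetical custom resource: an operator watches for objects like this
# and reconciles the cluster to match the spec, including day-2 chores.
apiVersion: example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
  namespace: shop
spec:
  replicas: 3
  storage: 50Gi
  backups:
    schedule: "0 2 * * *"   # the operator, not a human, runs the backups
```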
So now I'm going to talk about Ansible. I assume you probably know Ansible, but I don't like to presume things; it will be quick. Ansible is an IT automation tool: it allows you to automate tasks in your infrastructure. It's open source, that's what we do at Red Hat; it came in via an acquisition. It's very easy to learn and use: it uses YAML, the learning curve is really gentle, and you can get going really quickly and automate things. It's agentless, meaning that for the things you want to automate, be they servers, network devices, firewalls, cloud endpoints, you don't have to install anything on the target. You run your automation from a bastion or control machine and perform the automation against those targets. It's highly extensible with plugins: everything is a plugin in Ansible, so it's a very extensible system and you can swap behavior in and out by changing plugins.

Use cases. Configuration management: creating users, installing packages, systemd services, firewall rules, that kind of thing on servers. Infrastructure provisioning: we have modules for provisioning pretty much everything, AWS, Google GKE, Azure, VMware, OpenStack, and there's also stuff for bare-metal provisioning. Application deployment: you can use Ansible to deploy containerized applications or legacy RPM or Debian applications. Orchestration, this is my favorite: since Ansible is so easy to learn and has such wide integration with pretty much every system you can find in an IT environment, you can use it as glue, as a universal automation language to orchestrate all of that. As we said, if you have public cloud stuff and on-prem, you can use it to interconnect them. Networking, this is a big one: Ansible has a wide variety of modules for managing networking devices. As a matter of fact, we have a content team just for that. So the Ciscos, the Aristas, the Junipers, you can manage all of that. Same for security: we have modules for Palo Alto, Check Point, SIEM systems, IDSes, that kind of thing. We're also getting a lot into edge; there have been quite a few talks this week about edge automation, so we're getting into that area a lot.

So how does it really work? You have a control machine, a bastion, which is where you install Ansible and where you run it from. Then you have your target nodes: the servers you want to automate, or maybe your network devices, your security appliances, your cloud endpoints. The inventory is where you keep the authentication and connection details on how to connect to those target nodes. And the playbook is the artifact in which you define the things you want to automate.

This is a typical static inventory file, very easy to read. Here we have a databases group with one IP and a webservers group with two IPs, meaning that if I run a playbook against the databases group, it will run the tasks against that IP, and if I run automation against webservers, it will run the tasks against those two IPs. And this is a playbook example. Here we have the hosts line, so we're targeting the webservers group from the previous inventory file. We're saying, hey, we want to use root to get into those machines. And then the tasks, which contain the automation itself; they run sequentially. In this case it's very simple: it will install the httpd package with yum to the latest version, and then it will template the config file with Jinja.
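I can't reproduce the slide exactly, but a minimal sketch of that inventory and playbook would look something like this. The slide used the classic INI inventory format; this is the equivalent YAML form, and the IPs and file paths are placeholders:

```yaml
# inventory.yml (sketch): a databases group with one host,
# a webservers group with two
all:
  children:
    databases:
      hosts:
        192.0.2.10:
    webservers:
      hosts:
        192.0.2.20:
        192.0.2.21:
```

```yaml
# playbook.yml (sketch): target the webservers group as root,
# install httpd, template the config
- hosts: webservers
  remote_user: root
  tasks:
    - name: Install the latest httpd
      ansible.builtin.yum:
        name: httpd
        state: latest

    - name: Template the config file with Jinja
      ansible.builtin.template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
```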
So that's Ansible from a generic point of view. If you drill down, there are other components, other abstractions. Modules contain the code to perform a particular task: for each task, there will be a module, typically in Python, containing the business logic for doing the thing. They're mostly written in Python, but any language can be used, because a module really just needs to read JSON and output JSON; how it does it internally is not really important. I've seen modules done in Bash, even in Golang. They're easy to develop and ship: you don't have to get a module into Ansible's system libraries, or push a PR and hope it gets merged and eventually shows up in a release. You can develop a module, put it into a library folder alongside your playbook, and Ansible will auto-detect it and use it. Which is nice, because it lets you get going; shipping modules is easy.

Ansible has built-in modules: whenever you see ansible.builtin.whatever, those are built-in modules. They're part of the ansible-core package, so you get them bundled with that RPM package. There are also Ansible content team modules: we have content teams within Ansible for networking, cloud, security, and also people doing edge stuff right now. We also have partners doing modules: vendors of all kinds develop and maintain their own. And you have community modules: we have a great and vibrant community and we get a lot of contributions there.

Once you start writing a lot of playbooks, you'll realize you can refactor and reuse that automation for other environments, not necessarily tying it to a specific host in your inventory. That's when you use roles. Roles let you encapsulate Ansible content such as tasks, Jinja templates, static files and variables to automate a particular thing. You'd typically have a role for, I don't know, NGINX, for Apache, for PostgreSQL, and that's self-contained and can be reused in whatever environment you have. Once you have a role created, the way you use it is to include or import it in a playbook as a task: you would do include_role with my_role, and it would include it and run its tasks within your play. That's the typical structure of a role: here we have a common role which contains a tasks folder with a main.yml, some Jinja templates, static files, some vars that are specific to the role, and some defaults in case you don't specify the vars on the command line or in the playbook. There are some other things, like handlers, but these are the most common ones. You also used to be able to ship plugins in roles, but with collections, which we'll see in a moment, that's no longer a thing for roles.

Plugins, as we said, extend or modify Ansible's capabilities. There's a large variety of plugin types. You have connection plugins: typically, when you run your automation against Linux, it's going to use the SSH connection plugin; but maybe you have Windows machines, and SSH is not something you usually use on Windows, so there's a WinRM connection plugin. For networking devices, there are netconf connection plugins.
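As an illustration of swapping connection plugins per target, here's a hedged sketch of a YAML inventory doing exactly that; the hosts are placeholders, and I'm picking common plugin names (ssh, winrm, and the netconf plugin from ansible.netcommon) rather than anything shown in the talk:

```yaml
all:
  children:
    linux_servers:
      hosts:
        192.0.2.10:
      vars:
        ansible_connection: ssh          # the default for Linux targets
    windows_servers:
      hosts:
        192.0.2.30:
      vars:
        ansible_connection: winrm        # Windows boxes usually speak WinRM
        ansible_port: 5986               # typical WinRM-over-HTTPS port
    routers:
      hosts:
        192.0.2.40:
      vars:
        ansible_connection: ansible.netcommon.netconf   # netconf-capable devices
```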
So plugins really allow you to extend Ansible and adapt it to the different kinds of devices and things you may use with it. They're developed in Python; for these, you do have to use Python, I don't think you can use another language.

OK, so now you have playbooks, you have plugins, you have roles. How can you package all that? Back in the day when I joined, we had ansible/ansible, and we shipped all the modules, all the plugins, everything within one repo. That became hard to maintain and hard to contribute to, because we only had so much bandwidth to review and get things going. So there was a decision to decouple: you have ansible-core, as I mentioned earlier, containing just the core of Ansible, the binaries and the built-in modules, and the rest lives in a separate artifact called a collection. Collections can contain playbooks, roles, modules and plugins, and they're typically for a domain of automation: you'll have a collection for AWS, a collection for Azure, one for Arista, and those collections contain all the Ansible things for automating that particular domain. They're distributed via Ansible Galaxy or Private Automation Hub. Ansible Galaxy is like the content store for Ansible: you can just go browse for content, download it and install it on your machine. You can also have a private Galaxy, which is the Private Automation Hub, to host that.

Kubernetes is no exception: there's a collection for Kubernetes, called kubernetes.core. It contains modules and plugins for automating Kubernetes, and OpenShift too, since OpenShift is Kubernetes. You install it via Galaxy, ansible-galaxy collection install kubernetes.core, and then you can use it.

This is a subset of the modules included in the collection. You have modules for Helm. Helm is a package manager for Kubernetes, like an RPM kind of thing; it uses something called a chart, which is like a definition for an application. Instead of using the helm CLI you can just use this module, and it will use Helm under the covers. Then we have modules for managing Kubernetes objects. The k8s module is probably the most used one, because it allows you to either put the definition of an object straight into the module as a parameter, or point it at a Kubernetes manifest you may already have, meaning you don't have to redo your Kubernetes manifests in the Ansible DSL; you can just reuse them. You can exec into pods, get information about Kubernetes objects, fetch pod logs. You also have imperative modules like rollback and scale, not only declarative ones, and some other specialized ones like the service module, which is probably not super useful because you can do the same thing with the k8s module.

Here's an example. We're using a Kubernetes manifest that lives in a YAML file, and we're telling it: hey, apply this manifest, and it will just do it. That's why I said it's nice: you don't have to use the Ansible DSL, you can just reuse your Kubernetes manifests, you don't have to change anything in your environment. And here's another example using an imperative module: we're saying, hey, the elastic deployment, which is under the namespace my-project, I want it to have three replicas, and it will scale it to that replica count.
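The slide examples were roughly along these lines; a hedged sketch, where the file path and object names are placeholders:

```yaml
- hosts: localhost
  tasks:
    # Declarative: apply an existing Kubernetes manifest as-is
    - name: Apply a manifest from a file
      kubernetes.core.k8s:
        state: present
        src: files/deployment.yaml      # placeholder path to an existing manifest

    # Imperative: scale an existing deployment
    - name: Scale the elastic deployment to 3 replicas
      kubernetes.core.k8s_scale:
        api_version: apps/v1
        kind: Deployment
        name: elastic
        namespace: my-project
        replicas: 3
```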
So, what's EDA? It's a new Ansible project that allows you to execute automation based off of events. What does that mean? Typically with Ansible, you have a playbook and you want to perform something. I don't know, maybe you want to do a backup, and there's a human running the playbook; or something happens, someone opens you a ticket, hey, I need a restore, and then you run the restore playbook. What EDA allows you to do is automate that triggering of your Ansible automation. You can think of it as the "if this, then that" of Ansible. Basically, with EDA, you observe events from your environment; when events come in, you evaluate them against rules which you define; and if there's a match, you run some action.

EDA uses an abstraction, a concept, called a rulebook. You're probably thinking, hey, that sounds like a playbook. Yeah, rulebooks are the playbooks of EDA, and "rule" probably gives you a hint of what they're all about. Rulebooks contain a set of uniquely named rule sets. Within a rule set, you have a hosts section, where you define what you want to target with your automation; the sources you want to listen to, so what kind of events you want to react to; and the rules to evaluate those events. The rules themselves contain conditions and actions. If the events match the conditions (there's an evaluation performed by the Drools engine; it's a Drools-based rules engine), then the rule's action is executed.

Rather than me trying to explain things in the abstract, let's see a rulebook. Here we have a rulebook called Hello Events. It contains a hosts section, in this case localhost, and it has a sources section, which is where you put your event source plugins. Here it's ansible.eda.range, which I'll go through in a moment. Then you have the rules, in this case one rule called Say Hello, with a condition and an action. The ansible.eda.range event source plugin is kind of a test event source plugin which basically does a Python range: it will go from zero to five and create an event for each value, so there will be an event for zero, for one, for two, for three, and so on. Then in the rule, we want to catch when event.i, the key i within the event, is one, and when there's a match, we run this playbook. It's probably not super fancy, but with the next one you'll see what we can accomplish with EDA.

In this case, we have another rulebook, Check My Web App. We're targeting all the machines in our inventory, and we're using the url_check event source plugin. What it does is poll the URLs you pass as a parameter, continuously; you can set a delay value for how often you want to do the check; and it reports back the status: 200, 400, 500, whatever. Here the rule is called Heal, and we have a condition: hey, trigger when the status is not 200 OK, and then run a playbook that goes and restarts the web app. So basically this is a rulebook for self-healing, for remediating a web app. That's the kind of thing we can do with EDA.
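A hedged reconstruction of that self-healing rulebook; the URL, delay and playbook name are placeholders, and the exact event fields url_check emits may differ from what I show here:

```yaml
- name: Check my web app
  hosts: all
  sources:
    - ansible.eda.url_check:
        urls:
          - http://example.com:8080/   # placeholder URL to poll
        delay: 10                      # seconds between checks
  rules:
    - name: Heal
      # Assumed event shape; the slide compared the HTTP status code instead
      condition: event.url_check.status == "down"
      action:
        run_playbook:
          name: restart-webapp.yml     # placeholder remediation playbook
```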
So, in the rules section, as you can see, we can have multiple rules, not just one; it's a list. Within each rule you can have multiple conditions, checking various things within the event: different fields compared against different values. Each rule can also have multiple actions: when there's a match for a given rule, you can have a list of actions, and they will be executed sequentially. Although I think they merged a patch recently so you can run actions in parallel; I need to go back and check.

A rule condition is like a Python Boolean expression. It allows you to use Boolean and comparison operators, so you can ask things like: does my event contain this key? Is this key within my event equal to this value, greater than, less than? And, or, not; the usual things you would expect in that kind of expression. A rule can contain multiple conditions, a list of them, and you can use any/all operators to say: if any of the conditions matches the event, execute the action; or require all of them to match in order to get to the action.

The actions are the final part of the loop: we have an event, we have the evaluation of the rules against the event, and if there's a match, the action. They define the execution part for when an event is matched by a rule. You get a built-in set of actions bundled with EDA. Event source plugins are separate from the EDA core, but actions are bundled within the binary that EDA uses; there are no plugins for actions right now, but I think they're open to shipping actions as separate content too, so that will probably be implemented down the line.

These are the actions you get built into EDA. You can run a playbook, as we saw in the examples. You can run a module; this is the same as when you do ansible all -m something: imagine you want a quick execution with Ansible, you don't want to create a playbook for just one task, and you want to specify the module and its parameters in a one-liner. That's what you can do with run_module. Run a job template: maybe you have AWX or automation controller somewhere containing your inventories, your job templates, all your automation; you can hit that controller or AWX and run automation there. You can set facts, just like in Ansible. Post events: maybe you want a flow of different events, like in a flowchart, so you handle one event and then feed another event in; that's what you can do with post_event. You can also retract facts, pull them out of the rulebook execution. Print events, which is just for debugging. And shutdown, which I think is for stopping the rulebook if something goes wrong or you want to restart it for whatever reason.
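To illustrate multiple conditions and multiple actions in one rule, here's a hedged sketch. I'm using the generic webhook source, so the payload fields are whatever the sender posts; the field names and playbook name are assumptions for illustration:

```yaml
- name: Illustrate multiple conditions and actions
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5001                       # arbitrary port for the example
  rules:
    - name: Remediate and notify
      condition:
        all:                             # require every condition (or use "any")
          - event.payload.status == "firing"
          - event.payload.severity == "critical"
      actions:                           # executed one after the other
        - run_playbook:
            name: remediate.yml          # placeholder playbook
        - run_module:
            name: ansible.builtin.ping   # quick one-liner-style module run
        - post_event:
            event:
              remediated: true           # chain a follow-up event
```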
So, we saw event source plugins; they contain the business logic for the sources. For url_check there's a Python thing doing that, for the range plugin there's a Python thing doing that, and the same for more interesting ones, which we'll see in the next slides. They're Python files, and they use the asyncio library, so they're asynchronous. A plugin can either poll external sources, or it can listen on a port, so an external system can send events to it and it reacts. Once it catches an event, it puts it onto a queue that the EDA runtime hands to the plugin, and from there it gets evaluated by the Drools engine and, depending on whether there's a match, the action runs.

So what does a plugin really look like internally? This is the meat of the url_check event source plugin. I'm not going to claim I'm a Python guru, but it's not hard to read: it's an infinite loop in which it gets a session with the HTTP client, then for each of the URLs you pass as a parameter it tries to hit it, gets the status, and puts it onto the queue as a dict, so it can be evaluated by the Drools engine.

So what event source plugins are out there? Right now there's one official collection that the EDA team maintains and develops, which is the ansible.eda collection. It has a bunch of different plugins; this is just a subset. You have an event source plugin for Kafka, so you can get events from a Kafka queue. Also for Alertmanager, which is a companion service for Prometheus: Prometheus collects the metrics, Alertmanager alerts when a metric goes over or below a threshold, and you can trigger automation from Alertmanager. CloudTrail, one of the AWS security services, which keeps a log of the API calls for a given account. SQS, which is a messaging system; also very handy. Azure Service Bus, same thing. And webhook: this is a good one, because it's a generic webhook event source plugin. It's a listening event source plugin: it listens on a port, and external systems can send webhooks to that port, and then you react to those events.

Alongside the ansible.eda collection, we're getting partners developing their own EDA collections. These are some that have been announced. CrowdStrike, a security vendor, is developing one for their product. Cisco, for the NX-OS device lines and for ThousandEyes, which I think is a monitoring product line. Dynatrace, an APM. IBM Instana, another APM. Turbonomic, which I think is a cost management system. Palo Alto, a security vendor. F5, a networking and security vendor. Zabbix, monitoring solutions. They're all developing their own EDA plugins, so you'll be able to integrate with them if you use them in your environment.

Besides event source plugins, you have event filters, which allow you to filter data from events. At times the event from the plugin can be really big, and there may be information you're not interested in, so filters let you remove or transform parts of the events. You can chain event filters, imagine a Linux pipe operator, and they're defined after the source definition, as part of the source object. Built in, you have a JSON event filter that allows you to include or exclude keys from the JSON; convert dashes to underscores, because maybe you need snake case; insert hosts to meta, if you want to set some hosts as part of your rulebook activation based on the event; normalize keys. And this is also something you can develop, like event source plugins, and bundle into your EDA content.
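Here's a hedged sketch of how filters chain under a source, Linux-pipe style; the Kafka parameters and the key being dropped are placeholders:

```yaml
sources:
  - ansible.eda.kafka:
      host: broker.example.com        # placeholder broker
      port: 9092
      topic: alerts
    filters:
      # Drop a noisy key we don't care about (placeholder key name)
      - ansible.eda.json_filter:
          exclude_keys:
            - big_payload_blob
      # Convert dashed keys to snake_case for easier conditions
      - ansible.eda.dashes_to_underscores: {}
```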
Speaking of EDA content, how does it get distributed? EDA content leverages the Ansible collections framework and tooling, meaning rulebooks, event source plugins and event filters all ship using the collections framework. So you'll be able to just do ansible-galaxy collection install with your collection and it will install, and you can publish into Galaxy just like any Ansible collection. And as we said, ansible.eda is the first EDA collection available; it's the one that bootstrapped the project.

Now let's dig deeper: what are the components? The main component right now for EDA is called ansible-rulebook. It's the CLI and worker component for event-driven Ansible. You can run it in standalone mode: you run ansible-rulebook, provide an inventory and your rulebook file, and it runs in the foreground. Or you can run it in worker mode: you provide an ID and a WebSocket endpoint, so it connects to a WebSocket to ask, you know, which rulebook should I be running? That's what's actually used for the EDA controller server offering. The EDA team is working on the equivalent of what we have for Ansible, where automation controller is a central console containing your automation: there's going to be an EDA server containing your rulebooks. You can log in and out, you get the log of all your events; it's a web application, so to speak. The worker mode is what gets used to connect into that system.

So, we've gone through Kubernetes, Ansible, Ansible Kubernetes content, EDA. How can we mesh EDA with Kubernetes for day-2 things? Kubernetes clusters can emit events for changes. Every single component in the Kubernetes system talks to the API server; it's the one responsible for performing changes in your cluster. So when you create a CR, a deployment, a config map, a namespace: you can create things, you can update them, the usual CRUD operations, right? Now, the cool thing about the Kubernetes API is that it allows you to do what they call a watch. A watch is an API call you can set up from a given resource version. You can basically say: hey, I want to put a watch on ConfigMap objects under this namespace, and the Kubernetes API will stream back to me any config map changes in that namespace. Whether new ones appear, config maps get updated, or they get deleted, I'm going to get those events. Given that functionality, that's what we need an event source plugin for: we need to plug into that watch capability of Kubernetes to get events from the API, so we can react to them.

And there is an event source plugin for Kubernetes. It's in sabre1041.eda, a collection written by Andrew Block, another fellow Red Hatter; I think he's around this week, I haven't met him yet, so if you're interested, try to reach him. As a matter of fact, he also wrote blog posts about the collection and how it gets used. How it works: the event source plugin uses the Kubernetes libraries. You define which resource you want to watch, say api_version v1, kind ConfigMap, namespace foo; it connects to the Kubernetes cluster, sets up a watch against that, and the plugin gets events from that watch. So that's one way of getting events from Kubernetes.
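A hedged sketch of that k8s source in a rulebook. I'm going from memory of Andrew's collection, so the parameter names and event fields may differ; treat them as assumptions:

```yaml
- name: Watch ConfigMaps
  hosts: localhost
  sources:
    - sabre1041.eda.k8s:
        api_version: v1
        kind: ConfigMap
        namespace: foo                        # placeholder namespace
  rules:
    - name: React to ConfigMap changes
      condition: event.type == "MODIFIED"     # watches emit ADDED/MODIFIED/DELETED
      action:
        run_playbook:
          name: handle-configmap-change.yml   # placeholder playbook
```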
Or you can use the generic webhook plugin, which is already bundled with ansible.eda: a generic event source plugin that receives webhook events. It listens on a port, external systems send webhooks to that port, and the plugin gets the events that way. But as you can imagine, Kubernetes doesn't just send webhooks by itself, so we need something to do that watch mechanism for whatever resources we're interested in and send those events as webhooks to our plugin. And this is what we can use: kubewatch, robusta-dev/kubewatch, a tool originally developed by Bitnami. It's a CLI tool where you create a config file and say: hey, I want you to watch deployments, config maps, nodes, whatever, and for those events, send webhooks to this endpoint. So we can have kubewatch running, hooked up to our cluster, sending webhooks to our webhook event source plugin listening on its port.

So now we're going to do a demo. I don't know if this is going to be a hard one because I don't have mirroring; I'll try my best. Can you increase the font size here? Is it good? That's better. So here, this is my laptop. I have a kubeconfig, so I have a cluster. OK, let me just run through the commands. Here I have kubewatch with a config file, kubewatch.yaml. As you can see, it's a YAML file in which you specify the webhook section: for the events from the watches I put against my cluster, where do I want to send the webhooks? I'm sending them to my machine on port 5000, where I'm going to have my event source plugin listening for events. Then under resources, I put the kinds of resources I want to watch: in this case deployments, config maps, nodes and, I think, namespaces. OK, so I run it, it's in the foreground, and it puts a watch on those resources against my cluster. Whenever something happens to those resources in my cluster, it will pick it up and send a webhook.

As an example, if I create a namespace, ricky-test, OK? And if I go back to kubewatch: there, it got it, processing that ricky-test namespace. It got the event and tried to send it to port 5000, but there's nothing listening; that's because I don't have the rulebook with the webhook event source plugin listening yet.

So let's do a cooler demo. Let's imagine we have users using our clusters, doing deployments, whatever, and there's a certain image and tag version that has a CVE, or for whatever reason we don't want any user deploying it in our cluster. We want to catch any deployment using that image, in this example in the default namespace. If it's using the bad image, we want to scale it back to zero, so it doesn't run in our cluster, and we want to notify a Slack that I have and post a message: hey, someone tried to do this, don't do it, OK? So if I go here, I'll show you the rulebook. It's using the webhook event source plugin, listening on port 5000, and we have one rule, named "scale down bad image deployment". What are the conditions? Catch events where the kind is a deployment and the reason for the event is "created"; so basically we're going to catch kubectl create deployment. Now, the thing is that the event itself doesn't give us information about the image being used, so we need to do some introspection based on the name of the deployment, and that's something we'll do as part of the playbook.
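For reference, the two pieces of the demo look roughly like this. This is a hedged reconstruction: the kubewatch handler syntax is from memory of its config format, and the event paths in the condition depend on the exact payload kubewatch sends, so treat those field names as assumptions:

```yaml
# kubewatch.yaml (sketch): where to send webhooks, and what to watch
handler:
  webhook:
    url: http://localhost:5000/    # the port our EDA webhook source listens on
resource:
  deployment: true
  configmap: true
  node: true
  namespace: true
```

```yaml
# Rulebook (sketch): react to deployment-created webhooks from kubewatch
- name: Catch bad image deployments
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: scale down bad image deployment
      condition: >-
        event.payload.eventmeta.kind == "deployment" and
        event.payload.eventmeta.reason == "created"
      action:
        run_playbook:
          name: scale-down-bad-image-deployment-and-notify.yml   # placeholder
          extra_vars:
            deployment_name: "{{ event.payload.eventmeta.name }}" # assumed field
```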
So when the condition is met, we're going to run this playbook, scale-down-bad-image-deployment-and-notify, and we pass some vars: my Slack token, the deployment name, which comes from the event so we can do the introspection within the playbook, and my Kubernetes API token, so I can do the scale-down, OK? If we look at the playbook that corresponds to this rulebook: here we have a var called bad_image, which is nginx 1.14.2. That's the bad image we want to detect; if there's a deployment using it, we want to scale it down and notify. What the playbook does, by leveraging the Kubernetes modules (this is why Ansible is so powerful: it integrates with everything and it's really great glue for your automation), is introspect the deployment and register the result into a deployment variable. Then I have a block. A block lets you group tasks under a given condition or variable expression. So, taking the deployment we got earlier, we drill down into the JSON, because that's what it gives us, the deployment object: we get into the pod template of the deployment and check whether the container is using the bad image. If it is, we run the tasks within the block. And the tasks within the block use the k8s_scale module to set the replicas to zero, and then notify Slack: hey, someone is using the bad image, I had to scale down the deployment, please don't use it.
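That playbook would look something like this. A hedged sketch: the variable names, the JSON path into the deployment, and the Slack module usage are reconstructed from my description, not copied from the demo files:

```yaml
- hosts: localhost
  vars:
    bad_image: nginx:1.14.2             # the image we want to keep out
  tasks:
    - name: Introspect the deployment that triggered the event
      kubernetes.core.k8s_info:
        api_version: apps/v1
        kind: Deployment
        name: "{{ deployment_name }}"   # passed in from the rulebook
        namespace: default
      register: deployment

    - name: Remediate if the pod template uses the bad image
      when: deployment.resources[0].spec.template.spec.containers[0].image == bad_image
      block:
        - name: Scale the deployment down to zero
          kubernetes.core.k8s_scale:
            api_version: apps/v1
            kind: Deployment
            name: "{{ deployment_name }}"
            namespace: default
            replicas: 0

        - name: Notify Slack
          community.general.slack:
            token: "{{ slack_token }}"
            msg: "Deployment {{ deployment_name }} used {{ bad_image }} and was scaled to 0. Please don't use that image."
```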
So if I run the rulebook, which is this one: now I have ansible-rulebook listening on port 5000. I have kubewatch getting events from Kubernetes, and for whatever events it's configured for, it will send them to port 5000, which is where we have EDA listening. If we go to another session, here I have a deployment manifest, OK? It creates an NGINX deployment with replicas, and it uses the bad image, nginx:1.14.2. We want to catch that. So if I apply it, as you can see, ansible-rulebook got triggered and it's running the playbook. If I switch back quickly and do an oc get deployment... oh, it was quick enough. The deployment came up with its replicas, but we caught the event and scaled it back down to zero. So we were able to remediate the thing in our cluster that we care about and don't want to happen. And if we go back here, hopefully you can see it, this is my Slack. Yeah, there it is: we got the Slack notification about what happened and that we scaled it down. So I hope you can see how we can leverage all of this: by using all the integrations we have with Ansible, we can react to events from our environment, whether at the in-cluster level or in the underlying infrastructure where our cluster runs, and we can run our remediation for it.

And that's the end of the talk; that's what I have. Any questions? Anything that is not clear? Ah yes, the Lightspeed talk yesterday; as a matter of fact, I know. Could you repeat the question? Yeah, so someone in the audience asked whether there's a way to integrate Lightspeed, which is an AI assistant for writing playbooks in VS Code that assists you in writing automation, with this. For sure, because Lightspeed has an API, so you could create an event source plugin or some automation and plug it in. As a matter of fact, I know there are some people within my team looking into integrations of Lightspeed with EDA, because if you can get suggestions about how to remediate... imagine you get an error in your cluster and you have no runbook or automation for it. If you could hit the Lightspeed API and get a suggestion for a playbook to remediate it, that becomes extremely powerful, right? So that's a great question.

Oh, there is, as a matter of fact. My team has been developing an operator for an EDA controller offering. The same way you have AWX or automation controller for Ansible (Ansible is just the CLI tool for running playbooks, but you have AWX or automation controller as a central console for running your automation), there's going to be a product, an EDA server or controller, which is the same thing for EDA. You'll have a central console where users can log in and out, and you'll be able to create and upload your rulebooks, see your activations, your event data, that kind of thing.

I also want to say it's a rather novel and young project. We would love to get contributions and people involved, especially because, as you've seen, the ansible.eda collection has just a handful of event source plugins, but the sky's the limit, right? Ideally, it would be nice to have event source plugins for integrating with whatever you have in a typical IT environment: whatever cloud, whatever SaaS you use, whatever. So if you want to contribute, by all means; the EDA team will be happy to look at your PRs and work on them with you. As a matter of fact, we've already got some contributions: there was another talk the other day, Kubealex I think is the guy, and he contributed an MQTT plugin, which is really nice. If you want to do more, we'd be happy to hear from you.

All right, we don't have any questions from Matrix, so that concludes this session. Thank you very much.