Good morning, good afternoon everyone. I'm Alessandro. Thank you so much for being here for this session, both live in the stream and wherever you watch the recording. I'm so excited to be here, and I would really love to thank all of you for the trust you're giving me today; we'll see in a moment why. Many of you have probably seen my face here and there, on LinkedIn or GitHub or wherever. I'm working as a Senior Specialist Solution Architect at Red Hat, and I've always worked in the open source community, both as a contributor on my own projects and of course with these technologies: I started to look at Kubernetes, OpenShift, and everything related to them. And one important thing today, and this is the reason for the trust, is that it's my first time here at DevConf and my first time ever presenting at a conference. So don't be too tough.

Today's session is mostly oriented towards the Edge and how to automate it using Event-Driven Ansible. I prepared a quick agenda to guide you through the steps we're going to take today. We'll start with an introduction to the Edge: what is Edge computing? We're probably all quite knowledgeable about this, but it's always good to have a quick refresher, together with the most common use cases we can automate at the Edge and the benefits of automating it. Then we'll focus a bit more on events: what does it actually mean to automate with events, and what is the idea behind this different kind of automation? And I prepared a very small demo covering some of the use cases we'll touch on during the presentation.
And of course, at the end of this session there will be a Q&A, so feel free to ask questions, also during the session if you want; that's perfectly fine.

So let's start from the beginning. We hear a lot about Edge and Edge computing, and it's really a different way of naming something we were already used to dealing with in the past. When we talked about IoT devices and disconnected environments, we were already talking about the Edge, but now it's becoming a very hot topic. That's because we now have the chance to completely decentralize our computing: our different use cases can be really self-consistent and self-feeding, and they basically don't require much presence, much power, connectivity, or anything else tied to a data center; everything can be broken into very little pieces. One of the biggest benefits is that we can really start decoupling our workloads, applications, computation, and power across sites that were never covered before, or were difficult to cover, because of restricted protocols or restricted access, maybe because it's a disconnected site or a very isolated environment. And one of the biggest things when talking about the Edge and automating it is response time. Compared to other use cases, like normal automation such as patching a system, configuring a web server, or configuring our infrastructure, we potentially need to react much faster to anything happening at our site. And we don't always have control over it, because maybe we cannot access it at that point in time, it is completely disconnected, or we have some limitation of some kind. So why is it a good idea to start automating the Edge?
One of the biggest and most common use cases at the Edge is of course configuration management. We want to streamline the configuration of our devices, sensors, and small computing units that we may have out there for collecting data, or for interacting with local components that cannot be automated from a data center. Or we just want a consistent configuration across multiple kinds of devices; each sensor may require a different configuration, so we need to be able to control everything from one place.

Then there's security: at our sites we may want to enforce security policies, say, restricting remote access, or anything that makes our devices and our environment compliant with our regulations, especially if we're talking about a government agency or a government entity.

And one of the use cases that will also be covered in the demo is monitoring. Why is automating the Edge good for monitoring and remediation? Because once we take control of our devices, with our configuration in place, we're able to produce metrics based on any activity. If it's a sensor, it can be the data from the sensor; if it's a computing unit, it can be, for instance, how the model we're running on that particular device is performing. If we have a potential leak, a power outage, or a network coverage problem, we need to take care of it very quickly, and we can implement this in a sort of self-feeding fashion: as we'll see in the demo, we can also host the automation at the same site without even worrying about it. It's there, it's running, it's checking that everything works, and we can configure and extend this kind of automation.
Another use case is rapid deployment. Whether it's a containerized application, a package that needs to be released, or a firmware or software upgrade we need to roll out, we can do that at scale (we'll talk about scaling in a second) and potentially with very limited downtime, because we can integrate automation into our release pipeline. We can, for instance, take decisions based on how the deployment is going, and decide how to extend the way our deployment happens.

And when we talk about Edge devices, we are not talking about a handful of devices or a handful of sensors; we're sometimes talking about very big numbers. We have customers getting into Edge computing at the level of tens of thousands, hundreds of thousands of nodes, sensors, or devices. So we need to be able to scale in a very fast and streamlined way. And the last thing, which we already mentioned: we need to be able to react. We cannot afford a component starting to fail, or a deployment failing on one of the data collection units, without having remediation in place. If a device is failing, we need a way to take it offline, or to remediate what's happening, so we can make use of it again. All these things pull together, but we potentially need something different, because normal, standard automation may not be the best fit for these kinds of Edge use cases: it addresses very predefined scenarios, and we don't get a dynamic adaptation of the automation.
We issue some commands, we write our playbooks if we're talking about Ansible (because we will talk about Ansible), and we put a configuration in place. This configuration will be deployed, but what if we need something different? What if we need to adapt the automation to our scenario? This is where events come into the game.

We've all heard about events; events are potentially everywhere. When you put a file in an S3 bucket, for instance, it can generate an event. When you put a message on a queue, that's potentially another event. When a sensor starts to ingest data into our systems or devices, it is actually generating an event. It can be analyzed, it can be processed, and we can do something based on that analysis. Another use case we often see with customers is the integration with tickets. Say we create a ticket for deploying a node in a remote location that is not reachable from our data center for whatever reason. We can put in place some event-based automation that reacts to what's happening: when a request or a ticket is created, we can process it, have the automation react to it, and potentially close it, respond to it, or escalate it based on the automation. And another important aspect, coming back to monitoring: if we have a platform in place, think of OpenShift or any other platform in our environment, we need to react to what's happening on the platform. If the platform emits an alert, we need to be able to identify what the alert is about, find a remediation, and apply it.
So we need a bit of a context switch, because we're used to traditional automation: we issue commands against a defined (or dynamically discovered) set of machines, and everything happens synchronously. When working with events and dynamic data, we need to be able to react to each and every event happening on our platform. It's not just "I need to configure this, I run my playbook, I wait for the playbook to end, because I have this task and the playbook does this." With events it's quite different, because maybe we have tens, hundreds, thousands of events and alerts happening on our platform, and we need to be able to manage all of them as soon as possible, and just in time. So we need a faster reaction time here too. And this is how things change: it's not about issuing a command; it's the event driving the automation. You have different sources emitting different kinds of events all day long, all night long, forever, and you need to be able to react each and every single time.

Basically, if you think about events, there are three big questions we need to ask. The first one: where is the event coming from? What is the source of our event? Is it a Kafka topic? Is it a webhook? Is it an MQTT listener? Then we need to process it: okay, we received the event, it's probably a JSON message or some other format, but we need to do something with it, so we need to define how we're going to treat it. And finally, what is the outcome of this processing? That's the work: what do we need to do based on it? So we need a source.
We need something that processes the information, and then something that actually activates the automation based on that information. And this is why we're talking about Ansible, because Ansible is one of the best fits for this kind of use case when it comes to event-driven automation. First of all, you don't need an agent; we all know Ansible is agentless. The playbooks are very easy to read, and we have reusable content that we can implement and distribute. Another advantage is that Ansible is community-driven and partner-driven, so both the community and partners contribute to the code base, covering different devices, different protocols, different connection methods. This is crucial, especially when we're talking about a very heterogeneous set of devices, as can happen at the Edge. And the most important thing is the last one: we have dedicated bits for event-driven automation. We have a collection, which is also integrated in the Ansible Automation Platform controller, that is responsible for managing the event-driven part. It implements, as we'll see in a second, new resources called rulebooks, where we define exactly what we said before: the where, the how, and of course the what, which will still be an action, a command, a playbook, or whatever.

So let's see how this translates. In Event-Driven Ansible we have the sources, which can be, as I was mentioning, a Kafka topic; I recently created one for our demo, which is an MQTT connector. The community is working on all kinds of connectors, and there's one for Kubernetes, for interacting with a Kubernetes cluster. And this is why I said it's also easy to implement.
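To make the where/how/what split concrete, a minimal rulebook might look roughly like this. This is a sketch, not the demo code: the payload field, condition, and playbook path are hypothetical, while `ansible.eda.webhook` is one of the stock source plugins shipped with Event-Driven Ansible.

```yaml
# rulebook.yml - the "where" (sources), "how" (rules), and "what" (actions)
- name: React to device events
  hosts: all
  sources:
    # WHERE: listen for events pushed over HTTP
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    # HOW: a condition evaluated against every incoming event
    - name: Remediate a failing device
      condition: event.payload.status == "failed"
      # WHAT: the action to trigger when the condition matches
      action:
        run_playbook:
          name: playbooks/remediate_device.yml
```

A rulebook like this would be run with `ansible-rulebook`, which keeps the sources alive and evaluates every event against the rules.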
It's not rocket science: I'm not a programmer, and I was able to create the Python script for the MQTT source from scratch, based on what was already available. And this is quite crucial, because this possibility of extending the use cases is potentially infinite; basically only our imagination is the limit, because as soon as we're able to create something that interacts with our source of events, we're good to go.

Then we have the rules. Once we have identified the source and received an event from it, we need to do something with it. We'll see a small snippet from the demo later, but it's basically a sort of event-processing queue where you put conditions on the event, possibly multiple conditions or combinations of conditions, and you can label your input or manipulate the event itself to make it understandable to the automation. So we can further process our events even before using them in the automation. And the last part is the actions: an action is basically a playbook or a module that can be run, re-labeling our input, saving the event somewhere. All of these combined give us a real way of managing events and driving our automation with them.

In a few seconds I'll show you a very small demo that I created. The use case is kind of funny, because the next slide is a bit weird, but this is exactly the idea, and it happened to me last year. We always think about industrial use cases and medical use cases, which are of course a big thing, but what about the little things, like the fridge in our house? We all have a fridge. And with a fridge, at least in Italy, when it starts to get hotter and hotter, you can feel that the fridge is not working in very good condition.
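An Event-Driven Ansible source plugin really is just a Python module exposing an async `main(queue, args)` entry point that the rule engine calls. The sketch below shows the shape of an MQTT source like the one described; the actual broker wiring (e.g. via paho-mqtt) is left as a comment, since the exact client library and parameters here are assumptions, not the demo's code.

```python
import asyncio
import json


def parse_payload(raw: bytes) -> dict:
    """Decode a raw MQTT payload into the event dict handed to the rule engine."""
    return json.loads(raw.decode("utf-8"))


async def main(queue: asyncio.Queue, args: dict):
    """Entry point that ansible-rulebook invokes for a source plugin.

    `args` carries the parameters declared under the source in the
    rulebook (here: host, port, topic).
    """
    host = args.get("host", "localhost")
    port = int(args.get("port", 1883))
    topic = args["topic"]

    # A real plugin would connect an MQTT client (e.g. paho-mqtt) to
    # host:port, subscribe to `topic`, and for every received message do:
    #     await queue.put(parse_payload(message.payload))
    # Putting the dict on `queue` is what delivers it to the rules.
```

Everything placed on the queue becomes an `event.*` tree that rule conditions can match against.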
The groceries are not doing so well; things start to get lukewarm. And it's even worse when you're out of the house and you cannot control it. So what's the idea? The idea was: okay, let's think about a use case with different sensors. A couple of sensors in the room, on the floor and near the ceiling, one outside the house, and then one sensor in the fridge. Of course the fridge should be, let's say, a somewhat intelligent fridge, able to interact with an external device, whether via wireless, Bluetooth, or whatever; you just need something that can talk to that device. And what is the action in this case? Why can't we adapt the working temperature of our fridge based on the readings from the outside and inside sensors?

Now, I couldn't implement this for real, because I can't build the hardware here, so I decided to create a version of this use case in a containerized way. I created a Quarkus application for all the sensors and for the fridge itself, and we'll see it in a second. The digital version looks like this: everything is based on Red Hat Device Edge. We have a hypothetical device running in our house that hosts Red Hat Device Edge. Coming back to the previous session, if you were here, we were talking about Image Builder, how to create images, ISOs, and so on; this is exactly the same scenario. So I created my Red Hat Device Edge installation, I created the Quarkus application for all the sensors, and the Quarkus application has a small configuration that is, let's say, the target of our automation.
We're going to change that configuration through the Quarkus application and see how it adapts to what's happening. I also deployed a very small instance of an MQTT broker, a containerized Mosquitto. And on the other side we have two other components: an aggregator and an actuator. The aggregator just takes care of getting all the information from the sensors, aggregating it, and publishing it. The actuator, and we'll see the connection in a second, is the actual generator of our events: it monitors the data, and if something changes, it sends an event that will then be processed by Ansible, and Ansible will take care of adjusting the configuration.

The good thing is that the actuator, right now, is just a very basic Quarkus application; as I said, I'm not a developer, so it's nothing complex. But the idea is that this actuator can potentially be anything: an application implementing an AI model or a machine-learning model, reacting in a smarter way and adapting to every single change. The cool thing is that, even though it's a very simple use case, not rocket science, it really covers most of the topics we touched on in the last minutes. The Event-Driven Ansible part is containerized as well: we have an MQTT connector that takes the information from a dedicated topic on Mosquitto and takes decisions based on the event that is generated.

Now that we've seen the picture, let's jump for a second to the implementation. I won't go into the details of the actual Quarkus implementation; the repository will be shared at the end of the slides, there will be a QR code if you're curious and want to go through it.
I tried to make it as small as possible, so I hope you'll be able to recreate it as needed. But I just wanted to show you: here is our rulebook. Getting back to the discussion we had before, the rulebook looks very familiar, because it looks like a playbook, but it's named differently and, from the implementation perspective, it also has different keys, as you can see here: we have the sources and the rules. The sources are the actual implementation of the plugin that connects to a remote source, a local source, or whatever, and just takes the information. In my case it's a very simple one; it just takes a couple of parameters: the host of the MQTT broker, the port, and the topic. Of course, this can be extended at will. What's important here are the rules. At this point we have the event. Every single event has the `event` prefix, and depending on the implementation it can have additional keys needed to walk through the tree of the event itself. Usually, if we're talking about JSON messages or something browsable, accessing it is just like walking a tree. And the important part, the what, is the actual action we're going to take. As you can see here, it's just about running a playbook: a very simple playbook that analyzes the event and, based on the outcome, adapts our temperature. So let's look at our basics: here is our sensor data.
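Reconstructed from this description, the demo's rulebook might look roughly as follows. The source plugin name, topic, condition field, and playbook path are hypothetical placeholders, not the exact repository code:

```yaml
- name: React to fridge sensor events
  hosts: localhost
  sources:
    # Hypothetical custom MQTT source plugin, taking broker host, port, topic
    - my_namespace.eda.mqtt_listener:
        host: mosquitto
        port: 1883
        topic: sensors/events
  rules:
    - name: Adjust fridge temperature on anomaly
      # `event.` is the prefix for every incoming event; the rest of
      # the path depends on the JSON structure the actuator publishes
      condition: event.anomaly == true
      action:
        run_playbook:
          name: playbooks/adjust_temperature.yml
```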
As I was mentioning, we have three sensors, in the room and outside, and one sensor in the fridge. I added humidity, temperature, and so on, just to make it a bit more real. From the fridge's perspective, we also have the power consumption, whether eco mode is on or off, and the speed of the fan working inside the fridge itself. And we have a fixed temperature here: this is the working, operating temperature of the fridge, which is what the sensor reads from the fridge itself.

Now let's see what we can do here. It's very hot outside, so we're going to raise the temperature a bit. It's a very simple script that just changes the configuration of the different sensors, to simulate a change in temperature. Let's see what's going on. If we wait here, we'll see a change: the temperature just jumped by about five degrees, a very typical summer situation where you jump from one temperature to another. Not much is happening yet at this point, because the data is still being transferred to the aggregator. The aggregator writes the data here, and our actuator gathers it and performs a sort of anomaly detection, let's say; a very rudimentary anomaly detection based on some thresholds and on previous iterations. And on this side we can see what's going on in the Event-Driven Ansible container. So, something happened: the actuator noticed that the temperature changed; we had a slight temperature change. Coming back here and looking at the log, we just received this event, with this structure.
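The actuator's threshold check could be sketched like this. The threshold value, field names, and function name are all hypothetical, since the talk only says it compares new readings against previous iterations:

```python
def detect_anomaly(previous: float, current: float, threshold: float = 2.0) -> dict:
    """Compare the latest room-temperature reading against the previous one
    and build an event payload when the jump exceeds the threshold."""
    delta = current - previous
    if abs(delta) > threshold:
        # This dict is what would be published on the MQTT topic
        # that Event-Driven Ansible is listening to.
        return {"anomaly": True, "delta": round(delta, 2), "temperature": current}
    return {"anomaly": False}


# A five-degree summer jump, as in the demo, trips the detector:
event = detect_anomaly(previous=24.0, current=29.0)
```

A smarter actuator, as suggested earlier, could replace this comparison with a trained model while emitting the same event shape.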
It's basically a JSON message saying, okay, something is not working. It goes through the processing part, finds that there is an anomaly, and carries some information about the temperature delta. Again, it's a very rudimentary detection, nothing precise, but it still works. The action then takes this information and changes the configuration: it changed the configuration of the fridge, which was our target. And that's basically what's going on here. As you can see, it takes a very small amount of time to get good automation in place. Of course, there's some preparation behind it, but even looking at the code, it doesn't require much effort to start working with events. It can get as complicated as we want, but as you can see here, again, I'm not a programmer, so I kept it very basic. I really hope this was a good use case. And yes, that was the last part of the demo; I really hope you enjoyed it. If you have any questions or discussion, or you want to raise something, feel free; I'll be more than happy to go a bit deeper. Yeah, let's go.

You mentioned that we could write rules for the events, and you showed that we could write rules. Now let's assume we have a huge scenario where we really want to automate a lot of things and write, let's say, thousands of rules to manage it. At the end of the day my code would become messy, right? If I want to change one of my rules, managing all those rules would be tricky. How do you manage all these rules together in one place and make it easier for a developer to come in and change them?

Okay, let me just recap it.
Yeah, so if we have multiple rules, how can we handle the granularity of managing all these different kinds of rules, especially when multiple developers are working on the same automation? Well, this is how you would treat any complex problem: you're not going to work on one big bunch of rules; you can, for instance, split the rules based on what is needed by what. Take the temperature example: if you want to manage the temperature, the humidity, and a lot of additional data, you can split those into smaller sub-problems, like having a dedicated rulebook that takes care of one particular use case. You can have a rulebook for the temperature and a different rulebook for the humidity; if you're working on the temperature and I'm working on the humidity, we don't interfere with each other, and we can still work in a, let me say, sustainable way. It's a very good question because, besides the number of rules, think also of the sources: here we worked with a single source, but we can have multiple ones, each producing different events, and even the same source can produce many types of events. So what I would suggest is to split things into more manageable problems; I think that's a good way to start. Please.

So, the question, if I got it correctly, is how we actually gather the events: do we have a listener, or are events just pushed from a deployed client? Basically, the sources are listeners, or can be listeners. They are always up and running, waiting for someone to push. Think of a webhook: you have a sort of web server running continuously, and someone calls it. So you have a client.
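Sketching the suggested split: a rulebook file is a list of rulesets, so the two concerns could live as separate rulesets (or separate files) owned by different people. All names, ports, and fields below are illustrative:

```yaml
# Each ruleset - or each file - can be owned by a different developer.
- name: Temperature rules
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Handle a temperature anomaly
      condition: event.payload.kind == "temperature"
      action:
        run_playbook:
          name: playbooks/adjust_temperature.yml

- name: Humidity rules
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5001
  rules:
    - name: Handle a humidity anomaly
      condition: event.payload.kind == "humidity"
      action:
        run_playbook:
          name: playbooks/adjust_humidity.yml
```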
In the case of a Kafka event, for instance, you have a client on the event-driven side too, because something is connected to the Kafka topic, just waiting for something to happen, and there's another client on the other side actually producing the event. So with asynchronous messaging you still have two different clients, but one is not waiting synchronously for something to happen. And of course you can extend this kind of thing, because, as I mentioned, it's just a Python plugin that you can create and reuse for everything else. Thank you.

Do we have any other questions? I think we're fine. Good. Thank you so much. This is the QR code for the repo, if you're interested. And I have just one ask for you: it's my first time here, so please make good use of the feedback form. If there's something you didn't like, especially if you didn't like it, please let me know. Thank you so much. Thank you.