So the edge computing industry has been expanding, right? You see edge computing in telecommunications, manufacturing, retail, healthcare, transportation, the public sector and many others. So what is so powerful about edge computing? Edge computing brings memory and computational power closer to the location where the data is needed. Here is an example of a US military base. They set up edge computing in a hostile environment: a small container, a small space, a really bad network connection. They need real-time data for logistics, and it is mostly used for saving human lives. Another example of edge computing is a mobile classroom, for when a school does not have enough classrooms. It has limited connectivity and a stable power supply. The portable infrastructure is very flexible: you can move it from one location to another, and the materials and the instructor travel with the mobile classroom. The airborne office on Air Force One is another example. It depends heavily on Wi-Fi or external connectivity on the airplane. During a war situation, we may not want to reveal the physical location of the president to the enemy, and a lot of defense strategy gets worked out within that office. A lot of the time during a war, cell phone towers may be destroyed, so we need to take all of these different scenarios into account. One of the projects I worked on earlier was about commercial telematics, which also leans heavily on edge computing. The company leveraged different sensors on the truck to keep track of the telematics data: the truck load, the temperature of the truck tires, the driver's driving record, and so on. In that project we depended heavily on the telematics control unit: small microcontroller chips installed at different parts of the truck to gather the telematics data. So edge computing, in theory, places the workload closer to the device where the data was created.
Then you can take action as soon as the data is available. Going back to the telematics data as an example: the data is gathered from the device, then sent to the cloud and made available. So how do we send the data from the device to the cloud? This depends on the 5G network, which opens up computational capacity to the devices. Here is a high-level overview of an architecture using Ansible, with its different layers. At the truck, you have the edge devices, which go through the 5G network to the edge node, and the edge node communicates with the cloud, such as OpenShift or AWS. So now we know the edge benefit: it focuses on where the workload is located, shifting from a network-centric service to a workload-centric service. It is a highly distributed architecture and it reacts really fast. You can see that this architecture reduces the latency of the data. It saves bandwidth by reducing the amount of data that needs to travel back and forth between the device and the edge layer. It increases resiliency by keeping business operations running in the event of unexpected situations. It also helps with data sovereignty, by meeting compliance standards and requirements. When dealing with large data at the edge, sometimes it goes up to the cloud layer for further computation, because that requires more computational power. A lot of the time the edge node is a piece of IT equipment: it could be a small server, it could be a small device. In a vehicle, for example, we have different sensors: GPS, navigation, radio, the camera that takes pictures for processing and video analysis, and the internal CPU and GPU of the car's own systems. These are all examples of edge devices. Computational capability has increased significantly in edge devices, and some edge devices run Linux.
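The layered setup described above — edge devices on the trucks, an edge node in between, and the cloud behind it — could be sketched as an Ansible inventory like the following (a minimal illustration with hypothetical host and group names, not the presenter's actual configuration):

```yaml
# Hypothetical Ansible inventory grouping hosts by edge tier.
all:
  children:
    edge_devices:            # telematics control units on the trucks
      hosts:
        truck-tcu-01:
        truck-tcu-02:
    edge_nodes:              # small servers aggregating device data over 5G
      hosts:
        edge-node-01:
    cloud:                   # OpenShift / AWS targets for further computation
      hosts:
        cloud-gw-01:
```

Grouping hosts this way lets later playbooks target a whole tier (for example `hosts: edge_nodes`) instead of individual machines.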
So we can deploy containerized workloads to the edge devices and run the work there. Since we are dealing with different types of devices — think hundreds of millions of edge devices — it's important to enforce edge security. We want to ensure that immutable device provisioning is happening, so that all of these devices go through a uniform configuration. TPM (Trusted Platform Module) should be enabled. Data encryption is important; we don't want to lose any data during a transaction. Network encryption is also important. We want to enforce service-account access, meaning that when you access the edge device configuration, you should not be using your own account — always, always, always use the service account. That guarantees and ensures uniformity. And the keys for accessing these devices, the security keys or tokens, need to be rotated — every week or every month — to ensure security is not jeopardized. So the next question is: how do we manage all these different environments? How do we ensure that the right workload is deployed at a specific time? We leverage the Ansible architecture. By setting up Ansible Playbooks and Ansible Tower, we can ensure that deployments run on a schedule or are triggered by a specific notification, so that all of these deployments happen at the same time. Ansible also has a management layer that can talk to Amazon Web Services, Google Cloud and Compute Engine, OpenStack and Microsoft Azure. Using that Ansible management layer, we are able to manage deployment status, look at deployment failures, roll back deployments, and set up health monitoring and alert notifications based on the deployments. So as we noticed earlier, the Ansible cloud modules are heavily used in edge computing.
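The service-account and key-rotation practice described above could be automated with a playbook along these lines — a sketch, assuming SSH-based access and a hypothetical service account name `svc-edge`; the `ansible.posix.authorized_key` module does the key swap:

```yaml
# Sketch: rotate the service-account SSH key on every edge device.
# The account name, key path, and host group are illustrative assumptions.
- name: Rotate service-account access keys
  hosts: edge_devices
  become: true
  tasks:
    - name: Install the freshly generated public key for the service account
      ansible.posix.authorized_key:
        user: svc-edge          # shared service account, never a personal user
        key: "{{ lookup('file', 'keys/svc-edge-new.pub') }}"
        exclusive: true         # drop all previously authorized keys
```

With `exclusive: true`, the old key stops working the moment the new one is installed, which matches the "rotate every week or month" policy from the talk.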
They support multiple module packages: they can talk to Amazon AWS, Azure, OpenStack, Google, Docker, VMware and so on. There is a lot of rich functionality and example code that you can look at for edge computing. I pulled out a few examples here to showcase how this can be done. This is an example of Ansible certificate management. You set up a playbook with three tasks. The first task creates a challenge for a specific domain, using an account key taken from a variable; after that, the certificate request is created. In the second step, you copy the cert from one location to another when a specific condition is met. And the last task lets the challenge be validated and retrieves the certificate and the intermediate certificate. So you can set up your deployment configuration to pick up the certificate as soon as it is available. That's an example of an Ansible playbook dealing with certificates. Earlier we talked about security and ensuring that TPM is enabled; there is an Ansible playbook you can use for enabling TPM, whose first task installs the TPM package. Another example manages IAM: the first task creates a new IAM user with an API key, using the IAM module, and the second task creates an IAM role with a specific trust relationship after the user is created, also using the IAM module. You can think of all of these examples as part of the Ansible cloud modules. Ansible also manages user roles: there are two examples where you create a user and grant that user a specific role — pretty straightforward. And then Ansible deployment: a lot of the time we use one playbook to call multiple smaller playbooks. So this is an example of deployment orchestration.
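The three-task certificate flow described above could look roughly like this with the `community.crypto.acme_certificate` module — a sketch only: the domain, file paths, and ACME directory are placeholder assumptions, not the presenter's actual playbook:

```yaml
# Sketch of the three-step certificate playbook described in the talk.
# Domain (edge.example.com), paths, and the ACME directory are placeholders.
- name: Certificate management on the edge nodes
  hosts: edge_nodes
  tasks:
    - name: Create a challenge for the domain using the account key
      community.crypto.acme_certificate:
        account_key_src: "{{ acme_account_key }}"
        csr: /etc/pki/tls/edge.csr
        dest: /etc/pki/tls/edge.crt
        challenge: http-01
        acme_directory: https://acme-v02.api.letsencrypt.org/directory
      register: challenge

    - name: Copy the challenge response into place when one was issued
      ansible.builtin.copy:
        content: "{{ challenge.challenge_data['edge.example.com']['http-01'].resource_value }}"
        dest: "/var/www/html/{{ challenge.challenge_data['edge.example.com']['http-01'].resource }}"
      when: challenge is changed

    - name: Validate the challenge and retrieve the cert and intermediate cert
      community.crypto.acme_certificate:
        account_key_src: "{{ acme_account_key }}"
        csr: /etc/pki/tls/edge.csr
        dest: /etc/pki/tls/edge.crt
        fullchain_dest: /etc/pki/tls/edge-fullchain.crt
        challenge: http-01
        acme_directory: https://acme-v02.api.letsencrypt.org/directory
        data: "{{ challenge }}"
      when: challenge is changed
```

The `register`/`data` pairing is the standard two-phase pattern for this module: the first call creates the challenge, the copy publishes it, and the final call validates it and writes the certificates.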
At the orchestration layer, we can call one playbook to configure the TPM hosts, one playbook to provision and install IdM, one playbook to install the director, one playbook to install the OpenStack overcloud hosts, one playbook to install and configure Tower, and a last playbook to do the CloudForms configuration. So we have a lot of technical problems. The number of edge devices is very high — 50 to 100 billion devices in the field — and these devices come in different forms and configurations. We talked about security: the data associated with the edge devices needs to be protected. Manual deployment is not possible given the scale of these different devices, and we need a way to distribute a workload to billions of devices. You can see these technical problems being taken care of by Ansible. Looking at the earlier examples, we get uniformity, consistency, security, scale and reliability across all of these nodes. It's important to provision the nodes in a standardized way, and we can deploy the appropriate application resources based on their purpose. We can also hook Ansible up to OpenShift, so that new deployments can be used with the OpenShift orchestration layer. So this is an example of an edge deployment with Ansible. You have different edge tiers. On the left side you have the devices, usually with one server each. Then as you move right, you go through the end-user premises edge. In the center you have the provider far edge; the provider edge sits at the infrastructure layer, and the parts closer to the edge are infrastructure locations set up locally within the region. That's the provider edge. And on the far right you have the provider and enterprise core.
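The "one playbook calls many smaller playbooks" orchestration described above is typically done with `import_playbook`; a sketch, with illustrative file names standing in for the playbooks listed in the talk:

```yaml
# site.yml — orchestration sketch chaining the smaller playbooks in order.
# The individual playbook file names are illustrative assumptions.
- import_playbook: configure_tpm_hosts.yml
- import_playbook: provision_idm.yml
- import_playbook: install_director.yml
- import_playbook: install_overcloud_hosts.yml
- import_playbook: install_tower.yml
- import_playbook: configure_cloudforms.yml
```

Running `ansible-playbook site.yml` then executes the whole chain top to bottom, which is what makes scheduled or notification-triggered deployments from Tower practical at this scale.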
So this is your core data center, which could span multiple regions. As you move from left to right, the number of machines increases, and you get more computational power, more reliability and resiliency. That is the high-level edge deployment for this architecture. As you get a more reliable connection, you can do more data analysis: machine learning, different kinds of data aggregation, and so on. When you have a weak connection, on the other hand, you basically need to break your task down into smaller tasks and make them retryable. When a task fails, you want to retry it immediately, and when data is lost, you want to fetch the data again and compute it at the next availability. So you need to handle that type of computation a little differently at the edge. One example we did for one of our clients is computing IFTA — the International Fuel Tax Agreement. Each state in the United States has its own IFTA rules, so when you drive through a state, it calculates how much money you need to pay for that specific state: for gasoline, for using the highway, for the time your vehicle spends in that state for transportation. For example, we had two different destinations on the map for the IFTA calculation. We need the real-time data from the telematics devices. We reroute the GPS based on mileage and on time spent in each jurisdiction, optimizing operations so that we get the lowest cost through the state-to-state transportation. This is where we leverage edge computing with Ansible. Here is another example, where we use telematics images with Ansible.
With different truck drivers driving in the Bay Area, we want to get the accident location so that we can inform other edge devices — other drivers — where an accident may have happened. Eight minutes? Ten minutes — okay, we still have time. So based on the telematics images and edge computing, the road condition is analyzed, and when there is an accident, it triggers an alert notification. We are solving this computationally complex problem at the edge layer, and the image is transmitted over the 5G network when the 5G network is available. This helps us perform more data analysis at the cloud layer, and the images are backed up at the cloud layer, using OpenShift for example. This is the high-level flow: we use Ansible to capture the truck image, and the image is packed with the latitude, the longitude and the time frame. The image is saved at the edge node. Machine learning flags the road images that contain accident conditions, for example. When an accident is identified, it triggers an alert based on the image analysis output and sends the alert back to other edge devices within range of that latitude and longitude, which then triggers our GPS recalculation. So this is an example of Ansible-managed computation on the edge. The telematics image application is containerized with Ansible. We support GPU. The latency from the edge device to the node is less than 200 milliseconds. The machine learning program scanning the images has a latency of less than 300 milliseconds, and an upload bandwidth of five megabytes per second per telematics edge device is achievable. These are the different kinds of images you can imagine if you are using your camera to track the road condition. And then at the end —
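Deploying the containerized image-analysis workload to the edge nodes could be sketched with the `containers.podman.podman_container` module — the image name, registry, and GPU device path below are hypothetical stand-ins, since the talk does not name them:

```yaml
# Sketch: run the containerized telematics image analyzer on each edge node.
# Image name, registry, and the GPU device path are illustrative assumptions.
- name: Deploy the telematics image-analysis container
  hosts: edge_nodes
  become: true
  tasks:
    - name: Start the analyzer container, restarting it if it crashes
      containers.podman.podman_container:
        name: telematics-analyzer
        image: registry.example.com/telematics/analyzer:latest
        state: started
        restart_policy: always
        device:
          - /dev/nvidia0      # pass the GPU through for the ML scan
```

Because the module is idempotent, re-running the playbook from Tower converges every node to the same container version, which is how uniformity is kept across a fleet.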
You can think about running all of these jobs within minutes or miles, with real-time visibility into the telematics. This is really helpful, and at the end the trucking company was able to reduce operational costs and save money. So in conclusion, we were able to use edge computing with Ansible, enabling machine-driven, highly distributed handling of different transaction logs and microservices deployments. Edge computing with a 5G network offers fast and reliable solutions for different problems. Event streams can be created based on low-level data changes, and applications listen to the events and perform actions based on those changes. And Ansible is an open-source framework: it works really well with any programming language and development framework. So that's the end of the presentation. Thank you very much. So Lucio, can you see if we have any Q&A in the chat? We are working on a project to improve road safety in Colombia, so your project seems a perfect fit for the solution we are building here. What is the main problem when you put the images on the edge and you try to download them to a server — do you have latency problems in the network? Yeah, so — the stream. Yeah, there was a technical challenge we faced earlier around low latency. Initially, the company wanted to upload a video stream from the edge device to the cloud, but that depends on 5G network availability. The problem is that when the truck is driving through a state where it does not have a good network connection, the video stream cannot be uploaded. So in the end, we needed to come up with a way to break the video stream down into separate images based on sampling. Let's say you have a video stream of one hour.
We basically break it down so that we take a sample every five minutes, use that as a starting point, and then measure the bandwidth to see whether it is a feasible solution, because edge computing depends heavily, heavily on network connectivity. You will need to do some testing and adjust based on your experiments. If you are driving in the city, that's probably okay: you can do a video stream and upload the whole stream. But if you are driving outside the city, then you do video sampling — and you can even use machine learning to tune the sampling based on production data, using a feedback loop. That's what we did, and it is still a challenge. Also, retry logic needs to be implemented, because the network is sometimes unavailable one minute but may come back the next. So we keep retrying — but you cannot retry indefinitely, so we put in a circuit breaker, for example: the first time you retry, you wait five seconds; the second time, you wait 15 seconds; and we gradually increase from there, so that you don't effectively mount a denial-of-service attack that takes your own server down. There are a lot of different configurations we need to take into the picture. But yeah, this was a challenging project and there's a lot to consider. Would you like to say something? Yeah, thank you.
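The bounded-retry pattern described in that answer could be sketched as an Ansible task; note that Ansible's built-in `retries`/`delay` uses a fixed delay, so the increasing 5s/15s backoff from the talk would need custom logic on top. The upload script path below is a hypothetical placeholder:

```yaml
# Sketch: retry a flaky upload a bounded number of times.
# /usr/local/bin/upload_samples.sh is a hypothetical placeholder script.
- name: Upload sampled images with bounded retries
  hosts: edge_nodes
  tasks:
    - name: Try the upload, re-attempting while the network is unavailable
      ansible.builtin.command: /usr/local/bin/upload_samples.sh
      register: upload
      retries: 3             # cap the attempts — never retry indefinitely
      delay: 15              # seconds between attempts (fixed, not exponential)
      until: upload.rc == 0
```

Capping `retries` plays the same role as the circuit breaker in the answer: it stops a dead link from turning the retry loop into a self-inflicted denial of service.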