Thanks, Ebi. And Karan, welcome to DevNation Day. I've heard you're playing with some automation in smart cities — so how does it work? Lots of automation. You're going to see lots of tools; I can already see on the chat that people are excited. So yes, I'm going to show you what I'm calling a tech soup, so bear with me. Yeah, please, show us. Sure. Okay, thanks, Edson. I'm going to share my screen — give me a hand up once you see it, sharing screen number three. So what do you see? Do you see my presentation? Oh, it's a different screen. I do — we're seeing one screen. Fantastic. So hey, everyone, I'm Karan. The previous experts on this keynote session showed you how seamlessly and simply you can deploy your apps — your detection apps and Pacman games. Today I'm going to show you how you can move those apps onto the edge, because we have billions of people to reach, right? I'll show you how to deploy seamlessly on OpenShift and Kubernetes, move to the edge, get the data from the edge back to the core, and do some machine learning solution design on top of OpenShift. As standard software development practice, we'll start with the business requirement. So here's the business use case: we need to build a solution — not a single app, but a collection of apps, a tech stack — that helps us reduce congestion by charging vehicles a fee as they drive into a city area. We've chosen an area of London called the Ultra Low Emission Zone, where vehicles have to pay a special charge because, for environmental reasons, the authorities don't want every vehicle just entering the city.
So: reduce congestion, reduce pollution, and charge dirty vehicles a fee. If a vehicle doesn't meet emission standards, we apply extra charges so the owner won't enter the city as frequently. And maybe a third use case: locate a wanted vehicle, which the officials can use the system for. That's the very high-level business requirement. We'll walk through it together and touch on how we can capture all of these things. That's the tech soup I was talking about — I'm going to show you lots of tech, lots of open source and Red Hat tools, and each of them has a specific reason to exist in the solution. We don't want to over-engineer, but there's a special place for Kafka, for Ceph, for Superset, for Grafana, for databases — and Kubernetes is obviously the beating heart of the whole solution. Ansible is the magic through which we're going to cook this tech soup. So before we go into more presentation mode, let's deploy this thing right now and let Ansible do it for us. While Ansible does the heavy lifting in the background, I'll take you through what's happening in this actual implementation; meanwhile, the system will be on its toes. What you're seeing here is my terminal — I've already logged in with the OpenShift CLI. On the other side we have the OpenShift console, through which we'll watch things operate. Step one: there's nothing on the system right now. It's not quite vanilla, but it's an empty OpenShift cluster, and the smart city workload is the first workload we want to deploy on it. Starting with the basics: create a new project, and you'll see a new project called smart city. Want to run this?
Okay, we're in the project, and the CLI is smart — it has already added me to the project. Next I'm going to run an Ansible playbook, which I'll explain in a few slides, so bear with me. It's going to do a lot of things in the background; the process takes close to 10 to 15 minutes, so meanwhile we'll explore what the overall solution looks like. For those of you familiar with Ansible, `ansible-playbook` is the command to do it. I'm running it from my local machine because that's where the connections are established, and this is the main master playbook we're running against. So I'll hit this, and while it's running, let me pull up my project — smart city, the project is here — and go to my favorite section, Workloads and Pods, where we'll see lots of stuff coming up. Coming back to my deck, into presentation mode: the system will take some time to set up, so let's understand meanwhile what's happening under the covers. From the solution design point of view, we have this city of London, the Ultra Low Emission Zone, with cameras installed at various toll locations across the city. Each toll location is what we're calling an edge. Each edge location will recognize the passing vehicle and detect its license plate, and through machine learning models we grab the string the model detects in real time from all these edge locations and append some metadata to it: what is the timestamp at which this image was captured, what is the geo lat/long — so that we can do all sorts of amazing stuff once we have a lot of data in the system. But the data is being generated at the edge, and there are multiple edge locations.
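The master playbook I mention here is, roughly, a sequence of imported sub-playbooks, one per piece of the stack. This is only an illustrative sketch — the actual file names and ordering live in the demo's Git repository:

```yaml
# main.yaml — hypothetical master playbook; file names are placeholders,
# but the deployment order mirrors the walkthrough in this talk.
- import_playbook: database.yaml       # PostgreSQL setup
- import_playbook: kafka.yaml          # edge + core Kafka clusters
- import_playbook: microservices.yaml  # LPR, events, image server, load generator
- import_playbook: secor.yaml          # Kafka-to-object-storage consumer
- import_playbook: opendatahub.yaml    # Superset, Starburst, and friends
- import_playbook: grafana.yaml        # operations dashboards
```

You'd then run it with `ansible-playbook main.yaml`, which is the single command the demo kicks off before switching to the console.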
How do we move the data to the central data center — the central system where we can do lots of compute and analytics, do some fee calculation on the data we've collected, apply more business logic like dirty-vehicle charges, and notify the officials in case the system finds a wanted vehicle? Remember the use cases I explained before. So that's the very high-level design, part one. In part two: great, we're live-capturing data from all the edge locations into the central core location, and we've done some business analytics there — but wait a minute, we're capturing a lot of data. What should we do with it? Well, we should retrain the model, because obviously in machine learning the model has to be updated as we learn from the data we collect. So we need to store vehicle images, license plate images, and license plate strings in the system, then retrain the model we already have on the edge, improve its prediction accuracy, and deploy the new version of the model across the multiple edge locations. OpenShift, and tools on top of OpenShift, help you build a solution like this where you have a few hundred to thousands of edge locations — you just need a system that provides the right tools for this kind of work. So that's part two of the solution design: deploying the model at the edge. Let's come back and understand how these things work. This is more of a thousand-foot view of the system: we have multiple edge locations running OpenShift. It could be a fat edge with a three-node OpenShift cluster, or a very, very thin location that just needs a single node — single-node OpenShift, which is really a thing right now.
As the video streams produce live images, the images go into a license plate recognition model. This model extracts the areas of the image where it detects a potential license plate, then hands that subset of the image to an OCR model — an optical character recognition model — which reads the letters from the number plate. Pretty standard. Once we have all this data at the edge, what do we use? Kafka. Kafka is the go-to technology for this kind of workload. A Kafka producer produces messages onto the local Kafka cluster running at the edge. But now here's the magic: we have lots of edge locations and we need to move the data — we need a loosely coupled system that can handle network disruptions, and Kafka does that pretty well for us. All the events and metadata are stored on the local edge Kafka clusters, and then moved asynchronously from edge to core using a feature called Kafka MirrorMaker, which replicates data from one edge — or maybe thousands of edge locations — onto the central data point, which is Kafka again. So we're capturing the data on the central Kafka cluster, and once it's in the central Kafka subsystem, fantastic — we can do lots of things with it. For example, we can build our own Kafka consumers — maybe Python, Go, or Quarkus apps — that tap into the Kafka topic and implement business logic, like wanted-vehicle notifications. Or, for long-term preservation of the data in Kafka — remember, this is all governmental data, and we want to store it historically — we need a persistence layer that moves the data coming into Kafka out into object storage.
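The edge-to-core replication step can be sketched as a custom resource, assuming the Kafka clusters are managed by a Strimzi-style operator (as Red Hat AMQ Streams is); the cluster names, bootstrap addresses, and topic pattern below are illustrative, not the demo's actual values:

```yaml
# Hypothetical MirrorMaker 2 resource replicating the edge LPR topic to the core.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: edge-to-core
  namespace: smartcity                      # assumed project name
spec:
  replicas: 1
  connectCluster: core                      # MM2 runs against the target cluster
  clusters:
    - alias: edge
      bootstrapServers: edge-kafka-bootstrap:9092
    - alias: core
      bootstrapServers: core-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: edge
      targetCluster: core
      sourceConnector: {}
      topicsPattern: "lpr"                  # mirror only the license-plate topic
```

With more edge sites, you would add one entry per site under `clusters` and `mirrors`; MirrorMaker handles the asynchronous, disruption-tolerant replication the talk describes.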
You just store it; it's simple to use and it's cheap, low cost. So we'll use something called Secor — Secor is, by the way, a Kafka consumer — which taps into the Kafka topic, reads the data from Kafka, and stores it in an object storage bucket. Here we're using Ceph object storage, which is the natural choice. Next, once we have data in object storage, we can build real-time analytics and reporting using tools provided by Open Data Hub, like Apache Superset, Grafana, and Starburst. Starburst is an amazing technology: it lets you write and run distributed queries across heterogeneous data sources. You can write a single, very powerful SQL query that reads data from a SQL database and at the same time joins it with data living in object storage over the S3 interface. That's the power of Starburst — a very powerful query engine. Superset is the dashboarding and reporting part, which shows you the reports in real time. And then Grafana: to manage this system we need cool developers like you, as well as operations people who want to keep an eye on their dashboards in real time — and Grafana is the best choice for that, at least in my opinion, because I love Grafana. So this is the overall solution the Ansible automation is building for us under the covers. What is Ansible actually doing? It's installing and setting up a database. It's installing and setting up Kafka clusters — since this is a demo, Ansible is deploying both a Kafka edge cluster and a Kafka core cluster on the same OpenShift environment. And then Ansible is going to deploy multiple microservices.
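On OpenShift with Ceph-backed storage, the bucket Secor writes into can be provisioned declaratively with an ObjectBucketClaim. This is a sketch under assumptions — the claim name, namespace, and storage class here are placeholders (the RGW storage class name varies by install):

```yaml
# Hypothetical ObjectBucketClaim for the license-plate archive bucket.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: lpr-archive                             # illustrative name
  namespace: smartcity                          # assumed project name
spec:
  generateBucketName: lpr-archive
  storageClassName: ocs-storagecluster-ceph-rgw # Ceph RGW class; assumption
```

Once the claim is bound, the operator creates a ConfigMap and Secret of the same name carrying the S3 endpoint and access keys, which a consumer like Secor can be pointed at.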
One of them is the license plate recognition (LPR) microservice, which uses a BuildConfig to build from Git source to image, deploys it via a DeploymentConfig, launches the pod, creates the service, and exposes it through a route — so Ansible is actually doing a lot of work. Like LPR, Ansible sets up a microservice for events, a microservice for the image server, and a microservice for the load generator — because I don't have a real camera hanging around looking at the street, we have a load generator for the sake of this demo. Then it sets up Secor, which moves the data into object storage. That's all the microservice setup Ansible does for us. Then it sets up Open Data Hub, the collection of tools that helps us do analytics in real time, and finally the Grafana dashboards. Ansible is really doing the heavy lifting here — while I'm talking to you right now, the system is doing the job for me. And for all the enthusiasts out there: which modules am I using to make this happen? Just eight modules to take my system from zero to one hundred. There's a community module called `k8s`, which uses the Kubernetes Python client library under the covers and has full access to all sorts of OpenShift objects. Ansible can just tap into the `k8s` module and boom — whatever you can do with the `oc` client, you can do here. Like the `oc` client, `k8s` does CRUD: it creates and changes objects. `k8s_info`, another module, reads things back out — more or less everything you'd do with `oc get` or `kubectl get`, `k8s_info` does the same. We're using `set_fact` because we have to manage state, dynamically update inventories, and capture some variables into the system.
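The Git-source-to-image pipeline described for the LPR microservice can be sketched as an OpenShift BuildConfig. The repository URL, builder image, and names below are placeholders for illustration, not the demo's actual sources:

```yaml
# Hypothetical BuildConfig: builds the LPR service from Git via S2I.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: lpr-service
  namespace: smartcity                               # assumed project name
spec:
  source:
    type: Git
    git:
      uri: https://example.com/smartcity/lpr-service.git  # placeholder repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.9-ubi8                        # assumed S2I builder image
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: lpr-service:latest
```

The resulting image stream tag then feeds the DeploymentConfig, and a Service plus Route expose the pod — the chain the talk walks through.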
I'm running some raw commands because certain things are still complex and I don't have a module for them — like the Superset data updates, where I'm using the command module. I'm using the copy module to copy images from my data store, s3_sync to move the data to S3, add_host to add hosts to the inventory, and a pause just for, you know, the sake of stability — pausing so the system can settle. So look at this: I'm using just eight modules to do a lot of heavy lifting, and Ansible is a really powerful tool for this kind of automation. Okay, let's quickly check what using the Ansible `k8s` module looks like. You write something like: `k8s` module, state is present, and I need to apply multiple YAML files — create a secret, create a DeploymentConfig, create a service, create an image stream, create a BuildConfig. The `k8s` module makes sure those YAML files get applied, just like your `oc create` or `oc apply -f` commands, into the right namespace. It's pretty simple to use. `k8s_info` goes the other way: it gets data from the Kube API — tell me the secret with this name, register the value in a variable — and later you use `set_fact` to pull out secrets like the database username, password, and name, which you can use later in your playbook. So this is how you'd write these Ansible playbooks for your lovely Pacman apps. All of the code is available in the Git repository — it's open, go check it out; happy to get some PRs if you're interested. We'll share some links. And credit to my friend Chris Bloom, an awesome guy who helped us with a lot of the Ansible automation you're seeing. Okay, let me get back to my system.
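The `k8s` / `k8s_info` / `set_fact` pattern just described can be sketched as a few tasks. This is an illustrative reconstruction, not the repo's actual playbook — the manifest file names and the secret's name and keys are assumptions:

```yaml
# Sketch of the apply-then-read pattern; resource names are placeholders.
- name: Apply the LPR manifests, like `oc apply -f`
  kubernetes.core.k8s:
    state: present
    namespace: smartcity
    src: "{{ item }}"
  loop:
    - secret.yaml
    - imagestream.yaml
    - buildconfig.yaml
    - deploymentconfig.yaml
    - service.yaml

- name: Read the database secret back from the cluster API
  kubernetes.core.k8s_info:
    kind: Secret
    namespace: smartcity
    name: database-secret            # assumed secret name
  register: db_secret

- name: Pull credentials into facts for later tasks
  ansible.builtin.set_fact:
    db_user: "{{ db_secret.resources[0].data.username | b64decode }}"
    db_password: "{{ db_secret.resources[0].data.password | b64decode }}"
```

The talk predates some module renames, so it refers to the bare `k8s` module; the fully qualified `kubernetes.core.k8s` names above are the current form of the same modules.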
Okay, so the system is still doing the job. You can see Ansible's output — I'll scroll up a little and show you. It's not very readable since I'm running it in a very narrow window, but look at this: deploying databases, deploying Kafka clusters, deploying Kafdrop — and all the other stuff I explained to you. The system is still working, so bear with it. Let me see — I'm in all projects; let me go to the Smart City project. Smart City is here, and let me see if I'm getting any failures. No, no failures at the moment, so things are good. At this point Ansible is importing a data source into Superset, which means it has done a lot of the setup already — we're into the Superset part of the equation. While this is getting ready: I have another environment, because I was pretty sure this would take a little bit of time, so I'm going to switch windows. It's another environment that's completely set up — so I'm not cheating here, I guess, or you can let me know if I'm cheating — just to save time. This is what the end result will look like: lots of containers, lots of pods, lots of services deployed, with Ansible automating it and doing all the heavy lifting. This is my developer view; if you're a fan of the Administrator view, go to the right project from there — Smart City — and you'll see lots of pods running in the system. Yeah, 36 pods — some of these are jobs — but 36 pods up and running right now, making this demo happen. I'll open one of my routes, which is the Grafana dashboard giving a bird's-eye view of the city. As I mentioned before, this is the city of London, and all these black boxes are cameras. We have this generator, which is generating images — simulating vehicles passing the stations.
In real time, we're capturing the vehicle count, detecting the last known vehicle in the system, detecting its license plate — that's the license plate recognition and optical character recognition models at work — and the owner. By the way, these names are all made up. And here's a nice graph showing the rich data we're collecting: tell me the top stations in the city. Okay, station one is very popular because so many vehicles pass through it — these are business metrics you can pick up from the data. I'll real quick show you this chart, because it's a GIF and it's pretty awesome to see. As I mentioned before, we have OpenShift at the edge and OpenShift at the core, Kafka and inferencing happening on the edge, data flowing from the edge via MirrorMaker onto the core Kafka cluster, and via Secor moving into Ceph object storage. Later we use Starburst, PostgreSQL, and other tools to do some dashboarding around it, which looks like this. This is the view for your managers and key stakeholders — the reporting view they'll see. So far we've collected about 215,000 pounds in toll fees and 100,000 pounds in pollution fees, and around 45,000 vehicles have passed through the city. Again, this is all generated, simulated data, but it gives you an idea of what you can do with these tools once they all work in tandem. Then we have these nice panels in Superset, which you can adjust to your business case to get very precise metrics about your business. Okay, station number 5201 is getting 22% of the city's total traffic, which means we need to do something here, right?
Maybe add one more lane, maybe build a super path here, right? And then — not very interesting, but anyway — what types of vehicles are more popular in the city? Okay, Nissans are popular, or Audi R8s are popular; we can debate that. And some table panels: tell me my top consumers, the ones who need to pay a lot of money to the government. Okay, so Suzanne King needs to pay like 15,000 pounds — oh, why? That's a lot of money. But again, this is fake data; you get the idea. This is the view your managers can enjoy, and OpenShift — and the tools and projects running on OpenShift — make it happen. Now, for all the Kafka lovers here — I love Kafka, Kafka is great — I'm using Kafdrop, a very nifty open source utility you can tap into. You deploy it — the code is in my repo — and it helps you check out Kafka messages in real time; no need for kafkacat, just plug into this. When I open the LPR topic and say, hey, quickly show me the messages, these are real-time messages arriving on the LPR topic at the core data center — see this URL. When I expand one, I get events like the timestamp, event ID, vehicle license plate, whether detection was successful, and which station did the detection — those kinds of data. And we definitely have another deployment of this — let me see where we are on my deployment. Okay, wonderful, fantastic, this worked — at least the demo gods are with me. Ansible has completed; it has taken about 15 minutes to deploy the entire setup we've shown. By the way, we've already gone through all the things I wanted to show you.
The only other thing I'll show you here: if I go to Networking and look at Routes, there are so many routes. ODH — Open Data Hub — is a great, great piece of the solution. I'm only using two of its components, which is why we don't have a lot of them here, but check it out, ODH is pretty cool. And one last thing: I've shown you the core — this core Kafka cluster — and I just want to show you my edge Kafka cluster as well. These are the clusters deployed at the edge; the topic name is LPR, for license plate recognition. I'll view messages and do a quick view. The point here is that we're capturing the message at the edge and moving it via MirrorMaker — let me go two steps back. We're doing inferencing at the edge, detecting the vehicle, capturing the license plates in real time, putting those strings onto a Kafka topic at the edge, moving the data to the core, and then, once the data is in the core, doing analysis on top of it — analysis like this. So that's what I had to show you. Coming back to my deck: Ansible, OpenShift, and a lot of tools make this happen. Okay, I think I'm done with my prepared content. If there are any questions, I can take them. I don't see any questions, but I'm hanging around in the chat — if you have anything you'd like me to explain more, or want some pointers to play around with, I'm happy to help. So that's it. Are you still live streaming? Yeah — thank you, Karan, for this amazing presentation.
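For completeness, the LPR topic the edge producers write into can itself be declared as a custom resource, again assuming a Strimzi-style operator manages the edge cluster; the namespace, cluster label, and retention value below are illustrative assumptions:

```yaml
# Hypothetical topic definition for the edge license-plate events.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: lpr
  namespace: smartcity                 # assumed project name
  labels:
    strimzi.io/cluster: edge-kafka     # assumed edge cluster name
spec:
  partitions: 1
  replicas: 1
  config:
    retention.ms: 604800000            # keep roughly a week of events at the edge
```

A bounded retention window like this fits the architecture: the edge cluster only needs to buffer events long enough for MirrorMaker to replicate them to the core, where the long-term copy lives in object storage.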