Okay, welcome to this workshop on object detection on top of OpenShift. My name is Max Morakami. Together with Prasanth, we're going to develop and deploy an intelligent application that does object detection. Let me quickly show you how this looks. I've opened it up on my smartphone, within a browser, and I can now do an object detection. I have this book cover here and I'll try to capture it. And the object detection is working on my smartphone: there's a bunch of bounding boxes indicating the detected objects. The labels are too small to read, but it definitely detected a butterfly, along with a bunch of wheels, apparently. So again, this is what I just captured.

Can we go to the presentation deck, please? I brought Prasanth with me, as I said. My name is Max Morakami, I'm a specialist solution architect at Red Hat, working with OpenShift as an AI platform. Prasanth, can you quickly introduce yourself?

Yep. Hey, everyone. My name is Prasanth. I'm a senior principal software engineer on the Red Hat OpenShift Data Science team. Functionally, I'm one of the lead data scientists, and I'll be walking through the object detection model and the serving part.

OK, great. I'll do a very quick introduction based on the slides, but within a couple of minutes we'll go right into the workshop itself, where you get real hands-on experience. Before we start, though, please make sure that you have a Red Hat developer account. This is the URL where you can sign up for one; it's completely free, and it only takes a minute or two to enter your details. You're definitely going to need the account for the later part of the workshop.

OK, so object detection itself. I think most of you are fairly familiar with object detection, so just a very high-level note here. When we talk about object detection, we're talking about a technology based on computer vision and image processing. It's really all about automating the interpretation of a visual scene. The main questions are: which kinds of objects are present in a visual scene, and where are those objects located? If you look, for example, at this scene over here, you can see a bunch of different bounding boxes, a little like on my smartphone, indicating where the machine learning model has detected specific objects, together with labels indicating which particular objects were detected.

The really fascinating thing about object detection, as with much of AI, is that we're talking about tasks that are very easy for us as humans, like detecting all kinds of different objects in a visual scene, but that have always been tremendously difficult to implement for machines. There's a vast array of use cases out there; I just brought a few of them here. Of course, we think of autonomous driving. When we drive ourselves, seeing and understanding our surroundings is crucial; if you can't see, you can't drive. So if we want to automate driving, we need a car that detects all sorts of things:
other vehicles on the street, pedestrians, traffic lights, and the different signs that matter for traffic. And it needs to do all of this detection in real time to guarantee safety, to come up with countermeasures and steer the car in the appropriate direction.

In the same domain of vehicles and cars, we can also think about traffic itself, like in the middle picture here. Given a scene such as this one, or even a stream of images, a video stream, you might ask: how many cars are there? How fast are those cars moving? You can already use this kind of information to detect traffic jams, which is interesting because now we're not only detecting discrete objects but something like the concept of a traffic jam itself. So we can use object detection of cars, in this particular example, either to collect data for others to analyze, or even to automate traffic management, such as adjusting speed limits automatically or opening up additional lanes when there's heavy traffic.

And lastly, the third picture, also somewhat traffic-related: pedestrian traffic. Again the question: how many pedestrians, how many people are in a particular space? We can use that to automate the monitoring and management of crowds, for example when social distancing rules require that not too many people be in a particular area.

All in all, object detection is incredibly useful for extracting meaningful information out of visual data. And especially in the kinds of examples I'm showing here, we're really talking about edge devices where the data comes in. We may not want to transmit all of that visual data to a central data center, because latency and bandwidth are often limited. Object detection gives us a way to meaningfully compress the data volume we base our business on.

So that's the use case as such; we're now going to implement object detection in this workshop on top of OpenShift. And we're going to use a very new product that we at Red Hat have developed, called Red Hat OpenShift Data Science, or RHODS for short. At the top of this schematic, you can see that RHODS is meant as an AI platform, covering the whole life cycle of a typical machine learning project: from the data gathering and preparation phase, to model development, to model deployment, and finally to the monitoring and management that goes on in production. And OpenShift Data Science, as the name implies, is a platform built on top of OpenShift, currently available as an add-on to OpenShift Dedicated and OpenShift Service on AWS. It's a completely managed service, so as a customer, prospect, or partner you don't have to care about the infrastructure itself; we at Red Hat take care of that for you.

The main components are of different types. I'd say the core of RHODS is the JupyterHub environment, which allows you as a data scientist to self-provision your Jupyter environment so you can quickly do your experiments and model design based on TensorFlow, PyTorch, and the other libraries that are typically used.
Then, since we're talking about OpenShift itself, there are a number of OpenShift-native services, all the services that come with OpenShift, which you can leverage directly. In this workshop we heavily use the source-to-image (S2I) functionality, which lets us transform the content of a Git repository into a running container without having to write Dockerfiles. RHODS also comes with integrations for other managed services such as OpenShift Streams for Apache Kafka and OpenShift API Management. The Kafka piece is central to this workshop, because we're going to implement a stream-based, real-time application; we'll have a look at that as well. And lastly, we're collaborating with a number of software partners and offer the ability to use their fully supported products together with Red Hat OpenShift Data Science: partners such as Intel, IBM, and Anaconda, which may be well known to this crowd.

Okay, one other thing we're going to use in this workshop is the RHODS Sandbox. The RHODS Sandbox is an offering within the developer portal that lets everyone try out RHODS, without having to pay anything, in a kind of sandbox environment. Please open up this URL, because we're going to use it for the workshop, and let me quickly show you how it looks. We're redirected to the developer portal, and we're going to try OpenShift Data Science in the sandbox. I'll click this. I'm already logged in to the developer portal; you might be prompted to log in with the account you set up earlier. Now we start using the OpenShift Data Science sandbox, and because I've logged in already, I'm redirected directly into the RHODS console.

Just to give you a very quick overview of the functionality here: we see three tabs on the left-hand side. First, applications. There are a number of applications, the ones I just showed you in the schematic, and one application that's enabled by default, which is JupyterHub. This is the one we'll be using heavily in the workshop; I'll skip it at this point, and Prasanth will walk you through the JupyterHub application later on. If you click Explore, you can see all the different applications that can be enabled, from our partners as well as other Red Hat products, as I said. And lastly, the Resources tab shows you a number of tutorials, documentation, and interactive walk-throughs related to all the different applications that are part of OpenShift Data Science, to help you see what those components are about if you don't know them already.

Okay, please keep this open; we're going to use it in a couple of minutes. Going back here, I just mentioned Apache Kafka; again, this is something we're going to use. This is another URL: please open it, and it will redirect you to Red Hat OpenShift Streams for Apache Kafka. It's also a managed service, so we don't have to worry about setting up Kafka on our own, or about the infrastructure; everything is managed for us already, and we can directly create a Kafka instance.
This will be a short-lived Kafka instance, not for production use, of course, but definitely good enough for our workshop. Let's click "Create Kafka instance" and type in something like our username plus "object detection". It's currently available on AWS, in one region. This is not GA yet, I think it's in service preview, and there will be many more features available once the service is fully released, but as I said, we can use it for our workshop. We create the instance here, and this will take a couple of minutes. That's why we're doing it right now: later, in the second half of the workshop, we're going to use the instance we just created. And just to mention here: at this stage, there might not be enough capacity for all the participants of this workshop. But we're recording the session, so later on you'll have the chance to provision a Kafka instance and do the workshop along with the recording.

Good. Coming back to the quick intro presentation, now we'll talk about the actual application we're going to deploy together. This is a rough schematic to help us think about what we're trying to achieve. In the end, it's all about deploying an intelligent application; that's the one on the right-hand side. We could even think of it running on premises. In this particular workshop it's actually running on AWS, in the public cloud, but because it's running on OpenShift, we have the ability to run it wherever OpenShift runs: on premises, in various public clouds, and so forth.

This intelligent application receives sensor data, in our case a stream of visual data coming from a camera; that could be our own webcam or a smartphone, for example. In the fully deployed application, those images are fed into a Kafka cluster and from there into a machine learning model that is trained to do object detection. The model detects objects and feeds the results, including their locations, back into the Kafka cluster, and finally, in the frontend component, the user can see which objects were detected.

And where does this machine learning model come from? We have a data scientist, for example Prasanth, working in another environment, a Jupyter environment somewhere in the public cloud, for example in RHODS, working on this object detection model. He might access some data in an object storage, and most importantly he's going to store the different parts of the application, together with the model, inside a Git repository. The Git repository is then accessed by the OpenShift cluster where the application is deployed, and via source-to-image a running container is built directly from the Git repository, with the machine learning model embedded. That's the whole idea behind it. It's really something like a hello-world application for computer vision, and you can imagine transferring this use case to, say, a shop, a warehouse, or a factory: any place where you might need to understand automatically what's going on in terms of objects.
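To make that data flow a little more concrete: the piece sitting between the two sides of the Kafka cluster is conceptually just a consume-predict-produce loop. Here is a minimal sketch, not the workshop's actual code; the topic names match the ones we create later, the broker address is a placeholder, and detect_objects is a hypothetical stand-in for the model call:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

def detect_objects(frame):
    """Hypothetical stand-in for the trained model's predict function:
    decode the frame, run the detection model, return the results."""
    return {"detections": []}

# Consume raw frames from the "images" topic and publish detection
# results to the "objects" topic.
consumer = KafkaConsumer(
    "images",
    bootstrap_servers="localhost:9092",  # placeholder broker address
    value_deserializer=lambda m: json.loads(m),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

for message in consumer:
    frame = message.value               # e.g. a base64-encoded camera image
    results = detect_objects(frame)     # bounding boxes, labels, scores
    producer.send("objects", results)
```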
Okay, just a very quick look at how we get started and the steps we'll go through. Prasanth is going to walk you through the JupyterHub part, explore the object detection model with you in JupyterLab, then explore how to serve this model as a REST service, and deploy the object detection app together with the frontend service. Then I'll take over for the second half, where we'll talk about the Kafka integration, do the Kafka setup together, and finally deploy the final stage, where the application will be able to do real-time object detection. And with this, I'm handing over to Prasanth. This is the URL; I don't know whether we have a means to paste it into the chat, but please all open up this particular URL, which will take you directly into the workshop itself. Over to you, Prasanth.

Thanks, Max, let me share my screen. Good, can you see my screen? Not yet? Yes? No? Let me just quickly test it. Okay, sounds good. So this is the workshop material, as Max pointed out. I'll be reading it on one screen and walking you through it. There are three sections to it. In the first one we walk through the notebook: we start with a pre-trained machine learning model, download some data, use the pre-trained model, and explore both the model and the data. Then we'll jump to integrating the model with a Flask application and packaging it into a reproducible container image using source-to-image, and then we'll deploy that onto OpenShift. Those are the things we're going to visit in the next sections.

That said, when you access the link to the RHODS sandbox... I saw that some of you were having issues with accessing the sandbox, but hopefully you're okay now. I also saw a comment asking to make the font a little bigger; is this fine? Maybe one more, and then I think it's good. Yeah. Good.

So you start by clicking the link, "Try OpenShift Data Science sandbox", and I hope you've registered for it. Once you start using it, you'll come to this panel where you can see the JupyterHub application; go ahead and launch it. This is where you start working in the JupyterHub environment. For those who haven't used JupyterHub: it's a web-based interactive platform that lets you work with code, data, and all your workflows, and in this context it's going to be Python-based. Because we have a pre-trained machine learning model based on TensorFlow, you select that particular notebook image: the TensorFlow image, with Python 3.8 and TensorFlow 2.7.

A little context here: if you need a totally different environment, say you're going to work with GPUs, you can switch to one, or if you just want a standard data science environment, you can do that too. This makes it convenient for users to spin up a preloaded image. Here I'm going to choose the default size. In this demo you can choose small or default, but in a production version it's more elaborate, so you can go to medium, large, or extra large. With that said, let me start the server; it's going to take a couple of seconds or minutes.
All it does in the background is spin up a pod and load the container image for the specific notebook image we pointed it to, and it should automatically take me to the next screen. Okay, good. We're in the JupyterHub environment. From here, the next step is to clone the GitHub repo. You might want to take a look at the panel on the left first: this is for when you want to access your S3 storage contents; if you have a URL and access credentials, you put them in here and you should be able to browse your buckets. And this is where you clone the GitHub repo. Under section 1.2, the Jupyter environment, you'll find the link to the GitHub repo, so let me copy that. Good, that cloned successfully, and once you go into the directory, you'll find all the relevant files. As I said, this works off a pre-trained model, which is actually stored here, and then you have the relevant notebooks.

Let's start with the sandbox notebook. A notebook is, think of it as a cross between a spreadsheet and a language interpreter, in this case Python. You have individual cells where you can write your Python code, run it, and move on to the next cell; each cell is executed on its own. You have the option of executing an individual cell, or you can just execute all the cells, and I'll show both here. This executes the code in one cell; or, if I say run all cells, it executes all the cells in sequential order.

Good, now moving on to the next part, the exploratory part. In an ideal world, where you actually start by training a model, you would explore the data, do the feature engineering, and then go on to train the model, tune the model, and so on. Since we already have the model, what we're going to do here is connect to our S3 backend and download some sample images. There you go: it downloaded the sample image we'll be using in our examples going forward. In the exploratory phase, you use various visualization or statistical techniques to understand your data. So in this case... oh, sorry, I think I forgot to execute the first cell. Yep, there you go, now it works. I've run this demo more than a couple of times and I can still make a mistake.

Now it's defining the image. What you see here is that we load the image and print it out. For those not familiar with TensorFlow: TensorFlow works on data in the form of tensors, which are essentially n-dimensional arrays, a generalization of vectors. To put it simply, a multi-dimensional array. So what you see here, these three-dimensional arrays, could correspond to, say, the pixels on the legs of this dog; that's just an example. The idea is that when you work with data, you convert it into the format required by the particular framework you're using, whether that's TensorFlow, PyTorch, or some other framework or model.

The next step is to load the model. We've loaded the model, and now we'll pass the image into it and see what comes out. Of course, we can't make much sense of the raw numbers being printed out here.
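As an aside, if the tensor talk feels abstract, here is a minimal sketch of what loading an image into a tensor looks like, assuming TensorFlow 2.x as in the workshop image; the file name is a placeholder, not the repo's actual file:

```python
import tensorflow as tf

# Read an image file and decode it into a 3-D tensor of shape
# (height, width, 3): one number per pixel per RGB channel.
img_bytes = tf.io.read_file("twodogs.jpg")   # placeholder file name
img = tf.image.decode_jpeg(img_bytes, channels=3)

print(img.shape)    # e.g. (480, 640, 3)
print(img[0, 0])    # RGB values of the top-left pixel

# Detection models typically expect a batch dimension in front:
batch = tf.expand_dims(img, axis=0)          # shape (1, height, width, 3)
```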
So next, back in the notebook, we convert the output back and display it as an image with bounding boxes. There you go. I literally can't see what's on the screen, even if I squint, so just trust me: the yellow boxes are saying it's a dog, and for both of them it actually detected the dog. The little green boxes say footwear, which it definitely isn't, and with a very small confidence, something like "I found something at the end of the feet, maybe it's footwear". I don't recommend purchasing this either.

Good. So now that we have the pre-trained model and we've downloaded the data and used it with the model, how do we turn this into a prediction function? It's simple: whatever code we wrote in the exploratory part, where we applied the model to the data, we just generalize into a function, and that's how you get the predictor. I'm not going to run through the individual cells, so let's run all of them at once; that shows you another way to execute a Python notebook. As you can see, a star next to a cell means it's queued for execution or currently executing, so you have to wait a little bit and the output will start showing up. Let me just scroll around. Yep, still running. Okay. Now we have the same thing we did in the explore notebook, converted into a function and executed on the sample image we downloaded. As you can see, it predicted the dog, and you have the nice footwear here again.

Good. That walks you through an example where you have data and you turn your exploration into a function. Now, how do you integrate that with, say, a Flask application? The sample code here shows how: wsgi.py shows you how to create a sample Flask application. Let me quickly show it. If you look here, the Flask application does nothing but call the predict function, which was defined here. Yep, there you go. That's what the Flask app calls. (A minimal sketch of this kind of wrapper follows at the end of this section.) Now, to run the app, all you need to do is execute this cell. You wait for the app to start; in this case we're starting it on localhost, and in the next step I'll show you how to start it as a web service on the OpenShift cluster. Okay, good, that started successfully.

Now let's move on to the test Flask app part. You should see an OK status returned from localhost. Yep, it took a while, and that's why you shouldn't trust your neighbor's internet connection. Okay, it executed the same prediction, but what you're seeing here is in the form of numbers; it's the same example. Okay, I was asked to increase the font again. So that's how you test the Flask application.

The next part is to serve the model. To serve the model, we use a toolkit called source-to-image: you point source-to-image at a base image, and it uses that plus your source code to build a reproducible container image, which you can then deploy on the OpenShift cluster directly, or you can just import from Git on the OpenShift cluster itself. So let me go here. When you click here, you'll see this Rubik's-Cube-like icon; go there and click on the OpenShift console, and you should land in the OpenShift console.
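Before we move into the console, here is the promised sketch of the Flask wrapper described above. This is not the workshop's actual wsgi.py; the route path and JSON payload shape are assumptions, and predict is a stub standing in for the notebook's prediction function:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(image_data):
    """Stand-in for the notebook's prediction function: decode the
    image, run the model, return boxes, labels, and scores."""
    return {"detections": []}

# Assumed endpoint: the real application defines its own route and
# payload format.
@app.route("/predictions", methods=["POST"])
def predictions():
    payload = request.get_json()
    return jsonify(predict(payload["image"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```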
In the console you'll see two perspectives; I see both Developer and Administrator, but in your case it's probably just Developer. So go ahead and pick a project; in this case, I'm using dev. Let me just quickly check the instructions as well, so that I'm not missing something. Now you click on Topology, right-click to add to the project, and choose "Import from Git". At the end of this step, S2I kicks in, automatically builds the container image, and then spins up the pod. You want to give an application name here; I'm following the instructions, which say to use "object detection", but you can use any name you want, as long as you use the same name in the later steps. This one is object-detection-rest. And we ask it to create a route to the application, which exposes your application with a public URL. There are advanced routing options possible, but you'll see those later in the demo. Now go ahead and click Create.

Now you'll see this screen, and what's happening in the background is, as I said, that it's building the container image. If you want to look at the logs, you can just go into the build, and you'll see that it starts by cloning that particular Git repo and then pulls a base image, in this case a Python image we already have. Once it has the base image and the code, it starts assembling the image layers, and the environment, the code, and everything else are packaged so conveniently that the result is a standalone execution environment by itself. While that's getting done, go back to Topology and look at it from there. It says the build is running. Once the build succeeds, the pod gets kicked off: it pulls the container image and starts creating the pods.

Prasanth, can we make this one also a little bit bigger? Because it's a new tab, it hasn't picked up the zoom. Okay. Let me try. Okay, great, thanks. And if anyone has questions, please don't hesitate to put them in the chat; we monitor the chat and are happy to answer directly within the workshop itself. Yep, feel free to ask as many questions as possible, and I'll make sure Max answers all of them.

While this is going on, I can show you another environment where I typically work on developing the whole data science workflow. Typically you explore the data, build the features, train the model, tune the model, and then do model inference, and you go through the same set of steps to build the Flask app, the prediction.py, and the other pieces. Now let's go back and check.

There was one question: what could go wrong, and would one look at the logs to work out what went wrong? Yes. In principle, this step clones the repository you pointed at and then runs source-to-image to build the container. So there could be all sorts of mistakes: a typo in the Git URL, for example, or something about the structure of the Git repository itself that makes it impossible to build the container image. Have a look at the source-to-image documentation to see what structure your Python repository, for example, should have in order to build such a container. I think whoever starts working with source-to-image will sooner or later have the chance to debug using the logs.
The logs really tell you step by step what is happening, and sometimes you catch obvious mistakes that way. If we don't know whether it's failing or not, just look at the logs and follow along. And if you watched a couple of seconds ago, it had a red symbol saying ImagePullBackOff. That happens when it's trying to pull the container image and, for some reason, your network is slow or there's a transient failure: it backs off and then tries again. In this case it did retry, and it's actually successful right now.

The other way to look at logs: now you see the build that's happening; you can click on the build config itself, where you'll have the YAML, the builds, the environment, and whatever events happened for that particular build. Same with the pods: you can either view the logs, or come back here, click on the pod itself, and look at the logs or the events for that particular pod; that gives you the events that were generated, step by step. Now, if you have admin privileges, you can go into the admin console; I can show you an example here. As an administrator, you can look at the pods, the deployments, the configs, and all the other container state. In this case, I can look at all the pods or all the deployments, and in the same way I can go into them and look at the events and the environment. If you have just developer privileges, you stop at the log level and don't go into the admin console.

Okay, there was one more question: can you monitor requests to the application using this topology window? The topology view we're seeing in the OpenShift console is basically only a representation of which pods are out there, which ones are running, and which ones are building; it doesn't give us a dynamic view of the communication between those pods. For tracking the requests themselves, you might check the logs of the individual pods (if the services log their requests, that could be one way), or use a service mesh such as Istio, which enables you to track the networking and traffic between the different microservices, in terms of distributed tracing. So that's that kind of topic. On a very simple level, you can just go to the pod itself and look at the logs: when you send an HTTP request, it shows up there as new lines.

Let's go back. We built the image and we created the pod. When we imported from Git, we said "create a route to the application". This is the route to the application, and anybody can use this URL to send an HTTP request. If you click on it, it just says "ok", which means, yes, you can reach the code and everything is up and running. Now let's move on to the next section, testing the deployment. There are two ways to do it: you can use a CLI, I mean a terminal, and send a curl command, but for the sake of easy execution, let me use the notebook here. Let's go back to the test Flask app notebook. In the previous step we used the service on localhost, but now let's send the request to the service running on the OpenShift cluster. That means you need to restart it; yep, running it again, now let's try. Okay, now it's live. (A sketch of what this request looks like in code follows below.)
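For readers following along, the notebook's test against the deployed route boils down to a single HTTP POST, something like this sketch. The host is a placeholder for your own route URL, and the endpoint path and payload shape follow the Flask sketch above rather than the workshop's exact API:

```python
import base64
import requests

# Base64-encode a local test image so it can travel as JSON.
with open("twodogs.jpg", "rb") as f:          # placeholder file name
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Replace the host with the route URL from your own OpenShift project.
url = "https://object-detection-rest-myproject.apps.example.com/predictions"
resp = requests.post(url, json={"image": encoded})

print(resp.status_code)
print(resp.json())   # bounding boxes, labels, confidence scores
```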
We're basically going through the same steps, but now we're actually interacting with the service on the OpenShift cluster. Okay, there you go: now you get the prediction on the same dog image. Okay, let's go back. So we tested the model in a notebook, then we deployed it on the cluster and used it as a service, integrated with the Flask application. Now, what happens when you want to integrate this with another application, which is what typically happens in the field? That brings us to the next section: calling the API from an application.

We have to create a frontend application. We follow the same steps as when we created the service for the model: go to add to project, do the same "Import from Git", but now use the Git repo for the application. Okay, it detected that it's Node.js and is automatically going to use that. Let me look up the name from the instructions; okay, it says "object-detection-app". Now we're going to use some advanced routing options here: secure the route. I think it's re-encrypt... yep, it's edge, and redirect. A little context here; you'll want to read the OpenShift manual for a more detailed explanation, but what I'm doing is basically this: there's a two-leg connection, where the app goes to the routing service and the routing service connects to the web service we have in the backend. What this configuration means is that if one of those connections is not secure, the connection is dropped. There are other options, for example "I don't care about security here, just pass through", but for a more detailed explanation of those topics, refer to the OpenShift manual.

The next part: we're creating this app, but we need to tell it what service to forward requests to. So go to the deployment and add the environment variable, pointing it at the object detection REST service we created earlier, then create. If you look here, this is what we copied and pasted, or rather defined, in the environment variable for this application. Now it goes through the same steps: you can see it building the image, and once the build is ready, it pulls the image and tries to spin up the pod. This is what I said last time: while you're waiting for the build to run successfully, you may temporarily see ImagePullBackOff or image pull errors the first time around, which could be due to a network glitch, but eventually it retries, pulls the image, builds the containers, and creates the pods. Okay, any more questions in the meantime?

While the container is being built, let me grab some sample images so that we can test it. The model we're using here is trained on a public dataset; I think it's available through TensorFlow, but you should find all the relevant links in the repo.

What we can also do, because we're slowly approaching the end of the workshop: I can quickly do the walk-through of the Kafka part, setting up the Kafka instance, and then we can come back to you and check on the application itself. Okay, sounds good. All right, then let me share my screen. Okay, so I've been following along with Prasanth, and now I'm at step 3.1, which is about creating the Kafka instance.
I've done that before; we also saw how to log into RHOSAK, OpenShift Streams for Apache Kafka, and provision a Kafka instance. It might not have worked for everyone, because we're more or less at capacity here, but you can check the recording later and follow along then. So let's go to the Apache Kafka page. I'm going to, okay, I wanted to do it like this, okay. Good, I've created a Kafka instance. As it says in the description here, I entered a unique name made of my username plus "object detection", and created the instance. So that part is already done.

Now it's about creating the service account. Click "Connection"; on the connection page, copy the bootstrap server. This is what we're going to need, so I'm going to copy it into my own file. It's this one. Then create a service account, named something like my username plus "kafka-service-account". Create it. Now I have to make sure to copy the credentials right away, because they won't be displayed again; but I have them handy, so I can close this. And it's created.

Next, setting the permissions. Going back into the Kafka instance, I click the Access tab, as indicated, and manage the access: find my service account by its client ID, it must be this one, and add a couple of permissions. The ones indicated for the consumer group are wildcard permissions, and the same for topics and for the transactional ID. Save this. Okay, so this is updated here.

Now, quickly creating the topics themselves. Good, into the Topics tab. We need three topics: one named "images", the topic the raw images are going to be put into; we can go with the default values for the partitions and retention. The same for "objects", the topic the results of the object detection go into; again, default values. And the same for the notebook test topic; default values. And then we're done with this.

So let's go quickly back to you, Prasanth; I think your instance has deployed already. Yep, let me show my screen again. Good, can you see it? Yep. So now the image is done and the pod is created. Just like the web service, we have a route to the application now, and when you click it, it opens up the camera. I'm going to try to test it on myself. Obviously it's not going to tell you my name, but it says, okay, there's a human eye, a human face; at least the algorithm acknowledges that I'm human. Let's try a picture of my dog; I'm going to zoom in close. Let's see. Yep, that's a dog, 24 percent. Let's quickly try a bottle and see if it works. And a person. Good, again. Of course it says I have a fashion accessory; I don't know what that is, but yeah. Let me stop sharing now. Back to you, Max.

All right, so now comes the later part, where we extend the application with the Kafka integration. There's a separate repository with a Kafka client; for time reasons, we're not going to jump into it, but following the workshop instructions you can go through it step by step. Let me just show you the final result. I've already deployed exactly this before, so you can see: again, this is an OpenShift cluster, now with the REST service and the frontend service that Prasanth also deployed, and the latter part, as I said, is a Kafka consumer.
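For anyone wiring up such a client by hand later: connecting to the managed Kafka instance uses the bootstrap server plus the service-account credentials created above. Here is a minimal sketch with kafka-python, assuming SASL/PLAIN over TLS (check your instance's connection page for the exact mechanism it expects); the server, credentials, and topic name are all placeholders:

```python
from kafka import KafkaProducer

# All values below are placeholders: take the bootstrap server, client
# ID, and client secret from your own instance's connection page.
producer = KafkaProducer(
    bootstrap_servers="my-instance.kafka.example.com:443",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="srvc-acct-client-id",
    sasl_plain_password="srvc-acct-client-secret",
)

# Send a quick test message to the notebook test topic.
producer.send("notebook-test", b"hello from the workshop")
producer.flush()
```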
And if everything is set up correctly, then when I open this URL again... I should stop the other camera so it can use this one. So again, this is what we just saw: object detection, the static version, a single image. And hopefully now the new feature, the real-time object detection. Okay, you can see a sequence of images. It's incredibly slow, the bandwidth and latency are not very good, but it's the first step towards extending the application into a more or less real-time application. Maybe I can make that larger; oh, yes, that's larger, so you can see it, I hope. Okay, it's taking too long, it's very slow. But you can see: headphones are recognized, and the person in my window is recognized as well, which is good. So it's already doing something reasonable.

With this, I'd say, let's go back here. We're at the end of the official workshop. There was a whole lot that we covered, and some things we couldn't cover because of time, but I hope you enjoyed it. As I said, this workshop is recorded, so you can refer back to it later and follow along with what we did during the workshop itself. And one quick thing: I'm posting my email address in the chat, so if you have any questions while trying the workshop and need any help, just feel free to send me an email. I'm doing the same. And with this, thank you so much, everyone. Hope you enjoyed it, and see you next time around. Yeah.