Now that we've used a model and a REST API, let's switch things up a little and change the application to a Kafka consumer. To make things easy, we'll use Red Hat OpenShift Streams for Apache Kafka. It's available in your Red Hat console under Application Services. Let's go ahead and create a new Kafka instance. As that's creating, you can see here that we've been given a bootstrap server. Let's copy that down and make sure we have it for use later. We can also go ahead and create our service account; copy the client ID and secret to a safe and secure location for use later. Once our instance is created, we can set the permissions for our new service account. Here, under Access and then Manage access, let's add the right permissions for the service account we created.

Now that permissions are set, we can create topics for both our notebooks and our object detection application. Here I've created three topics, images, notebook-test, and objects, each with one partition, for use inside our application and from our notebooks. Before we leave, gather the connection details if you didn't already: the bootstrap server, the service account client ID and secret, and the three topic names.

Now that we've created a Kafka instance, we can connect to it from JupyterHub. Let's restart our notebook server with that Kafka connection information stored in environment variables. First, I'm going to stop the server by going to File, then Hub Control Panel, and then stopping my server. Now I'll restart the server and add those environment variables. The first environment variable, KAFKA_BOOTSTRAP_SERVER, is your Kafka bootstrap server address. The second, KAFKA_USERNAME, is your service account client ID. The third, KAFKA_PASSWORD, is set to the client secret from your service account. Then click Start.

Now that we've restarted our notebook server with the right variables to connect to Kafka, let's clone the repository with some sample notebooks and our Kafka application. Go to the Git plugin and clone a repository; the URL is available in your instructions. Let's take a look at our new project. Here you can see some sample notebooks. In the sample consumer, you can see that we're installing our dependencies, and that instead of Flask we're using kafka-python. From here we can listen for messages, and in the producer we can send messages. Notice that nothing in the prediction.py file has changed; it's the exact same prediction code from the REST API. The application around it, however, has changed: we now have a consumer that takes in images, predicts objects, and publishes the results on the objects topic.
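To make that concrete, here is a minimal sketch of the pattern the sample notebooks follow. It assumes the kafka-python package, the environment variables we just set, and SASL/PLAIN authentication over TLS, which is a common setup for OpenShift Streams service accounts; the exact code and topic spellings live in the cloned repository.

```python
# A minimal sketch (not the repository's exact code) of the sample
# producer and consumer notebooks, using kafka-python. Assumes the
# environment variables set when the notebook server was restarted;
# the topic name follows the walkthrough and may be spelled
# differently in the real notebooks.
import os
from kafka import KafkaConsumer, KafkaProducer

kafka_config = dict(
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP_SERVER"],
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
)

# Producer notebook: send a test message to the notebook topic.
producer = KafkaProducer(**kafka_config)
producer.send("notebook-test", b"hello from the notebook")
producer.flush()

# Consumer notebook: block and print each message as it arrives.
consumer = KafkaConsumer("notebook-test", **kafka_config)
for message in consumer:
    print(message.value)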
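The consumer application itself follows the same shape. Conceptually it's a loop like this hypothetical sketch, where `predict` stands in for whatever function prediction.py actually exposes:

```python
# Hypothetical shape of the Kafka consumer application: read images
# from one topic, run the unchanged prediction code on each one, and
# publish the detected objects to another topic.
import json
import os
from kafka import KafkaConsumer, KafkaProducer
from prediction import predict  # assumed entry point in prediction.py

kafka_config = dict(
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP_SERVER"],
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
)

consumer = KafkaConsumer("images", **kafka_config)
producer = KafkaProducer(**kafka_config)

for message in consumer:
    detections = predict(message.value)  # run object detection on the image bytes
    producer.send("objects", json.dumps(detections).encode("utf-8"))
```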
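In a moment we'll head back to the OpenShift console and create a secret holding these same connection details, so the deployed applications can connect the way the notebooks do. The instructions contain the real manifest; as a rough illustration, it has this general shape (the secret name and key names here are assumptions):

```yaml
# Illustrative only; use the manifest from the workshop instructions.
apiVersion: v1
kind: Secret
metadata:
  name: kafka-connection   # assumed name
type: Opaque
stringData:
  KAFKA_BOOTSTRAP_SERVER: <your-bootstrap-server>
  KAFKA_USERNAME: <your-service-account-client-id>
  KAFKA_PASSWORD: <your-service-account-client-secret>
```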
Now let's move on and create the application. Back in the OpenShift developer console, we're going to create a secret so that our front-end application and our new Kafka consumer can connect to our Kafka instance. Make sure you're in the Developer perspective and in the correct project, then go to Secrets and create your new secret from YAML. Paste in the secret as guided by the instructions, filling in your own connection information.

Once your secret is created, let's add it to your front-end application. Then let's create our Kafka application from Git. We won't need a route to this application, since it's a consumer and isn't serving an API. What we will need to do is inject our secret so it can connect to Kafka. So let's go back to our secret, add it to our Kafka consumer, and wait for it to finish building.

When your consumer is done building, go back to your front-end application and click on your route again. Now, from your application, switch to video mode and record.

Thanks for watching the workshop; I hope you enjoyed it. To find out more, visit the OpenShift Data Science page on redhat.com. There you'll find more information about the product, along with more learning paths and workshops.