Now that we have extracted our prediction function and created an application, we can build a container image and deploy it to OpenShift. Before we move on, a couple of key files. Here is our application entry point, the wsgi.py file, which calls the prediction function. Here is the updated prediction function with the object detection. And here is the dependency list, covering both the development and data science packages our application needs to run.

So now on to the OpenShift console. If you're in the OpenShift Data Science console, you can get back to the OpenShift console using the application switcher, here. Once you're back in the OpenShift console, make sure that you have the Developer perspective selected and that you're in the correct application namespace. From there, we can go ahead and add a new application from Git. Once you enter the location of our Git repo, OpenShift detects a Python application. I'm going to customize the application name so that it's more relevant, both for the application and for the service itself. I'll leave it as a Deployment with a route, and then simply hit Create.

As you can tell, our application includes a Deployment and a BuildConfig. The build will take a few minutes. Once it is done, you'll see a running pod. Here you can tell the build is completing and the pod is starting. And there you have it: our application is now running.

Let's go ahead and test the deployment running in OpenShift. From your Deployment, under Resources, find the route and copy it. Now, inside your test Flask notebook, set your route, then run the notebook.

Now that we've got a deployed service, let's have some fun and deploy an application that calls it. Before we move on, let's make sure we can call the service correctly. Under our Deployment, under Services, we can see our service name and port. Now, let's create a front-end application the same way we created our service: Add from Git. The URL is in the instructions.
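The wsgi.py entry point mentioned at the start of this section might look roughly like the sketch below. This is a minimal illustration, not the workshop's actual code: the route path, function names, and payload shape are all assumptions, and the `predict` body here is a stand-in for the real object-detection function.

```python
# wsgi.py -- a minimal sketch of the application entry point.
from flask import Flask, jsonify, request

application = Flask(__name__)


def predict(payload):
    # Stand-in for the object-detection prediction function:
    # the real version decodes the image in the payload and
    # runs the model, returning the detected objects.
    return {"detections": []}


@application.route("/predictions", methods=["POST"])
def predictions():
    # Read the JSON body sent by the caller and return the
    # prediction result as JSON.
    payload = request.get_json(force=True)
    return jsonify(predict(payload))
```

A client would then POST a JSON payload to the `/predictions` route, which is the kind of call the test notebook makes against the deployed route.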
As you can tell, it has recognized a Node.js application and has put it in the same application group. We'll use a Deployment, and this time we're going to configure a secure route: we'll pick Secure Route, Edge termination, and Redirect for insecure traffic. Now, so that our application knows where to call our service, let's add an environment variable. We're going to create a new variable that holds the URL to call for predictions. Then we hit Create. Here's our new application, which will take a few minutes to build. When the build is done, you can click on the route to open the application. Thanks for watching. Check out part three of the workshop, where we integrate the application with Apache Kafka.
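The front end itself is Node.js, but the pattern the environment variable enables is the same in any language: read the prediction URL from the environment and POST payloads to it. A Python sketch of that pattern, assuming a variable named `PREDICTION_URL` and a service reachable at `object-detection-app:8080` (both names are hypothetical, standing in for the service name and port shown under the deployment's Services panel):

```python
import json
import os
from urllib.request import Request, urlopen

# PREDICTION_URL is set as an environment variable on the Deployment;
# the fallback uses the in-cluster <service-name>:<port> form.
# Both the variable name and the service name here are assumptions.
PREDICTION_URL = os.environ.get(
    "PREDICTION_URL", "http://object-detection-app:8080/predictions"
)


def request_prediction(payload):
    """POST a JSON payload to the prediction service and return its reply."""
    req = Request(
        PREDICTION_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping the URL in an environment variable means the same image works in any namespace or cluster; only the Deployment configuration changes, not the code.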