Hi, I'm Chris Chase, an engineer at Red Hat. A common pain point in data science projects is integrating that work into application development. In today's workshop, we're going to integrate an AI/ML model into OpenShift application development. We'll start off with some notebooks, deploy into OpenShift, and then we're going to integrate it with Apache Kafka. I hope you enjoy it and find it useful.

Here we have the OpenShift Data Science dashboard. Under the Applications, Enabled tab, we'll see JupyterHub. From there, we'll launch. This is the JupyterHub spawner, and it will create a new notebook server for us. For this tutorial, we'll be using the TensorFlow image, which has the right set of libraries; if you choose a different image, you won't have the libraries this workshop needs. For container size, Small should be sufficient. Then we start. If you've entered some settings incorrectly and want to start over, you can go to File, Hub Control Panel, then Stop My Server. Once it's stopped, you can go ahead and restart the process.

Let's get started with some sample code in your JupyterHub environment. We've created a repository in Git with some sample notebooks; the URL is available in the instructions. To clone it, click on the Git icon, choose Clone a Repository, paste the HTTPS URL, and then clone. The files then appear in your file tree.

Let's get started with Jupyter notebooks, an interactive environment for running Python. Open up the zero notebook. To run a cell, select it and press the play button; the result shows up underneath. You can also hit Shift+Enter. To restart and run everything from the beginning, hit the fast-forward button. To experiment on your own and start from scratch, hit the plus button and create a new notebook.

Now that you know how a notebook works, open up the explore notebook so you can see an object detection model at work. Follow the instructions, and you should have a working object detection model making predictions for you and finding objects in pictures.

The next few notebooks deal with serving our model as an API using Flask. First, we'll figure out what dependencies we need, and then we'll extract our prediction code into a file. Once we've done that, we can run our Flask application and test it (a rough sketch of such a service appears at the end of this transcript).

Thanks for watching. Check out part two of the workshop to find out how to build a container image and deploy it to OpenShift.
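For reference, here is a minimal sketch of what a Flask prediction service like the one described above could look like. This is an illustration under assumptions, not the workshop's exact code: the module name prediction.py, the predict() helper, the routes, and the port are all hypothetical stand-ins for whatever the sample notebooks actually extract.

```python
# app.py - a minimal sketch of a Flask service wrapping an object detection model.
# Assumes the notebook's prediction code was extracted into prediction.py with a
# predict(payload) function; both names are hypothetical.
from flask import Flask, jsonify, request

from prediction import predict  # hypothetical module extracted from the notebook

app = Flask(__name__)


@app.route("/status", methods=["GET"])
def status():
    # Simple liveness check so OpenShift (or you) can verify the app is up.
    return jsonify({"status": "ok"})


@app.route("/predictions", methods=["POST"])
def create_prediction():
    # Expect a JSON body (for example, a base64-encoded image), run the model,
    # and return the detected objects as JSON.
    data = request.get_json(force=True)
    return jsonify(predict(data))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

You could then test it from a notebook cell with something along the lines of requests.post("http://localhost:8080/predictions", json={...}) and inspect the returned JSON, which is the kind of check the testing notebook performs before moving on to containerizing the app in part two.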