Here at Red Hat, we want OpenShift to be the platform data scientists use with their favorite tools to create real-world AI/ML solutions. I'm Chris Chase, a developer at Red Hat. Today I'm going to give you a preview of Red Hat OpenShift Data Science, a managed cloud service based on the open source project Open Data Hub. The service allows a data scientist to rapidly develop, train, test, and deploy machine learning models. We'll take a high-level overview, then see how a data scientist might incorporate their work into a development pipeline.

When Red Hat OpenShift Data Science launches later this year, it will be available through Red Hat Marketplace as an add-on to our OpenShift cloud services, which means you will need an OpenShift cluster to get started. Once purchased, the OpenShift Data Science tile appears in your cluster's Add-ons tab, and from there it's a simple one-click installation. The link to the application is available under OpenShift Managed Services in the application launcher, and it opens the OpenShift Data Science dashboard.

On the dashboard, we can immediately see the enabled applications: in this case, JupyterHub, which is managed and supported by Red Hat. We can go directly to JupyterHub from here or start the tour to learn more. The Explorer link takes you to a variety of available applications, including Red Hat managed cloud services, partner managed services, and self-managed software. Click on any of the tiles to learn more about each application and how to get started. The Resources link offers tutorials and documentation on how to use OpenShift Data Science, including partner software. Quick Starts are embedded in the application itself and offer an inline tutorial experience. Here, we can see the tutorial for creating a Jupyter notebook.

Let's go ahead and get started in JupyterHub. First, I can pick a notebook image from the list of available images. I'm going to pick the TensorFlow image, so my notebooks will automatically include updated versions of TensorFlow and the other packages I'll need for my project. I'm also going to pick the resource settings for my notebook, including CPU, memory, and GPU. The Environment Variables section is useful for injecting dynamic information that I don't want to save in my notebook; in this case, I'm going to be reading and writing to S3-compatible storage, so I'll inject my credentials. Now I can click to start the notebook server.

With the notebook server running, we're ready to experiment. Perhaps I want to connect to S3 storage and explore some data. If all I want to do is experiment, this is great. But if I want my work to be part of a development project, I'd like to set that up from the beginning.

For this project, we'll serve a model using a RESTful API for predictions. An application developer has already created a source-to-image (S2I) Python project in a Git repository. It includes some starter notebooks and a Python file for my prediction, and it will build and deploy on the cluster every time changes are pushed. This way, when a data scientist updates the model, the service gets redeployed with the new version. If you're interested in the sample template for an S2I Python project, it's available in the opendatahub-io GitHub org. This is a very common OpenShift development scenario, and we want the data scientist to be part of it from the very beginning.

Here, you can see the front end of the application. And here is the REST service that will be called from the front end. The model isn't served yet, but a basic API is available as part of the initial setup. So now, I'll go ahead and clone the project. Here we can see the starter notebooks where I can get to work. In addition, there's a small amount of application code that will serve my model using a common web framework. Dependencies live in requirements.txt, which contains everything needed to serve the application; I'll also use the same file in my notebooks.

Let's fast-forward a bit. Now I've got a TensorFlow model I'm happy with. I'm going to update my dependencies in the requirements.txt file, and I'm going to put my prediction function in this prediction.py file. I can test my Flask app locally if I'd like.
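To make that concrete, here's a minimal sketch of what a prediction function and its Flask wiring might look like. The model filename, route, and payload shape below are illustrative assumptions on my part, not the actual code from the sample template:

```python
# A minimal sketch of a Flask service wrapping a TensorFlow model.
# The model path, route, and payload shape are assumptions for illustration.
import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the saved model once at startup so every request reuses it
model = tf.keras.models.load_model("model.h5")

def predict(data):
    """Run the model on a list of feature values."""
    features = np.array(data, ndmin=2)
    return {"prediction": model.predict(features).tolist()}

@app.route("/predictions", methods=["POST"])
def create_prediction():
    # Expect a JSON body like {"data": [1.0, 2.0, 3.0]}
    payload = request.get_json(force=True)
    return jsonify(predict(payload["data"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```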
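A local smoke test could then be as simple as posting a sample payload to the running app, again assuming the hypothetical route and port from the sketch above:

```python
# A quick local check against the running Flask app
import requests

response = requests.post(
    "http://localhost:8080/predictions",
    json={"data": [1.0, 2.0, 3.0]},
)
print(response.status_code, response.json())
```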
When I'm confident it works, I'll commit my changes and push them to Git. As you can see, because of the configured webhook, a new build has kicked off with my new model. Once that build deploys, the application is updated with the new version.

Let's go ahead and try out the app. There we go: we've successfully served our TensorFlow model in a REST service.

That was a quick introduction to Red Hat OpenShift Data Science: what it looks like, how to use it, and how to fit your work into the development pipeline. While that was one way to use OpenShift Data Science, our goal is to let data scientists use the tools they want to use, including open source tooling, partner software, and other Red Hat services. I hope you found this tour helpful, and I cannot wait to see what you do with the Red Hat OpenShift Data Science platform.