A data scientist isn't just someone who trains models; she also turns data into business insights. Businesses don't have a one-size-fits-all method for machine learning systems. A well-architected model may be useful for gaining insights into data, but often, to deliver business value, models have to be deployed as part of a larger intelligent application that's constantly learning from data and making inferences on dynamic data streams. I used Open Data Hub to seamlessly bring my models into production. The tools this product gives me allow me to deploy my models without having to be a front-end developer.

I'll start my data science workflow with a model deployed through the Seldon operator. This hosts my model outside of a Jupyter notebook, making it easily accessible both to me, the data scientist, and to the rest of the team, including software engineers and front-end developers. I'm able to interact with this model through JupyterHub. I can inspect the explanations being made and the different outputs being created, and all of these outputs, plus other model metrics, are scraped and stored in Prometheus, a time-series database. From Prometheus, Grafana takes these metrics and visualizes them in easy-to-understand dashboards. It may look like there's a lot going on on this slide, but the beauty of Open Data Hub is that I can deploy all of these different applications at the click of one button.

We'll start in OpenShift with the Open Data Hub operator already installed. To see the entire suite of tools offered by this operator, we can go to the dashboard right here. These are all installed automatically when you deploy the Open Data Hub operator, so you can see there are a lot of different options if you're looking for a more robust workflow.

As a data scientist, most of my work is done within Jupyter notebooks, so let's start there to see what model serving looks like from a data scientist's point of view. When you first launch the notebook, you can see a very typical data science workflow: I'm installing libraries, I'm loading data, and I'm establishing a gateway with my Seldon client. This establishes a connection with the pod that's serving my model. The gateway you see there is the same gateway that software engineers or front-end developers would use to build production-ready applications on top of the model I've created. After establishing this connection, I'll use a predict function to get a prediction from my image classifier. After decoding the predictions, I can see my model happily says that this photo of a cat is indeed, with 84% certainty, a cat (see the first sketch below).

While a data science workflow normally ends when a model is built and validated, it's still important to keep an eye on the model to make sure it's continuing to stay healthy. Grafana reads this data from Prometheus, so I can build dashboards to easily monitor the model's health and performance. Looking inside Prometheus, we can do a quick visualization of the data by querying how many times our Seldon model's API has been requested recently (see the second sketch below). Prometheus also supports more complex queries if you need them. Different measurements of model health can be easily visualized using Grafana's dashboards. Grafana is where we're really able to visualize everything that's happening under the hood of our model. Machine learning doesn't break loudly, and code can keep running even with severe model degradation.
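The first sketch below shows roughly what that notebook interaction looks like. It's a minimal illustration, not the exact notebook from the demo: the gateway host, namespace (`odh-demo`), deployment name (`image-classifier`), and label list are all hypothetical, and it calls Seldon Core's v1 REST protocol over plain HTTP rather than going through the Seldon client library.

```python
import numpy as np
import requests

# Hypothetical route to the Seldon deployment; the host, namespace,
# and deployment name are illustrative placeholders.
GATEWAY = "http://istio-ingressgateway.example.com"
ENDPOINT = f"{GATEWAY}/seldon/odh-demo/image-classifier/api/v1.0/predictions"

# A preprocessed image, flattened into the "ndarray" payload that
# Seldon Core's v1 protocol expects inside a "data" envelope.
image = np.random.rand(1, 224 * 224 * 3).tolist()  # stand-in for a real cat photo

response = requests.post(ENDPOINT, json={"data": {"ndarray": image}})
response.raise_for_status()

# Decode the prediction: the classifier returns one probability per class.
probabilities = response.json()["data"]["ndarray"][0]
labels = ["cat", "dog", "other"]  # hypothetical label set
best = int(np.argmax(probabilities))
print(f"Predicted {labels[best]} with {probabilities[best]:.0%} certainty")
```

This same endpoint is what a software engineer or front-end developer would call from a production application, which is exactly why serving the model behind a gateway matters.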
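For the second sketch, here's a minimal example of running those health queries against Prometheus's HTTP API from Python. The Prometheus URL is a placeholder, and the `seldon_api_executor_server_requests_seconds_*` metric names are an assumption based on Seldon Core's standard executor metrics; check your own deployment's `/metrics` endpoint for the names it actually exposes.

```python
import requests

PROMETHEUS = "http://prometheus-operated:9090"  # placeholder in-cluster service

# Example PromQL strings. The first answers "how often has the model's API
# been requested recently?"; the other two are the kinds of panels you might
# build in Grafana. Metric names are assumptions (see above).
queries = {
    # Requests per second over the last 5 minutes.
    "request_rate": "rate(seldon_api_executor_server_requests_seconds_count[5m])",
    # 99th-percentile API latency.
    "p99_latency": (
        "histogram_quantile(0.99, "
        "rate(seldon_api_executor_server_requests_seconds_bucket[5m]))"
    ),
    # Global success rate: share of requests that returned HTTP 200.
    "success_rate": (
        'sum(rate(seldon_api_executor_server_requests_seconds_count{code="200"}[5m]))'
        " / sum(rate(seldon_api_executor_server_requests_seconds_count[5m]))"
    ),
}

for name, promql in queries.items():
    reply = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql})
    reply.raise_for_status()
    print(name, reply.json()["data"]["result"])
```

Grafana panels are essentially these same PromQL expressions evaluated on a refresh interval, so anything you can query here, you can chart there.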
Model degradation can be difficult to diagnose, but Grafana allows us to measure model health. In this dashboard, we can see requests being sent and how the model is responding. If there were a distinct change in model health, whether that be API latency, global success rates, or any other custom metric you choose to build, you could clearly see that the model is not performing as expected. As a data scientist, my time is valuable, and I should be able to spend it doing data science without also needing the deep knowledge of a cloud architect or a front-end developer. Open Data Hub simplifies the end-to-end machine learning workflow and gives me the tools I need to put my model into production. Thank you for watching.