Here at Red Hat, there are some really cool things happening in the open source community around data science and the tech stacks that support it. I wanted to make a quick video to highlight one part of that: running Jupyter Notebooks on OpenShift, in containers that can leverage the underlying GPU hardware.

If you're not familiar with Jupyter Notebook, it's essentially an open source web application that lets you create and share documents containing live code, in Python or R, among other languages. You can put in equations, do visualization, and add narrative text around all of that. Notebooks are used for a lot of different purposes: data transformation and cleaning, numerical simulation, statistics, visualization, and machine learning.

When you run those things on OpenShift, Operations loves it, because you get container management capabilities, resource isolation, and quotas for managing all of this. But data scientists love it too, because it puts the horsepower an OpenShift cluster can provide directly in their hands, on demand.

So without further ado, let's jump onto an OpenShift cluster and see it in action. Right here, I've got a basic Amazon Web Services cluster with a GPU node, a p3.2xlarge. The NVIDIA GPU Operator does a lot of the heavy lifting of automating driver management and everything else you would otherwise have to do manually, which is hard, to make that GPU available to containers.

Like I mentioned, JupyterHub is what I want to focus on, even though there's a whole bunch of other stuff inside the Open Data Hub community that we could look into. JupyterHub looks like this: you come in, and you've got a file system-based list of your notebooks. I've got three of them here, so we'll open this one right here.
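Once the GPU Operator has exposed the node's GPU to containers, a notebook running in a pod can confirm that the device is visible from inside its container. This is a minimal sketch, assuming a TensorFlow 2.x image like the one used in the demo:

```python
import tensorflow as tf

# Ask TensorFlow which GPU devices are visible inside this container.
# With one GPU requested from JupyterHub, this list should have one entry.
gpus = tf.config.list_physical_devices("GPU")
print("Num GPUs available:", len(gpus))
for gpu in gpus:
    print(gpu.name, gpu.device_type)
```

If the list comes back empty, the container most likely wasn't scheduled with a GPU resource request, which is exactly what the first cell of the demo notebook checks.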
This one I wrote to showcase some common TensorFlow math you might be doing, and to show the difference between running it on a GPU versus a CPU.

We'll take the first cell here and click Run. This is a basic command that asks TensorFlow, "hey, how many GPUs do I have in this notebook?" It's telling me one, which makes sense, because when I created this JupyterHub instance, I only asked for one GPU.

This next one is a little more complicated. We're going to run a matrix multiplication operation: take a 1,000 by 1,000 matrix of random numbers and multiply it by itself 100 times. We'll do that on the CPU, then do the exact same thing on the GPU. And you'll see it came back with results: the CPU took five seconds to do this math, and the GPU took 0.02. So in this case, with 100 iterations of this 1,000 by 1,000 matrix multiplication, the GPU gave us a 208 times performance boost, which is pretty awesome.

Here's another example of a machine learning-type activity: spatial convolution over images. It's the same sort of comparison. We'll run 30 loops of this math on the CPU, then run the exact same thing on the GPU, and compare the results to see what the performance differences are. And here we go: the CPU took 24 seconds, and the GPU took 0.2. So in this case, still over a 100 times speedup by using the GPU to do this work.

So that's a quick demo of using JupyterHub on OpenShift with a GPU. I hope you found it valuable. If you want to find out more, go to OpenShift.com and look for the AI/ML topic, or you can reach out to me on my Twitter. Thanks.
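The matrix-multiplication comparison described above can be sketched roughly like this. This is not the exact notebook from the demo, just a minimal version of the same idea, assuming TensorFlow 2.x; the `benchmark` helper and the matrix size and iteration count are chosen to mirror the numbers mentioned in the video:

```python
import time

import tensorflow as tf


def benchmark(device, n=1000, iterations=100):
    """Time `iterations` multiplications of an n x n random matrix on `device`."""
    with tf.device(device):
        m = tf.random.uniform((n, n))
        start = time.perf_counter()
        for _ in range(iterations):
            m = tf.linalg.matmul(m, m)
        _ = m.numpy()  # force execution to finish before stopping the clock
        return time.perf_counter() - start


cpu_time = benchmark("/CPU:0")
print(f"CPU: {cpu_time:.2f} s")

# Only attempt the GPU run if a GPU is actually visible in this container.
if tf.config.list_physical_devices("GPU"):
    gpu_time = benchmark("/GPU:0")
    print(f"GPU: {gpu_time:.2f} s (speedup: {cpu_time / gpu_time:.0f}x)")
```

Placing the work with `tf.device(...)` is what lets the same math run on either processor; the pull to the host with `.numpy()` matters because TensorFlow dispatches GPU work asynchronously, so without it the timer could stop before the GPU has finished.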