All right. Hi. My name is Manoj Kumar. I'm from IBM, representing Power Systems. I know some of you know that OpenShift is available on other architectures; I'm here to talk about OpenShift on Power from a systems perspective, not about any of the other software you'd hear about IBM doing on top of OpenShift. I'm the chief engineer for OpenShift on Power. Since IBM is acquiring Red Hat, I'm required to put this disclaimer up: everything I'm talking about here is work we've been doing with Red Hat. We've been a great partner of Red Hat's for over 20 years, and we've had RHEL running on our systems for a long time. Everything I'm going to talk about is pre-acquisition; nothing I'm going to speak about here is related to the acquisition.

Briefly, what I want to cover: what Power, or Power Systems, is, just to give you some background; the Red Hat OpenShift on Power offering that we brought to our platform through that partnership; a couple of use cases, one for OpenShift for data-intensive workloads and one for OpenShift for AI and ML on Power; and then a demo, if I can squeeze that in.

So what is Power, or Power Systems? Well, it powers the largest supercomputer in the world. Summit delivers 200 petaflops. This is something we built in collaboration with Red Hat, NVIDIA, and Mellanox: it has about 27,000 NVIDIA GPUs, about 9,000 Power9 servers, and 250 petabytes of storage. Power also runs the number two system on the supercomputer list, so both number one and number two, as of November last year, run on our Power servers. But Power is not just for HPC. We build a lot of scale-out servers, including the GPU-equipped servers that were the building block of those supercomputers.
We make that available in a 2U form factor, so you can have a small version of that supercomputer at your desk or in your data center. We also build scale-up servers, 2U and 4U, that typically run our PowerVM hypervisor. The ones on the left typically run bare metal or KVM; the one on the right runs our own hypervisor, PowerVM. We worked with Red Hat to bring RHEL to all of our servers, and as a result, OpenShift runs on all of these servers as well.

So, specifically about OpenShift on Power and the partnership I touched upon: we have two releases out, OpenShift 3.10 and 3.11. 3.10 became available in October of last year, and 3.11 followed a couple of months later in December. We're working with Red Hat to bring 4.x to Power as well, but we weren't able to make the initial release, so 4.2 will be the first 4.x release available on Power. This picture shows a blow-up of the kinds of systems it runs on. On the physical side, in bare-metal mode, you'd run our AC922, the 2U server with dense compute and NVIDIA GPUs. You can run our scale-out servers in either KVM or bare-metal mode. And our enterprise-class servers, as I mentioned, are typically all virtualized.

I'll talk a little about a couple of use cases for why you might choose Power for your workloads. The first is a MongoDB workload with a Node.js application front end. As you can imagine, if you leave the conference at the end of the day and head to the Gothic Quarter to look for restaurants, this workload simulates that: people looking up restaurants in a particular neighborhood.
The OpenShift configuration for this is something simple we set up in our lab, with a front-end load generator simulating a whole bunch of people taking out their mobile devices and searching for restaurants. The control plane is a single master node running RHEL 7.6 on a bare-metal server. The back end, the worker nodes, runs on our PowerVM scale-up servers, still in a small form factor, with two VMs. The goal was to see what kind of container density you can get on our servers, so you can think of it as a pod-scaling task: just scaling up the number of pods.

And what we found is that you can get twice the number of containers. You can run 148 containers of that combined MongoDB and Node.js application on that single Power server with the two worker nodes; the server was carved up into two VMs. For that geospatial workload, the TPS is a little better, but the real story is the containers per core. You get about 7.4 containers per core on Power, whereas on a similarly configured, similarly priced Intel two-socket server (the price on that Intel server was actually slightly higher), you get about 2 containers per core. So the ratio is about 3.6x the containers per core. Any workload like that, one that scales out and that you can pack in densely, typically makes sense to run on Power. That's my first use case, from work we were able to do in our labs.

The next few slides are about AI and ML. We work on a lot of technologies at IBM, bringing in a lot from our research groups: vision and vision processing, automatic hyperparameter optimization, elastic distributed training, training across a cluster of GPUs. I think someone touched on that earlier, and someone actually showed a video about that as well.
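As a rough sanity check on the container-density figures above, the per-core math works out as sketched below. The talk only quotes 148 total containers, about 7.4 containers per core on Power, and about 2 per core on Intel, so the 20-core total for the Power server is an illustrative assumption chosen to match the quoted density, not a number from the talk:

```python
# Sanity-check the container-density figures quoted in the talk.
# Assumption: 20 total cores across the two worker VMs on the Power
# server (chosen so 148 containers works out to the quoted 7.4/core).

power_containers = 148           # MongoDB + Node.js pods across both worker VMs
power_cores = 20                 # assumed core count (not stated in the talk)

power_density = power_containers / power_cores   # containers per core on Power
intel_density = 2.0              # quoted for a similar Intel two-socket server

ratio = power_density / intel_density

print(f"Power density: {power_density:.1f} containers/core")   # 7.4
print(f"Advantage over Intel: {ratio:.1f}x")                   # 3.7
```

The exact ratio under these assumptions is 3.7x; the talk rounds to "about 3.6x", which is consistent with the per-core figures themselves being rounded.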
And you can typically train much faster on Power servers; I'll go through a couple of use cases. We make a whole bunch of open-source frameworks available on Power: Caffe, TensorFlow, PyTorch, and so on. Typically, you get somewhere between a 3x and 4x improvement running those frameworks on Power, mostly because we have special technology on Power that lets the GPUs coherently access all of system memory. So you're not constrained to doing your training in just the limited amount of memory that each individual GPU has, and you can use something unique we've built on our systems called Distributed Deep Learning. That's really where those advantages come from.

Let's go through one use case. Our PowerAI systems with PowerAI Vision are being used at Hong Kong International Airport. This is vision-based: a whole bunch of cameras, much like what you heard from China Mobile earlier. The use cases there are things like crowd and queue management, detecting unattended objects, intrusion detection, and so on. That runs on Power with PowerAI Vision.

We also have a bunch of Power servers in the Mass Open Cloud, and the Mass Open Cloud team joined me at Red Hat Summit a couple of weeks ago. The use case they shared with us is that some of the researchers at MIT want access to the same kind of computing that's available in that supercomputer. The Mass Open Cloud team was able to show them OpenShift and the value proposition of why you don't need a dedicated server: with OpenShift, you can dynamically provision a pod, get your GPU-based workload running, and then decommission that workload and free up the GPU for someone else. So they validated our thesis that GPU-based systems and OpenShift are a good combination, and we're continuing to work with them. So, how am I doing on time?
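The provision-run-decommission flow described here maps onto a standard Kubernetes/OpenShift GPU resource request. A minimal sketch of such a pod spec, assuming the NVIDIA device plugin is installed so `nvidia.com/gpu` is a schedulable resource; the pod name, image, and entry point are placeholders, not anything from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job                      # hypothetical name
spec:
  restartPolicy: Never                        # run once, then stop
  containers:
  - name: train
    image: tensorflow/tensorflow:latest-gpu   # placeholder training image
    command: ["python", "train.py"]           # placeholder entry point
    resources:
      limits:
        nvidia.com/gpu: 1                     # claim one GPU for this pod
```

When the pod completes and is deleted, the GPU returns to the schedulable pool, which is exactly the "no dedicated server" value proposition the Mass Open Cloud team demonstrated.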
Oh, excellent. OK. [Moderator: I'll let you know when you need to wrap up.] A couple of other things I'll touch on: IBM also has something called the Model Asset Exchange. The idea behind it is that AI, or machine learning models, should be openly available. So we have a bunch of trained models that we've upstreamed from IBM, available through the Model Asset Exchange: things like a facial age estimator, an image caption generator, object detectors, and so on.