Hi, my name is Marcel Mutran and I'm a Distinguished Engineer in the CTO office for IBM LinuxONE and Red Hat Synergy for IBM Z. Today, Christy and I are here to talk to you about OpenShift on the IBM Power and IBM Z platforms. Christy, did you want to quickly introduce yourself? Sure. My name is Christy Norman and I work in the Power organization. I'm a software engineer and work closely with Red Hat and our OpenShift team at Red Hat as a partner engineer. That's about it for me. Thanks. Great. So IBM is really excited about our partnership with Red Hat. It's a great thing to see these two organizations come together, and we're coming together to build what we believe is the next generation of hybrid cloud platform. With that, we're bringing a host of middleware and software capabilities to bear under what we're calling our Cloud Paks. These are designed around patterns in application development, data management, integration, automation, and so on, to address industry needs and patterns such as blockchain, IoT, and quantum. And all of this is being built, of course, on top of the OpenShift platform. OpenShift is a pervasively available platform. It runs in all the big public clouds, such as AWS and Google Cloud. It also runs in your private cloud, and within your private cloud it runs on key infrastructure platforms like IBM Z and IBM Power, which are really at the core of many enterprises' IT infrastructure. These platforms have formed the center of truth around which a lot of enterprises have built and grown their IT capabilities and solutions.
And so when we think about OpenShift being pervasively available and being the foundation of this broader hybrid cloud platform, inherent in that is the ability to fold into that platform existing and new investments on IBM Z and IBM Power systems. And so people ask, what does OpenShift look like when it's running on Z or Power? The answer is that it looks exactly the same as the OpenShift you might be familiar with running on Intel and other platforms. To make that point, here we've got a view of the OpenShift dashboard on IBM Z and on Intel, and I dare you to figure out which one is which. They look identical. And that's the whole point: the experience needs to be the same whether you're running on a public cloud, on Intel in your data center, or on Z or Power. It's the same experience, the same consumption model, the same developer experience. You really can't tell the difference. And if you want to find out more and give it a try, you're welcome to go to our public LinuxONE Community Cloud. We allow anybody in the world to gain access to those systems and provision IBM Z Linux instances, so you can take OpenShift on IBM Z and LinuxONE for a test drive and see it in action for yourself. The next question people ask is, why would you do this? Why would you run an OpenShift-based solution on Z or Power? There are some common patterns around the adoption of OpenShift on Z. In many circumstances, our enterprise clients have large investments here; they've spent decades building out capabilities on these platforms. So there's an obvious desire to modernize, to take those investments and move them to the cloud native experience.
Also, because these systems really form the center of data gravity in the enterprise, applications that are being developed or brought to bear need to be near the data they consume. That data gravity pulls cloud native applications close to the data, so they need to coexist, or be co-located, with the data that sits on IBM Z and Power. And then there are the qualities of service of these platforms. These platforms have been engineered to be that center of truth, that resilient platform the rest of the IT infrastructure can depend on: resilience, scalability, high availability, security. For any application with that kind of quality-of-service need, it makes a ton of sense to host it on a platform, on a piece of hardware, from which it can inherit those capabilities. And so that's another great reason clients are bringing their OpenShift-based solutions to Z and to Power. Now, when we look at IBM Z specifically, there are 60 years of engineering that have gone into these systems at the center of enterprises. As IT has evolved over the last six or seven decades, IBM Z has evolved as well. It's a modern platform designed for the modern enterprise, designed around a lot of the needs, requirements, and concerns of the modern enterprise. These systems are designed for very large volume data serving and transaction processing. They're designed to provide elastic, scalable, available, and resilient infrastructure where you host the data and the transactions that just cannot fail, that your business depends on, that are mission critical. And of course, a lot of that data is sensitive in nature and has a lot of business sensitivity around it as well.
And so the ability to build security and compliance around them in a very natural and holistic way is also fundamental to the way the platform is engineered. So why OpenShift on Z? Largely because all those qualities of service make as much sense in the cloud native world as they did in the traditional IT world. You still need them: there's nothing about a cloud native environment that removes the need for highly resilient, highly secure, highly scalable infrastructure underneath your cloud native platform. And to be honest, a lot of the investment running on top of these platforms needs to be able to be modernized and leverage the cloud native experience as well. We've seen three core patterns evolve around IBM Z and LinuxONE with respect to container technology. The first is essentially what we call cloud in a box: the ability to host massive amounts of scale on a very small physical footprint, bringing data center footprint and power envelope down significantly, to get to 2.4 million containers per box. That's a cloud-in-a-box experience where our customers can really get to very high density, low power, low physical footprint compute. That gives them business flexibility, so their business can grow very quickly and not be held back by their IT infrastructure. The second is digital transformation and modernization: where there's an existing investment that runs on Z, the ability to modernize that investment without completely disrupting it is significant for clients. And the third is the ability to do consolidation and data serving at scale and to bring down the cost of doing that.
We have studies that say you can bring down the cost of running OpenShift infrastructure by up to almost 48% by consolidating it onto that much denser, highly available footprint. And so here's an example of a client that has a significant investment in their core banking infrastructure. It's a COBOL application running in a CICS and Db2 on z/OS environment, and they wanted to extend that investment, but in a cloud native, containerized fashion. The idea was to incrementally grow the transactional workload. They tried to place that incremental workload on Intel-based infrastructure, but the application was essentially going to be very chatty: when you incrementally grow the business logic, there's a lot of back and forth between the old and the new. The latency to get to those Intel boxes was so high that they could not do it that way. What they recognized is that they needed to run OpenShift co-located on the same physical infrastructure where those COBOL, Db2, and z/OS assets existed. That allowed them to bring the latency down by almost an order of magnitude. It allowed them to improve the service level agreements for their transactions, bring down the batch times for their workloads, and get to a much more resilient outcome. And it positioned them very well for more modernization: reducing the risk around modernization, improving the time to value, and giving them a much better outcome. And so the other example I mentioned was the cloud-in-a-box example. On a single piece of infrastructure, on a single LinuxONE or IBM Z system, you can actually host five different hypervisors.
Each of those hypervisors can host thousands of Linux environments and, obviously, millions of containers. And you can grow, extend, or add new hypervisors or new Linux guests non-disruptively. You never have to bring down the system; you never have to reboot it. You can also move resources around, be it memory, I/O, or compute, transparently and non-disruptively. So for databases, which like to scale up, you can scale them up in a very holistic and natural way; for applications that like to scale out, you can scale them out just as naturally. And the networking is all software. There's no physical networking: everything is done in software, so it's highly resilient, it's very low latency, and it's highly secure, because the data never leaves the box. And so you get this really wonderful elastic system. When you throw in the fact that you can consume and experience all of that elasticity and scalability from an OpenShift value proposition, where, again, it's the same experience you would have on any other platform, you really get to something that's unique and distinct in the industry. That's why we like to call this a cloud-in-a-box experience. The last key point I wanted to highlight about IBM Z and LinuxONE is some of the differentiation the platform holds around confidential computing. We've been investing in trusted execution environments and confidential computing on this platform for over a decade now. Our ability to host container-based workloads inside the secure enclaves that are inherent in the IBM Z infrastructure is really differentiating the platform and bringing a value proposition around confidential computing that's unmatched elsewhere in the industry.
And that's really paid off in the IBM Cloud, where we have a set of services called the Hyper Protect cloud services. They offer database-as-a-service for PostgreSQL and MongoDB, compute through a virtual server offering, and crypto services on our industry-leading FIPS 140-2 Level 4 compliant HSMs. Nobody else in the industry has that level of certification on their HSMs. We do, and we offer them in our public cloud under our Hyper Protect Crypto Services. We offer it as a service, and that allows you to keep your secrets and your keys secure in a manner that nobody else can in a public cloud setting. And so all of this is really also part of why hybrid cloud, why cloud with IBM Z. Okay, so I've covered Z. I'm going to now pass the baton over to Christy, who will walk you through the Power part of the equation here. Thanks, Marcel. So my name is Christy Norman, and on the flip side at IBM, I work in the Power organization, and my team focuses on OpenShift as well as a lot of other open source projects that are, I guess you could say, the backbone of this new containerized world we live in. Over the past seven or eight years, we've done a lot of focused work to make sure that many of these projects, including Kubernetes and OpenShift, run really well on Power Systems. What I'm going to talk about is a bit about deploying OpenShift on Power, some ways to make that easier, and also a small sampling of what you can deploy on top of OpenShift. If you've ever done an on-prem OpenShift install, on any platform, actually, then you know it's a bit of work, and that makes us very thankful for our DevOps teams. So hopefully after I'm finished, you'll have the confidence and some tools to help you tackle this yourself. I'm not going to spend a lot of time on this first slide. Since this is a recorded video, you're free to pause and read the fine print.
The first thing I do want to highlight, though, is a feature that gives us a real boost with regard to Kubernetes, and that is our ability to dynamically adjust CPU capacity based on the current workloads. This is pretty advantageous with respect to OpenShift, as it gets you around a limitation that Kubernetes faces. Some of you might be aware that Kubernetes at startup allocates all of its resources to various components, which means, in essence, that you can't later add CPUs or memory. But on Power, the system can adjust the capacity of a CPU at the hypervisor level. So instead of having to add a worker node, or add CPUs and then reboot your node, the nodes already doing the work can simply be given the right amount of processing power. We've tested this out, and when it's enabled it really works pretty well to make sure that applications run more efficiently. The other thing I'll call out here is that if you have existing Power workloads, and this is similar to what Marcel mentioned about IBM Z, maybe you're running IBM i or AIX on your Power hardware already, then you can have Linux VMs running OpenShift alongside those same workloads, make those transactions a lot faster, and keep your data all in the same place. Your latency is extremely low, and it's more secure: you don't have to move your data outside the platform. So that's another advantage of Power. With those two notable features in mind, I'll move on to a bit of how to deploy OpenShift on Power onto your on-prem hardware. And I will say that most of these principles apply to all platforms, not just Power. To install OpenShift, you will be interacting with the OpenShift installer. It has two broad installation strategies: IPI, which stands for installer-provisioned infrastructure, and UPI, which is user-provisioned infrastructure.
IPI takes advantage of established cloud and other management infrastructure, like AWS or OpenStack, but UPI requires that you do a lot of setup in advance: you have to install your nodes, provision your operating systems, and set up some other services beforehand. Currently, the officially supported ways of installing on Power are both UPI. There are a couple of development-only IPI options that we've used in-house. There's a single-node cluster project that you would not want to use in production, because it puts everything into a single virtual machine running on a Linux box. It's really great for development, though; we've used it in-house for some testing, and it's been really good. It spins up pretty quickly. The other is a libvirt-based IPI, which is also helpful for dev and test and allows you to have multiple nodes on the same system. It takes advantage of services that libvirt provides to get the nodes running for the installer. The first supported UPI option we have for Power is to install directly on top of bare-metal systems. This one allows you to use dedicated servers for each of your nodes. The second one is not completely different, but it allows you to provision VMs, and then you run the OpenShift nodes on those VMs, or LPARs if you're more familiar with that terminology. For both of these installation methods, UPI requires that you provision and pre-install Red Hat Enterprise Linux CoreOS onto all of your nodes and set up services for your cluster before running the installer. There's a transient node that is used to bootstrap the installation, which is then destroyed post-installation. One of the most important prereqs is configuring external storage for your PVCs and your container registry. A few of the services required by the installer include a web server to host configuration files that are fetched by the CoreOS installer, a load balancer, and DNS.
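To make the UPI input concrete, here is a minimal sketch of the install-config.yaml the installer consumes for a platform-agnostic (platform "none") deployment. The domain, cluster name, pull secret, and SSH key below are placeholders, and field defaults can vary by OpenShift version, so treat this as an illustration rather than a copy-paste config.

```yaml
# Sketch of an install-config.yaml for a UPI install (platform: none).
# All values are placeholders; substitute your own domain, cluster name,
# pull secret, and SSH key.
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster          # nodes resolve under mycluster.example.com
compute:
- name: worker
  replicas: 0              # in UPI, workers are provisioned by hand
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}                 # no cloud-provider integration; pure UPI
pullSecret: '<pull secret from your Red Hat account>'
sshKey: '<public SSH key for node access>'
```

Running `openshift-install create ignition-configs` in a directory containing this file turns it into the Ignition configs that the nodes fetch at first boot, which is what the web server mentioned above serves out.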
The services I just mentioned typically run on what's come to be referred to as a helper node. So that's just a little bit about the UPI install. I'm not going to do a demo, because it takes about an hour to set up a cluster, but the reason I am going into a bit of detail about this is to make it clear why we and others have developed some automation around this process. There are two projects that I think are extremely helpful for anyone who is installing OpenShift using UPI. They're linked here. The first is created and owned by my team within IBM. It does require that you have PowerVC in-house to assist with some of that provisioning. It'll right-size your nodes for you, install them, and configure those necessary services on the dedicated helper node I mentioned a second ago. The second project is maintained primarily by Red Hat, but since it is an open source project, contributions are of course welcome from anyone. The link I put on this slide is to a quick start for installing on PowerVM if you don't have PowerVC. But I do want to make it clear that while the specific link is for Power hardware, the project itself, which sets up the helper node, actually contains playbooks that will set up a helper node running all the services I mentioned previously on lots of different platforms. So these are really helpful. We use them for all of our deploys, we recommend them for customer deploys, and we have done a lot to make sure that our best practices are baked in. Even if you don't want to use this automation, I think it's helpful for anyone doing a UPI install to be aware of these projects and at least take a read through them. So in addition to deploying OpenShift, I also want to make sure to do kind of a quote-unquote demo using OpenShift on Power.
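To give a feel for what the helper node's load-balancer role looks like, here is an illustrative HAProxy fragment of the general shape a UPI cluster needs: the Kubernetes API on port 6443 and the machine config server on port 22623 balanced across the control-plane nodes. The hostnames and IP addresses are made up for the example.

```
# Illustrative HAProxy fragment for a UPI helper node's load balancer.
# IP addresses are placeholders; the bootstrap entries are removed once
# the transient bootstrap node is destroyed post-installation.
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api-be

backend openshift-api-be
    mode tcp
    balance roundrobin
    server bootstrap 192.168.7.10:6443 check   # remove after install
    server master0   192.168.7.21:6443 check
    server master1   192.168.7.22:6443 check
    server master2   192.168.7.23:6443 check

frontend machine-config
    bind *:22623
    mode tcp
    default_backend machine-config-be

backend machine-config-be
    mode tcp
    server bootstrap 192.168.7.10:22623 check  # remove after install
    server master0   192.168.7.21:22623 check
    server master1   192.168.7.22:22623 check
    server master2   192.168.7.23:22623 check
```

In practice the automation projects mentioned above generate configuration like this for you, along with the DNS and web server pieces.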
Since, as Marcel mentioned, the user experience is the same across all platforms, I thought I would highlight something that won't look identical on all platforms. It does and it doesn't. It sounds a bit illogical, but bear with me here. OpenShift provides some operators out of the box that you can take advantage of when building your applications. One of these is a pretty boring operator, actually, called Node Feature Discovery. It's boring in the sense that it doesn't take a lot to set up, but it's really useful. Let's say you want to run certain workloads on a node that has a very specific capability. This operator is specialized in that it queries, and very securely, I will say, since this probably sounds scary to a lot of people, all of the hardware and software capabilities of all your nodes. It then turns that information into Kubernetes labels, which you can use to target nodes in your job specs. Since this operator takes just a couple of clicks to deploy, and the default values are provided and fine to use, I just put a screenshot here on this slide to give you an idea of what OperatorHub looks like and which operator I'm talking about. Okay, so after you've installed and deployed the operator, the labels it populates show up at the top of the output of oc get node for a specific node. This, I think, is worker zero in my cluster, and a little bit further down, unfortunately for this presentation, since they show up at the top of the output, you can see those labels actually populated with values. You can see how this would be really useful if you had an application where you wanted to run parts of it only on very specific nodes in your cluster. You can use this operator, which again just ships out of the box, is supported by Red Hat, and runs on Power, Z, and Intel platforms, to do really specific targeting for any workload that requires something like this. I think it's extremely useful.
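As a sketch of how those labels get used, a pod spec can pin a workload to matching nodes with a nodeSelector. The specific label and value below are illustrative only; the labels Node Feature Discovery actually publishes depend on the hardware and software it finds on each node, and the image name is a placeholder.

```yaml
# Illustrative pod spec targeting nodes by an NFD-published label.
# The label value is an example; check what NFD discovered on your
# own nodes with: oc get node <node> --show-labels
apiVersion: v1
kind: Pod
metadata:
  name: kernel-sensitive-app
spec:
  nodeSelector:
    # NFD labels live under the feature.node.kubernetes.io/ prefix
    feature.node.kubernetes.io/kernel-version.major: "4"
  containers:
  - name: app
    image: registry.example.com/myteam/app:latest   # placeholder image
```

The scheduler will then only place this pod on nodes carrying that exact label, which is how you get the "run only on nodes with capability X" behavior described above.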
I know we have a lot of customers who have very specific requirements for some of their applications, so I just wanted to give this as a sort-of demo. It's not super flashy, but I wanted to mention it. And then I also wanted to show a customer use case, to make sure it's clear that you can do a lot more interesting things with OpenShift on Power. This is a banking customer we've worked with who used OpenShift to deploy something much more complex. They were using a lot of well-known open source projects. Kafka, notably, was what they used to stream all of the data they were consuming in-house to speed up their application. They used IBM Storwize as their storage backend, along with an IBM Cloud Pak called IBM Cloud Pak for Data. We didn't really go into IBM Cloud Paks here; they're essentially bundles of IBM offerings that you can install on top of OpenShift. So they combined some IBM products, some OpenShift products, and some open source products to make their application really well tailored to their business and serve their clients really well. I did want to make sure that everyone is at least aware of one customer application that we have seen and helped with on OpenShift. Okay, so the very last thing I will briefly mention is an upcoming offering we have been working on, and that is to put Power Systems into IBM Cloud. We're working on a project called PowerVS, or Power Virtual Servers, that will allow users to use Power Systems in a cloud environment instead of having their own on-prem hardware. It's currently in beta, and we plan to have it generally available by the end of the year. The automation project I mentioned earlier also contains a repository that will allow you to deploy OpenShift into PowerVS. So there are lots of different automation scenarios in that first repository for Power Systems.
So if this is something you're interested in, go take a look. This is a screenshot, actually, of the tutorial for using that repository to deploy OpenShift into PowerVS. So that's it from me. I want to thank Marcel again, and anyone who has questions is more than welcome to reach out to Marcel or myself. Our email addresses were listed on the first slide. Hopefully you learned some things, and I hope you enjoy the rest of the video.