Hello, everyone. Thank you for joining this virtual talk on the Kubernetes Cloud Provider IBM project overview and deep dive. My name is Sahdev Zala. I am a senior software engineer at IBM and a co-lead for this project. Richard, would you like to introduce yourself? Sure. Richard Theis, I'm a senior software engineer and I'm the OpenShift and Kubernetes release lead for our managed services for OpenShift and Kubernetes. Brad, would you like to introduce yourself quickly? Sure, Sahdev. Hi, everyone. I'm Brad Topol. I'm IBM's Distinguished Engineer for Open Technologies and Developer Advocacy. All right. Thank you, Brad. So I will provide an overview of the project's activities and structure, and then we will dive straight into the main repositories we have. It's actually four repositories, but the three main ones: Cluster API Provider IBM Cloud, one for VPC Block storage, and then two under IBM Cloud Provider. All right. So let me briefly talk about the broader Special Interest Group, SIG Cloud Provider. This SIG is a group of people interested in various aspects of running Kubernetes on different providers' clouds. It owns the cloud provider interface code, which is responsible for running all the cloud-provider-specific control loops. You can read more about the code at the link I have provided. The SIG also ensures that the Kubernetes ecosystem evolves in a way that is neutral to the various cloud providers, and at the same time it ensures a consistent, high-quality experience for users. The SIG owns different sub-projects. They formerly had their own specific SIGs, but now they're part of SIG Cloud Provider, and Provider IBM Cloud is one of those sub-projects. You can read more about SIG Cloud Provider at the link at the bottom of the slide. All right.
So the Provider IBM Cloud sub-project is all about development and discussions around various aspects of running Kubernetes on IBM Cloud. We also participate in various activities happening in SIG Cloud Provider. As a member of this project, or just by following it, you basically stay on top of what's going on in the IBM Cloud platform with respect to Kubernetes and related CNCF projects. We strictly don't discuss any commercial activities or discussions related to them. It's all about the open source side of Kubernetes and related development. So as I mentioned, we have a total of four code repositories under the project. This year has been great for the project: we added three new repositories, the bottom three, and we will do a briefing on each of them. But before that, real quick, just about the structure of the project and how it works. We have three colleagues from different areas of IBM Cloud: Khalid Ahmed, an IBM Distinguished Engineer from Multicloud Manager; Richard Theis from the IKS and ROKS side; and myself, Sahdev, from the open source software side. If you haven't already joined the mailing list, we would love you to join it. We also have a Slack channel where you can post questions, or if you want to have any discussions, please be there. We meet every month, on the last Wednesday at 2 p.m. If you missed the past meetings or cannot attend, you can always look at the meeting recordings, which are available on the Kubernetes YouTube channel. And again, at the link, you can read more about the project. All right. So I mentioned that we added three new repositories, but this one, Cluster API Provider IBM Cloud, has been around for some time now. As you may already know, there is a Cluster API project within the Kubernetes project. It's basically about managing the cluster lifecycle: creating, scaling, upgrading, and destroying clusters using Kubernetes-style declarative APIs.
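The declarative idea behind Cluster API can be illustrated with a toy reconcile loop. This is a sketch of the pattern only; the types and field names here are illustrative and are not the real Cluster API types. A controller compares a cluster's desired worker count against its actual machines and converges them:

```go
package main

import "fmt"

// Cluster holds desired and actual state, Cluster API-style.
// These are illustrative types, not the real cluster-api API group.
type Cluster struct {
	Name           string
	DesiredWorkers int
	ActualWorkers  int
}

// Reconcile drives actual state toward desired state, which is the core
// idea behind Cluster API's declarative lifecycle management. It returns
// the actions taken so the caller can see what the loop did.
func Reconcile(c *Cluster) []string {
	var actions []string
	for c.ActualWorkers < c.DesiredWorkers {
		c.ActualWorkers++
		actions = append(actions, fmt.Sprintf("create worker-%d", c.ActualWorkers))
	}
	for c.ActualWorkers > c.DesiredWorkers {
		actions = append(actions, fmt.Sprintf("delete worker-%d", c.ActualWorkers))
		c.ActualWorkers--
	}
	return actions
}

func main() {
	// Declare the desired state; the controller figures out the steps.
	c := &Cluster{Name: "demo", DesiredWorkers: 3, ActualWorkers: 1}
	for _, a := range Reconcile(c) {
		fmt.Println(a)
	}
}
```

The point of the declarative style is that you edit the desired state and the control loop works out the create/delete operations, rather than you scripting each step imperatively.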
So it's basically Kubernetes managing Kubernetes, right? There is a management Kubernetes cluster, which manages the workload clusters where the applications are running. And similar to kubectl, which we use for working with Kubernetes, there is a command-line tool called clusterctl for creating and managing these clusters. Again, there's a link to read more about Cluster API; there's a whole book out there, actually. So various cloud providers extend the Cluster API, and for IBM Cloud we are extending it under Cluster API Provider IBM Cloud. It abstracts the infrastructure-specific details. As I said, this repository has been around for some time. We already had support for classic infrastructure, but this year we added support for IBM VPC Gen2 and PowerVS. We are also working on a new release with this added support. Again, the link is there. We would like you to take a look; there's lots of good documentation, and we would love to have you contribute there. This repo, the IBM VPC Block CSI driver, was moved to Kubernetes SIGs just recently, I would say within the last couple of months. It provides a CSI plugin for creating and mounting VPC Block Storage volumes for your applications running on a Kubernetes or OpenShift cluster on IBM VPC infrastructure. It currently supports Kubernetes versions 1.21, 1.20, and 1.19, and for OpenShift it supports 4.7 and 4.6. We would love for you to take a look at the repo and the documentation. We really have good docs out there on how to deploy the plugin. We use it in production ourselves. So take a look, let us know if you have any questions, and contribute if you can. With that, I will hand it over to Richard for IKS. Awesome, thank you. So yeah, I'll be covering IKS and ROKS and providing details on our Cloud Controller Manager, or Cloud Provider, project. We'll start with IKS, just to give you a little background on our Kubernetes service. It is our managed offering for deploying Kubernetes clusters on IBM Cloud.
And our Cloud Controller Manager, or IBM Cloud Provider, is used as part of this service. This service is Certified Kubernetes conformant. The Certified Kubernetes program is a really nice program from the CNCF that certifies offerings as Kubernetes conformant, so that customers can build their apps on a certified cloud-native platform for deploying container apps on Kubernetes, regardless of cloud provider. If you want more details on our service from IBM, you can check out that link. All right, let's jump to the next slide and talk about releases. I always include this slide in the conversation here to talk about our releases, and really to focus on how they relate to the community and Kubernetes releases. So IKS has provided two releases of Kubernetes this year, three last year, and we're on target to deliver another release very soon. That'll bring us to three releases in 2021. And even with the community's move from four releases a year down to three, which is certainly helpful for many consumers, it's still very challenging for many to keep pace with Kubernetes. This is a current view of our support structure in IKS. We fully support 1.19 and later. 1.22 is coming out soon, and our 1.18 release is now deprecated and will be out of support very soon. So this is a very similar structure to the community's, which just came out with patch releases for 1.19 through 1.22 this week. They have four releases in support at the moment, but 1.19 will be dropping out of support soon. So let's move on to the next slide and talk about ROKS. ROKS is the OpenShift managed service on IBM Cloud. OpenShift builds on Kubernetes and is also Certified Kubernetes conformant. OpenShift provides some additional features, which Brad will talk about a little bit later. If you want more information on the managed service, feel free to check out the link on the slide. And then we'll jump to the next slide to talk about the releases for OpenShift.
So for OpenShift, building on Kubernetes, we lay out here, from our docs, the OpenShift versus Kubernetes versions. Each version of OpenShift is associated with a certain version of Kubernetes, and we fully support three versions at the moment. This year we've released two, and we're going for a third, OpenShift version 4.8, here shortly. That again will bring us to three releases in 2021, just like we had in 2020. Just like our Kubernetes users, OpenShift users often find themselves in a difficult position at times to keep pace. One of the benefits of OpenShift in this space is that Red Hat, as the provider of OpenShift, takes on additional extended support for certain releases. You'll see on the slide that we have OpenShift support for 3.11 at the moment, and it's quite extended, and we'll have some extended support here as well for 4.6. But otherwise, it's for the most part aligned with the Kubernetes support timelines, slightly shifted for OpenShift. All right, we'll jump to the next slide. So both of these managed services, and many managed services across cloud providers, implement what the community calls the Cloud Controller Manager. This is the control loop within Kubernetes owned by SIG Cloud Provider, which owns the interface for this control loop. It manages an interface with the cloud to deliver certain features to the Kubernetes cluster that Kubernetes itself doesn't implement, because they're cloud-provider specific, and the Cloud Controller Manager is then responsible for implementing those key features for the cloud. Now, this view shows the CCM architecture, which is the new out-of-tree cloud provider architecture. The Cloud Controller Manager runs in the control plane alongside the API server, scheduler, etcd, and the Kubernetes controller manager. On the worker side of things, you've got your kubelet, kube-proxy or other network proxy, CNI setup, container runtime, and so on, out on the worker nodes.
So that's the basic structure as it is today for the new cloud providers. We'll move to the next slide and talk in a little more detail about what a cloud provider provides in Kubernetes. For most cloud providers, there are two main things: number one is load balancers, and number two is instances. We'll start with load balancers on IBM Cloud. We provide four different flavors of load balancers, and it depends on the infrastructure on which you run. If you're on our classic infrastructure, you get the first two network load balancers, and they run in-cluster. They leverage basic in-cluster networking; iptables and IPVS are the two different versions there. And then if you move to our VPC infrastructure, VPC Gen 2, we have a couple of load balancers there as well, different flavors, and they're configured slightly differently. Details are in our docs as far as how that is configured. And with our open source project for the cloud provider, you can certainly dig into the code to find out how that's all done, along with the docs at the cloud provider level on how those load balancer types are configured. The next major part of a cloud provider is that it's responsible for node initialization, and this is extremely important. I'm talking about Kubernetes nodes getting brought up and brought into the cluster. They get stored in the etcd database as the node comes online, but the node stays tainted until the cloud provider comes in and clears the taint for the node, marking it as initialized by the cloud provider. So it's a really important part of the bootstrap process for your cluster that the cloud provider is functional. It's able to recognize the node coming in and get some data about the node so that it can provide it to Kubernetes.
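The node-initialization flow just described can be sketched as follows. This is a simplified, self-contained illustration of what the CCM's node controller does, not the real implementation: the taint key is the actual one that the kubelet applies when run with an external cloud provider, but the `Node` and `CloudMetadata` types and the instance-profile value are illustrative stand-ins.

```go
package main

import "fmt"

// UninitializedTaint is the taint the kubelet applies when started with
// --cloud-provider=external; the CCM removes it after initialization.
const UninitializedTaint = "node.cloudprovider.kubernetes.io/uninitialized"

// Node is a stand-in for the Kubernetes Node object (illustrative).
type Node struct {
	Name         string
	Taints       []string
	InstanceType string
	Zone         string
}

// CloudMetadata is what the provider looks up from the cloud API (illustrative).
type CloudMetadata struct {
	InstanceType string
	Zone         string
}

// InitializeNode copies cloud metadata onto the node and clears the
// uninitialized taint, mirroring what the CCM does during bootstrap.
// Until this runs, the scheduler will not place ordinary pods on the node.
func InitializeNode(n *Node, meta CloudMetadata) {
	n.InstanceType = meta.InstanceType
	n.Zone = meta.Zone
	kept := n.Taints[:0]
	for _, t := range n.Taints {
		if t != UninitializedTaint {
			kept = append(kept, t)
		}
	}
	n.Taints = kept
}

func main() {
	n := &Node{Name: "worker-1", Taints: []string{UninitializedTaint}}
	// "bx2-4x16" / "us-south-1" are example values, not queried from a real cloud.
	InitializeNode(n, CloudMetadata{InstanceType: "bx2-4x16", Zone: "us-south-1"})
	fmt.Println(n.Name, n.InstanceType, n.Zone, "taints:", n.Taints)
}
```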
Some of the key data it needs to know is the type of instance you're dealing with, what zone it is in, and so on, so that those can be used to bootstrap the node into the Kubernetes cluster. Right now, in our managed services offerings, we rely on an internal bootstrapping process to help us with this. We are working on enhancing this to support OpenShift IPI and UPI installs, which would be done outside of the managed service environment, and I'll talk about that in a moment. Then we have a couple of other big interfaces, clusters and routes. We don't implement them at the moment; we rely on a CNI like Calico, or in the case of OpenShift, I think it's the SDN or OVN, to do the routing and configuration. But we are looking at possibly having some route support for VPC native routing, and some cloud providers do that as well. And then there are other things that cloud providers can plug into these interfaces and extend upon, storage being one of the things you may find in some cloud providers, or credential operators. I've already talked about how we have the VPC Block plugin there, which is a separate repository with a separate control loop. So we'll move on to the next slide and look at some of the activities we're focused on at the moment. One of the big things this year was to deliver the open source version of our cloud provider. So we have two repos out there. One builds the core; the other is used for some of the VPC components. There's a bit of history as to why it's done that way. Ultimately, we'd like to have just one repo. But it's out there, available to build, so check out the code. It's based on Kubernetes 1.22. One of the main drivers for delivering the open source version was to support OpenShift IPI or UPI installs on IBM Cloud. This is an installation done outside of the managed service. OpenShift provides this type of installation on a lot of cloud providers.
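The pieces discussed above (load balancers, instances, routes) map onto the `cloudprovider.Interface` from `k8s.io/cloud-provider`. The following is a heavily reduced, self-contained sketch of that shape with simplified signatures, not the real ones, showing how a provider advertises which controllers it implements and how routes can be left unimplemented, as described for IBM Cloud today:

```go
package main

import "fmt"

// Interface is a reduced sketch of cloudprovider.Interface from
// k8s.io/cloud-provider; the real methods take contexts and return
// richer types. Each getter returns an implementation plus a bool
// saying whether the provider supports that controller.
type Interface interface {
	ProviderName() string
	LoadBalancer() (LoadBalancer, bool)
	Instances() (Instances, bool)
	Routes() (Routes, bool)
}

// Simplified sub-interfaces (illustrative signatures).
type LoadBalancer interface {
	EnsureLoadBalancer(service string) (string, error)
}
type Instances interface {
	InstanceType(node string) (string, error)
}
type Routes interface {
	CreateRoute(cidr string) error
}

// ibmCloud is an illustrative provider that supports load balancers and
// instances but not routes, matching the talk: routing is left to the CNI.
type ibmCloud struct{}

func (c *ibmCloud) ProviderName() string               { return "ibm" }
func (c *ibmCloud) LoadBalancer() (LoadBalancer, bool) { return lb{}, true }
func (c *ibmCloud) Instances() (Instances, bool)       { return inst{}, true }
func (c *ibmCloud) Routes() (Routes, bool)             { return nil, false }

type lb struct{}

func (lb) EnsureLoadBalancer(service string) (string, error) {
	return "203.0.113.10", nil // pretend address handed back by the cloud API
}

type inst struct{}

func (inst) InstanceType(node string) (string, error) {
	return "bx2-4x16", nil // example profile, not a real lookup
}

func main() {
	var p Interface = &ibmCloud{}
	if _, ok := p.Routes(); !ok {
		fmt.Println(p.ProviderName(), "does not implement Routes; the CNI handles routing")
	}
}
```

The CCM's controllers check these booleans at startup and simply skip the control loops the provider opts out of, which is why a provider can ship load balancer and node support without ever touching routes.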
One of the key things that happened for OpenShift and Red Hat in the 4.9 timeframe relates to the fact that all the cloud providers from the early days of Kubernetes were built in-tree. So Google's cloud provider, Amazon AWS, Azure, and so on were in-tree. Now the cloud provider community and the SIG have determined that those are all deprecated. All those in-tree cloud providers need to move out-of-tree, and that is going to be the new path forward. Eventually, those in-tree cloud providers will be removed from the Kubernetes code base. So it's really important for consumers to switch to out-of-tree cloud providers. And the 4.9 release of OpenShift is really where the Red Hat folks worked with a lot of the cloud providers to start transitioning from these in-tree providers to out-of-tree. They did a lot of work with the OpenStack folks, and Amazon, and Microsoft, and obviously us, IBM, to try to support CCM-based installs of OpenShift on various cloud providers. So we started our work in 4.9 and we're continuing here in 4.10 of OpenShift. Our goal is to complete in 4.10. A lot of changes will be required for our cloud provider to support this type of installation mechanism, because at the moment it was designed for our managed services. That's the second big piece we're working on. Different features will hopefully be coming in the areas of networking; we've been looking at different things in the load balancers and in routing. And then, having this up in OpenShift and in the community, and having the open source out there, we hope to extend that support, build on our community, and provide more documentation on how to deploy it on your own and how to do all those things, so that we can grow this community. And that pretty much covers the cloud provider overview. So I'll jump to the next slide and turn it over to Brad. Thank you.
So one question that we typically get at this session is: what's the difference between Kubernetes and OpenShift? Essentially, OpenShift is a Kubernetes distribution that includes extra tooling to simplify cloud-native development and provide automated operations support. For example, in just vanilla Kubernetes, if you want to deploy your application, what do you need to do? You typically start by finding a base image, then you use Docker commands to take the base image, add your code to it, create a new image, and push it to a registry. A lot of us, the folks on this presentation, are very familiar with doing those steps; we're very comfortable doing it. But there are a lot of developers out there, like Java developers, who aren't very familiar with those steps. What OpenShift provides is the ability to recognize your source code repository and pick the right base image for you, and it'll automatically take your code, merge it with the base image, create the new image, and push that image to one of its registries. So it's reducing the friction for developers who wouldn't commonly know those low-level cloud-native development steps of dealing with containers and building the container image. OpenShift takes care of that with its source-to-image capability and makes that a lot easier for those types of developers. It also provides nice techniques for recognizing when images or configurations change, and can even automate the deploy when those changes occur. So that management aspect is also made simpler through OpenShift. OpenShift's security is a big, big feature of it. It has good security guardrails. With vanilla Kubernetes, out of the box, it'll let you do a lot of things that aren't very secure. It really will let you run around with scissors in your hand.
And with OpenShift, there are guardrails to keep that from happening. For example, OpenShift will prevent privileged containers from running by default. If you ever take a container and let it run as privileged, it gives it root access, and there is a lot of surface area for a security breach when you do that. So out of the box, OpenShift is going to keep you from doing that. And you'd be surprised how many images out there are built to run as privileged without you realizing it. Similarly, OpenShift prevents you from running in the default namespace. Using the default namespace is not secure, and you really need to learn not to use it; OpenShift keeps you from doing that. A final security feature I want to cover is the notion of security context constraints. Basically, OpenShift provides several security profiles that you can choose for your pod's containers. What's nice about that is that you don't have to worry about all the individual security knobs and try to get each security setting correct. Because if you have, like, 25 settings and you're trying to set each one individually, even if you think you know what you're doing, the odds are you may not. So instead of setting all the individual knobs, you can just pick a security context profile and use the one that's best for your pod's containers. And the nice thing is that you then know exactly what security enablement your pod's containers are going to get when you pick that security context. You'll know: will I have root access or not? Will I have access to all the block storage or only a portion of it? Which user will I have to run as? All those types of things, and you'll know exactly what you're guaranteed to get. Security context constraints give you that ability. Day-two operations: this is another place where OpenShift really shines.
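The guardrails just described boil down to admission-time checks on a pod's requested settings. Here is a toy predicate in that spirit; it is not the actual security context constraints implementation, and the `PodSpec` type and rule set are illustrative assumptions chosen to mirror the three guardrails from the talk:

```go
package main

import "fmt"

// PodSpec is a stand-in for the fields a guardrail would inspect
// (illustrative, not the real Kubernetes types).
type PodSpec struct {
	Namespace  string
	Privileged bool
	RunAsUser  int64 // 0 means root
}

// Admit applies OpenShift-style guardrails: no privileged containers,
// no default namespace, no running as root. A toy admission check,
// returning whether the pod is allowed and, if not, why.
func Admit(p PodSpec) (bool, string) {
	switch {
	case p.Privileged:
		return false, "privileged containers are denied by default"
	case p.Namespace == "default":
		return false, "deploy to a dedicated namespace, not default"
	case p.RunAsUser == 0:
		return false, "running as root is denied by the restricted profile"
	}
	return true, ""
}

func main() {
	ok, why := Admit(PodSpec{Namespace: "default", RunAsUser: 1000})
	fmt.Println(ok, why)
}
```

The value of bundling such checks into named profiles, as security context constraints do, is that a pod either satisfies the whole profile or is rejected with a clear reason, instead of each of the 25-odd knobs being negotiated individually.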
It has automated cluster size management, so it can automatically provision new worker nodes to increase your cluster size. That's a huge feature that you don't get with vanilla Kubernetes. For automated day-two operations, it can do automated installation and automated updates. It worries about making sure the right version of OpenShift is running on the right version of RHEL, so that you're guaranteed full-stack consistency. It uses a capability called cluster versions, and it does cluster version management to automate all of that lifecycle management. Those are huge features for the day-two and IT operations side of your house. And of course, OpenShift also provides multi-cloud management support: it provides a unified cloud console to view and manage multiple OpenShift clusters. As you get into production and you're dealing with multiple clusters on multiple clouds, this is where OpenShift is going to shine as well. Next chart. So we'd like to thank you for coming to this presentation. We hope it's been very helpful. Please feel free to reach out to us; we will be available during the virtual playback for questions, or you can reach out to us on Twitter or our other contact mechanisms. And again, thank you for coming to our presentation.