Hello everyone. Welcome to this talk, the Kubernetes Cloud Provider Project for IBM Cloud. My name is Sahdev Zala. I am a senior software engineer and open source developer at IBM. I contribute to Kubernetes, I'm one of the maintainers of the etcd project, and I am one of the co-leads for the Provider IBM Cloud project. I have the pleasure of having Richard Theis with me today for the recording of this talk. Richard, would you like to introduce yourself, please?

Yes, thank you. I'm Richard Theis. I work for IBM as a software engineer on the IBM Cloud Kubernetes Service and the Red Hat OpenShift on IBM Cloud managed services, as the release lead for delivering OpenShift and Kubernetes releases to our platform. With Sahdev, I'm co-chair for the Cloud Provider Project for IBM Cloud. Thank you.

Thank you, Richard. Hello again. For the agenda, we will provide an overview of SIG Cloud Provider and its subproject, Provider IBM Cloud. This is part of the maintainer track, so we'll definitely give some introduction to the project here. We'll briefly cover activities, and then we'll talk about the Cluster API provider for IBM Cloud and the IBM Cloud provider, which Richard will give a deep dive on. You might already know about the Special Interest Group Cloud Provider. It owns the Kubernetes cloud provider interface, the code and related work, which is responsible for running all the cloud-provider-specific control loops. You know that when you run Kubernetes on different cloud providers, they have their own requirements and their own functionality needed to run Kubernetes on their platform. Load balancers are one of the examples there. You can learn more about the code in the GitHub repo for the cloud provider; I have put a link here. The SIG also ensures that the Kubernetes ecosystem evolves in a way that is neutral to all the cloud providers. There is no favor given to one cloud provider over another, that kind of thing.
It also ensures a consistent and high-quality user experience across different cloud providers. The SIG also owns subprojects from various cloud providers. For example, the project we will be talking about today, the IBM Cloud provider, is one of the subprojects, and some other examples are Provider AWS and Provider Azure. You can learn more about all the subprojects in the link that I have provided here. Provider IBM Cloud, as I said, is a subproject of SIG Cloud Provider. You would be interested in the project especially if you're interested in building, deploying, maintaining, supporting, and using Kubernetes on the IBM Cloud platform. The project owns the Cluster API code for IBM Cloud, which we will talk about, and it will own the code for the IBM Cloud provider once we have that open sourced. As you can imagine, as part of this subproject, we have IBM Cloud teams, developers, and leaders talking through open discussions about what's happening, sharing with the community and working with the community, so that folks can follow the evolution of the IBM Cloud platform with respect to Kubernetes and other CNCF projects. One thing I would mention here is that the subproject is for the open-source-related discussions for Kubernetes and other projects. It's not for any sort of commercial discussions, so we totally discourage that kind of discussion. A bit about the structure. We have three different leads for the project, from the different areas of the IBM Cloud side. We have Khalid Ahmed, who is an IBM Distinguished Engineer working on multi-cloud management; we have Richard Theis, who is a speaker here as well, from the IBM Cloud Kubernetes Service and the Red Hat OpenShift Kubernetes Service; and myself from the open source side, from the community side. I have given a link for the mailing list here, so please join the distribution list there. We would love to have you. Beyond that, we have a Provider IBM Cloud Slack channel on kubernetes.slack.com.
We would love to have discussions there, questions, any kind of other thoughts, and you can learn more about the subproject in the link provided here. The subproject team, the folks who are interested in contributing to the project, and the leads meet every last Wednesday of the month, so once a month, unless there are things like holidays or no agenda. Again, we would love to have you be part of these meetings, provide your input, contribute to the betterment of the subproject, and take a leadership role. If you miss meetings and are interested to see what's going on, we have recorded all these meetings. We record as we have meetings and upload the videos, and the link is shared on the Kubernetes Slack channel; again, the link is provided here. You can take a look and watch the videos. We also participate in the general activities of SIG Cloud Provider through their bi-weekly meetings, and during face-to-face conferences we meet in person and brainstorm about ongoing things and about strategy. As I mentioned, the IBM Cloud provider subproject owns a couple of things here: the Cluster API Provider IBM Cloud, which is a GitHub repo that I'll be sharing in the next slide, and the IBM Cloud provider, which we will talk about. Before I talk about the Cluster API Provider IBM Cloud, let me briefly mention the Cluster API project. As you might already know, it's a community project from the Kubernetes community. It was created some time back with the goal of managing the lifecycle of Kubernetes clusters (creating, scaling, destroying, and cleaning up) through declarative APIs, which are basically Kubernetes-style APIs. You can read more about the Cluster API project; there is a lot of good documentation, and there's actually a whole book there. It's good reading to learn about Cluster API. As I said, it has declarative APIs similar in style to Kubernetes, and also a CLI: as you can see here, clusterctl, similar to kubectl, is there to work with the management of clusters.
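The declarative, Kubernetes-style shape that Cluster API uses can be sketched in a few lines of Go. This is a toy illustration only: the types, field names (`Replicas`, `Phase`), and the `step` function below are made up for this sketch, not the real Cluster API types, but they show the core idea that you declare desired state in a spec and a controller converges observed status toward it.

```go
package main

import "fmt"

// ClusterSpec holds the *desired* state the user declares.
type ClusterSpec struct {
	Name     string
	Replicas int // desired worker count (illustrative field)
}

// ClusterStatus holds the *observed* state controllers report back.
type ClusterStatus struct {
	ReadyReplicas int
	Phase         string
}

// Cluster mirrors the common Kubernetes object shape: Spec plus Status.
type Cluster struct {
	Spec   ClusterSpec
	Status ClusterStatus
}

// step moves observed status one step toward the declared spec, the way
// a reconcile loop would on each pass.
func step(c *Cluster) {
	if c.Status.ReadyReplicas < c.Spec.Replicas {
		c.Status.ReadyReplicas++
		c.Status.Phase = "Provisioning"
		return
	}
	c.Status.Phase = "Provisioned"
}

func main() {
	c := &Cluster{Spec: ClusterSpec{Name: "demo", Replicas: 2}}
	for c.Status.Phase != "Provisioned" {
		step(c)
	}
	fmt.Println(c.Spec.Name, "ready replicas:", c.Status.ReadyReplicas)
	// → demo ready replicas: 2
}
```

The point of the pattern is that the user only ever edits the spec; the system keeps re-running the step until status matches it, which is exactly how tools like clusterctl let you manage clusters declaratively.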
Different cloud providers typically extend the Cluster API project. They extend it to meet their own requirements, to provide the support for a specific cloud provider. From the IBM Cloud side, we have the Cluster API Provider IBM Cloud GitHub repo; again, the link is provided here. We basically extend the Cluster API project for IBM Cloud, for the different infrastructures of IBM Cloud. We already have stable support for what we call classic infrastructure, but as you might know, lately VPC Gen2 was announced, and we have the Power Systems Virtual Server as well. We have work going on there: there are a few PRs out for review, and there are some issues out there. So we'd love to have you take a look and provide your review comments on the work that's in progress right now to support different types of infrastructure. That involves some refactoring as well. So as I said, we would like to have you review those things, provide your feedback, and contribute as much as you can. With that, let me have Richard talk about IKS and then the IBM Cloud provider. Richard, would you like to take over from here, please?

Yeah, definitely. Thank you. So we'll continue the conversation here into the cloud provider project. I will set the stage a little bit talking about IKS and ROKS first, and then go into the details under the covers there. We'll start with IKS, which is IBM Cloud Kubernetes Service, the managed offering that we provide to create clusters, Kube clusters, on IBM Cloud, one of many cloud providers out there that provide such a service. So it's a starting point for where our cloud provider is used. As you can imagine, the IBM Cloud provider is running in this service. If you want more information on the service, feel free to check out the link I provided there. A key thing here is that it's a certified Kubernetes offering from the CNCF.
So this is a really important piece for the Kubernetes ecosystem, to provide certifications, whether it be a managed service or a distribution, so that you can say with confidence, if you're running your applications on this particular thing, that it's certified Kubernetes running the Kubernetes APIs. With that, we'll jump to the next slide, and we'll go over some of the release processes for IKS and how they relate to the community. This year, 2021, we've released the 1.20 release of Kubernetes. Last year, we had three releases in the 2020 time frame: 1.17, 1.18, and 1.19. And we've found from our users, internal and external, that maintaining currency with Kubernetes and the entire CNCF ecosystem can be rather difficult because of the speed at which they move. Looking into the details, it's not only us; a lot of the industry is seeing a similar thing. So it's not just unique to folks running Kube on IBM Cloud. It seems to be very pervasive, with a lot of discussions on the topic. And there's a new KEP in SIG Release to adjust the release cadence for Kubernetes. Assuming it gets approved and merged, this KEP would have some key impacts on the project and on all those consuming the project downstream. The proposal is to go to three rather than four releases of Kubernetes every year, and I believe a lot of background investigation has gone into it, feedback surveys and such, so I think it's a good direction for the community to pursue. If you're interested in following that and how it's going to play out, certainly a lot of the ecosystem will be impacted by the result of this, and I think for the better. As far as patch releases are concerned, this lays out the high level of the major and minor release updates, which usually have been four a year, potentially coming down to three. The community also releases patch updates every month, and we've done the same for IKS.
So very good cadence there on delivering patches, and for the most part, very good quality coming out of the Kubernetes community. All right, next slide, please. ROKS, which is Red Hat OpenShift on IBM Cloud, we call it the Red Hat OpenShift Kubernetes Service internally. It's another managed offering, this time allowing you to create OpenShift clusters on IBM Cloud. So in our cloud provider, we're all about being able to provide both Kube and OpenShift; the same cloud provider is used for both. If you want more information on the OpenShift service here, check out the link. Again, it's certified Kubernetes, so under the covers, same stuff. All right, next slide, please. Just like IKS, ROKS has a very similar release cycle. It's based on OpenShift, and OpenShift is based on Kubernetes, so you can see the cascade. This is one of many examples in the Kubernetes community of applications running on top of Kubernetes, and a lot of them have a release pattern similar to underlying Kubernetes. For ROKS, we've had one release this year, OpenShift 4.6, on top of Kubernetes 1.19, that is, and three releases last year. You can see them listed out, dates and so on. Both Kubernetes users and OpenShift users see the same pattern of adoption and the same issues and concerns, which can certainly be alleviated by vendors supporting things longer, but that gets more difficult, especially when the underlying support is not there from the community. So again, this KEP will have impact, like I mentioned, for Kube, but also for OpenShift going forward, and I suspect we might see some changes potentially coming from these downstream providers as well. All right, next slide, please. So how does this all tie together? As I said, we always love talking about community stuff, Kubernetes, on IBM Cloud, right?
And most people deploy it through the managed services, but certainly they could deploy it other ways as well. In supporting those users, however they want to run it, usually an important piece is the cloud provider, and the cloud controller manager in particular. So rewind a few years to how the community used to manage and work with cloud providers within the Kubernetes code base: they were actually statically linked into the binaries built for Kubernetes, so they were very tightly coupled, both in the control plane and on the worker nodes. The new architecture, which the community has done a great job moving forward on and is getting really close to the end game on, is to run a cloud controller loop, a cloud controller manager if you will, in the control plane that handles the connection to the cloud. And that's the end game; we have gotten there, especially in 1.20. The community has completed a lot of the extraction and migration work. There are still a few more things to go, but a lot of the groundwork has been laid for that. The architecture in the diagram shown here comes from the Kubernetes documentation, detailing where the cloud controller manager usually sits within a cluster, similar to the kube-controller-manager, but it is responsible for the cloud loops, delivering those reconciliation loops within Kubernetes to deliver your load balancers and other main pieces of the cloud. Some cloud providers run their cloud controller managers differently; they don't have to be in the control plane. They can actually run in the data plane, within the workers, as well, like in a DaemonSet. That's also seen. All right, next slide, please. So if we dig under the covers a little further and look at the cloud controller, or the cloud provider interfaces, what do those specifically mean in the context of Kubernetes? The first main interface is load balancers.
Every cloud provider has some type of load balancer service they'll provide, with different implementations and different options for it. We're no different on IBM Cloud. We have a set of options here for load balancers that you can run. If you're on the classic infrastructure that we provide, those would be your network load balancers, version one and two, which run in the cluster. And if you're on the VPC infrastructure, you have a VPC layer-7 load balancer or a VPC network load balancer, and we continue to work on providing enhancements and new features for load balancers. So that's the main interface. Then you've got another main interface here, the instances interface, which is also the nodes interface if you will, and along with that is the zones interface. These are really important pieces needed to bring up your cluster: of course you've got the load balancers, and you've got the nodes on the infrastructure that you need to manage. The instances interface has two versions, Instances V1 and V2. The V1 interface, if you will, is like the interface of the old architecture. It works in the new one, but the community wanted to enhance it, so I believe in 1.19 they started working on a new V2 interface, and I think it's beta now in 1.20. So we're going to look at pulling in and using the new interface, which is a little more efficient for the new cloud provider architecture. That's more of an implementation detail, but nonetheless you can see the community moving forward on improving the interaction with the cloud. So that's the nodes interface, and zones is similar, but from our standpoint on IBM Cloud, we rely on the node bootstrapping process to help set things up to implement this interface. Another interface is clusters; we don't implement that. And lastly, we have routes, again not implemented for our cloud provider, because we rely on Calico to provide the routing necessary.
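The shape of the interfaces Richard walks through can be sketched as follows. This is a simplified, self-contained mirror of the pattern in the `k8s.io/cloud-provider` Go package, where each accessor returns an implementation plus a bool indicating whether the cloud supports that interface; the concrete types here (`vpcLB`, `ibmCloudLike`) are hypothetical stand-ins for illustration, not the real implementation.

```go
package main

import "fmt"

// LoadBalancer is a toy stand-in for one cloud provider interface.
type LoadBalancer interface {
	Name() string
}

type vpcLB struct{}

func (vpcLB) Name() string { return "vpc-load-balancer" }

// ibmCloudLike is an illustrative provider following the
// (implementation, supported) accessor pattern of k8s.io/cloud-provider.
type ibmCloudLike struct{}

// LoadBalancer is supported on this toy provider.
func (ibmCloudLike) LoadBalancer() (LoadBalancer, bool) { return vpcLB{}, true }

// Clusters and Routes are not supported, mirroring the talk: on IBM
// Cloud, routing is left to Calico, so the accessor returns false.
func (ibmCloudLike) Clusters() (interface{}, bool) { return nil, false }
func (ibmCloudLike) Routes() (interface{}, bool)   { return nil, false }

func main() {
	var p ibmCloudLike
	if lb, ok := p.LoadBalancer(); ok {
		fmt.Println("load balancer:", lb.Name())
	}
	if _, ok := p.Routes(); !ok {
		fmt.Println("routes: not implemented (handled by Calico)")
	}
}
```

The second return value is what lets the controller manager skip whole feature areas a given cloud does not implement, which is exactly how a provider opts out of clusters and routes.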
Now, these interfaces all have their own controller loops to go along with them. The nice thing with the new design from the community is that we're able to turn the control loops on and off for the interfaces that are of interest to us. So if we're not using a control loop, we can shut it off now in the new 1.20 version of Kubernetes. We'll go to the next slide and look at some of the enhancements and activities that we've been working on for the cloud provider. Some of the big things here in 1.20, which is really awesome, are that the community has made great strides forward again on extracting the cloud-specific code from Kubernetes, the cloud provider code if you will, to make it more cloud agnostic, so that there are some base controllers that folks can leverage. And then we have a nice set of builds, tests, and examples for building cloud providers, and a release process guide that gives cloud providers some consistency, not necessarily a binding requirement, but at least a good process going forward. These have all been very beneficial. Instead of what we always had to do, which was vendor core Kubernetes as a dependency of our cloud provider, that no longer has to be the case. We can build a cloud provider that's much smaller, with fewer dependencies and fewer conflicts, if you will, with the large set of vendored code that Kubernetes has today. So that's a real benefit of 1.20 that we were able to take advantage of, a much cleaner build process. We were able to move to Go modules for dependency management, also a nice move forward. And we're looking towards the future here: like I mentioned, we're looking at the V2 interfaces, implementing that for instances.
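The ability to turn individual control loops on and off is surfaced through the controller manager's `--controllers` flag, which accepts controller names, `-name` exclusions, and `*` for "everything on by default". Below is a minimal Go sketch of those flag semantics; it is a simplified illustration of the idea, not the real Kubernetes parsing code, and the controller names used are just examples.

```go
package main

import "fmt"

// controllerEnabled mimics the semantics of the controller managers'
// "--controllers" flag: an explicit name enables a controller, a
// "-name" entry disables it, and "*" enables everything not explicitly
// disabled. Sketch only; real precedence handling is more involved.
func controllerEnabled(name string, flags []string) bool {
	star := false
	for _, f := range flags {
		switch f {
		case name:
			return true
		case "-" + name:
			return false
		case "*":
			star = true
		}
	}
	return star
}

func main() {
	// Roughly what --controllers=*,-route expresses: run everything
	// except the route controller (e.g. when Calico handles routing).
	flags := []string{"*", "-route"}
	for _, c := range []string{"service", "route"} {
		fmt.Println(c, "enabled:", controllerEnabled(c, flags))
	}
}
```

With a list like `*,-route`, the service loop stays on while the route loop is shut off, which matches the talk's point about disabling loops a provider does not use.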
We're also looking at open sourcing the cloud provider, which definitely gets a lot easier now that the community has made the extraction and migration much smoother, improving our documentation, and as always, adding more features, with load balancers being one of the big areas. All right, next slide, please. So that kind of takes us through the ecosystem of Kubernetes, how we leverage it in the IBM Cloud space, and the cloud provider and how it's used within that. Now let's talk a little bit more about Kubernetes and OpenShift and how they're similar and different. From our standpoint, running OpenShift on IBM Cloud is like running Kubernetes on IBM Cloud. There are a lot of similarities; both are certified. And from Red Hat's perspective, as their documentation says, Red Hat OpenShift is a Kubernetes distribution, so it contains additional features beyond Kubernetes. This is not uncommon; there are many, many providers out there that have a distribution or a managed service. You take open source Kubernetes and you add a little bit more on top of it, and this is no different. You have a few interfaces that you work with on Kubernetes and OpenShift that are very similar, but yet have a few differences. oc is the CLI for OpenShift and kubectl is the CLI for Kubernetes, as you are well aware. Very similar: you can effectively run very similar commands, and in fact, just replace kubectl with oc and usually the same command will work right after that. There's a UI for OpenShift: console, administration, operations. In Kubernetes, the community provides a dashboard. Your distribution or your managed service may or may not provide it for you, but there is a community dashboard for Kubernetes where you can do a lot of similar things. Then there's tooling for image creation and deployment. One of the really nice things, and I think a lot of the ecosystem around Kubernetes takes advantage of it.
It's that kubectl is a really good building block, but it's not the end solution for your business. So a lot of times people want to build on top of that, such as day-two operations. OpenShift has some of that as well: automated installs, updates, and so on, and a catalog of operators and applications provided both by Red Hat and by the community at large. There are a lot of benefits there. And if you go back to the earlier conversation about the releases and the release cadence of Kubernetes changing, that certainly has a broad range of impacts on OpenShift, but also on all those operators and applications running on top of it, to make sure that they stay current with the releases coming out. The last thing I wanted to touch on is security. Obviously Kubernetes security is huge, and a lot of effort has been put into it in the community, securing Kubernetes by default and addressing potential weaknesses. OpenShift is no different; a lot of focus is on that as well, keeping your images clean of CVEs and so on. One of the key differences that most folks may not be aware of off the bat is that pod security in particular is a little bit different on OpenShift than it is in Kubernetes. In particular, OpenShift provides security context constraints to help manage what pods can and can't run and what they can and can't do. In Kubernetes, you have pod security policies. They're very similar, but there are some significant differences in how they're used and how they work to achieve the end goal, which is basically pod-level security, and security of the entire cluster at the container level. The thing with pod security policies being provided by Kubernetes is that there's now an active conversation about actually deprecating pod security policies and looking at what's the next thing to replace them.
So that security aspect is an important conversation, I think, for the entire community and ecosystem around Kubernetes. I'm looking forward to seeing where that goes, because it certainly will have a broad range of impacts on our community, both in the Kubernetes and OpenShift space and for the applications running on top of that, so I look forward to talking about this in the coming years. If folks have interest in discussing these topics, they are very good topics for our project group. With that, we'll jump over to the next slide, which I think takes us to the end. Thank you to all for attending; we'll wrap it up from here. So back to you.

All right. Hey, Richard, that was great. Thank you. Thanks everyone again. If you have any questions, please reach out to us. Otherwise, we'll see you during the playing of this recording at KubeCon, and we will take questions at that time. Well, thanks again. Thank you. Bye.