Hello everyone. Welcome to the SIG intro plus deep dive for provider IBM Cloud. My name is Sahdev Zala. I am one of the leads for the project. And with me, I have Richard Theis and Brad Topol. Would you like to introduce yourselves please?

Yeah, definitely. I'm Richard Theis. I'm working on the IBM cloud provider for Kubernetes and OpenShift. I'm also co-lead with Sahdev on the cloud provider project for IBM Cloud.

Hi everyone. I'm Brad Topol. I'm IBM's Distinguished Engineer for Open Technologies and Developer Advocacy. I lead a large team that contributes upstream to Kubernetes. I'm a Kubernetes contributor, a Kubernetes docs maintainer, and chair of the Kubernetes SIG Docs localization subgroup.

Well, thank you Richard and Brad. All right. So in today's talk, we will provide an overview of SIG Cloud Provider. There will be a separate talk for SIG Cloud Provider itself, so I would highly recommend you attend that to learn more about the cloud provider SIG; in this talk we'll just touch on it briefly. Provider IBM Cloud is a sub project of SIG Cloud Provider. We'll talk about the structure and activities of the provider IBM Cloud project, and that will be followed by a deep dive into the IBM cloud provider and Cluster API Provider IBM Cloud.

All right. So just to brief you on the cloud provider special interest group: it is one of the 30-plus special interest groups in Kubernetes. It owns the Kubernetes cloud provider interface. This interface is responsible for running all the cloud-provider-specific control loops. You can find more about the code at the URL that I have provided here. The SIG ensures that the Kubernetes ecosystem is evolving in a way that is neutral to all cloud providers, so there's no favor given to one cloud provider or another, and everybody can participate in the ongoing discussions and the implementation of a specific cloud provider.
The SIG also ensures a consistent and high quality user experience across different cloud providers. And it owns all the sub projects that were formerly known as SIGs themselves for the different cloud providers, like SIG AWS, SIG Azure, SIG GCP, SIG IBM Cloud, SIG OpenStack, SIG VMware, and so on. Provider IBM Cloud is a sub project of this SIG Cloud Provider. As I mentioned, there will be a separate talk on SIG Cloud Provider itself, so I would recommend you attend that to learn more about the cloud provider SIG. You can also refer to the documentation at the URL that I have mentioned at the bottom of the slide. The rest of this talk will focus on provider IBM Cloud.

Alrighty, so this is a project of the cloud provider SIG. It's around building, deploying, maintaining, supporting, and using Kubernetes on IBM Cloud. It provides you a platform to interact with the IBM team and others who build and operate IBM Cloud. We openly discuss things like the contributions and alignment work that we will be doing in the Kubernetes community from the IBM Cloud perspective. And as part of this sub project, as an active member or just by following it, you get to follow the evolution of the IBM Cloud platforms with respect to Kubernetes and related CNCF projects.

All right, so for the structure of the project, we have three co-leads. Khalid Ahmed is from the IBM multi cloud manager side. We have Richard Theis, who is one of the speakers here today; he represents IBM Cloud Kubernetes Service and Red Hat OpenShift Kubernetes Service. And then myself, representing more of the open source software side. You can find some links here that will be useful, like the mailing list. I would highly recommend being part of the mailing list to get updates on the project.
We have a Slack channel, the provider IBM Cloud Slack; I highly recommend being there to see what's going on, as we provide updates and other things there regularly. And you can find more of the project documentation at the GitHub link I have provided here.

Activities: we meet every month, on the last Wednesday at 2pm Eastern time. We have a 30-to-45-minute, or sometimes hour-long, meeting, depending on the agenda. All the recordings of the meetings are also available, so if you have missed past meetings, or you cannot attend future meetings, you can refer to the videos and take a look there. We regularly participate in the SIG Cloud Provider activities, like the bi-weekly meetings and other things. The project owns a sub project called Cluster API Provider IBM Cloud, which is an extension of the Cluster API project of Kubernetes. The Cluster API project is itself a sub project of Kubernetes, and it provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of Kubernetes clusters. Brad will be giving more details there later in this presentation. And then we are staying on top of support for the out-of-tree IBM cloud provider in SIG Cloud Provider. That discussion has been going on for quite some time in the meetings, through the mailing list, and through different talks: some of the provider-specific code, which was part of the Kubernetes core, is being taken outside, and that's called out-of-tree. Richard will provide a deep dive into the IBM cloud provider, and he will also cover the out-of-tree topic there. With that, Richard, I will hand it over to you.

All right, thank you. So yeah, I'll be taking you through IKS, ROKS, and our cloud provider. And we'll start here on IKS, which is IBM Cloud Kubernetes Service, IBM's managed Kubernetes service. So you can create kube clusters in IBM Cloud.
And there are a lot of other managed services out there, and in a similar fashion to those, we are a certified Kubernetes provider through the CNCF certification program for Kubernetes. So if you'd like more information on that service, you can check out the link in the docs there, and we try to keep you updated on what's going on in IKS, posting things on the Slack channels and in our meetings as well. So with that, if you go to the next slide please. Sure. Thank you.

So this year we've already provided three releases of IKS, based upon the associated kube releases that were out. When we were talking last year at this time, we were touting that 1.16 had just come out, right, and now you can see in this chart here that we support 1.17, 1.18, and 1.19, which are the three latest releases from this year. And 1.16 is already deprecated, so you can see the speed at which Kubernetes moves; it's pretty significant. And our 1.15 release is now unsupported; it just went unsupported a month ago. So one of the takeaways for a lot of folks when they get on board with Kubernetes is that it moves fast, and you have to have a plan for change right away so you can keep current. There have been a lot of discussions in the kube community about this. It certainly impacts users directly who are working with kube, and obviously those going through managed services, which provide Kubernetes as a service or a product; they're built on the release iterations and the release cadence from Kubernetes. So a lot of folks should try to stay current. There was a discussion on long term support for Kubernetes and what that might look like. A few things changed this year with COVID and other things happening: the delay of the 1.19 release was noted, and 1.16 support was extended for a little bit, ideally trying to allow people to maybe only have to upgrade their clusters once a year.
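To make the support window Richard describes concrete: Kubernetes generally supports roughly the newest three minor releases at a time, which is why 1.17/1.18/1.19 are in support while 1.16 is deprecated and 1.15 is out. A minimal sketch of that "newest N minors" policy (the function and its name are illustrative, not a real Kubernetes API):

```python
def supported_minors(current_minor: int, window: int = 3) -> list[str]:
    """Return the 1.x minor versions still in support, assuming a
    'newest N minor releases' support window (N=3 matches the
    1.17/1.18/1.19 situation described in the talk)."""
    return [f"1.{m}" for m in range(current_minor - window + 1, current_minor + 1)]

# With 1.19 as the current release:
print(supported_minors(19))  # ['1.17', '1.18', '1.19']
```

Note this is only the rough shape of the policy; the authoritative details (including the extensions mentioned above, like the longer 1.16 window during 2020) live in the Kubernetes version skew and patch-release policy documentation.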
So then the current discussion that is going on in the community, and feel free to contribute to that, is whether kube should have three releases a year or four. Right now what I've seen is a little bit more weighted toward three; people prefer that. But certainly keep your feedback coming to the community so they can make good decisions on these things. And as far as patch releases, Kubernetes did a great job this year getting a very good cadence on delivering patches for the supported releases. They generally do it monthly, about mid month, and they patch all the releases at the same time, which has been really helpful, especially for managed services, and I'm sure for products as well, to be able to deliver those patches quickly, in a timely and very consistent manner. And we've been doing that as well. So, next slide please.

So Red Hat OpenShift on IBM Cloud, or the Red Hat OpenShift Kubernetes Service, is another managed service. Again, OpenShift is built on Kubernetes, so this is also a Kubernetes certified offering through the CNCF. And Red Hat builds on that Kubernetes base capability to give you some additional functions, which Brad will talk about a little bit later. And if you want more information, I included a link there for OpenShift. Next slide please.

From a release standpoint, we've delivered three releases of ROKS this year, so a very similar cadence to IKS. You can see each version of OpenShift is based on a version of Kubernetes, so the latest one we've delivered, 4.5, is based on Kubernetes 1.18, and we fully expect 4.6 soon from Red Hat and OpenShift, which will be based on Kubernetes 1.19. And certainly we'll be following that up and delivering that through the managed service as well. So you can see the impact of the velocity of Kubernetes carries over into the velocity of OpenShift; they're also moving very quickly. On the chart here we show 4.3, 4.4, and 4.5. There also is long term support from OpenShift and Red Hat on 3.11.
That's based on Kubernetes 1.11, so that has an extended life. We fully expect a version of 4, I believe 4.6, will be more of an extended-life version from Red Hat to give you that additional support, but bear in mind that obviously Kubernetes has not supported 1.11 for quite some time. All right, next slide please. Just one back, please. Thank you. No problem.

All right, so we have IKS and we have ROKS; they're both built on Kubernetes, and one thing that's very noticeable when you look at Kubernetes is that it's very pluggable. It has a lot of interfaces, whether at the API level, CRDs, and such, and the cloud provider is no different. There are various unique functionalities in each cloud that we need to provide through a Kubernetes control interface, and that interface is the cloud controller manager, or CCM, through the cloud provider. So that's the architecture that Kubernetes has today to support various clouds, and we leverage this architecture in both of our managed services. The control plane side is where you find the API server, the scheduler, and the Kubernetes controller manager, and it's also where you find, in our managed services, the cloud controller manager, which is the key piece for taking advantage of the cloud-specific features that you need in order to deliver some of those key aspects of Kubernetes that you've come to expect. Now, the worker node side is where you have the kubelet and kube-proxy; they're obviously running on nodes within the cloud.
And those no longer have direct connections to the cloud, if you will; under the old architecture they used to make those direct connections, and the community is trying to free that up so that the control loops are contained and it's a much cleaner interface going forward. That way the code that used to live in Kubernetes can more easily be extracted, dependencies that were specific to cloud providers can be broken, and these can get rolled out into their own repositories to make it easier for everybody both to consume Kubernetes and to build cloud-specific features for Kubernetes. So with that, next slide please.

So the cloud provider interface has several key interface functions that it provides. The biggest one for most folks is the load balancer. That is the interface through which you deliver the load balancer service to Kubernetes: Kubernetes is going to call the cloud provider to create, delete, and update load balancers. For the IBM cloud provider, depending on whether you have classic infrastructure or VPC, our next-gen infrastructure, you have slightly different load balancers available. In classic you have a network load balancer based on IPVS or iptables that runs in-cluster, and when you go to the VPC infrastructure you've got a Layer 7 load balancer, and then you also have a network load balancer, which is new in our 1.19 release that we just delivered. Also the instances interface, which is another huge component of the cloud provider, manages the nodes. In particular, you want to know the zone, region, and instance type of the node so that data can be fed back to Kubernetes for important aspects of scheduling. So those are important pieces. Now, being a managed service, a lot of the bootstrapping happens as part of the managed service, and we take advantage of that through the cloud provider to deliver that data to Kubernetes.
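The two interface areas just described, load balancers and instances, are worth sketching. The real interface is defined in Go in the k8s.io/cloud-provider module; the toy Python below only approximates its shape for illustration, and every class, method, and value here is an invented example, not the actual API:

```python
from dataclasses import dataclass

@dataclass
class NodeMetadata:
    # The kind of data the instances interface feeds back to
    # Kubernetes for scheduling decisions.
    zone: str
    region: str
    instance_type: str

class ToyCloudProvider:
    """Toy in-memory 'cloud' mirroring the two biggest cloud
    provider interface areas discussed in the talk."""

    def __init__(self):
        self._lbs = {}    # service name -> backend node names
        self._nodes = {}  # node name -> NodeMetadata

    # Load-balancer-style hooks: Kubernetes asks the provider to
    # create/update/delete load balancers for LoadBalancer Services.
    def ensure_load_balancer(self, service, nodes):
        self._lbs[service] = list(nodes)
        return f"lb-for-{service}"  # address reported in Service status

    def update_load_balancer(self, service, nodes):
        self._lbs[service] = list(nodes)

    def ensure_load_balancer_deleted(self, service):
        self._lbs.pop(service, None)

    # Instances-style hooks: node metadata lookup.
    def register_node(self, name, meta):
        self._nodes[name] = meta

    def instance_metadata(self, name):
        return self._nodes[name]

if __name__ == "__main__":
    cloud = ToyCloudProvider()
    cloud.register_node("worker-1", NodeMetadata("us-south-1", "us-south", "bx2.4x16"))
    addr = cloud.ensure_load_balancer("my-web-svc", ["worker-1"])
    print(addr, cloud.instance_metadata("worker-1").zone)
```

In the real architecture these hooks run inside the cloud controller manager's control loops, which is exactly why moving them out of tree cleans up the core Kubernetes codebase.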
As the community has moved to this new architecture, they saw the need for a new InstancesV2 interface as part of the cloud provider. That was new in 1.19 as alpha. It's going to progress, I'm hoping, to beta, and we're looking at taking advantage of it; it's a little bit more streamlined interface to align with the new design of the cloud providers. And then we do some implementation for zones, which is needed for load balancing and scheduling. For everything around clusters and routes, we rely on the CNI, the Container Network Interface; in our particular case, for the managed service IKS we use Calico, and likewise for ROKS. So we rely on that for routing. All right, next slide.

Okay, and so a little bit about the future. As always, I don't have it on here, but we continually take the data that we collect through running Kubernetes, both within the managed service based on Kubernetes and within OpenShift as a managed service through Red Hat, and deliver what we learn back to the Kubernetes community: what we learn running Kubernetes at scale, problems, security issues, enhancements, feedback. We always try to deliver that back. Also, being part of the cloud provider SIG, we're looking to expand our role here, especially if we can open source our IBM cloud provider. Things are aligning a little bit better as the Kubernetes community works to extract and migrate all of those in-tree providers out of tree, making it a lot easier to work on dependency management and builds; these processes are all coming together to get us to a point where this is going to be much easier for all cloud providers to deliver in the future. So with that, we'll be looking at improving our docs and our build, test, and release processes, and aligning our Go dependency management with what Kubernetes does. These are all the activities that we're working on right now and in the future. And that is it; I'll turn it over to Brad. Thank you Richard.
So the IBM cloud provider project has as a sub project the Cluster API Provider IBM Cloud. If you're not familiar with the Cluster API provider projects in Kubernetes, basically they provide a declarative model, or approach, and tooling to simplify the provisioning, upgrading, and operating of clusters. And so we have our active project in this space, the Cluster API Provider IBM Cloud. Just to cover some of the basics here: there's always a target cluster; that's the cluster we intend to create and manage. There's the bootstrap cluster, which is typically a smaller cluster that's used to help get things started and helps to manage that target cluster. And there is the clusterctl command line interface, which makes life easy and makes it easy to run the types of commands needed for managing the clusters. And IBM is one of many vendors that have embraced this approach. The architecture is here on the right if you're interested. Again, we have an active project in this space, so feel free to go check it out and see what the team is working on; lately I saw they're working on some upgrades.

Next chart please, Sahdev. So the one question that always comes up in this session is: what's the difference between Kubernetes and OpenShift? We're going to go ahead and address it now, because I'm sure it'll come up in the questions. OpenShift is a Kubernetes distribution from Red Hat that includes extra tooling to simplify cloud native development, and it also provides automated operations support. When you start doing development for cloud native applications and you're going to start running in production, it requires a lot of skills.
Right, so you need to be able to create container images, and you typically need to know how to find a base image, take your code and merge it into the base image, create the new image, and push it to a registry. If you work with kube every day, well, maybe you're an expert at that and that's not so difficult. But if you're a large organization and you have, say, lots of Java programmers and lots of Python developers, these are folks who want to get the benefit of cloud native but maybe don't have all those cloud native skills. So what OpenShift provides is image creation and deployment tooling that makes it real easy for a developer to just push a change to a git repository. This is called source-to-image, and through OpenShift, it's able to recognize that change, find the base image, merge your new code changes into the base image, and push it to a registry. Taking care of all those details reduces the amount of friction for a developer who doesn't work in cloud native every day and isn't an expert at Docker configs and what have you, letting them get up and running and run their application in a cloud native fashion in a Kubernetes environment. OpenShift will also give you image and configuration change detection.

The other thing that OpenShift does that's nice is it provides security guardrails. When you run in production, compared to running in just your development environment, you need to worry about security, and in Kubernetes there are a lot of knobs that you need to turn to get the role-based access control correct. One of the things that OpenShift provides are security contexts, or security profiles, that make it much easier for a developer to get the right security that they need for production out of the box, and not have to do a lot of knob turning on a lot of individual values that they're otherwise guessing are properly configured or not.
Similarly, OpenShift in its default form is going to prevent privileged containers from running by default. And again, why that's important: a privileged container is something that would have root access, and if somehow there's a security breach, those types of issues can cause way more harm than non-privileged containers. So OpenShift is going to protect you in that way, and also try to deter you from running with the default namespace; again, that can cause some security issues as well. So OpenShift provides tools to help you get up and running, provide production-level security, and reduce the level of expertise that your developers and operators need. Also, OpenShift provides automated cluster size management; it will automatically provision new worker nodes to increase the cluster size, and it has great day-2 automated operations support: automated installation, automated updates, and cluster version management. So a lot of features that become very, very valuable when you're ready to run in production. Obviously IBM has both standard Kubernetes and OpenShift, as Richard covered, but those are the differences. And so that pretty much concludes our presentation, and we'd like to thank you for attending.