Hello, everyone. Welcome to this session, an introduction and project update for Cloud Provider OpenStack. My name is Lingxian Kong from Catalyst Cloud. With me today is Anusha Ramineni from NEC, and we are both maintainers of Cloud Provider OpenStack. In this session, we will first give an overview of Cloud Provider OpenStack. Then we will go through its components: their features, their design, and their recent updates. And finally, we will give you some information about how to get involved.

Maybe some of you have already heard of OpenStack, which is an open source cloud computing platform. Before Kubernetes was born, OpenStack was one of the most active open source projects in the world, and then Kubernetes came along and caught up. So some people may say they are competitors, but from my perspective, that doesn't mean one is a replacement for the other. The two are perfectly capable of working in tandem to bring greater value to organizations and better service to users. So actually, they are friends, and most importantly, they are both open source.

Cloud Provider OpenStack was created as a sub-project of SIG Cloud Provider. The responsibility of SIG Cloud Provider is to establish the standards and requirements that should be met by all cloud providers to ensure their integration with Kubernetes. So basically, Cloud Provider OpenStack is just like the AWS, GCP, or Azure cloud providers, if you are more familiar with those public clouds; they all play a similar role.

There are actually quite a few components in Cloud Provider OpenStack that implement various Kubernetes resources and functions. Here is a list of all of them. The two most important ones are the OpenStack Cloud Controller Manager and the CSI plugins. Besides those, we also have the Octavia Ingress Controller, which implements the Ingress resource in Kubernetes, and the Magnum AutoHealer, which keeps cluster nodes highly available. In the security area, we have the Keystone Webhook for authentication and authorization (RBAC), and the Barbican KMS plugin for secret data encryption. We will go through them one by one shortly.

Here is a diagram showing the Kubernetes resources and functions each Cloud Provider OpenStack component implements, and the interaction between each component and the OpenStack services. OpenStack has lots of projects and services, and as you can see, some services are required by particular components. For example, Magnum, the Kubernetes-as-a-service in OpenStack, is required by the Magnum AutoHealer. Another one is Octavia, the load-balancer-as-a-service in OpenStack, which is required by multiple components: the OpenStack Cloud Controller Manager and the Octavia Ingress Controller. As you know, in Kubernetes both the Service of type LoadBalancer and the Ingress need external cloud load balancers to be created.

Releases of Cloud Provider OpenStack follow the same cadence as Kubernetes. For example, Kubernetes v1.22 was released several weeks ago, and we released the matching version of Cloud Provider OpenStack a few days later. In addition, we have CI jobs running to make sure that one version of the OpenStack Cloud Controller Manager can work with the latest three minor versions of Kubernetes. We do not, however, follow the Kubernetes patch release schedule.
We only do patch releases as required, especially when there is a critical bug fix we need to backport, so our patch versions differ from those of Kubernetes.

In terms of artifacts, each time we do a release of Cloud Provider OpenStack, we create and upload binary files to our release page. Meanwhile, we build and upload the Docker images for each component to Docker Hub. Both the binary files and the Docker images support multiple platforms, such as amd64 and arm64. We also provide manifest examples in our repo, which make it easy for users who want to deploy and test our components. If you prefer deploying the components with Helm, we also have Helm charts in GitHub. Most importantly, our CI jobs use the Docker images published to Docker Hub and the manifest example files in our repo, to make sure they work as expected.

Okay, first and foremost, I want to introduce the OpenStack Cloud Controller Manager. As you know, a cloud controller manager is a special controller manager that talks to the cloud API for cloud-specific functions. The OpenStack Cloud Controller Manager talks to Octavia, the load-balancer-as-a-service in OpenStack, in order to implement the Service of type LoadBalancer in Kubernetes. If you are familiar with OpenStack, you may know that several years ago the OpenStack networking service, Neutron, had a plugin called neutron-lbaas that provided some simple load balancer functions. However, neutron-lbaas was deprecated, I think two or three years ago, in order to promote Octavia in the OpenStack community. As a result, the OpenStack Cloud Controller Manager has also dropped support for neutron-lbaas. So if your cloud is still running neutron-lbaas, it needs to upgrade before you can use the latest versions of the OpenStack Cloud Controller Manager. We also support creating a TLS-terminated service with Barbican, the OpenStack key manager service.

At the moment, there are still some known issues. For example, we don't yet support the Local external traffic policy, which routes traffic only to pods local to the node in order to avoid an extra traffic hop within the cluster network. This feature is supported by most public clouds, and we are still working on it. Another issue: if an application inside the cluster wants to talk to the external load balancer with the proxy protocol enabled, the request will fail, because kube-proxy is too smart; the traffic simply bypasses the external load balancer and goes directly to the service backend pods, which obviously don't speak the proxy protocol. We have an ugly workaround for that, but I know someone is working on it in the Kubernetes community, and hopefully a proper fix will be available soon. Another issue is that we have limited annotation support when updating a service. We will keep working on those issues upstream.

And here are some updates from our latest releases. As mentioned, we added Helm chart support for the Cloud Controller Manager and our CSI plugins. We also added support for TLS termination with Barbican, when the key manager service is deployed, and metrics support for monitoring purposes. Another improvement we made in the past releases is to use a single API call to create the load balancer, which significantly decreased service creation time. And last but not least, we added an Octavia version check for some advanced features, because we know some cloud providers are still running different versions of Octavia, and even different versions of OpenStack; this makes sure the OpenStack Cloud Controller Manager won't break when talking to different Octavia versions, especially for the advanced features.
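To make the TLS termination support mentioned above concrete, here is a minimal sketch of what such a Service could look like. The Barbican endpoint and container UUID are placeholders, and while the annotation key follows the loadbalancer.openstack.org/* convention used by the OpenStack Cloud Controller Manager, you should verify the exact key and the container reference format against the documentation for the release you run.

```yaml
# Sketch only: a Service of type LoadBalancer with TLS terminated on the Octavia
# listener, using a certificate stored in Barbican. The annotation key and the
# container ref format should be checked against the project docs for your release.
apiVersion: v1
kind: Service
metadata:
  name: web-tls
  annotations:
    loadbalancer.openstack.org/default-tls-container-ref: "https://barbican.example.com:9311/v1/containers/<container-uuid>"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - name: https
      port: 443          # TLS is terminated on the cloud load balancer
      targetPort: 8080   # plain HTTP to the backend pods
```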
We also have some things planned. In the next release, we are going to implement a feature that reuses a single cloud load balancer for multiple services, which brings a more cost-effective solution for cluster users. And we also need to stabilize our CI to make our contributors' lives easier.

Next, the Octavia Ingress Controller. The Ingress controller is responsible for reconciling the Ingress resource in a Kubernetes cluster. Similar to the OpenStack Cloud Controller Manager, the Octavia Ingress Controller also communicates with Octavia in order to create load balancers, and likewise it supports TLS termination with Barbican, the key manager service in OpenStack. So it is very similar to the OpenStack Cloud Controller Manager. Basically, the job of the Octavia Ingress Controller is simple: it maintains the mapping from the Ingress definition to the resources in the cloud, making sure that when the Ingress definition changes, the cloud resources are updated accordingly.

Next, the Magnum AutoHealer. Magnum is the Kubernetes-as-a-service in OpenStack; it provides a cloud API to create, update and delete Kubernetes clusters, and additionally it provides advanced features such as cluster certificate rotation, cluster rolling upgrade and cluster node resize, so it is very useful. The Magnum AutoHealer was initially designed for Magnum, but we have since changed the architecture to support multiple cloud providers. In the Magnum AutoHealer we have a health checker and a cloud provider, and both are pluggable. This means that if you are the cloud administrator or the cluster administrator, it is easy to customize the health check by integrating with your own monitoring solution, and if you are a cloud provider, it is easy to implement the interface needed to manage the Kubernetes clusters running on your cloud. And by the way, the Magnum AutoHealer supports both master nodes and worker nodes for failure detection and auto healing.

The Keystone Webhook provides authentication and authorization for clusters, and the two features can actually run separately. I think the most significant value the Keystone Webhook brings is that it simplifies the login process and the resource permission management for OpenStack users. For example, if you are an OpenStack project administrator, you have existing OpenStack users in your project, and you have some Kubernetes clusters running on top of OpenStack, then user management and resource access management become very simple with the Keystone Webhook. If configured, the Keystone Webhook can create Kubernetes namespaces automatically for the OpenStack project, and it can map the roles of an OpenStack user to a Kubernetes user or group. So it's very convenient.
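To give a sense of how the webhook plugs in: the kube-apiserver is pointed at it through its standard --authentication-token-webhook-config-file and --authorization-webhook-config-file flags, each of which takes a kubeconfig-style file. Below is a minimal sketch; the service name, port and path in the server URL are assumptions that depend on how you deploy the k8s-keystone-auth webhook, not fixed defaults.

```yaml
# Kubeconfig-style webhook configuration passed to the kube-apiserver via
# --authentication-token-webhook-config-file (and similarly for authorization).
# The server URL below is an assumed deployment detail; adjust it to your setup.
apiVersion: v1
kind: Config
clusters:
  - name: keystone-webhook
    cluster:
      certificate-authority: /etc/kubernetes/pki/webhook-ca.crt
      server: https://k8s-keystone-auth.kube-system.svc:8443/webhook
users:
  - name: kube-apiserver
    user: {}
contexts:
  - name: webhook
    context:
      cluster: keystone-webhook
      user: kube-apiserver
current-context: webhook
```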
Speaking of authorization, I think the Keystone Webhook provides a more flexible RBAC policy than the Kubernetes built-in RBAC. As we know, the built-in Kubernetes RBAC is purely additive and allow-list based, which makes it easy to define roles such as which user can perform which operations on which resources. But with the Keystone Webhook, you can also define policies such as "a user can access all resources except for some specific ones", so it is more powerful. And policy changes can be made dynamically, without restarting the service.

The last component I want to cover in my part is the Barbican KMS plugin. Barbican is the key manager service in OpenStack, and the Barbican KMS plugin is pretty simple: it is a KMS provider that runs as a gRPC server on the Kubernetes control plane and talks to the Barbican service in the cloud in order to fetch the key encryption key. Using the key encryption key, Kubernetes can manage the data encryption keys and encrypt the data at rest in storage, which mostly means etcd. So the Barbican KMS plugin is just responsible for fetching a secret from Barbican in order to encrypt or decrypt the data for Secrets in Kubernetes.
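To show where the plugin sits in practice: the kube-apiserver is pointed at it through its standard --encryption-provider-config file, with the plugin exposed as a KMS provider over a Unix socket. A minimal sketch follows; the provider name and the socket path are assumptions that depend on how the barbican-kms-plugin is deployed on your control-plane nodes.

```yaml
# Sketch of an EncryptionConfiguration that makes the kube-apiserver encrypt
# Secrets at rest through a KMS provider. The provider name and socket path are
# assumed values; they must match your barbican-kms-plugin deployment.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: barbican                            # assumed provider name
          endpoint: unix:///var/lib/kms/kms.sock    # assumed socket path
          cachesize: 100
          timeout: 3s
      - identity: {}   # fallback so previously stored plaintext data stays readable
```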
You may have noticed that I haven't covered the components in the storage area, which I will hand over to Anusha for. Okay, Anusha, please go ahead.

Thanks, Lingxian. The next components we are going to cover are the CSI drivers. CSI drivers are used for volume management in Kubernetes, and the Cloud Provider OpenStack repo hosts a couple of them that we will look at in a bit. Before diving in, a brief intro on what CSI is and why it is used. CSI is the Container Storage Interface, an industry-defined standard for exposing storage systems to containerized workloads. With the adoption of CSI, the Kubernetes in-tree volume plugins have been moved out of tree, and the volume plugins can be containerized, so these plugins can be written without the need to touch the Kubernetes code. With the deprecation of the in-tree volume plugins, CSI drivers must be used with Kubernetes for volume management.

This is a high-level diagram of the components involved in a Kubernetes cluster with CSI. We have the CSI driver here on the right-hand side. It implements the volume plugin behavior and exposes the gRPC services defined by the CSI spec: the identity service, the controller service, and the node service. The CSI driver in turn communicates with the underlying OpenStack Cinder or Manila service to provide volume management in Kubernetes. Next to it are the sidecar containers, which are helper containers that assist the communication between Kubernetes and the CSI driver. We have five or six sidecar containers, which are optional and can be enabled as needed: the node-driver-registrar registers the CSI driver with the kubelet, the external-provisioner provisions and deletes volumes, the external-attacher does the attach and detach operations, the external-snapshotter handles volume snapshots, and the external-resizer provides the volume expansion functionality. These can be enabled as required by the driver. On the Kubernetes side we have the core components: the kube-apiserver, the controller manager and the kubelet. The kube-controller-manager communicates with the external CSI driver via the kube-apiserver, and the sidecar containers watch the Kubernetes API server for events and then invoke the respective calls on the CSI driver.

Coming to the first CSI driver, Cinder CSI. As you know, Cinder is the OpenStack block storage service, and this CSI-compliant driver is used to manage the lifecycle of Cinder volumes. The plugin is compatible with CSI spec 1.3.0, and for every release we make an effort to stay compatible with the latest CSI spec. The release cycle of the plugin is in sync with the Kubernetes releases; for example, for the recent Kubernetes 1.22 release we released a matching version of the plugin that is compatible with that Kubernetes release. Likewise, we also update the sidecars to the latest versions for every release. The sidecar versions supported by the driver are listed in the manifests, and we recommend using the same versions to ensure nothing breaks with the driver.

Coming to the driver deployment, two methods are supported: Helm charts, and the sample manifests in the repo, which can be used for easier deployment. Going a bit deeper into the deployment: the CSI driver is commonly deployed as two sets of plugins, a controller plugin and a node plugin. The controller plugin is deployed as a StatefulSet or a Deployment. Inside it we have the Cinder CSI plugin container, along with the sidecar containers (external-provisioner, external-snapshotter, external-attacher, external-resizer), which can be enabled as required; it can be installed on any node in the cluster. The communication between the sidecars and the plugin happens through gRPC over a Unix domain socket. Next is the node plugin, deployed as a DaemonSet, so it runs on every node. It has two containers: the Cinder CSI plugin and the node-driver-registrar sidecar, which registers the driver with the kubelet. The communication happens over gRPC here as well.

Cinder CSI supports a wide range of features. If you would like to explore any of them and learn how to use them, we have detailed documentation in the repo; please check it out.

This one is for users who are still using the in-tree Cinder provisioner: starting from Kubernetes 1.21, the Cinder CSI migration flag is supported as a beta feature and is on by default. So by default, all the operations of the existing in-tree plugin are redirected to Cinder CSI. This will fail if you don't have the Cinder CSI driver installed on your cluster, so you need to explicitly disable the feature if you don't want to use Cinder CSI. However, the in-tree plugins are targeted for removal in a couple of releases, mostly by 1.24, so it is expected that everyone migrates to the CSI driver instead. We provide a detailed guide on how to migrate from the in-tree provisioner to the external CSI driver, so do check it out.
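From the user's point of view, once the driver is installed (or the migration is in effect), volumes are consumed through a StorageClass that references the driver. Here is a minimal sketch: the provisioner name is the driver's registered name, while the volume type parameter is an assumption you would adjust for your cloud.

```yaml
# Sketch: dynamic provisioning of a Cinder volume through the CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
provisioner: cinder.csi.openstack.org   # the Cinder CSI driver's registered name
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  type: ssd        # assumed Cinder volume type; depends on your cloud
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-csi
  resources:
    requests:
      storage: 10Gi
```

Creating the PersistentVolumeClaim triggers the external-provisioner sidecar mentioned earlier, which in turn asks the driver to create the corresponding Cinder volume.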
The major updates contributed to this plugin over the past year: we added generic ephemeral volume support; we added support for multiple config files, so the cloud config can now be specified multiple times and the files are merged; as mentioned, the sidecars are updated to the latest versions every release; and we added support for the ignore-volume-az option, for clusters where the node availability zones and the volume availability zones differ, which can be enabled so that a pod can be scheduled in any node availability zone. Several other small improvements have also been contributed to the repo.

Planned for the future: for this cycle the main focus will be on CI improvements, stability, and increasing test coverage. We have recently been migrating to the new CI setup, so there is a lot of work there, and there are also plans for implementing or supporting new CSI features.

The next CSI driver hosted in the repo is the Manila CSI driver. Manila is the OpenStack shared file system service, and the CSI Manila driver is able to create, expand, snapshot, restore and mount OpenStack Manila shares. The currently supported Manila backends are NFS and native CephFS. The latest release, 0.9.0, was done in sync with Kubernetes 1.22 and is compliant with CSI spec 1.2.0. Several features are supported, including dynamic provisioning, topology, volume snapshots and online volume expansion. For this driver too, deployment through Helm charts and sample manifests are both supported.

The major updates to this plugin over the past year: expanding volumes in online mode was added; injecting metadata into newly created shares, either through the cluster ID or through storage class parameters, was added; the selection of the NFS share export location can now be influenced by specifying the desired subnet; and mount options specific to the CephFS CSI driver can now be passed through. A couple of features planned for future releases: improving the validation and handling of volume access modes, working on mountable snapshots, and improving the selection heuristics for the NFS export location.

That concludes our overview of all the plugins hosted in the repo. If you are interested in getting involved in the project, there is a getting started guide that will help you onboard. We are actively looking for developers who could contribute in the test area, the documentation area, plugin enhancements, and so on; you are very welcome. As you have seen, all the plugins are hosted in the cloud-provider-openstack repo, and for users, do raise a feature request or a bug report if you come across anything you would like to report. For communication, we are active on the provider-openstack channel on the Kubernetes Slack, and we have added a couple of contacts as well, so feel free to ping us on Slack to learn more about the plugins or how to start contributing; we are happy to help.

Thanks all for joining the session. Let's open it up for Q&A. Thank you.