Hello everybody, and good afternoon. Welcome to the Introduction to SIG OpenStack and community session at KubeCon Shanghai. I want to thank everybody for joining the session. In this talk, we're going to give a brief overview of all of the activities that SIG OpenStack is engaged with right now and how they relate to the Kubernetes community in general. I'm hoping that after we finish those topics, people will have an opportunity to ask questions and give us feedback on whether they're using OpenStack and Kubernetes together, how they're using them, and how SIG OpenStack can help you meet your needs to successfully deploy Kubernetes on top of OpenStack. So we're going to start with SIG OpenStack governance. SIG OpenStack coordinates the cross-community efforts of the OpenStack and Kubernetes communities. This includes OpenStack-related contributions to Kubernetes. You can look at OpenStack in a number of different ways in relation to Kubernetes. The first is that you can treat it as a deployment platform for Kubernetes. This would be a typical scenario: just as you would run Kubernetes on top of AWS, Azure, or Google Cloud, you can also run Kubernetes on top of an OpenStack cloud. In fact, Kubernetes really depends upon the existence of a cloud. It doesn't necessarily know how you handle storage, how you handle ingress controllers, or how you want to handle your networks.
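As a concrete illustration of that dependence, storage surfaces through standard Kubernetes objects. With the Cinder CSI driver discussed later in this session, a StorageClass might look like the following sketch; the class name and parameter values are placeholders, and parameter names can vary by driver version:

```yaml
# Hypothetical StorageClass backed by the Cinder CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard          # placeholder name
provisioner: cinder.csi.openstack.org
parameters:
  availability: nova             # Cinder availability zone (placeholder)
reclaimPolicy: Delete
```

A PersistentVolumeClaim referencing this class gets its volume provisioned in Cinder without the workload knowing anything OpenStack-specific.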
And so it's dependent upon some sort of underlying infrastructure that it builds upon through a series of common APIs. OpenStack can also be a service provider for Kubernetes. This would be the case if you want some sort of authorization provider or storage provider: OpenStack can provide those things. As an example, Keystone has a webhook that connects to Kubernetes RBAC and allows you to use Keystone to manage your users and your projects within a Kubernetes cluster. Similarly, we have storage drivers for Cinder that can plug directly into Kubernetes using the CSI interface, giving you the power of the more than 80 storage backends that Cinder supports without having to pull in an entire OpenStack cloud. Finally, OpenStack can serve as a collection of applications to run on Kubernetes. If you consider that Kubernetes is an orchestration platform for microservices, and OpenStack is a collection of microservices that are meant to deliver infrastructure, then you can actually install OpenStack on top of Kubernetes and have that be the management platform for your OpenStack cloud. This leads to an interesting situation where you might be running an infrastructure Kubernetes cluster with OpenStack on top of it, and then you have a full multi-tenant cloud that you can run Kubernetes on and offer to your users. You can see the full collection of OpenStack projects that are available; they run throughout the stack, from networking layers, to managing hardware through projects like Ironic, all the way up to the compute layer, which many people are familiar with through Nova. But we also have projects like Zun, which offers containers, and Qinling, which offers functions as a service, as well as higher-level projects like Magnum. Magnum offers an API to create Kubernetes clusters on top of OpenStack, and it depends upon some other projects like
Barbican for secrets management. At the very top, you have things like Horizon and the OpenStack client, which are ways that you can interact with your OpenStack cloud, and a number of associated projects around those that help you to deploy, manage, and extend the functionality of your OpenStack cloud. So let's look again at the ways, which we talked about before, that you can have OpenStack interact with Kubernetes. This slide captures it: if we look at our stack as it moves from left to right, at the bottom we manage bare metal infrastructure with OpenStack Ironic. This means we're using Ironic, either as part of a full OpenStack cloud or as a standalone service, to reach out and provision your bare metal servers, and then provision either OpenStack or Kubernetes on top of that. So that's your bare metal layer. Then you have your infrastructure layer, where you have your OpenStack installation, or possibly a Kubernetes installation, or possibly both, connected with projects like OpenStack Kuryr, which allows you to bridge the networking between those layers. Then, when you have your OpenStack cloud in place, you run Kubernetes on it, and one of the main things that connects Kubernetes to your OpenStack cloud is what's called a cloud provider. There's a session later this afternoon for SIG Cloud Provider where we'll go more into depth on how, in general, you build cloud providers for Kubernetes, but OpenStack has an implementation of this that allows Kubernetes to be strongly aware of the resources that are available in OpenStack. These include networking resources, compute resources, and storage resources, as well as things like load balancers and ingress controllers. So
we call this project cloud-provider-openstack. We have a repository that hosts several Kubernetes and OpenStack integrations: the cloud controller manager for OpenStack, the Cinder CSI plugin, the Octavia ingress controller, a Barbican KMS plugin, and Keystone authentication and authorization webhooks. The cloud controller manager implements the cloud provider interface and runs the cloud-provider-specific control loops that need services and information from the cloud, such as zones, node details, load balancers, and so on. Cinder is the block storage service in OpenStack; we also have a Manila driver for CSI, which allows you to treat shared file systems as CSI-compliant file systems as well. Kubernetes-specific deployment files are maintained with the drivers, along with the older in-tree volume drivers that can be used with Kubernetes for volume management. So there are a lot of different storage options, but what you should really be looking at for future use is anything that's a CSI driver, because all of the other drivers are going to be phased out of production in favor of the CSI system. We also have an Octavia ingress controller. Octavia is an OpenStack service that builds upon Neutron to provide load balancers, and there is an independent layer that allows you to use Octavia to manage all of your ingress traffic. So let's look at how this works in practice, starting with the large red area.
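Before walking through the diagram, here's roughly what an Ingress handled by the Octavia ingress controller looks like. This is a sketch: the ingress class annotation is the one the cloud-provider-openstack docs use as I recall them, and the host and service names are placeholders:

```yaml
# Hypothetical Ingress routed through an Octavia load balancer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress                              # placeholder name
  annotations:
    kubernetes.io/ingress.class: "openstack"     # claimed by the Octavia ingress controller
spec:
  rules:
  - host: web.example.com                        # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: web                       # placeholder backend Service
          servicePort: 80
```

The controller watches for Ingress objects with this class and builds the corresponding Octavia load balancer for you.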
We have our OpenStack cloud with Keystone, Nova, Neutron, and Cinder. On top of that cloud you have two different major flavors of nodes. You have a Kubernetes controller node, which runs your kube-controller-manager as well as your cloud-controller-manager; the cloud-controller-manager is the portion of the code that connects to the cloud provider and offers that integration between Kubernetes and OpenStack. Then there's the kube-apiserver, which sits in the middle of this and communicates with etcd to manage the state of your cluster, as well as the kube-scheduler, which is able to look at the resources that you have within Nova and within the Kubernetes worker nodes and schedule your workloads onto nodes. With your control node up, you then have a number of Kubernetes worker nodes that are all running kubelet and kube-proxy, and it's possible to scale this and add or remove nodes as you would on any other cloud. Okay, another piece that we talked briefly about was the Keystone auth webhook and Keystone client. k8s-keystone-auth provides Kubernetes with webhook authentication and authorization; client-keystone-auth allows client-side authentication using the Keystone API and allows the exchange of user credentials for bearer tokens via kubectl. A typical workflow would be: the user issues a kubectl command; the credential plugin prompts the user for Keystone credentials and exchanges those credentials with the external service, in this case Keystone, for a token. The credential plugin returns the token to client-go, which then uses it as a bearer token against the API server. The API server then uses webhook token authentication to submit the token to the Keystone service, and Keystone verifies the token and returns the user's username and groups. You can break this down into a diagram where you see
that Keystone is sitting in the middle of this and you can follow the entire flow. kubectl requests a token, sending along the user's credentials, and it receives a token back. kubectl then sends the token to the kube-apiserver, which passes it to k8s-keystone-auth, which sends the token back to Keystone. So it's making the round trip to Keystone, and Keystone is saying, yes, this is a token that I created, and the user is authorized for whatever this operation is, and it responds with the appropriate usernames and groups that are associated with it. That's then sent back to the kube-apiserver, and it either performs or denies the operation based upon the response from Keystone. We also have the Barbican KMS plugin. The API server requests that the KMS plugin encrypt the secret data; the plugin encrypts the data with a key from the Barbican service running in an OpenStack cloud, and then, using envelope encryption, the Kubernetes API server requests the plugin to encrypt and decrypt the data encryption key. So you're basically storing the secrets that allow Kubernetes to encrypt the data held in the cluster, and they are kept in a secure key service called Barbican. Okay, so that was a run-through of all of the projects that we're managing that provide these kind of low-level Kubernetes and OpenStack integrations. But a new project that many of you may have heard of, coming out of the SIG Cluster Lifecycle group, is the Cluster API Provider OpenStack, and this is a new project that we have started within the last year. The Cluster API is a Kubernetes project to bring declarative, Kubernetes-style APIs for cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes. It's important to note that the Cluster API is still in the prototype
stages. Although they've just started shipping their alphas, and if they haven't published their stable API they're right on the verge of doing so, it should still be considered a bit experimental within its abilities, and we're trying to drive more technical feedback on the API design to make sure that, when the API settles down, it's done right the first time. Nevertheless, SIG OpenStack has been collaborating on the creation of two different Cluster API providers. Okay, so what is this actually doing? Essentially, the cluster provider is a set of custom resource definitions. You have some sort of bootstrapping Kubernetes machine with the cluster and machine CRDs and a controller manager, and when a request is made to that small bootstrapping cluster, it reaches out into your cloud, in this case your OpenStack cloud, and uses the custom resource types to create the machines that you want to deploy Kubernetes on; then it builds out the cluster definitions and places those into Kubernetes. The nice thing about this is that you're treating your cluster the same way that you would treat any other Kubernetes resource. So if a node begins to fail, you're going to have algorithms that can look at that and do auto-healing based on that node failure; if your workload is increasing, you're going to have algorithms, applicable not just to an OpenStack cloud but to any other cloud, that allow you to scale your nodes out. As you look at the Kubernetes landscape and the important things that are coming up, the Cluster API and all of its implementations are going to be very important, because what you're going to see is an implementation of general cluster management algorithms that don't
necessarily know anything about your underlying cloud. Instead, you're going to have a provider, like the Cluster API OpenStack provider, that handles all of the heavy lifting of creating your nodes and installing the necessary components on top of them, while the logic of what state a node is in, whether we need to grow or shrink the cluster, and whether the cluster is healthy will be handled by the Cluster API. Right now, and this is the case for all of the providers, it's more of a concrete implementation than a true pluggable provider. The Cluster API specifies an API that you have to code to, but it's not a pluggable interface yet. So it's not a plugin: you actually go and grab the cluster-api-provider-openstack code, build it, and run it as a standalone program; you're not plugging into some general Cluster API program. Again, it's still in the prototype stages while we get feedback on the API types themselves, but it is settling down, and we're still looking for more developers and more robust implementations. So if this is something that you're interested in contributing to, there's a tremendous amount of effort behind it. And if you're using other installers like kops, you should be considering that the Cluster API is either going to replace those installers or serve as their fundamental basis in the future. So this is a great place to get involved in an important part of the future of Kubernetes. Part of the work that we need to do here is that, right now, cloud credentials are configured differently than for the cloud provider. So another piece of work that we want to get started relatively soon is a single library for handling all of the credentials used to communicate with an OpenStack cloud, and then have the Cluster API
provider, as well as cloud-provider-openstack, all use that same library for authentication, so you don't have different ways of authenticating to an OpenStack cloud and defining that authentication. I mentioned that there are two of these Cluster API providers. The second is a new project that came out of Red Hat but has picked up a lot of new contributors. It's called Metal³ (pronounced "metal kubed"), which is a bare metal Cluster API provider. Metal³ exists to provide components that allow you to do bare metal host management for Kubernetes. It works as a Kubernetes application, meaning it runs on Kubernetes and is managed through Kubernetes interfaces, and it is going to provide a Cluster API implementation for bare metal. Now, how does this relate to OpenStack? Well, right now it's using OpenStack Ironic. The logo right there is Pixie Boots, the bare metal drummer bear. Ironic is the OpenStack project that allows you to manage bare metal. It supports a number of different open and proprietary hardware types, and it allows you to manage the complete life cycle of a bare metal system: not just how you provision the image on it, but how you put machines into a managed state if there's some sort of hardware failure, or how you deprovision a machine when it's reached the end of its life cycle.
It also gives you a way to treat bare metal similarly to the way you would treat virtualization. This even includes things like connecting to the Glance image service, so that you are able to upload images to Glance, and Ironic does all of the work of validating those images to make sure that they're okay to load onto your system, and it provides a location where they can be downloaded from for your PXE server. So it's a very powerful and flexible tool for managing bare metal, and one of the major bare metal Cluster API providers for Kubernetes is going to be using it as one of its fundamental layers. This is a pretty exciting move forward. If you're interested in seeing more about this: at the Open Infrastructure Summit that we had in Denver, one of the keynotes included a demonstration of some early stages of Metal³, where we provisioned some bare metal hardware with Ironic and a custom resource definition. We also had a demonstration of this at the Kubernetes community meeting that was held directly before KubeCon Barcelona, about a month or so ago.
So if you go back and look for that demonstration, you can find it in the Kubernetes community meeting recordings. The way it works is that you have a machine controller with a bare metal actuator, built on top of a bare metal operator whose CRDs represent the bare metal inventory along with the configuration needed by its bare metal management workers. That in turn builds upon bare metal management pods, and together they manage your entire cluster. It's really neat: you create a YAML file that describes your bare metal cluster, including things like your IPMI interface, how you connect to IPMI, what addresses you connect to, and what ports are available on the machine for the different networks connected to it outside of IPMI and for PXE booting. When you change the state of that YAML for a machine to be enabled with a particular image, Kubernetes running on top of OpenStack Ironic will bring the bare metal machines into that state. Okay, so if you are interested in SIG OpenStack, we have a Slack channel, which is #sig-openstack.
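The machine YAML described above can be sketched as a Metal³ BareMetalHost resource; all names, addresses, and URLs here are placeholders:

```yaml
# Hypothetical Metal³ BareMetalHost describing one physical machine.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
spec:
  online: true                                  # power the machine on
  bmc:
    address: ipmi://192.168.111.13              # how to reach the IPMI interface
    credentialsName: worker-0-bmc-secret        # Secret holding the IPMI username/password
  bootMACAddress: "00:11:22:33:44:55"           # NIC used for PXE booting
  image:
    url: http://images.example.com/host.qcow2   # image to provision onto the host
    checksum: http://images.example.com/host.qcow2.md5sum
```

Changing fields like `spec.image` or `spec.online` and re-applying the resource is what drives the operator, and Ironic underneath it, to bring the machine into the declared state.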
We also have bi-weekly meetings on Wednesdays at 15:00 UTC, and our primary GitHub repository is hosted within the main Kubernetes organization at kubernetes/cloud-provider-openstack. It's also worth mentioning: I'm Chris Hoge, and I'm one of the co-leads of the SIG. I co-lead the SIG with Aditi Sharma, who is from NEC Technologies in India, as well as Christopher Glovitz, who is with Inno Cloud in Germany. Christopher is the main point of contact for the OpenStack Cluster API provider, and Aditi has been one of the primary developers on cloud-provider-openstack. And so with that, we have a lot of time left over, and I like to use these sessions to answer any questions that you might have, but also to understand your usage and your needs for the cloud provider. So I'm happy to take questions now. No questions? Okay, so I have questions for all of you then. How many of you are running production OpenStack clouds right now, just to get a sense? And are you running Kubernetes on top of those clouds right now? Some of you. And are you using Magnum, or some other sort of system, to install and maintain it? Audience: Magnum. I deployed Magnum on OpenStack Liberty about two years ago, maybe 2016. It was just an experiment; I do some things with Nova and Neutron, and now I want to study something about Kubernetes. That's all. I just came here to acquire some news and information, and I have no question. Thank you. Okay, no, that's fine, but I think that brings up a good point: how do you get started running Kubernetes on top of an OpenStack cluster?
And so Magnum is a wonderful project, but it's also very complex, and if you're interested in running Kubernetes on OpenStack, it may not be the place to start if you want to understand the integrations. Now, it is nice. If you're familiar with CERN, the European nuclear research organization where they discovered the Higgs boson, they're running a large OpenStack cloud, and at last count they have over 400 Kubernetes clusters running on top of it, and they're using Magnum to enable this. So Magnum is being used in production, and it's providing a fantastic service; all of these clusters are tenant-isolated, because Magnum takes advantage of the multi-tenancy inside of OpenStack. But setting it up is a challenge, because you have to set up a lot of trust stores, and, say, Barbican, and some storage pieces, and make sure your network is configured correctly. It can be a bit of a lift to get to that point, and you also have to create the images that you're going to build your cluster from. If you're looking at starting with Kubernetes on top of OpenStack, I think that kops is a good place to start, because it assumes that you have access to an OpenStack tenant. So rather than being a managed service that installs Kubernetes for you, it's almost like a deployment tool; I think it uses Terraform to reach into your cloud and manage the nodes. So in one case you, as a user, are using a tool to reach out, make the requests to your cloud, and make those changes, where Magnum is more like asking the larger OpenStack service to stand something up for you. It's the difference between a managed cloud versus a personally deployed cloud. So kops is a good place to start, because it's going
to go through and manage creating all the connections that you need to bring up the cloud provider and integrate that with your installation. Okay, thank you. You're welcome. So we have another question, right next to you. Audience: Hi Chris, we haven't used the Cluster API OpenStack provider in Catalyst Cloud, because it's necessary to have a bootstrap Kubernetes cluster, and that's very difficult for a cloud provider who has already deployed OpenStack. What do you think of this? We have deployed Magnum, because we had already deployed other services like Nova, Cinder, and Neutron. Yeah, so I also think that's a very good question. I would say that the Cluster API Provider OpenStack is alpha software; it's very new. Christopher is using it, developing it for their cloud and for the work that they're doing, but I wouldn't be using it in production right now. And Magnum is based on having a full OpenStack cloud there, and there are a number of different tools built in to take advantage of that, like cluster autoscaling and cluster healing, that rely upon the existence of Heat and Magnum to manage that. So right now, the most robust way to deploy and manage Kubernetes on top of OpenStack clouds is Magnum. But within the next two to three years, which is actually a very short time in tech, as the Cluster API matures, we're going to want to look at this again. I think the appropriate approach, and of course this comes down to the Magnum team if they want to take advantage of it, would be to keep the API within Magnum but use the Cluster API as the implementation of how they achieve that, maybe in conjunction with Heat. But that's a much longer question for the development community to work through.
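To make the Cluster API discussion concrete, a machine definition for the OpenStack provider looked roughly like this in the alpha-era API. This is only a sketch: field names changed across alpha releases, and all values here are placeholders:

```yaml
# Sketch of an alpha-era Cluster API Machine for the OpenStack provider.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0                 # placeholder node name
spec:
  providerSpec:
    value:
      apiVersion: openstackproviderconfig.k8s.io/v1alpha1
      kind: OpenstackProviderSpec
      flavor: m1.medium          # Nova flavor for the node (placeholder)
      image: ubuntu-18.04        # Glance image to boot from (placeholder)
      keyName: default           # Nova keypair for SSH access (placeholder)
```

The point is the shape, not the exact fields: cluster topology is declared as Kubernetes resources, and the provider reconciles Nova instances to match.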
That's a much longer question for the development community to to work through So we have 30 seconds left Very quick question regarding to that. So how do you see the OpenStack compared to other proprietary company solutions? Like you know, what's the advantage and how do you see the future? so I mean to me the the primary advantage is there It it comes down to openness Like Kubernetes is very attractive because it's an open source platform It's an open source application platform, and you know the code and you can contribute to the code and you can change the code OpenStack is the primary open source cloud platform That's out there and so if you want to free an open cloud that you can contribute to that you can make changes to that You know that you can deploy within your data center or you can have managed Then their OpenStack is really the open only open source solution that you have Yeah, and it's and it's been proven in in industry You know throughout throughout the world too and so and if you don't want to manage it on your own because it can Be daunting to do that. There are any number of vendors out there within the OpenStack ecosystem. We're happy to help you with that Okay, well, I want to thank everybody for coming and I hope you have a fantastic conference and I'll be available Please you know catch up with me personally if you have any more questions and thank you very much