Hi, everyone. Thanks for coming to this session. We'll be presenting on OpenStack-Kubernetes integration: what is left to do. In this presentation we won't be talking specifically about OpenStack running on top of Kubernetes or Kubernetes on top of OpenStack as such; instead, we'll point out the areas in which work is still required. So, moving on. My name is Javish Kotari; on IRC you can catch me as Geminonymous, and I work for NEC Technologies. And I'm Oskar Motohiro from NEC Solution Innovator. I was a Magnum core reviewer until last month.

The agenda of this presentation: we'll start with an overview, with a few details about Kubernetes and OpenStack. Then we'll talk about OpenStack-Kubernetes integration, covering what the integration points are and how they could be integrated. Then we'll have a small demo of Kubernetes working on top of OpenStack. And then we'll list the gaps and missing features.

I won't dig into the details of Kubernetes and OpenStack in these slides; we'll just go through a couple of slides to get the concepts. What is Kubernetes? For that, we first have to understand what containers are. Containers are simply lightweight environments provided by the Linux kernel, using technologies like namespaces, cgroups, chroots, SELinux, and various other kernel features. Next, Docker. Docker is a platform for managing these containers; it uses the libnetwork networking model and has many features for packaging images and running them in containerized environments.

So Kubernetes is an orchestration engine that provides the tools for container lifecycle management. The name is derived from the Greek word for helmsman. It's completely open source, mainly driven by Google, and it runs everywhere: in public clouds, in private clouds, on bare metal. It has a few specific components. The major ones are the nodes. We have a master node, which is usually the controller node; by controller, we mean it runs all the controllers, such as the replication and deployment controllers. Apart from that, we have the scheduler on the management, or master, node. The scheduler schedules pods onto the various nodes as per the scheduler configuration; we can even have custom schedulers now. Then we have the API server, which is the main part of the Kubernetes control plane.

Then we have the worker nodes, on which the pods are usually scheduled. Two specific components run on the worker nodes: the kubelet and the container runtime. The container runtime is usually Docker, rkt, et cetera. The kubelet is the Kubernetes agent running on each worker node, and it communicates with the API server to manage the lifecycle of the pods.

So what are pods? A pod is a group of containers; it's the atomic unit in Kubernetes. Then we have services. Services are used for pod discovery: identifying and locating where the pods are. We have labels and the proxy. Labels are usually used to identify pods or group them into specific categories, and the kube-proxy talks to the pods and acts as a load balancer. Then we have the etcd daemon, which is the cluster store, used for storing keys and values; it maintains all the cluster state.
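As a rough illustrative sketch of how those pieces fit together (hypothetical names, not from the slides): a pod carrying the label app=frontend and a service that discovers it through a selector on that label, applied with kubectl.

# Minimal sketch, assuming a working cluster; all names are made up.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend          # label used by the service selector below
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend          # kube-proxy forwards to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
EOF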
Moving on, we have this architecture diagram of Kubernetes, which shows a master node and then the worker nodes. Usually the master node consists of the API server, which serves the Kubernetes APIs, and then we have the scheduler and the controllers, which schedule and control the deployments. We have the etcd store, which is a distributed store. And we have the worker nodes: a kubelet runs on every worker node, each worker node has a kube-proxy, and we can see that all the pods are scheduled here. All the communication between the kubelets and the Kubernetes API goes through the API server backed by this etcd store, or the components interact with each other directly through the kubelets.

Everyone knows what OpenStack is, but I'll just list it out. OpenStack is a cloud computing project. It's completely open source, with openness as its main principle. It empowers users to manage a wide range of resources: compute, storage, and networking, and it provides a wide range of projects to control them. It's massively scalable, and it's easier to adopt than other cloud providers, with the benefit that users can deploy it on their own machines and use it as either a private cloud or a public cloud.

This is the OpenStack architecture we usually see. It has compute nodes, networking nodes, and storage nodes, all controlled by OpenStack, and the interaction between all these services is done through REST APIs. And we have standard hardware on top of which all of these services are deployed.

So now we'll talk about OpenStack-Kubernetes integration. There are several ways in which it could be carried out. But first: why OpenStack and Kubernetes? We have to give a reason for integrating any component with OpenStack. OpenStack provides a programmable infrastructure for everything, so Kubernetes fits very well with it, and the two can be combined both ways: running OpenStack as an application on Kubernetes, or running Kubernetes on top of OpenStack. In the case of OpenStack running on top of Kubernetes, OpenStack can use the features of containers, like Kubernetes lifecycle management, and many others.

These are the three usual approaches we see, and there are also mixes and matches of all three. First is Kubernetes on top of OpenStack, in which Kubernetes is deployed on OpenStack machines. By OpenStack machines we mean virtual machines or bare metal nodes, though Magnum usually does it on virtual machines only. Second, we have OpenStack on top of Kubernetes. This is the usual case for projects like Kolla, OpenStack-Helm, kolla-kubernetes, and kolla-ansible, in which OpenStack is packaged inside containers. Containers follow a microservice architecture that suits the OpenStack architecture very well, so the OpenStack services are containerized and deployed on the various nodes, and then orchestration and all the management is done through Kubernetes. And third is running side by side, which means not running only VMs or only containers, but VMs and containers together, communicating with each other, either on bare metal or nested, with containers running inside VMs.

So we'll be talking mostly about Kubernetes on top of OpenStack, and then we'll list the gaps: what is left?
So first, I'll talk about Kubernetes-on-OpenStack use cases. This combination covers most use cases: for example, web applications, or social games such as Pokemon Go. But before I explain this, I need to answer a question: why bring in OpenStack at all when running containers? Indeed, if a container is just a kind of virtualization, why is OpenStack needed under the container layer? The simple answer is that a container is a bundle of an application and its environment, not a virtual machine. So a container still needs computing resources, networking resources, and storage resources.

This figure shows the relationship between OpenStack and Kubernetes. As you know, Cinder provides storage resources for containers, Nova provides computing resources, and Neutron provides networking resources. Looking at this figure, the integration seems easy, but the problem is the mismatch between containers and VMs. The container lifecycle is very dynamic. We cannot decide in advance which Nova instance will host a given container, which means we cannot decide which Nova instance a Cinder volume should be attached to, and we cannot decide how to connect the container via the Neutron network. That's why Kubernetes has the concepts of a cloud provider and a storage class.

The cloud provider feature is a module that provides an interface for managing TCP load balancers, networking routes, and so on. It is possible to create a custom cluster without implementing a cloud provider, and not all parts of the interface need to be implemented; it depends on how the flags are set on the various components. This is the cloud provider interface. Seven methods are defined, but some methods are not implemented for OpenStack. In this presentation I'll focus on two of them: LoadBalancer and Routes. The implementation of these methods uses OpenStack Neutron to enable connectivity from outside the Kubernetes cluster to containers, and container-to-container connectivity.

Kubernetes by itself doesn't care how pods are connected across nodes; inside a node, pods can communicate with each other using a bridge called cbr0. So the implementation of the Routes method enables routing of the pod networks, but how? Each worker node has its own pod network, which is assigned by Kubernetes, so just adding routing rules to the Neutron router is enough; that is what the Routes method does. In this slide, the worker01 node has the 10.244.1.0 network and the worker02 node has the 10.244.2.0 network, so pod2 can reach pod3 using this rule. But note one thing: to enable this routing, we must also add the allowed-address-pairs option to the Neutron port. Neutron knows the IP address of each VM and, to protect against IP spoofing, by default drops packets that carry an unexpected IP address. Due to this protection, packets carrying pod IP addresses would be dropped; that's why the allowed-address-pairs option is needed on the Neutron port, as sketched below.
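To make the Routes and anti-spoofing points concrete, here is a rough sketch of the equivalent manual Neutron CLI calls for worker01; the router and port names and the next-hop address are hypothetical, and the real cloud provider does this through the Neutron API rather than the CLI.

# Sketch only; names and addresses are made up.
# 1. Teach the Neutron router how to reach worker01's pod network.
neutron router-update k8s-router \
  --routes type=dict list=true destination=10.244.1.0/24,nexthop=<worker01-vm-ip>
# 2. Relax anti-spoofing on worker01's Neutron port so packets sourced from
#    pod addresses are not dropped (the allowed-address-pairs extension).
neutron port-update <worker01-port-id> \
  --allowed-address-pairs type=dict list=true ip_address=10.244.1.0/24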
Next, the LoadBalancer method. As you know, Kubernetes has the Service resource, and there are three types of service: ClusterIP, NodePort, and LoadBalancer. A ClusterIP service exposes the service on a cluster-internal IP; choosing ClusterIP makes the service reachable only from within the cluster, and it is the default service type. NodePort exposes the service on each node's IP address at a static port; a ClusterIP service, to which the NodePort service routes, is automatically created, and you can contact a NodePort service from outside the cluster by requesting a node's IP address and the node port. LoadBalancer exposes the service externally using a cloud provider's load balancer; the NodePort and ClusterIP services, to which the load balancer routes, are automatically created.

This slide shows what a ClusterIP service does. Usually the client of this service is inside the cluster and accesses the cluster IP address; iptables rules then translate that to an actual pod address, so the client reaches the correct pod via the cluster IP. This slide shows what a NodePort service does. Usually the client of this service is outside the cluster but inside the same tenant network. The node exposes a port as an access point and maps it to the cluster IP, so the client can connect to a pod via the node's address and node port. And this is the LoadBalancer service type: the client accesses the pods via a load balancer. This load balancer is set up by the cloud provider, and the members of the load balancer are managed by it as well. I'll show you in the demo later which resources are created by a LoadBalancer-type service.

Next, storage integration. Kubernetes has PersistentVolumes and PersistentVolumeClaims. These provide an API that abstracts the details of how storage is provided from how it is consumed. A PersistentVolume and a PersistentVolumeClaim can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A StorageClass has a provisioner that determines which volume plugin is used, and Cinder is supported as a StorageClass provisioner. This is an example of a StorageClass manifest; we can specify the Cinder volume type and the availability zone as well. Once you define the StorageClass, you can use it in PersistentVolumes and PersistentVolumeClaims. This is a definition of a PersistentVolumeClaim; its storageClassName refers to the StorageClass defined on the previous slide.

To enable a cloud provider, Kubernetes requires a cloud config file. This config file includes the authentication information, which network is used by the load balancer, and which router is used for routing.
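Since the slide contents are not reproduced here, this is a rough, illustrative sketch of what the cloud config file and a Cinder-backed StorageClass plus claim can look like; every value, name, and ID below is made up.

# The cloud config file: [Global] holds the Keystone credentials, [LoadBalancer]
# names the subnet the load balancer VIP is created on, and [Route] names the
# router that receives the pod-network routes.
cat > /etc/kubernetes/cloud.conf <<'EOF'
[Global]
auth-url=http://controller:5000/v3
username=k8s
password=secret
tenant-name=demo
domain-name=Default
region=RegionOne
[LoadBalancer]
subnet-id=<neutron-subnet-uuid>
[Route]
router-id=<neutron-router-uuid>
EOF
# kube-apiserver, kube-controller-manager and kubelet are then started with
# --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf

# A Cinder-backed StorageClass and a claim that requests it.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-gold
provisioner: kubernetes.io/cinder
parameters:
  type: gold               # Cinder volume type
  availability: nova       # availability zone
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: cinder-gold
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi
EOF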
Now I'll show you the demo of the OpenStack cloud provider. This is a diagram of the demo environment. We have three nodes: one master node and two worker nodes. These nodes belong to a Neutron subnet and also have pod networks. First, I confirm the current routing. In this environment we have three nodes, one master and two workers, and the router has routes to the pod networks. This IP address is the master node, and these are the worker nodes. Each node has its own pod network: the master has 10.244.0.0, and the workers have 10.244.1.0 and 10.244.2.0. Now, if I add a new worker node, a new route should be added. So I add a new worker node. I have already set up a Nova instance; it has the IP address 17.16.2.133. I'll install the Kubernetes worker components on this node. This script copies an install script to the worker node, and then Kubernetes is installed.

I check, and the Kubernetes node is not ready. Unfortunately, the current OpenStack cloud provider has a bug, a token issue: the token can't be refreshed automatically, so I must restart the Kubernetes controller manager. Then I check the nodes again, and the worker node has been added. Then I check the Neutron router: please see this line, the route for the worker03 node has been added, so I can confirm that this routing is working correctly. Next I'll create five pods on the Kubernetes cluster. Sorry, I have already created the pods, so I'll just check them. OK, five pods were deployed. I'll log into the frontend pod, which is on the worker03 node, and check the IP address: this pod has 10.244.3.3. Now I'll ping another pod located on worker01. OK, the ping succeeds, so it means pods can connect to each other across nodes.

Next, I'll demo the load balancer. This is the definition of a LoadBalancer-type service, and this load balancer fronts the pods I created earlier. This service will create a load balancer on OpenStack, and I can check with the neutron command that the load balancer is being created. We can also check the frontend service: please see the node port and the endpoints. A LoadBalancer-type service has a node port and endpoints; the endpoints are the pod IP addresses and ports, and the node port is a port on the worker nodes, so we can also connect to this service through the node port. OK, let's check the service again: the load balancer has this IP address, so we can access the service using it. Checking this IP address, nginx responds, and the nginx response comes back via the load balancer. So it means the load balancer worked.
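The manifest itself is only on the slide, so for reference here is a sketch of what such a LoadBalancer service can look like, with the kind of commands used above to inspect it; names are hypothetical and the real demo manifest may differ.

# Sketch only; pod labels and service name are made up.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
EOF
# The NodePort, Endpoints and external load balancer IP show up here:
kubectl describe service frontend
# ...and, with LBaaS v2, the Neutron objects the cloud provider created:
neutron lbaas-loadbalancer-list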
OK, so that is the cloud provider demo. Please continue the session. Yes, thanks for the demo. So we'll be discussing what is left now.

Run side by side: this is the major area that needs more and more improvement, because, as we have seen, for OpenStack on Kubernetes and Kubernetes on OpenStack there are already many projects working, like Magnum or Kolla, but much work is still required to achieve side-by-side operation. There are many problems, like the double-overlay problem and other problems introduced by the different approaches, so we have to find an approach to run side by side effectively.

This is the OpenStack diagram: we have bare metal running alongside virtual machines, running alongside containers. The diagram shows that all these ecosystems run in parallel to each other, but it doesn't show how they are connected, so we'll look at what approaches could be used to connect them. But before we go on, we might ask: why do we need containers and VMs together? Containers and VMs each have their own pros and cons. Containers are lightweight, provide fast boot times, share the host kernel, and don't require a full OS; VMs have their own advantages, like stronger security and isolation. So we cannot eliminate either one: we can remove neither containers nor VMs, so we have to use them together to achieve an environment suitable for operators.

There are two cases: running side by side is one, and running nested is another, and both of them have a common solution. There is a project in OpenStack called Kuryr working on this; by using Kuryr, we can take both sets of advantages at the same time through that layer. How do we merge containers and VMs? As we said, connecting containers and VMs has one central problem, and that's networking. There are others, like storage, but we are only going to talk about networking. A very famous saying is: don't reinvent the wheel. Don't build networking options yourself; there are many networking options already available, so you can simply use them, or build your car on top of those wheels. Kuryr is the project that leverages the power of Neutron and the hard work done by the Neutron community, and it can use all of Neutron's plugins. Kuryr bridges the gap between containers and OpenStack: it takes container networking and maps it to the Neutron APIs, so both can be used at the same time.

So Kuryr provides these benefits. It is the integration layer which, as we said, connects Neutron and containers, and it has an OpenStack community behind it to support the issues faced by people trying it. It leverages OpenStack support in the existing ecosystem, and it gives a quicker path to Kubernetes for users already familiar with Neutron networking. And it avoids the double-encapsulation problem for containers running on top of OpenStack, because using Flannel or a similar solution introduces another layer of encapsulation: an overlay network like Flannel running on top of the OpenStack Neutron overlay. To reduce that problem, if your containers are running on top of OpenStack, you can directly use the Kuryr project to bridge the gap.

This diagram shows how Magnum tried to solve the problem: it just used the Flannel overlay network. In the Flannel overlay we have two VMs, with containers running in each VM. If they want to communicate over the OpenStack network, that is, Neutron, they do it through Flannel, and if Flannel networking is used, Flannel creates its own networking layer on top of the OpenStack Neutron service. This poses the problem of a double overlay. Double overlays are very tricky to debug, have their own challenges, and reduce network efficiency. So in order to avoid such overlays we have a component like Kuryr, which works with all these container orchestration engines.

There are two projects in Kuryr right now, kuryr-libnetwork and kuryr-kubernetes, which use the Kuryr library. Kuryr-kubernetes is for Kubernetes: it takes the Kubernetes networking model and translates it into Neutron API calls. And kuryr-libnetwork is for Docker's libnetwork, so it can be used with Docker Swarm. Both of these can be used at the same time with Kuryr, so that it provides a common interface to Neutron and all the containers can talk to each other in nested environments or environments with multiple COEs.

Kuryr also leverages the Neutron trunk-port functionality. By that we mean that if containers are running inside VMs, they can communicate directly with other containers, and even if they are in the same subnet their isolation stays intact, because they sit in different VLAN segments. Apart from that, Kuryr uses all the advanced features the Neutron community has already built, like security groups, QoS, and load balancing, so that every service does not have to find its own solution to bridge the gap for containers. Kuryr can be used directly as a plugin, as an abstraction layer between containers and Neutron, and it can leverage all of these features. So with Kuryr, as we said, VMs can communicate with containers and containers with other containers, and there can be many combinations like that, without the problem of double abstraction or double overlay.

This is the kuryr-kubernetes architecture. It has a controller; Neutron is shown in the background, in green, and there is the API server. The Kuryr controller watches the API server for all the events, uses annotations to mark the specific resources it is interested in, and calls the Neutron APIs to do the port bindings and the configuration required by the pods, to bring all the pods onto the same network or different networks as needed by the operator. Then we have a worker node on which the kubelet and the pods are installed.
We have the Kuryr CNI driver for Kubernetes on the worker node, which also watches the events from the Kubernetes API server; the Kuryr CNI driver plugs pods into OVS, or whatever other backend is used, and it interacts with the Neutron agent to create any ports or routers needed. So Kuryr, as we said, provides both of these solutions: running side by side as well as nested containers. There has been very rapid growth in the Kuryr community over these years, and we think it will grow more in the coming few years. And with solutions like trunk ports and other functionality being pushed by the Kuryr community, we think running side by side is now also possible with this solution on top of OpenStack.

To summarize everything we have talked about: we covered Kubernetes on OpenStack, which uses the cloud provider configuration of Kubernetes itself; Kubernetes provides a pluggable interface, as we have seen, and this integration covers almost all the major use cases of Kubernetes-OpenStack integration in that first scenario alone. Next is OpenStack on Kubernetes, which mainly involves containerizing the OpenStack services and having Kubernetes manage all of them. And the third part is OpenStack and Kubernetes side by side, which means using the best networking solution to bridge the gap between container environments and virtual machines. Thank you. If you have any questions, please go ahead.

Can you go back to the Kuryr diagram? So in this example, can the worker node be a bare metal machine? It doesn't have to be a virtual machine?

Currently it has to be a virtual machine only.

So in the case where containers are running natively on a bare metal machine, does Kuryr help bridge the gap?

Yes, there is a project called Zun, you must have heard about it. Zun is already working with Kuryr and is trying to do the same.

What is it called, Zun? Yeah.

I am Adam Young, working on Keystone. I know that Kevin Fox has done a bunch of work on Kubernetes and Keystone integration, and I'm just wondering if there are gaps there and what you need help on, for either having Kubernetes services call into OpenStack services or the other way around.

So I think you are specifically talking about the cloud provider config file and the Kubernetes repo, is that so?

I know that he's done some work along this line, but I'm actually more interested in the workflows that you have here and the ones that you've hit against, where something running in Kubernetes needs to talk to an OpenStack service and thus needs to get Keystone credentials, or the reverse, and being able to keep the authorization view of what's going on there in sync between the two systems.

The case you're talking about would come under the first scenario, Kubernetes running on top of an OpenStack environment. In that case it uses authorization by Keystone. In the other cases it will still use Keystone, but with different mechanisms to interact with Keystone and get the authorization. So yes, we can talk in detail afterwards about what gaps are left.

So you think it's pretty much where it needs to be, or is there more work that needs to happen there?

There is work required; there are already patches for that, in the Kubernetes repo only. Cool, okay.

One more question, please. We've talked about different subsystems; what's the current state of affairs, say with Newton and Ocata?
Do we already have Kuryr available for them? Do we have this VLAN tagging for Neutron available, and what is Magnum using as a mechanism?

Yes: in the Ocata release, Neutron trunk support has already landed, and there is also a demo in Kuryr's documentation. You can try creating trunk ports and communicating with ports inside VMs or outside VMs; a rough CLI sketch is appended at the end. So the trunk support is already there. The Magnum integration is still being worked on; the Magnum community's work is not completed yet, but at least with Ocata we can already try this out. Thank you.

Thank you. Any more questions? Thank you very much.
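As a footnote to that last answer, the Ocata trunk extension that Kuryr builds on can also be exercised by hand from the CLI. A rough sketch with made-up port and network names; see Kuryr's documentation for the real nested-container demo.

# Sketch only; all names are hypothetical.
# Turn the VM's existing Neutron port into a trunk parent.
openstack network trunk create --parent-port vm-port trunk0
# Create a child port on the container network and attach it as a subport with
# its own VLAN ID; inside the VM, the container's interface uses that segment.
openstack port create --network container-net child-port-1
openstack network trunk set --subport \
  port=child-port-1,segmentation-type=vlan,segmentation-id=101 trunk0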