So, okay then; I tried a greeting in German, and I guess that's the most my German could do, so sorry about that. Well, let me introduce myself. My name is Daniel Mellado, and I've been the PTL for kuryr-kubernetes for the Rocky cycle. I'll be holding the title at least for the Stein cycle as well; we'll see afterwards, but it implies too many meetings. And this is my colleague Michał, who's a Kuryr core developer and will also be helping me with the presentation.

First of all, I'll start by presenting the project a little, just in case you don't know what it is. So, what's Kuryr? We have a really nice logo, which is a platypus; I have some stickers, so if you'd like some afterwards, just let me know. That said, this really beautiful logo is the second one we had, so let me introduce our first one. As you can see, it basically delivers containers, and that's what we work on.

Just in case you don't know, Kuryr is basically a way to flatten the networking between Kubernetes and OpenStack so that you only have one networking layer. What happens as of now if you want to connect OpenStack with Kubernetes? You end up with two different networking solutions: Neutron for the OpenStack part, and the OpenShift or Kubernetes SDN for the Kubernetes part. That implies things such as double encapsulation, basically double NAT, and quite a few other issues.

We have a few parts here. First of all, there's kuryr-kubernetes. Here we have a CNI plugin, which is basically how we hook into Kubernetes and provide Neutron networking to Kubernetes pods; I'll show a rough sketch of how that plugin gets registered at the end of this intro. We also have a controller, which provides Kubernetes services using Octavia or Neutron LBaaS v2. By the way, Neutron LBaaS v2 is considered deprecated as of this release, and we now rely on Octavia. So if Carlos Gonçalves is around here, I wish him luck.

Besides that, we also have a tempest plugin, which we use to integrate all our scenario tests. Those scenario tests do things such as spinning up a pod, creating a VM, and making sure you get network connectivity between the two.

Also, we have a few deliverables that are in maintenance mode. One is kuryr itself: this started as a Docker network plugin, but it's basically a library that we use for shared code. It should have been called kuryr-lib, and we plan to migrate it into our main repo at some point during the Stein cycle. We also have kuryr-libnetwork, which is now in maintenance mode too; this is how the project started, because it just interacted with Docker containers without having to deal with Kubernetes at all.

Also, until the Stein cycle we used to have a few more deliverables. I want to know if you knew about those projects: is there anyone here who knew what Fuxi was? Okay, no one. Let me tell you a little about it. Fuxi, fuxi-kubernetes and so forth were an attempt to replicate for storage what Kuryr does for container networking, but they were just retired the other day.
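By the way, going back to the CNI plugin from the beginning: to give you a rough idea of how it hooks into Kubernetes, this is approximately the shape of the CNI config that a DevStack deployment drops in place. Treat the file name and keys as assumptions from our docs rather than gospel:

    # /etc/cni/net.d/10-kuryr.conf -- points kubelet's CNI machinery at Kuryr
    {
      "cniVersion": "0.3.1",
      "name": "kuryr",
      "type": "kuryr-cni",
      "kuryr_conf": "/etc/kuryr/kuryr.conf",
      "debug": true
    }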
And with this, I guess I'll be handing over to Michał.

So, I will tell you about what happened in the Rocky cycle. As Daniel said, some stuff got deprecated. At the moment, running without the kuryr-daemon is deprecated, and we will probably remove that way of running kuryr-kubernetes in Stein. The LBaaS v2 support is also deprecated, but it's kind of complicated to remove, so it's less likely to be removed completely; still, I don't recommend relying on it.

Then there's one thing about upgrading from Queens to Rocky. We have added a new option, external_svc_net, where you can set the network for the external services, that is, the LoadBalancer-type services. You don't need to specify the subnet if you have the default subnet in that network. So that's one minor thing that changed.

This is the matrix of the supported Kubernetes releases. As you can see, the 0.5 series is the Rocky release, so that's what you get in the current release: it supports Kubernetes 1.9 and 1.10, and OpenShift 3.9 and 3.10. Currently on master we are testing newer versions, so Stein will most likely support the newer list presented here.

Now we're getting to the features. First of all, we've made the kuryr-controller highly available. The controller is the part that talks to the OpenStack API, watches the Kubernetes API, and reacts to Kubernetes events by creating OpenStack resources. Previously we only supported running one instance of kuryr-controller; now we've implemented active/passive HA. It's a very simple thing that works exactly the way other Kubernetes services do it: an Endpoints object, which is a Kubernetes resource, gets created, the instances of kuryr-controller annotate it, and etcd is effectively doing the leader election for us. This needs a sidecar container running alongside the controller, and we only support it when you're running kuryr-kubernetes in pods on top of the Kubernetes cluster it is providing networking to, basically because we use Kubernetes itself to do the leader election.

Then we've added liveness and readiness checks to the kuryr-daemon. There's a list of what it checks; probably the most interesting one is the number of CNI ADD failures. After a number of failed tries to add the network for a container, that is, to plug the port for a container, the health check will fail and Kubernetes will restart that kuryr-daemon instance. There was actually one bug that we never saw, because health checks were enabled: the automatic restarts simply prevented it from showing up in the deployments we were running with this feature on. Those health checks are defined in the kuryr-daemon DaemonSet; the sketches below give the rough shape of the HA sidecar and of the probes.
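First, the HA part: a minimal sketch of what that leader-elector sidecar looks like in the kuryr-controller Deployment. The image tag and the port are assumptions on my side, so check the docs of your release:

    # Excerpt of a kuryr-controller Deployment with a leader-elector sidecar.
    spec:
      replicas: 2                 # active/passive: only the elected leader acts
      template:
        spec:
          containers:
          - name: controller
            image: kuryr/controller:latest           # placeholder image reference
          - name: leader-elector
            image: gcr.io/google_containers/leader-elector:0.5   # assumed tag
            args:
            - --election=kuryr-controller   # name of the Endpoints object used for the election
            - --http=0.0.0.0:16401          # local endpoint the controller polls for the leader name
            ports:
            - containerPort: 16401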
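And this is roughly what the liveness and readiness definitions on the kuryr-daemon container look like; treat the exact port and paths as assumptions and double-check them against your deployment:

    # Probe definitions on the kuryr-daemon container (sketch; port and paths assumed).
    livenessProbe:
      httpGet:
        path: /alive
        port: 8090
      initialDelaySeconds: 15
    readinessProbe:
      httpGet:
        path: /ready
        port: 8090
      initialDelaySeconds: 15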
Another thing is pluggable kuryr-controller handlers. This sounds kind of complicated, so I put a very complicated image here, but what you need to know about handlers in the kuryr-controller is in the example definition of a handler: you define what kind of Kubernetes resource it is looking for, and on which endpoint of the API you can find it. What that means is that we now have the option to enable or disable some of the handlers, through the enabled_handlers option in kuryr.conf. The defaults are vif, lb and lbaasspec, which means that Kuryr is providing networking for pods, and for the Kubernetes services through load balancers on Octavia. If you, for example, use kube-proxy for the services, you can now disable the lb and lbaasspec handlers and make sure kuryr-kubernetes only provides networking for pods. And here is a useful conversion table, because of course we couldn't name the handlers after the resources they are watching: vif is Pod, lb is Endpoints, lbaasspec is Service. There are two more that you're probably not aware of, and they are not listed in the defaults; Daniel will explain what those two are about.

Yeah, so first of all I just want to remind you, or let you know in case you don't know, how this works: as of now in Kuryr, we map Kubernetes services to OpenStack load balancers, as I was saying before, using Octavia. Another thing we are working on now is directly supporting an ingress controller. By ingress controller I mean that we will have one controller that maps to services and then to pods. One important thing that I would especially like to remark is that this implies we are supporting OpenShift Routes, and not Ingress directly, when you get them from the load balancer. Also, if you want to use this feature right now, you need to create one Octavia Layer 7 load balancer yourself. This is something we plan to enhance over the Stein cycle, also supporting load balancers up to Layer 4, but we'll talk about that later.

Just to make sure it's understood: in OpenShift this feature is Routes, and Ingress is the feature in vanilla Kubernetes that is almost exactly equivalent to it. At the moment we only support Routes on OpenShift; we plan to support Ingress as well, it's basically a matter of time because of the cycle. A Route object like the one sketched below would be realized through those Octavia Layer 7 policies.
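To make that concrete, here is a plain OpenShift Route of the kind Kuryr would realize through the Octavia Layer 7 load balancer; the names and host are placeholders:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: my-route                # placeholder name
    spec:
      host: www.example.com         # hostname matched by the Octavia L7 policy
      to:
        kind: Service
        name: my-service            # backing Service, placeholder name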
So, what else? I would also like to highlight one important feature that we have been able to support in Rocky, which is multi-tenancy using namespaces. Up until now, we were placing all the Kubernetes pods within a single network and subnet, which meant you didn't have any kind of multi-tenancy support. But as of now, whenever you create a new namespace, it gets mapped to a new network and subnet, hooked to a new router and into everything else.

Besides that, this feature comes with defaults. There's a new configuration option that allows you to define default security groups and security group rules, and they get applied per namespace. We define some default rules, but you can adjust them; for instance, you can set it up so that every namespace and every pod is completely isolated. That allows you to have a more secure environment and complete multi-tenancy.

Also, and I guess this is funny, we added support for multi-vif: multiple interfaces per pod, without using Multus or anything like that. I just want to highlight the name of the standard this follows, which is the "Kubernetes Network Custom Resource Definition De-facto Standard v1", which is quite long. This is something we now fully support in Kuryr. You just have to add a configuration option when configuring your Kuryr environment, and we have this CRD, which is what you need to use; I'll show a sketch of it right after this section, and we can go into more detail later if anyone has specific questions.

To make this work, you simply need to create an instance of that CRD, a NetworkAttachmentDefinition object, in the Kubernetes cluster; it defines the subnet ID of the network it is connected to. Then, in the pod definition, you just specify the additional networks you want to connect to that pod. And those are additional ones, so you still get the default Kuryr one plus the additional interfaces. In the end, you end up with a pod having two Neutron ports plugged and having access to multiple networks, or subnets, or whatever your network technology thinks in.

And I guess I would also like to speak a little about the future work we are doing over the Stein cycle and into Train; by the way, if you wonder about that release name, I really love it. These are some features we have been working on that are mainly done. The first one is Python 3.6 support; that's a global TC goal for this cycle, so we are already supporting it, and all our upstream gates are fully migrated to Python 3.6. We are welcoming bug reports, exactly, so if you find any kind of bugs, please let us know. Also UDP support for the services, upgrade checks, and SR-IOV support. That last one is something we have been working on, and we'll be having a session tomorrow at the Forum with some folks; feel free to join us if anyone's interested. I'll be sending an email to the mailing list, but I plan to hold it tomorrow afternoon.

Besides the things that are done, here's what's in progress. First, network policy support. In case you are not aware of what this is, it is akin to security group rules in OpenStack: basically you have a YAML file defining all the firewalling for your pods, and as of now you would have to completely redefine all of it in OpenStack as well. What we support now, or plan to support soon, is that all the events coming from those network policies get mapped to security groups and security group rules, even the default ones. Again, that's something we can cover later if you want. There are some default allow-all and deny-all policies that we now support too; see the sketches after this section.
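Going back to the multi-vif support: a hedged sketch of that CRD instance and of a pod using it. The openstack.org/kuryr-config annotation with a subnetId follows our docs, but treat the exact keys and the UUID as placeholders:

    # NetworkAttachmentDefinition pointing at an extra Neutron subnet.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: net-b
      annotations:
        openstack.org/kuryr-config: '{"subnetId": "<neutron-subnet-uuid>"}'
    ---
    # Pod requesting that extra network on top of the default Kuryr interface.
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-vif-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: net-b   # comma-separate names for more networks
    spec:
      containers:
      - name: app
        image: busybox                       # placeholder image
        command: ["sleep", "3600"]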
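And for the network policy work just mentioned: these are standard Kubernetes NetworkPolicy objects of the kind we'd map to security groups. The deny-all and allow-all defaults look roughly like this:

    # Default deny-all for ingress: selects every pod, lists no ingress rules.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    ---
    # Allow-all for ingress: a single empty rule matches all traffic.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all-ingress
    spec:
      podSelector: {}
      ingress:
      - {}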
For instance, when we get the events for such an allow-all network policy, we would just create one security group with rules opening all the TCP ports, and things like that.

Also, we are working on further DPDK support; I don't know if there's anyone from Intel here, anyway. We also plan more improvements to the namespace isolation. Basically, that means network policies will take over that feature, so the isolation and every security group will be directly defined by a specific network policy rather than by the namespace handling itself.

On OVN support: we are currently supporting OVN in the gate too. I mean, it's voting, it's working, using networking-ovn as a Neutron plugin. As a Neutron plugin, right? Exactly. We still have some issues with the Octavia OVN provider; we are working on them. Again, Carlos Gonçalves, if you are here, please raise your hand. Okay, he's not here.

Well, and there's one more thing; Michał, maybe you can comment on the decentralization of the kuryr-controller? Yeah, this is simply about moving some of the operations that are done by the kuryr-controller. At the moment, as you saw, we can either have one kuryr-controller or multiple instances, but only in active/passive. So the idea is to move some of the operations, the ones that we can, to the kuryr-daemon, to decentralize the work and add some availability and scalability.

And I guess, well, this slide is just how to contact us or check the documentation and so forth. But I would like to save some minutes for questions, because we are running out of time. If there are features that you need from kuryr-kubernetes, some kind of feedback on that would be very useful for us.

How does this relate to physical bare metal?

Well, we can talk about that. Let's look at it later, because we'll need to think about it; we definitely haven't tried that, but I don't immediately see anything that would prevent it. For instance, it should be transparent to everything that has an ML2 plugin. And well, anyway, let's talk about it afterwards. I've actually once prepared an OpenStack-Helm chart for kuryr-kubernetes. How it worked: to start those OpenStack pods, you need some kind of networking for them, so what I did was simply put all those pods on host networking (the trick is sketched below). That means they are not using the CNI; they are networked with the host's networking. What I got was a kuryr-kubernetes deployment running in pods on that Kubernetes, and I was able to create pods that were reachable from the OpenStack VMs. But besides that, I don't think there's a valid use case there; Kuryr is mostly interesting the other way around.
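Before moving on: to make that Helm experiment concrete, putting the OpenStack control-plane pods on host networking just means setting one field in the pod spec, roughly like this (the names and image are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: neutron-server               # hypothetical pod name
    spec:
      hostNetwork: true                  # share the host's network namespace; no CNI call for this pod
      containers:
      - name: neutron-server
        image: openstackhelm/neutron:rocky   # placeholder image reference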
So, running Kubernetes on top of OpenStack VMs: that simplifies your deployment and prevents you from having multiple network overlays. Yes, basically, as Michał said, we have two different kinds of useful deployments. One is when you deploy Kubernetes and OpenStack side by side: you want to avoid multiple overlay networks, so you just have Kuryr be a proxy in between, and we basically attach Neutron ports to the pods. The other use case is when you actually want to have a Kubernetes worker, or whatever you'd like to call the node, inside an OpenStack VM; we support that as well. What we were just discussing sounds like exactly the other way around, but we can discuss it afterwards.

On the namespaces question: I can show you a little later if you are interested. Basically, when you create a namespace in Kubernetes, we create a separate network for the pods in that namespace. That's it; there's a mapping between namespaces in Kubernetes and the networks created by kuryr-kubernetes in OpenStack. In any case, I guess we have an environment around, so I can show you.

Yeah, but that's the internal traffic on the Kubernetes on the VM, right? Because the Kubernetes is running on the OpenStack VM, and this Kubernetes is polling the pods. Well, the thing is, it's not that we isolate it; it's just not the same network. Basically you have a port, and you don't care whether there's a pod or a VM behind it; they are all on a network. So Kubernetes is not reaching the pods through the container network; I think it's not even going out of the kernel, it's just going into the network namespace. Yeah, that's for the health checks. So it should be like that.

So maybe one more question, and I guess we are running out of time. Maybe one more. Yes? That was easy. Yes, there is one feature that we haven't told you about, because I don't think it's running in any gate yet, but you can have something like a multi-VIF pool, that's how we call it. Simply, you can define different Kuryr VIF drivers for some of the Kubernetes nodes, which means you can have mixed environments: some of the Kubernetes workers will be on VMs and some will be on bare metal. The difference is that those on VMs will be networked with Neutron trunk subports, and those on bare metal will have plain Neutron ports created and connected. You obviously need to run a Neutron agent on those bare-metal Kubernetes nodes, but yeah, it should be possible. Just to map it to maybe a better-known thing: it's as if you want to have several different ML2 plugins.

Okay, then, thanks a lot for coming and attending the summit. If you have any questions, we'll be around here for you. So thanks a lot, guys.