Hello everybody, welcome to the Kuryr update session. My name is Antoni Segura Puimedon, this is Irena Berezovsky, and we're going to talk about Kuryr updates. So thank you for coming.

How many of you are familiar with Kuryr? Show of hands? Okay. Yeah, it's the project with the platypus. What we do is we bring OpenStack services to containers. The reason this was started was that container networking was not up to where we wanted it to be, and we had a lot of backends for Neutron that were of good quality, with Neutron in production and so on. So we started out bringing just Neutron to Docker libnetwork, with a project called kuryr-libnetwork. Then we started adding things, because that's what software projects do: they start with a mission and they keep adding things afterwards until they can send emails. So we added support first for plain Docker, then Docker Swarm, then Kubernetes, and recently we also added Cinder and Manila storage.

To give you a bit of an idea about who is involved, here you can see a nice slide about who this is made by, as estimated by Stackalytics. Make of that what you will, but as you can see, we have quite a few companies. The big one without a logo is "independent", so I suspect some of the big contributors don't have Stackalytics set up well, but we have core contributors from most of those that have a logo. We want to thank all these companies for putting in the effort, and especially the people who are putting effort into the project every day. The stats are 45 contributors for the last release; this sometimes includes people from Neutron making a one-off patch and things like that, but it's quite a few contributors.

I want to remind you that if anybody has a question, you can stop me and ask now, or you can ask at the end. And really, don't be shy about asking anything: if you think something is inaccurate, or there is something you don't really understand, please go ahead and ask.

So, for Pike, which is going to be released before long, these are the features and enhancements that we've been working on, that are already on master, and that I'm going to demo tomorrow with Irena. We recently made the first release, 0.1.0. So if you want to use it in production, you're free to shoot yourself in the foot, because it's not 1.0.0 yet.
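To make the libnetwork side mentioned at the start a bit more concrete, here is a minimal sketch of how a user could consume the Kuryr network driver through the Docker Python SDK; the driver name 'kuryr' and the subnet are assumptions for illustration, not a definitive usage guide:

```python
# Hedged sketch: create a Docker network backed by the Kuryr libnetwork
# driver, so containers attached to it get Neutron ports.
import docker

client = docker.from_env()

# IPAM pool for the network; the subnet is a made-up example.
ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet='10.10.0.0/24')])

# 'kuryr' as the driver name assumes the plugin registered under that name.
net = client.networks.create('kuryr_demo', driver='kuryr', ipam=ipam)

# Any container attached to this network is wired through Neutron.
client.containers.run('nginx', detach=True, network='kuryr_demo')
```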
We added the Kubernetes services support, and that is good because before, we only had services support in POCs. Now that we have it on master, we can start going after more and more improvements. What this allows you to do is: if you have an application and it has several pods, you can just target the cluster IP of the service, and it will load-balance across them. If you want more information about that, again, there is a deep dive on it tomorrow. The way we do it allows us to scale the number of pods the service has up and down, so you can respond to the load needs of your application. So far we only support Neutron LBaaS; that's our first implementation, but it's stevedore-enabled, so you can implement your own backend if you have a different kind of load balancer and just call out to it. As long as it plays nice with your Neutron backend, you're good to go.

Then we also added client- and server-side SSL. To give a bit of an overview (we don't have a slide for this today, but tomorrow we will): there is the Kuryr controller, which watches the Kubernetes API, and then on each of the worker nodes there is a CNI executable, and both of them need to connect to the Kubernetes API. The way they authenticate there is either with a token, which we don't support yet, with a client-side certificate, or with no authentication whatsoever. So what we added support for, have merged, and will demo tomorrow is using the client-side key and certificate.

We also recently got a contribution adding guru meditation reports, so you can get more information about what's going on in the controller, and we have RDO packaging. The demo tomorrow uses that, because installing from pip always makes me nervous.

As for what we have in flight for Pike, some things will probably make it and some may not. The LoadBalancer service type support is something that is already supported in Kubernetes for OpenStack. But how it works now is: you have OpenStack VMs in which you have Kubernetes clusters, and what you want is to make those services accessible through one of the Neutron load balancers. There is a cloud provider that does that, but the problem is it doesn't use Neutron ports for the pods, so most likely there is double encapsulation, or some other problem like that. The way we do it, since we already set up a load balancer for the service, the LoadBalancer type for us only means: take a floating IP, bind it to that VIP, and you're done; nothing more needs to be done. So it's going to be a really small change that, if I were not doing demos all the time this week, I would have done in a day or two.

Token support will also be helpful, because it will assist with something Irena will explain later, which is deployment: it is easier for applications to just use a token than to go, with OpenSSL, to the CA and sign things.

And finally, resource pools. One of the biggest bottlenecks for this integration is that for each pod you create, you need to go to Neutron and get the port, then you need to bind it; then, depending on which Neutron backend you use, it needs to go and ask for information about the port, and so on, and that takes quite a long time.
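To illustrate the idea, here is a minimal sketch of a ports pool, assuming openstacksdk; the pool size, cloud name, and network ID are made up, and the real Kuryr implementation differs:

```python
# Hedged sketch: pre-create Neutron ports so pod creation can skip the
# expensive API round trips described above.
import openstack

POOL_SIZE = 10  # hypothetical batch size


class PortPool:
    """Keeps a stash of pre-created Neutron ports."""

    def __init__(self, conn, network_id):
        self.conn = conn
        self.network_id = network_id
        self.free = []

    def fill(self):
        # Pay the Neutron API cost up front, not on the pod-creation path.
        while len(self.free) < POOL_SIZE:
            port = self.conn.network.create_port(network_id=self.network_id)
            self.free.append(port)

    def acquire(self):
        # Grabbing a pre-created port is just a list pop, no API call.
        if not self.free:
            self.fill()
        return self.free.pop()


conn = openstack.connect(cloud='mycloud')   # cloud name is an assumption
pool = PortPool(conn, network_id='NET_ID')  # placeholder network ID
pool.fill()
port = pool.acquire()  # fast path taken when a pod is created
```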
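And going back to the client-side certificate support mentioned a moment ago, here is a rough sketch of how a watcher like the controller could authenticate to the Kubernetes API; the URL and file paths are hypothetical:

```python
# Hedged sketch: watch the Kubernetes API over TLS with a client-side
# key and certificate, verifying the server against a CA bundle.
import json

import requests

API = 'https://k8s-api.example.com:6443'  # hypothetical API server

resp = requests.get(
    API + '/api/v1/pods',
    params={'watch': 'true'},                                 # streaming watch
    cert=('/etc/kuryr/client.crt', '/etc/kuryr/client.key'),  # client-side TLS
    verify='/etc/kuryr/ca.crt',                               # server-side verification
    stream=True,
)
for line in resp.iter_lines():
    if line:
        event = json.loads(line)
        print(event['type'], event['object']['metadata']['name'])
```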
The resource pools will help a lot there; tomorrow we will show some graphs about that.

For the other subprojects: in kuryr-libnetwork we added swarm mode support. I don't know how many of you are familiar with the difference between legacy Swarm and swarm mode; do people know the difference? Otherwise I can explain. All right, so swarm mode started, I think, in Docker 1.13, and what it has is that it includes a cluster store itself and makes some different assumptions. So it broke our plugin, but that has now been solved, and we support it in local scope mode, which means the networks have to be defined on each of the machines, because there is a bug. Well, I don't need to go into the details of the bug, but basically with global scope it does things differently than before, and we need to add support for that.

One very cool thing that we had users try this weekend is the Docker plugin infrastructure. Before, you either did docker run and had to run the plugin yourself, or you installed it with pip and so on. Now it's as easy as docker plugin install: it pulls the container from Docker Hub, tells you which volumes or mount points it will set up, and then you just need to place your configuration there, so it's quite handy. And finally, TLS support, so that between Docker, with its libnetwork component, and kuryr-libnetwork there is TLS communication; that can be useful in some cases.

And finally Fuxi, for the storage part, which for Pike will have both Cinder and Manila support. There is a talk about that later today, so I'm not going to give the details; go there if you want more information about it. And now I leave you with Irena for some more information.

I would like to summarize the main areas that the Kuryr project is tackling during the Pike release. For scalability, Toni already mentioned we are adding support for resource pools, which will allow spawning Kubernetes pods much faster: instead of doing a number of API calls during pod creation, it will go to pre-created pools of Neutron ports, and by skipping the round trips to the Neutron API it will be much faster. In addition, we also realized during our integration that the flow the current Neutron reference implementation has is not very efficient: it takes a lot of time for the information to propagate from the compute node to Neutron to signal that the port is actually in the active state. So we are trying to work with the Neutron community to improve this workflow. But if you have your own implementation of Neutron that is not the reference implementation, for example Dragonflow, this problem doesn't exist, because there is no message bus in use and the flow is quite different.

For modularity, we designed the Kubernetes integration in such a way that it is very extensible from day one. We use stevedore-based loading of drivers and handlers to manage and create projects, subnets, security groups, and eventually the ports, and if you have a different scheme with which you want to manage Neutron resources, you can just add your own driver or handler; much more of this will be explained in detail during tomorrow's session. You can just put it in your configuration file and it should work.
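As a rough illustration of that stevedore-based loading, here is a minimal sketch; the namespace and driver name are made up for illustration, not the real kuryr-kubernetes entry points:

```python
# Hedged sketch: load a pluggable driver by name through stevedore,
# which resolves it from setuptools entry points.
from stevedore import driver

mgr = driver.DriverManager(
    namespace='kuryr_kubernetes.controller.drivers.example',  # hypothetical namespace
    name='default',        # driver name picked from the configuration file
    invoke_on_load=True,   # instantiate the driver class on load
)
subnet_driver = mgr.driver  # the loaded driver instance, ready to use
```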
In the interop area, we put an emphasis on feature parity with the current Kubernetes networking implementation. So the first networking mode we support is the Kubernetes native networking mode, which is the flat network where each pod has an IP from the same subnet and they can communicate easily with one another. We also support the services; what is currently merged is the ClusterIP type, which is the default in Kubernetes, and we will hopefully achieve the LoadBalancer service type, which allows communication to Kubernetes services from outside the cluster, and hopefully we will keep it for Pike. Yeah, we'll have to, because otherwise you have to assign the floating IP by hand, and that's a bit of a bother. And in libnetwork, as Toni mentioned, we have, I think already merged, the v2 plugin support for Docker Swarm's new mode, so we try to keep pace with the orchestration engines we integrate with, to have a better solution.

From the resiliency point of view, we are trying to close various corner cases that we noticed during the integration. For example, the case where you deploy a service on the Kubernetes cluster and that service, by design, is supposed to talk to the Kubernetes API: currently this doesn't work. We just keep finding more interesting cases as we go; some of them require non-trivial integration, and different components may also come out of sync, so this is also being handled during this release. Right, and if you find corner cases, please feel free to bring them to IRC or to the mailing list, because sometimes they will be ones we have already encountered and have a workaround for, if we don't already have a patch under review.

For manageability, we have different configuration settings for your deployment type, so you, as a cluster administrator, can just choose which drivers and handlers to use, and it's easily adjustable at deployment time. On security, as Toni mentioned before, there is TLS support for libnetwork and SSL support for the client communication with the Kubernetes domain.

And just to mention the improvements we made in the user experience area with libnetwork: the time it takes for a container to come up, so that you as a user can actually log into your container, was considerably improved, because we changed the mechanism by which Kuryr filters the ports it created and manages in Neutron. We now use the tag mechanism to annotate the relevant Neutron resources with the Kuryr tag, so they can be retrieved easily; there is no need to retrieve the whole list and filter it after it comes back from Neutron.

Our plans for Queens are quite ambitious. Hopefully we can achieve them all, but if not, there are already plans for the release after that, Rocky. In the scalability domain, we want to reuse the same infrastructure that was developed for the ports resource pools and do the same for the services, because right now we implement Kubernetes services using Neutron LBaaS, so there are a lot of calls to the API to create the pool, the listener, the members, and so on, and we want to improve this. What is possible is to have them pre-created, so we can just grab resources from a pool and not do all the API calls, which are quite time-consuming. In the interoperability domain, we currently support the services by having the HAProxy service provider for the LBaaS v2 API.
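To give a feel for that per-service call sequence, here is a sketch of the chain of LBaaS v2 round trips, assuming python-neutronclient; the credentials, endpoint, IDs, and addresses are placeholders:

```python
# Hedged sketch: the series of Neutron LBaaS v2 API calls a single
# Kubernetes service roughly translates into today.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from neutronclient.v2_0 import client

auth = v3.Password(auth_url='http://keystone:5000/v3',  # made-up endpoint
                   username='demo', password='secret', project_name='demo',
                   user_domain_id='default', project_domain_id='default')
neutron = client.Client(session=session.Session(auth=auth))

# One call for the load balancer itself...
lb = neutron.create_loadbalancer(
    {'loadbalancer': {'vip_subnet_id': 'SUBNET_ID'}})['loadbalancer']
# ...one for the listener...
listener = neutron.create_listener(
    {'listener': {'loadbalancer_id': lb['id'],
                  'protocol': 'TCP', 'protocol_port': 80}})['listener']
# ...one for the pool...
pool = neutron.create_lbaas_pool(
    {'pool': {'listener_id': listener['id'],
              'protocol': 'TCP', 'lb_algorithm': 'ROUND_ROBIN'}})['pool']
# ...and one more per pod backing the service.
for pod_ip in ('10.0.0.5', '10.0.0.6'):
    neutron.create_lbaas_member(
        pool['id'],
        {'member': {'address': pod_ip, 'protocol_port': 8080,
                    'subnet_id': 'SUBNET_ID'}})
```

Pre-creating these resources in pools, as with the ports, would move that whole chain off the service-creation path.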
We want to switch that to Octavia, and it will actually also help us add some additional functionality that we currently lack. There is already existing support in Kubernetes for the ingress controller, so we plan to have Octavia play an important role in that integration as well. Additional functionality we plan to add is support for Kubernetes network policies. For those who are not familiar with them, network policies are the mechanism that lets you provide application isolation on top of the Kubernetes cluster. In native Kubernetes there is kube-proxy, and different vendor solutions already exist; we plan to do it using Neutron security groups.

For resiliency, we want to make the existing Kubernetes controller highly available. For Queens, the plan is to have it in active-passive mode; we'll see what can be done in this area.

For manageability, we have already started discussing adding some CLI tool that will face the cluster administrators, so that different settings, defining defaults, and viewing the status of the cluster can be done through the CLI. Currently there is either the Kubernetes or Docker API on one side and the Neutron API on the other, but there is nothing in between, so it will be a sort of aggregation point facing the cluster administrators. Right, and maybe to relate this to something you're probably familiar with: think of nova-manage and keystone-manage, which perform administrative tasks; it will be something like that. What we also plan to do is have our integration components, especially for Kubernetes, meaning the Kubernetes controller and the CNI part, be hosted and orchestrated by Kubernetes, which does a great job orchestrating services. Our integration is actually also a service, so why not use the same infrastructure?

For security, we plan to add support for Keystone trusts, which will take the user tokens and delegate them to do the operations, so we don't need additional service endpoints for the integration components. And for modularity, what we realized is that the infrastructure we built for kuryr-kubernetes to integrate with Neutron is quite generic, and it can easily be reused for adding Fuxi support for bare-metal Kubernetes clusters, which will add storage volumes. So with a bit of rework, which we plan to do in the Queens release, we will have the same infrastructure serving both the Neutron integration and the Cinder and Manila integration through Fuxi.

So, a bit of a summary of the features and enhancements planned for the Queens release: Octavia support for Kubernetes services, and Kubernetes network policy and ingress controller support. As I mentioned before, the Kubernetes network policy semantics can be easily mapped to what we can currently do with Neutron security groups, so this is the plan.
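As a rough sketch of what that mapping could look like, assuming openstacksdk; the names, CIDR, and port are made up:

```python
# Hedged sketch: map one NetworkPolicy ingress rule onto a Neutron
# security group and rule.
import openstack

conn = openstack.connect(cloud='mycloud')  # cloud name is an assumption

# One security group per policy...
sg = conn.network.create_security_group(
    name='np-allow-web',
    description='Maps a NetworkPolicy ingress rule')

# ...and one rule per allowed ingress, e.g. "allow TCP/8080 from 10.0.0.0/24".
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction='ingress',
    protocol='tcp',
    port_range_min=8080,
    port_range_max=8080,
    remote_ip_prefix='10.0.0.0/24',
)
# The group would then be applied to the Neutron ports of the selected pods.
```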
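And going back to the Keystone trusts idea for a moment, here is a hedged sketch with python-keystoneclient; the endpoint, user, and project IDs are placeholders:

```python
# Hedged sketch: the user delegates a role to the integration's service
# user via a trust, so it can act on the user's behalf without extra
# service endpoints.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as ks_client

auth = v3.Password(auth_url='http://keystone:5000/v3',  # made-up endpoint
                   username='demo', password='secret', project_name='demo',
                   user_domain_id='default', project_domain_id='default')
keystone = ks_client.Client(session=session.Session(auth=auth))

trust = keystone.trusts.create(
    trustor_user='USER_ID',                   # the delegating user
    trustee_user='KURYR_SERVICE_USER_ID',     # hypothetical service user
    project='PROJECT_ID',
    role_names=['member'],
    impersonation=True,
)
```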
For the network policies, we previously had a POC that already did this, so we will just need to make it happen upstream. And the ingress will allow managing the inbound connections to the cluster. For Fuxi, we plan to have bare-metal Kubernetes cluster support for assigning Cinder and Manila volumes to the containers, and there is a session about that later.

In case some of the functionality doesn't make it into Queens, it will definitely be done in the Rocky release, but in addition we also plan to take care of different items there. For example, for scalability, we want to scale our Kubernetes controller: hopefully it will be active-passive in the Queens release, but in Rocky we want multiple instances working in active-active mode. That won't only solve the high-availability issue, it will also help with distributing the different tasks among a number of controllers.

For interoperability and feature parity, there is currently an ongoing discussion about supporting multiple networks and multiple interfaces, which happens in the Kubernetes networking special interest group. The same questions were raised within the Kuryr team as well, so we want some convergence with whatever is decided in Kubernetes, but some work has already started in Kuryr; for example, there is some POC work to integrate SR-IOV interfaces into Kubernetes with the help of Kuryr.

On the user experience front, we want to achieve a much easier deployment experience, so we want to integrate with existing, quite good tools that can help with deployment, like kubeadm or Helm, and use existing paradigms to deploy the integration components, like config maps, and so on. For security, we just need to keep watching what is going on in Kubernetes and keep our policies in sync as it evolves. And for modularity, we want to make sure that any third-party drivers or handlers, which may be maintained elsewhere, since it's not mandatory for them to live upstream, can be easily integrated with our existing framework.
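As a sketch of how such an out-of-tree driver could plug in, it would register a setuptools entry point in a namespace the controller loads from via stevedore, as in the earlier loading sketch; the package, namespace, and class names here are made up:

```python
# Hedged sketch: a third-party package's setup.py advertising its driver
# so stevedore can discover and load it by name.
from setuptools import setup

setup(
    name='my-kuryr-driver',
    version='0.1.0',
    packages=['my_kuryr_driver'],
    entry_points={
        # The namespace must match what the controller looks up.
        'kuryr_kubernetes.controller.drivers.example': [
            'my_driver = my_kuryr_driver.driver:MyDriver',
        ],
    },
)
```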
We also want to improve this by adding some sort of profiles that aggregate the drivers and handlers, so that from the cluster administrator's perspective it will be quite easy to set up through the configuration tools. In addition, we want to enhance the CLI tool by adding reports and statistics, especially from the controller side, that can provide information on the number of processed events, the state of the pools, and so on, so the administrator can easily tune the number of controllers, the size of the pools, and so forth.

So, we need your help, whether you are an operator or a developer. If you are an operator, please help us design not just a cool technical solution, but a solution that addresses real problems. In this area we would like to hear what kind of deployment you need to support, be it bare metal or containers inside VMs, what the scale of your cluster is, and what kinds of workloads you are planning to run on your container cluster. And if you are a developer, here you can see a number of ideas in areas where we would definitely need some help, but there is a lot of other stuff too. Yeah, if somebody wants to implement network policy and jump straight at it, they are more than welcome.

If you want to hear more, there is another session today, at 3:40 p.m., about Kuryr and Fuxi. Tomorrow, please come to the Kuryr project onboarding if you want to learn more and get some hands-on experience, for example trying to run DevStack; that's tomorrow at 11 a.m. And there is a Kuryr-Kubernetes session tomorrow at 5:50 with, hopefully, a live demo. Yeah, we'll have a live demo; we'll tempt the demo gods, hopefully with a better result than this morning's keynote. We'll also give a very deep dive into the components we use, like the facilities from Neutron and so on. So I think that if you want to join the development effort, it's very interesting to go to both the onboarding and the Kubernetes session, and if you're an operator, the Kubernetes session and the Fuxi session are probably the place to be.

So, if you have any questions, please go to the microphone; otherwise, you can find us after the talk. All right, thanks a lot for coming, and we hope to see you in the other sessions. Bye!