Hello, can everyone hear me? Okay, good. Hi everyone, I'm Gal Sagie. I'm presenting here with Tony from Red Hat and Mohammad from IBM. We're going to present Project Kuryr. I usually open with a question about whether you know the project, whether you've heard about it, but this time I think the proper question is: anyone here who didn't hear about Kuryr before? Wow, okay. So either we're doing well or you're too shy.

So, Project Kuryr. I want to start with a little story of how the project started, because I think it makes a good point. Tony and I didn't know each other about a year and a half ago. We first met at the OpenStack Israel event, a one-day event of OpenStack talks. We met, started talking about the problems in container networking, started throwing around ideas for Kuryr, and from there the project was started. The point I'm trying to make is that these summits and gatherings, OpenStack Days or meetups, are a great place for collaboration, for meeting new people and talking about common challenges. My advice to you is: spend the time to talk with the person sitting next to you, raise ideas, and see how we can move this project forward.

This is the outline of what we're going to talk about. I'll give a brief introduction to the project, then we'll talk about the current state, our Kubernetes integration and advanced policy, how all of you can get involved and help the project, and a quick demo at the end.

I talked about how the project was started, but why did we start it? Well, we noticed back then that users were starting to deploy this new thing called containers, and they were deploying it side by side with OpenStack, whether isolated or connected in a virtual topology with the OpenStack workloads, running containers nested inside OpenStack VMs, and now the big deal is running OpenStack inside containers. What we noticed about all of these environments is that you would usually deploy a specific solution for containers, with its own infrastructure and drivers, and a separate solution for your OpenStack workloads. And we asked: why? Why double this complexity? Why have so many elements just to implement one thing?

We started by looking, as a first step, at networking. What we noticed is that Kubernetes and Docker were reinventing new networking abstractions like libnetwork with the CNM, CNI, and so on, and all of these new abstractions have specific solutions, like Flannel or Weave, that implement them, while we already have a pretty mature, production-ready abstraction in OpenStack called Neutron. We also noticed that most of these solutions and most of this modeling were still in early stages, lacking a lot of security policy, a lot of advanced services, and a lot of features that we have in OpenStack. In general, it was very hard to connect containers on one network with VMs and bare-metal servers, and we thought that wasn't really necessary. So we looked at these abstractions, for example libnetwork, and compared them to Neutron. I'm not going to get into the argument of whether VM networking is different from container networking; I think that right now the application is what matters, and when you look at all of these things at a high level, they're pretty similar.
You can express the connectivity for containers with Neutron just as well. Another main problem we noticed in these environments is containers nested inside VMs. Many users were deploying containers nested inside VMs to get tenant isolation or to use management tools they already had for VMs, and when you look at these environments, they're a mess. I mean, there is so much overhead and so many unneeded layers just to connect two containers sitting in two different VMs. We're networking people, and we're happy when the first demo we do is a ping, right? We do a ping, then we all clap, and it's all working. But look at the day after the ping: look at how you manage this, how you upgrade, update, troubleshoot, and debug this environment. When you have two solutions, it's just double the complexity, no matter how good those solutions are, and there are already solutions in the Neutron ecosystem for all of these challenges.

So Kuryr's mission, as you'll soon see, is to expose these features and capabilities to you, so you can leverage one networking infrastructure for OpenStack and for containers. We started on the networking side, and as we moved forward we noticed the same problems exist in other areas: we have these problems for networking, but also for storage and for authentication. The way we see Kuryr now is as a kind of glue, a bridge layer, between the container ecosystem and OpenStack. Right now, as Tony and Mohammad will present, we have a fully working integration for networking: essentially, you can use the Kubernetes or Docker API, and it maps out of the box to OpenStack and to Neutron, so you can enrich your container environment with all of these capabilities and all this richness, and you don't have to worry about different solutions and different infrastructures in your environment.

So, let's have a look at where we are right now with Kuryr and what we have. As you know, Kuryr is an OpenStack project under the Big Tent, and as Gal mentioned a moment ago, it is the bridge between the services available in OpenStack and containers. It aims to support all the container runtimes, starting with Docker and hopefully expanding to rkt and whatever comes next, and we also want to support multi-node clustered environments, be it Kubernetes, Docker Swarm, or Mesos; we want all of these container technologies supported. We are an OpenStack project, part of the OpenStack community, and we have been working closely with other projects in OpenStack, in particular with Magnum. I have to apologize for starting the session a bit late: we were running from a design session we had with Magnum, figuring out how to provide the services that Kuryr has to Magnum. We have also been working with Kolla, so you can essentially get a containerized version of Kuryr, and of course we use Neutron, the storage services, and Keystone to provide all these services to containers.
There is a pretty diverse group of people from various companies now working on the project. Newcomers are always welcome, and we have weekly IRC meetings that you can easily join.

So let's look at the features we have. We now support Keystone, both v2 and v3. We started with Docker and libnetwork, and we have the Docker libnetwork remote driver and an IPAM driver. We have partial support for Kubernetes; there is still some work to be done, but hopefully we're going to get it done in this cycle. As I mentioned, we started with networking and Docker, and we can essentially provide all the services that Neutron has to containers. Beyond the basic resources, networks, subnets, subnet pools, load balancers, and all that, we also provide security groups, and whatever we have in Neutron, as long as there is a reasonable counterpart in the container world, whether it's Docker or Kubernetes, we can essentially take advantage of that service and seamlessly provide it to the container. I have a star over Swarm: this is the old Swarm, as of Docker 1.11. In Docker 1.12 a new Swarm was introduced, and in the latest release it is limited to one particular driver that Docker has. That is going to be addressed, and hopefully by the next release, Docker 1.13, we should be able to take advantage of Swarm mode as well.

So let's have a closer look at the different components of Kuryr. If you look at the right side, there are obviously pieces of code that interface with the OpenStack services, whether it's Keystone, Neutron, or Cinder. There is some configuration management for Kuryr, and there is port binding, that is, plugging a virtual interface into the network. In addition to these basic components, we have components that deal with supporting the networking model in Docker, which happens to be different from the networking model in Kubernetes, so we have a separate set of components that deal with networking in Kubernetes. We pretty soon figured out that these components, those that deal with Docker networking and those that deal with Kubernetes, have different requirements and use different packages, so in terms of packaging these services, it's best to package them separately. Docker and Kubernetes also have different release cycles, so as new things are added, if we want to keep up with the changes, we need to be agile, and separating these pieces of code into different repos is the best way. That's something we did in the cycle that just ended; it took a good amount of time, but we divided the Kuryr repository into three repositories. We now have kuryr-lib, which has the basic components common to all the other repositories, plus kuryr-libnetwork and kuryr-kubernetes. kuryr-lib is used by both the Docker and Kubernetes implementations of Kuryr, kuryr-libnetwork is for Docker libnetwork, and kuryr-kubernetes is obviously for Kubernetes.

So let's have a closer look at the part that is common to both. Needless to say, similar to what Nova needs, you have to somehow connect your container to the network, and so we do this plugging of virtual interfaces into the network, and unplugging them. That is part of the common code, the Kuryr library. If you look at the top figure, that's the most common use case: you have containers on your hosts, and the way you connect a container to the network is by using a pair of virtual Ethernet interfaces, with one end connected to the container and the other to your Neutron network.
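This is not Kuryr's actual code, just a rough sketch of the kind of veth plumbing a binding driver does, with made-up interface and namespace names:

    # Rough illustration only. Create a veth pair, move one end into the
    # container's network namespace, and leave the other end on the host
    # for Neutron to bind to the port. All names here are placeholders.
    ip link add h-veth0 type veth peer name c-veth0
    ip link set c-veth0 netns my-container-ns   # container side
    ip link set h-veth0 up                      # host side, wired to the Neutron port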
During the past cycle we added support for MACVLAN and IPVLAN, which allow you some form of networking in nested environments. We plan to support VLAN and to take advantage of trunk ports and subports in Neutron, to get full support for connecting containers that are spawned inside virtual machines, in environments similar to what, for example, Magnum uses. Those are the three options I have listed. There is a small piece of code for plugging the ports into the networking infrastructure, and we support a bunch of vendors that you may have seen as Neutron plugins, whether it's Linux Bridge, Open vSwitch, Dragonflow, OVN, PLUMgrid, or MidoNet. If you have any other Neutron plugin with which you want to take advantage of Kuryr, we'll be happy to help you get that done.

Just to look at what's happening under the covers a little more, this is how you create a network in Docker: docker network create, with a bunch of options that specify the subnet you want used and the gateway, and finally you specify a network name. As soon as you specify Kuryr as the driver and as the IPAM driver, Kuryr will create this network, and then you can spawn a container connected to the network you just created by specifying the --net option in docker run. So what happens when you create a network and a container connected to it? Under the covers, Kuryr creates a Neutron network. In the output of neutron net-list, which I have shortened to fit on the page, you can see that a Neutron network has been created whose name starts with kuryr-net, followed by the beginning of the ID used for the Docker network. If you look at the output for the network Docker created, its UUID starts with 08192, and as you can see, the same prefix is partially reused in the Neutron network's name. We use Neutron tags to keep this association between Neutron networks and Docker networks in persistent storage. We don't add any storage of our own, so we don't keep any persistent data beyond what is stored in Neutron, and you don't have to deal with the complexity that would arise from using yet another database.
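As a concrete sketch of the commands just described; the subnet, gateway, network name, and image are only examples:

    # Create a Docker network backed by Neutron through the Kuryr drivers:
    docker network create --driver kuryr --ipam-driver kuryr \
        --subnet 10.0.0.0/24 --gateway 10.0.0.1 foo
    # Spawn a container attached to it:
    docker run --net foo -itd busybox
    # On the OpenStack side, a matching kuryr-net-... network should appear:
    neutron net-list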
You can also connect your containers to existing Neutron networks. That can be useful if you have a Neutron network you're already using for VMs or for bare metal: you can now use the same Neutron network for containers. You just need to specify either the name of the network, if it happens to be unique, which is good enough, or, if not, the UUID, and that does the trick. Here is a bit more detail on how this is done: if you use an existing network, there is a tag that gets associated with that Neutron network, so that when you delete the network in Docker, for example, we don't delete it in Neutron, because it's used for other purposes. This works with Liberty and beyond; with much older versions there are some limitations that you can look at later on.

To have a closer look at exposed ports: that's another option Docker provides, where you can expose certain ports for certain protocols, and that essentially gets implemented under the covers by Kuryr using Neutron security groups. Here I have a docker run that uses --expose; 1234 is the port number and UDP is the protocol. Just to show what's happening under the covers, I have found the Neutron port associated with the container that just got created; you can see it in the output of neutron port-list. If you look at the port more closely, using neutron port-show, you can see that there are now two security groups associated with it: one is the default security group, if you have a default security group, and the other is the one we just created. You can specify a range of ports, different ports and protocols, and all that, and they get translated to security group rules in Neutron and get associated with the port.
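A hedged reconstruction of that flow; the port, protocol, and image are the examples from the slide or placeholders:

    # Expose UDP port 1234 on a container attached to the Kuryr network:
    docker run --net foo --expose 1234/udp -itd busybox
    # Find the Neutron port that backs the container, then inspect it;
    # it should now carry the default security group plus the one Kuryr
    # created for the exposed port:
    neutron port-list
    neutron port-show <port-uuid>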
I'm not I don't remember C&M and And what it does is it gives you the option of adding containers to a network and removing containers to a network It doesn't really specify which network that is up to the configuration that you put on the host Or that you get from somewhere else as you will see so our integration By the way, if you have questions like we can take them at the end I'll try to go fast so there will be time for that even with a demo So we have a watcher that that with it does it it connects to the Kubernetes resources In a watch endpoint so it everything that happens in the Kubernetes API generates an event that we get Then we translate that to Neutron operations we go then back to Kubernetes and we annotate the Kubernetes resource with information about the neutral resource that we created or modified or whatever and Then this information in Kubernetes is seen by the CNI driver, which finally plucks the things into place so from the definition you could see that the only things that interface with the Neutron API is the watcher but with Kubernetes API with CNI driver and and the watcher interface All right, so here you can see it a bit more visually how a deployment would look like So typically on the master node you have the Kubernetes API and the controller manager We add to that the watcher the watcher can't really run anywhere, but If you just have a few machines you can put it there It's probably the most logical place which is where it talks the most with it also needs to be able to talk to Neutron obviously and Then on each of the controller node, sorry worker nodes We need the CNI driver the CNI driver should be able to talk already to Kubernetes API because that's that's how it works basically, so we don't add any kind of Requirement because it just it just goes there to Kubernetes API to to get the port information and the thing that we're adding in this cycle in the previous cycle in In Austin you could see that we demoed swarm talking to Kubernetes over Courier provided Neutron networks and now what we're doing is we're working on courier to make it highly available The watcher so that if it falls you will not be left up to figure out If you started if something some namespace was created some resource was created and so on so in the in the reference implementation of Kubernetes Would you have it's on your right? What you can see is that there is cube proxy redirecting from any port or any endpoint to Specific container running either on that machine or another machine and that maps to the to the what you have on your left hand side, which is the The API resources which there is a service endpoint and then there are pots That are endpoints to that to that service and the service IP is considered a virtual IP So the way that we do that is Using Neutron obviously we use Neutron for everything. So it's using L bus this time V2 in the in the last summit We presented with V1, but that is that is gone. I saw a bytes that removed it completely. So we had to upgrade to that and So what we do is that for inter inter cluster communication? We just go from pods to a load balancer in Neutron So depending on which Neutron vendor you have it's going to be one way or another and Then it just goes to to the other ports. So you will if you know Kubernetes you will be wondering. 
So if you know Kubernetes, you'll be wondering: is this equivalent to the LoadBalancer service type that Kubernetes already has for OpenStack? The answer is: almost; it's not exactly the same. The way we implement the LoadBalancer type is that we just keep the service load balancer you can see here, and on top of that the only thing we need to do is add a floating IP; then it's already externally accessible. So you no longer need Kubernetes to go and talk to Neutron with that plugin.

All right, so that was all well and good for bare metal, but when you get to nested environments you need something more, and like Mohammad explained for the binding part, it's a bit the same here: we use IPVLAN, MACVLAN, and VLAN, sorry, the Neutron trunk ports and subports, to give you that access. We have only tried it with the Neutron-provided LBaaS, but it should also work with Octavia. The problem we see with Octavia right now is that, unless I missed something that happened very recently, it uses VMs to create the HAProxies, and we would probably like the load balancing to happen at the container level, or somewhere more distributed, so that it's on feature parity, or reliability parity, with kube-proxy, and not just better in terms of network properties.

So how do you get involved? Well, there is something we already have, which is the packaging. In terms of container packaging, every time there is a new commit I have a service that pushes it to Docker Hub, so you can try it. That is the default configuration, of course; if you want any other kind of configuration you can build it yourself, it's very easy. You can use DevStack, we have plugins for that (there's a sketch below). Then, as a good way to get involved: we have the Kolla integration that colleagues from IBM contributed, so Kolla is a way to deploy it, but only for the Docker integration, not for Kubernetes; that we will add very soon, I hope. And for distribution packaging, that's something I'd be very happy for anybody to contribute to; I already have the systemd files, but we need to do all the rest.
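For reference, enabling the plugin in DevStack would look roughly like this; the plugin name and repository URL are assumptions based on the usual enable_plugin mechanism:

    # Fragment of a DevStack local.conf (plugin name/URL are assumptions):
    [[local|localrc]]
    enable_plugin kuryr-libnetwork https://git.openstack.org/openstack/kuryr-libnetwork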
The priorities in this cycle are very clearly to finish the Kubernetes integration, with HA support and the properties above; we have design sessions tomorrow, which I will show, to talk about these aspects, HA and multi-tenancy. Policy will probably come a bit later, with the new policy resources in the Kubernetes API. Then there is the Magnum integration, which we have been discussing, and the nested work, which has already been ongoing; we'll show it in the demo that Louise here will do in a moment, if I stop speaking for a while. And finally, making the releases. On storage, we have the Fuxi driver, and we have a session tomorrow as well to talk about that with the Cinder people, who have a Go driver, and we have to see how those two efforts converge. As for the work sessions: for the Kuryr-Magnum one you're too late already, but if you want to join any of the other ones, you're more than welcome. We especially look forward to operator feedback, people with use cases where they might want to use Kuryr but we don't cover them yet; your voice needs to be heard so that we can work on that. And if you want to contribute, you can file blueprints, you can file bugs, you can come to the weekly IRC meetings, they happen every week at 2 in the afternoon UTC, and there are some getting-started guides. Don't worry if you cannot copy all this down; I will update the slides later and tweet about it.

But now let's get to the demo, which Louise here will do. And it's live, because that's a rule here: we only do live demos. Let's see if the demo gods respect us today.

Can you hear me? Okay. I hate the trackpad. Yeah, that's probably a little small, right? Okay. So what we have running here is a DevStack, and I have two VMs running Fedora Server 24. We picked that because it has the IPVLAN driver installed, so what I'll be showing you uses IPVLAN; you could use MACVLAN as well, we just chose IPVLAN. First, this is our DevStack user, and... hopefully it's connected, or maybe not; this is what a live demo does, I guess. Here I'm just going to show you that we have only two networks created, a private one and a public one, and I'm going to join the private one. Okay, so on this VM, after we run the script, I'm going to create a Docker network with the Kuryr driver and have it join the existing private network. As you see there, the UUID of the network is in the command; that's how it joins it. So that's created, and I'll just show you as my DevStack user that no new network was created: we still have just our public and private. Then I'm going to run a container on that network, so we should get an IP address on our 10.0 subnet; it got .79. Then on this other VM I'm also going to create a network, but first I'll show you something: to allow the container to ping outside, you have to associate its IP address with the Neutron port of the VM (there's a sketch of this step after the demo). If I show you the ports, the VM is .31, so if I show you that port, you can see the container's IP address is associated with it; oops, too much coffee; yeah, so we can see that .79 is associated with it. Then what I'm going to do here is restart this one, and on this VM I'll create a container again and show that they can ping each other. That's wrong, I'll have to... once again... it's already running, that's okay, I should be able to run it directly. So we got .80, and I can ping between them; the other one was .79, right? I'll also quickly show that it can ping the other VM on the network as well, so it's not just containers: they can ping whatever is on the network. As we said before, the pings make us happy.
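That step the demo glossed over, letting the nested container's address through the VM's port, maps to Neutron's allowed-address-pairs; a sketch with placeholder values:

    # Allow the container's IP through the VM's Neutron port so
    # anti-spoofing rules don't drop its traffic; the port UUID and the
    # address are placeholders for the values shown in the demo.
    neutron port-update <vm-port-uuid> \
        --allowed-address-pairs type=dict list=true ip_address=10.0.0.79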
So if you have any other questions... we're about to be out of time, but if you have a question now, or you want to catch us later...

[Audience] Can you show the port of the new VM, the other one?

Oh, the other one will have... .32, yes. The slides may be online already, so you can copy them.

[Audience] I might have a question for you guys.

Sure. All right, so the question, in case not everybody heard it, was about scalability limits. The scalability limit is, of course, that of your Neutron vendor. For the reference implementation of Neutron, if you were using a lot of security-group magic and so on, you would obviously run into the iptables problems, but now that there's the new conntrack driver, that should be taken care of, whether you put each tier of your application in a separate network or sometimes everything on the same network. With some of the other drivers, like MidoNet, if you put a lot of ports on the same bridge, which typically doesn't happen so much with VMs, but if you said, I don't know, "I'll make a /8 and put all my containers there," that would be a problem, because then you would get all the broadcasts.

[Audience] I'll ask the question differently then. So far, some folks have complained that Neutron doesn't scale. In your experience with Kuryr and Docker and other containerized environments, why use Neutron if it doesn't scale? Why not start with something new?

So the question is why use Neutron networks if there's a concern that they may not scale. The answer is that the reason Kuryr exists is so that you can have all your infrastructure under the same networking and under the same support. As for the concerns about scale, I'm aware of some of the Neutron solutions scaling to thousands of nodes, and that is for the agent running on the machine; the limitation for some of the Neutron drivers is how many nodes they can run, not so much how many ports can be bound, so that doesn't really worry me a lot. I'm sure there are Neutron drivers that are not going to be a good choice for this, but it depends on what you use, of course. We have people here who were reporting a lot of nodes to me the other day, so it makes me feel calm.

[Audience] In Docker 1.12, the service load balancing is implemented with an IPVS namespace, right? Are you looking into replacing that with Load Balancing as a Service, as you did here, or are you asking the Docker guys to actually patch that IPVS namespace into a Neutron network and make it a Neutron port?

So, the current idea...
I haven't checked much, since it doesn't let you choose anything other than the overlay driver, so I haven't really checked which is the best way to do it. But the current thought we had was: if libnetwork adds a call to our libnetwork driver about doing something for the service, we'd do it there and ignore the IPVS part completely.

To leverage LBaaS... yeah, it doesn't look like Docker wants that kind of service, whatever is in Swarm mode, to be pluggable. Those services are all being added to the engine as such; it's beyond the Docker networking, the libnetwork or IPAM driver. That's a long discussion to have with Docker, I guess, but as of now, the idea is that those things are embedded in the engine, and if you want to use it, you use it; if you don't want to use it, you don't.

Yeah, we have to close. So if you have any other questions, you can catch us afterwards, because there's another presentation in four minutes.