Hello, can you all hear me well? Yes, that's good. First of all, I want to welcome you to this session about OpenStack Kuryr. How many of you were in the keynote today? Please show of hands. Okay, almost everybody. How many of you saw it fail? How many were looking the other way? Okay, not too bad. Final question, at least for now: how many of you were in Mohammad Banikazemi's session yesterday? Okay, good. We're going to have a somewhat similar demo at the end, but we'll give a slightly different perspective, and maybe not go so much into the Docker networking side.

Maybe it's time to introduce ourselves. My name is Antoni Segura Puimedon, and I work at Midokura. And I'm Gal Sagie, working for the Huawei European Research Center. I'm going to give the first part of the presentation, and we'll keep alternating, asking questions and so on. If you have a question at any point, feel free to raise your hand and, if it's the right moment, we can take it.

As I was walking through some of the sessions yesterday, Kuryr was mentioned quite a lot, as most of you noticed. I'm happy about this, because it makes my life easier right now when explaining it to you, and it also shows that this is something that interests the community: we are tackling a point that others have been trying to address, and that's something good that we want to see going forward. It's important to note that Kuryr is not trying to solve everything and all the problems; it's merely trying to bridge between two different worlds.

Before I delve into the project itself and what we are doing, let's first look at the problems we see in today's container networking world, because these problems are what led us to start this project. What we saw is that network abstractions are being reinvented and redone over and over: there is Docker's libnetwork, there is the Kubernetes model, there is CNI from CoreOS. All of these new models are still experimental, and they keep evolving and changing quite frequently. We saw an example of this just a week ago, when Docker changed the libnetwork API completely, so it's hard to integrate with them and follow all these changes.

Another thing we noticed is that there are many new container-specific networking solutions, like Flannel, Weave, and SocketPlane, which is now part of Docker. All of these are quite container-specific, and when you compare them to current Neutron solutions, they are much less feature-rich and much less mature. We also noticed that the Neutron-based solutions out there are trying to integrate with all of these new abstractions and models, but they are doing it separately: each project does it in its own repository, redoing things over and over again, which doesn't really make sense when you look at OpenStack environments.

Another common deployment model for containers today is nesting them inside tenant VMs. You have the tenant VMs connected by one networking infrastructure, and then a completely different solution inside the VM itself for the container networking. This introduces what we call the double overlay problem: you get performance and latency problems, because a packet going from one container to another needs to traverse all of these layers and solutions, but it also introduces another, and I think even more important, problem with management and debugging.
These are areas that people sometimes overlook, but you now have more than one solution implementing your networking, and when you have too many solutions, debugging and understanding where a problem lies takes more time. And when you look at more advanced matters, like upgrades and updates, high availability, clustering, and deployment, all of this gets complicated.

What we can see here are, as I mentioned, all of these new solutions for container networking, like Weave, Flannel, and SocketPlane, which is now part of Docker. When you come to think about it, container networking is not that different from VM networking, and adding more and more solutions when we have open source alternatives is not the process we want to see.

I think this diagram shows the double overlay problem that I talked about, and this is a common deployment for project Magnum, for example. We have the containers nested in the VMs, we have the tenant VMs connected with a networking infrastructure, usually a Neutron plugin of some sort, and then another, different solution, commonly Flannel, inside of them. This again introduces the problems of double encapsulation and, of course, the management and deployment problems I talked about previously.

So how are we trying to solve these problems with Kuryr? I think this sentence pretty much sums it up: we identified that we already have a well-tested, mature, and widely deployed networking abstraction, and it is Neutron, the OpenStack networking abstraction. What we try to do in Kuryr is leverage that abstraction for container networking. So what we are doing in Kuryr is mapping between all of these different abstractions, of which Docker's libnetwork is just one example (we are also looking at Kubernetes and other models), and the Neutron API.

By doing this mapping, what we basically introduce is a vendor-lock-in-free solution for container networking, because now every Neutron plugin that you have can be used for container networking. And by doing all of this mapping, we also focus deployment and development efforts in one location, instead of re-implementing and redoing this part for each individual solution. That takes a lot of effort, because you want testing, you want to verify that the APIs are not changing in any of the models, and you want more review power; doing it separately is not smart time-wise, especially when we are targeting OpenStack environments.

As for the double overlay problem I mentioned, there are already Neutron solutions trying to tackle this use case, for example OVN, MidoNet, and Dragonflow. All of these try to solve the nested containers use case using one networking infrastructure that both connects the tenant VMs and, inside the VMs, connects the containers. This of course reduces all the complexity I talked about management-wise, because we now have one solution handling the networking end to end, but it also reduces the performance and latency problems, because we no longer need the double encapsulation: we have two layers that can work in sync, do most of the processing on the compute node, and keep only a very lightweight piece in the VM.
And of course, another important aspect of connecting container networking to Neutron is that we get, with zero effort, all of Neutron's features, and there are many important, mature networking features there: security groups, NAT, and of course advanced services like Load Balancing as a Service, Firewall as a Service, and so on. And as Neutron matures, we will keep getting all of this with zero effort.

Here I want to show you a few deployment examples of Kuryr and how it looks in an OpenStack environment. Toni will soon describe how deployment is a very important case for us; we're thinking about containerizing Neutron plugins, compatible with Kolla, but I want to give you a grasp of how the environment looks. We are also working on integrating with multi-node environments and orchestration engines like Swarm, Kubernetes, and so on.

In this example we can see the Docker daemon on the compute node, with the Kuryr service running next to it. Docker's libnetwork allows you to register remote drivers; it takes the relevant Docker CLI and API calls and hands them over to the driver. So what we do in Kuryr is take these calls, translate them, and invoke the Neutron API. Now any Neutron plugin that you have deployed in this environment will provide networking for containers.

Another deployment use case is the nested-VM one I talked about, and again it's very similar to the other one: we intercept the API calls from the Docker daemon, this time from inside the VM, and communicate with the Neutron service. Here we can expose the features I mentioned for plugins that support nested containers, so we can orchestrate networking both for the tenant VMs themselves and for the containers inside those VMs. We have a lot of ideas for how to deploy this, for example using Magnum Heat templates to deploy Kuryr inside the VM, so it's going to be a very simple process.
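To make that first deployment concrete: Docker discovers remote drivers like Kuryr through plugin spec files, so wiring the two together can look roughly like the sketch below. This is illustrative rather than official documentation; the spec path is one of the locations Docker scans, and the port shown is an assumption, so check your Kuryr configuration for the real endpoint.

```bash
# Tell Docker where the Kuryr remote driver is listening.
# (Port 23750 is assumed here; verify it in your Kuryr configuration.)
sudo mkdir -p /usr/lib/docker/plugins/kuryr
echo "http://127.0.0.1:23750" | \
    sudo tee /usr/lib/docker/plugins/kuryr/kuryr.spec

# From now on, network commands naming the driver are forwarded to Kuryr,
# which translates them into Neutron API calls.
docker network create --driver kuryr mynet
docker network ls
```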
A little overview of our project. Kuryr is a fully open source project and a team effort; we are part of the Neutron Stadium. As a first step, we are working on Docker and libnetwork, but as I mentioned, we are also considering other orchestration engines. We work very closely with other container-oriented projects like Magnum and Kolla, and of course Neutron, but we also try to represent Neutron and OpenStack in other communities, like the Docker community; one example is a feature for label support in Docker that we tried to get prioritized. So we are working on both fronts. We have a weekly IRC meeting, and I invite you all to join; we do everything on the mailing list and Launchpad, the OpenStack way.

This is a review-state graph for Liberty. Keep in mind that we are a new project; the reason I show it here is that I think it shows that we already have a large diversity of people and companies working on this project, and this is something that both Toni and I would like to see going forward. I think it's important, and we would like to welcome all of you to come talk with us afterwards and join this effort. At this point I would also like to thank everyone who has helped this effort until now.

So, up until now I have described the problems Kuryr is trying to solve in container networking and how it aims at solving them, and given a quick overview of the project. From here, Toni will describe the features we implemented for Liberty and our roadmap going ahead.

All right, let's see if I manage to make this work; I was trying to figure out which button is which. I think I went back, right? Yeah, apparently it's reversed, so I had to figure it out. Okay. So, as Gal said, we are doing a libnetwork remote driver. It's not the only thing we'll be doing; we'll also be integrating with all those container orchestration engines. But let's talk first about the remote driver.

When Docker acquired SocketPlane, they tasked them with refactoring the whole networking stack into what is now known as libnetwork. It comes with a multi-node overlay driver, but for third parties, for vendors, they proposed the remote driver interface. Basically, to connect to it, you just have to tell Docker where your server or daemon is, the one that listens for the bindings and for the creation of all those abstractions: the networks, the endpoints, and so on. As you can see in the picture, Docker sends the REST API calls to the Kuryr remote driver. Mohammad already described some of those calls yesterday, the basic ones, like creating a network and joining a container, and then we translate that to Neutron. Right now we're just creating networks, creating ports, and so on, but as we said, we want to work together with the community to add much more to that, so that you can, for instance, express patterns for load balancing. You can always use those features directly with the Neutron API, but we want to expose some of them through Docker's label support as well. And as you could see here at the conference: yesterday it was shown with OVS; this morning, sadly, the demo didn't go through, but it was with OVN; and today you're going to see the demo with MidoNet.
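For a feel of what those REST calls look like, this is roughly the libnetwork remote driver protocol that Kuryr speaks, simulated here with curl. The endpoint paths are part of the remote driver interface; the port and the IDs are placeholders for illustration.

```bash
# Docker's first handshake with a remote driver: asking what it implements.
curl -s -X POST -H 'Content-Type: application/json' -d '{}' \
     http://127.0.0.1:23750/Plugin.Activate
# expected answer: {"Implements": ["NetworkDriver"]}

# Roughly what `docker network create --driver kuryr ...` turns into
# (the NetworkID is shortened here):
curl -s -X POST -H 'Content-Type: application/json' \
     -d '{"NetworkID": "51c0...", "Options": {}}' \
     http://127.0.0.1:23750/NetworkDriver.CreateNetwork
# Kuryr serves this call by creating the corresponding Neutron network.
```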
So, how does the veth binding work? It's really simple, but somebody had to do it, right? You just get the request, then you make the veth pair, and you give the information to Docker. What we do is handle the common part, which is giving the information to Docker and retrieving the IPAM information from Neutron, and then we pass off control to the script from the individual vendor. Usually it's a one- or two-liner script in bash, but you can write it in whatever you want. It's a really simple model, like init scripts: you just put the file into a place, Kuryr detects the vif type, and if the vif type is called midonet or ovs, it will search for an executable called midonet or ovs, pass it something on standard input, and read the result from standard output. That's it. As I said, we handle the common part, and the vendor part may of course vary a bit for future cases like nesting; I'm going to talk about the nested layer later.

It's important to say that we're currently running as root, but we don't plan on continuing to do so. What we want to do is leverage the capability system of the Linux kernel, so that we run as a normal user that just has the minimum capabilities, probably CAP_NET_ADMIN and not much more, I hope. This is something we would also like the Neutron community to start using, and we will probably start discussing it on Friday.

For deployment, right now it's really simple. We still don't have packages, so you have to check out the code, but we're going to have packages. There's the typical configuration file in /etc/kuryr/kuryr.conf; if you came yesterday, you saw Mohammad rapidly updating the token there in the configuration file. You can also give the configuration using environment variables: for example, I don't like editing configuration files when I'm changing stuff all the time, so I just set the environment variables. But yes, we're going to have packages, and we're going to maintain them for the major distributions, RDO, Canonical's OpenStack, and so on, so it's just going to be available.

At the same time, since we like containers so much, we want to integrate with Kolla. We have been in discussions a bit already, and there's already a plan: basically, each vendor that wants to integrate with us just has to make their own flavor with their own binding script and agents, and they can probably reuse that effort from their normal Neutron support in Kolla. If all goes well, you should be able to run Kuryr with Kolla in about two minutes, deploying only the parts that you need, so probably just Neutron and Keystone, and maybe Horizon if you're into the graphical sort of thing.
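Going back to the binding step for a moment, here is a minimal sketch of the kind of one- or two-liner vendor executable described above, for an OVS backend. The argument names are assumptions for illustration, not Kuryr's actual calling convention, and a real script would likely also pass details such as the port's MAC address.

```bash
#!/bin/bash
# Hypothetical OVS binding executable invoked by Kuryr after it has
# already created the veth pair. Arguments here are illustrative only.
PORT_ID=$1      # UUID of the Neutron port
HOST_IFACE=$2   # host-side end of the veth pair

# Plug the host end into the integration bridge and label it with the
# Neutron port id so the L2 agent can wire it up.
ovs-vsctl -- --may-exist add-port br-int "$HOST_IFACE" \
          -- set interface "$HOST_IFACE" \
             external_ids:iface-id="$PORT_ID" \
             external_ids:iface-status=active
```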
The slide is not advancing... all right, let's see. Yeah, that never fails, almost. So, for the nested use case: I would like to know, is somebody here running Magnum? Okay. More people need to run Magnum, come on, it's a nice project. And how many people want to run Magnum, or run containers isolated in VMs some other way? All right, there are some hands. So yes, we don't have that yet, but we are going to build it; it's on the roadmap that I'm going to discuss later.

The idea is the same: we're going to have a specific binding driver, which is the part that is necessary on the VM, and on the lower side it probably will not need anything, because the vendors' agents should be able to handle it by themselves: we are going to mark in the Neutron database that the port is a special one. I think OVN does something very similar right now; in fact, we got the idea from that, and we just want to standardize it and make it into a more common thing. (I think I should be pronouncing it "oven", but, you know, I've been pronouncing it the other way for a long time.)

All right, so let's go on. What will happen, basically (I don't know if it's useful if I point with this; it's not visible anyway), is that Kuryr in this case will obviously run on the VM, so Magnum could just deploy it, telling it where to find Neutron. Or, if for security reasons Neutron cannot be accessible from the VM, there is a proposal for passing information in a very specific way from the host down to the VM, and we may use that. As you can see, it's still not completely final, but the basic concept is: on the VM side, the containers will be assigned a VLAN tag. It can be done with a bridge or with OVS, it doesn't matter; it's going to be something very simple. Then, on the bottom part, the vendor has to insert the flow that separates that tag out into the kind of device that can then be processed naturally by their own agent. We're going to see whether that part will also get some common code for everybody, or whether anyone wants to do their own thing; it's still an open question.

And the good thing, which I think is really important when you compare it to just providing Neutron networking to the VM and then building an overlay on top of that, even if you got rid of the double encapsulation, which I believe some did recently, is that by doing this, each container endpoint, or however many NICs you want to run there, is a single resource that you can target directly with your Neutron commands, and you get much more fine-grained control over your infrastructure and your networking needs.

So, this part is for Gal, because he's more "neutron" than me, so I'm going to let him do it. As I mentioned earlier, Kuryr's mission is to bridge between two communities: the container communities, Docker, Kubernetes, and everyone there, and the Neutron and OpenStack community. What we identified is that there are some missing parts that need to be addressed. Sometimes we actively pursue them and implement them in Neutron ourselves, and sometimes they are already being addressed for other use cases by other members of the community, and we just make sure that what we need for our use cases is handled as well.

We can see a few examples of this here. We are working on port forwarding in Neutron, which is a useful feature even without containers, but it is important to us for feature parity with Docker's port mapping. We are working on being able to allocate tags to Neutron resources; this is important for other use cases as well, but for ours, we want to be able to pre-allocate networks and ports for containers, so you have the entire networking deployment ready, and when you want to start a container, the only thing left to do is the binding part. This decreases deployment complexity and improves container boot time. Another item on this list is that we want to formalize the way to define nested containers; we are going to leverage VLAN tagging for that, and there are already proposals around things like parent ports, which we believe will be compatible with our use cases.
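To picture the VM-side half of that VLAN-tagging model, here is a sketch of what it could boil down to with plain Linux tooling. The interface names, the tag value, and the namespace variable are all illustrative assumptions; the real mechanism being formalized (for example, around parent/trunk ports) may well differ.

```bash
# Inside the tenant VM: carve a VLAN sub-interface out of the VM's single
# Neutron-managed NIC for one container's traffic. Values are examples.
ip link add link eth0 name eth0.102 type vlan id 102
ip link set eth0.102 up

# Hand the sub-interface to the container's network namespace; on the
# compute node, the vendor's agent matches VLAN 102 on this VM's port
# and treats it as its own Neutron port.
ip link set eth0.102 netns "$CONTAINER_NETNS"
```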
There are other Neutron features that we see missing and that need to be done. As I mentioned, when we connect container networking to Neutron, we get all of Neutron's features with zero effort, and Neutron keeps evolving and progressing; new features are added, and these features are important for container networking too. The end goal, of course, is to reduce complexity: instead of reinventing things in other abstractions, we use the Neutron ones. So there are security groups, NAT, advanced services, and of course load balancing and Firewall as a Service; those are the next steps to implement, and there are many use cases for them, for example linking a Kubernetes service to a Neutron load balancer. We can also see the current container abstractions adding more and more features: Docker is introducing pluggable IPAM right now, which Neutron already has as part of Liberty, so we are working on how to connect the two together.

Now for the roadmap. This, obviously, is the roadmap that we took to get to where we are now. There is no Liberty release, because this is in quite heavy development: we are catching up with patches from libnetwork all the time, as the interfaces keep changing. It seems things will stabilize for a while with Docker 1.9, which is about to be released, so that's going to be good, and it's going to give you a chance to try it if you want; I'm going to show you how in the demo. Basically, we put up some specs and had discussions with our own Neutron community and with Magnum to set the basis for the path forward. Then we did the binding layer, which was the absolute must for showing something like the pinging demo, and we also did the authentication, which will eventually allow you to use different tenants: you just use the token of the tenant you belong to, and you naturally get the right access to the networks.

So what are we going to do for Mitaka? Docker's libnetwork added support for a kind of configurable IPAM, some new API endpoints. In the beginning we will probably just use a dummy one, since we already do IPAM as it stands now, but we want to leverage it as much as possible and make the most of it. Then the nested containers. Then the next one, which is actually the top one, is functional testing: using the OpenStack infrastructure to run proper gating for the patches, testing the whole thing, because right now it's a bit manual. We have unit tests, and they are very good, but when you start to pick up speed you need to make sure that everything keeps working, and we want to test the different vendors too, so we have to figure out whether it's going to be third-party CI or something else; it's already in discussion, so we're going to get there. And finally, the features Gal was mentioning that we're pushing for in Neutron. Those features are important and will allow you to make better use of Kuryr, more similar to the one you expect and have come to love in Docker, like the ability to forward ports, so you don't spend a floating IP on every one of your containers, which at some point may get a bit expensive, and not everybody has that amount of floating IPs.

Going forward, for the N release, we're going to enable more and more of Neutron, and what we really want to have as well is support for CNI, the Container Network Interface. And we really want to ask you to join us and contribute: if you have some container orchestration engine that you want to have integrated, please come to IRC and talk with us.
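To illustrate the floating IP point with the Neutron CLI of the time: today, exposing a single container externally consumes a whole floating IP, whereas the port forwarding work would let many containers share one address, Docker-style. The IDs below are placeholders.

```bash
# Current approach: one floating IP per externally reachable container.
neutron floatingip-create public
neutron floatingip-associate <FLOATINGIP_ID> <CONTAINER_PORT_ID>

# With Neutron port forwarding (in progress), the goal is Docker-like
# behaviour such as `-p 8080:80`: many containers multiplexed onto a
# single external address by TCP/UDP port instead of one IP each.
```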
We're probably going to have some kind of integrations directory in the repository, so if a plugin is necessary, that's the right place to put the code, and then we can maybe even build some testing for it, so it's automated and so on.

So, time for the demo; time to put on the heavy gear and get to the serious stuff. Who wants to see it live? How many people? Oh man. Well, it depends on the network, because the Wi-Fi is very spotty. Let's see if I can get to my server on the other side of Tokyo; otherwise, I have it recorded, so at least you're going to see something and get my comments on it. All right, are we in or not? Let's start this. I'm out of the network, but no problem, I have a fail-safe solution. Trying to connect now. Okay, it's not showing; I'm going to move it now, I was just connecting to the network.

All right, so first let's set up an sshuttle tunnel so we have access to Horizon. Fine, I'm already logged in, because otherwise I would forget the password. All right, let's go to the machine. I'm connected with mosh, so even if we lose connectivity, we should be able to continue in a little while, because, as I said, the connection here comes and goes.

Here you can see that we created a subnet pool. The idea is that you can create a subnet pool called kuryr, and one called kuryr6 for IPv6; it's completely up to you whether you use it or not, and right now I'm not going to use it. And you can see that we still don't have any Docker network created for Kuryr, so let's do that: docker network create demo... okay, so "the network was created"; yes, that's absolutely true, I had created it with a regular driver before, so I'll create a different one, it doesn't matter: test, with the kuryr driver. Is it capitalized? Okay, yes, right.

All right, so now when we list them, we can see that a network with a horribly long name has been created; it's the UUID that Docker assigns. We will try somehow to make this nicer. It's there for mapping reasons: the UUIDs are handed out by the different applications, so you cannot use the same one on both sides, which would be nice; and in any case, one is much longer than the other, and you cannot just map one onto the other, because you would not be able to do the mapping in reverse. So, well, what can we do.

All right, so now let's try to run something. We will create first the pingee; I called it test, the network. As you can see, you can, in a single command, create the network, the port, and everything, and start the container, but you have to specify the driver and the name of the network when doing so. Nice, we lost access to the Wi-Fi; just give it a second, it usually comes back. I was expecting better networking here.

So we created it, and now there's a little goblin going and plugging the veth and so on, and now it's plugging it into the overlay, and hopefully after a second or two, if we didn't lose the Wi-Fi, we're going to see it inside. It's taking a while... okay, here we are. So yeah, you see, we are in the network that I created, and we have an IP; if you had added the IPv6 kuryr6 subnet pool, you would get an IPv6 address by default as well. Let's do that again: we're going to start a pingee on the test network, and since that's going to take a while, as experience showed, let's go look at something else for a bit.
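For reference, the whole demo flow amounts to roughly these commands. This is a reconstruction using the names from the stage, the subnet pool step is optional, and the flag spellings should be checked against the Liberty-era clients.

```bash
# Optional: a subnet pool Kuryr can draw container subnets from.
neutron subnetpool-create --pool-prefix 10.10.0.0/16 \
        --default-prefixlen 24 kuryr

# A Kuryr-backed network, and a container attached to it in one go;
# with Docker 1.9, user-defined networks can be passed to `docker run`.
docker network create --driver kuryr test
docker run --net=test -itd --name pingee alpine sh
docker exec pingee ip addr   # the address comes from Neutron's IPAM
```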
Is it possible to see it from where you stand, or should I make it bigger? Bigger, right. All right. So let's see if I remember how to use Chrome. Okay, is it okay now? Cool. So, yesterday Mohammad didn't show this part, because I think it was added recently: if you want to be able to graphically see what you have plugged, we recently added code to show where the bindings are and which of them belong to our Kuryr remote driver. As you can see here, you get the "kuryr container" information, so that means it's ours, so you shouldn't mess with them, I mean delete them, because otherwise you could create inconsistencies; you should just manage them with the Docker commands.

All right, so here we are. This is the pingee, so let's ping it. I'm too lazy to type, so I'm going to copy it. Okay, and we see that the ping works. Big success!

Yeah, so this one in particular, as I said, runs on top of MidoNet. Right now there are some changes to the multi-node code, but it should just work in a couple of weeks, because there was some change in the way that the KV store, the key-value store that holds the network information between the different Docker hosts, propagates it, and Taku, whom I have to thank a lot for the demo, prepared the demo earlier than that.

And now let's see how we can use Neutron to affect all this. Since Gal is much better with Neutron usage than me, I have to use the UI myself, because I really don't know the CLI that well. Are we out of time, or... well, I just want to show the security groups thing. Are we completely out of time? The idea here, again, is that we could use Neutron features to block this ping with security groups, and any other Neutron features, like the floating IPs and the things I mentioned earlier. I think it's important. If anyone has questions right now, you can ask them while I do this.
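For reference, blocking that ping with stock Neutron (the equivalent of what is being clicked through in Horizon here) could look roughly like this. Security groups are whitelist-based, so a group with no ICMP rule drops ICMP by default; the IDs are placeholders, and the SSH rule is only there so you don't lock yourself out.

```bash
# A group that allows SSH but, having no ICMP rule, blocks ping.
neutron security-group-create no-ping
neutron security-group-rule-create --direction ingress \
        --protocol tcp --port-range-min 22 --port-range-max 22 no-ping

# Swap the container's Neutron port over to the new group.
neutron port-update <CONTAINER_PORT_ID> --security-group no-ping
```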
Question: how does this work with Neutron port hot-plugging for Docker? If you don't have a pre-existing interface inside your Docker container and you create a new interface through Neutron? (You can see that the pinging hasn't stopped after what I deleted, by the way.) Excuse me, can you repeat it? So, basically, what we are doing is a veth pair, a virtual Ethernet pair: we have the Neutron port that is created in Neutron, and the only thing that we need to do is bind the namespace of the container, when it's created, to the Neutron port. How depends on the infrastructure, of course, on which backend you use: for example, connecting it to an OVS bridge port, connecting it to a Linux bridge, connecting it to anything else. This is exactly the generic vif binding layer that Toni mentioned. And if you wanted to do something more exotic, like Clear Containers from Intel, or Hyper, which do this kind of VM-container mixture, then as long as they implement libnetwork, the only thing that would have to change is adding some information to the codebase so that, when it's one of those vif types, it detects it and creates a tap device instead of a veth, and that gets plugged in instead. So it would just work, with a bit of hand-holding.

Question on the security groups: for VMs, there is a Linux bridge where the security groups are applied, but for containers, you directly plug the veth pairs into the container, so there's no place to apply iptables. So the question was what would happen with security groups in the OVS reference implementation, which currently uses Linux bridges, with a veth between the Linux bridge and OVS; everybody is familiar with it, right? What happens, I think, is that the vif driver for OVS, when it detects that security groups are enabled, should create the extra Linux bridge and do the two-step plugging that is necessary for that. It should not be too difficult; it should be a couple of lines, and maybe we can make it configurable in a section of the configuration file. It's all a matter of the binding script; there's no reason why it would not work. But I'm not sure anybody is going to want to do that, because from what I heard yesterday, OVS is going to get conntrack support to do all those things, so maybe the normal plugging will be able to do the security groups in that release; I'm not sure, it depends.

Question: for the nested case, containers within a VM, what do the tags indicate? Do they indicate the tenancy of the containers? No, the network. Yes. Okay, very good, thanks.

Okay, so thank you very much, and thanks all for joining.