OK, it's 11:50, so let's get started. Hello, everyone. My name is Fawad Khaliq, and I have with me Tony and Gal. All of us are part of the Kuryr team, and we are trying to bring you one of those important features in OpenStack that I think pretty much all of you have been waiting for. I'm from PLUMgrid, Tony is from Midokura, and Gal is with Huawei; Gal is our PTL for the Kuryr project. So with that, let's get started.

Before we start, by the way, since I can't see any of you from here, I'd like to do a quick survey: how many of you have used Kuryr so far? No one. Oh, we got a couple of people there, nice. And how many of you have used Magnum? Nice, we have around 20 people. That's good. Great, let's get started. This is us, and if any of you have questions later on, please feel free to reach out to us.

The agenda for today's presentation: we'll start with a brief introduction to Magnum and Neutron, because we'd like to cover the integration of Neutron, Magnum, and Kuryr, and I'm not assuming all of you already know all of these pieces. Those of you who have a background in some of them, please bear with us. Then we'll go over the problems that exist today in networking for nested containers; the solutions that exist today have some limitations, and Gal will cover that. After that, I'll go over the design for how this problem is being addressed in the OpenStack community right now, the proposed solution, and how it is being implemented. Then Tony will cover the capabilities you get with this design, the limitations and considerations you should be aware of, the current status of the work so far, and the next steps on the Kuryr roadmap for this project. And then there's Q&A for you to ask any questions, if you have any.

So with that, let's go over a bit of an introduction to Magnum. Magnum is containers as a service for OpenStack. The way Magnum works is that it uses your OpenStack components, Nova, Neutron, Cinder, Glance, to provision container orchestration engines, things like Docker Swarm, Kubernetes, or Mesos. It has this notion of bays and bay models: a bay is like a node where your container orchestration engine runs, and that is capable of provisioning your containers, their networking, et cetera. In this figure you see a couple of Nova instances, both of them running containers; in this example it's a Docker Swarm bay provisioned by Magnum. Similarly, if you're running Kubernetes, you have a Nova instance, and the only difference you see here is that there's a pod, which is a logical entity, the minimal deployable unit for Kubernetes. Magnum is capable of provisioning all of these using its APIs.

Now, a high-level architectural overview of Magnum. This is important background for showing how the integration is actually going to work, so please pay close attention. You have the Magnum client, which talks to the Magnum API. Then there's a component of Magnum, the Magnum conductor, which communicates with OpenStack Heat and also with the container orchestration engines themselves, like Kubernetes and Docker Swarm. So you have Heat and the Heat templates, which Magnum provisions.
It uses Nova, Neutron, Cinder, and Glance to launch the infrastructure-layer VMs, and then Magnum goes on and provisions the container orchestration engines on them. At a high level, all of this goes through Heat templates and cloud-init to provision the infrastructure for containers, say Kubernetes or Docker Swarm. Once you have Docker up and running in one of these Nova instances, you're good to go and able to launch containers. That's what Magnum provides you today as part of OpenStack.

A bit of an introduction to Neutron. I'm pretty sure it doesn't need one, so thirty seconds: it's networking as a service for OpenStack, and it provides you with the ability to provision rich network topologies using different pluggable backends. It's technology agnostic, and it's extensible: if you want to add your own features, you can add your own APIs and functionality as part of it. It also gives you the ability to use advanced services like load balancing, firewalling, et cetera. The goal here is that Kuryr, Magnum, and Neutron come together and provide networking for containers. At this point I'm going to hand it over to Gal, who's going to cover the introduction of Kuryr and the problems we have with container networking so far.

Thank you, Fawad. Can everyone hear me? I can't see, so there's no way to tell. We had a Kuryr introduction session this morning at 9 o'clock. I'll do a very brief introduction for anyone who is new to the project, but if you want more details, it's better to watch that presentation. So what is Kuryr? This sentence describes it quite well: we recognize that users and deployers of OpenStack are starting to deploy, side by side or inside it, this new thing called containers, and they prefer to do it with their own orchestration engines and their own networking models, like CNI for Kubernetes and CNM, libnetwork, for Docker. All of these are evolving, experimental, quite different from each other, and still new in terms of the networking features and services they offer. Kuryr's mission is to take all of this and bridge it to OpenStack, to OpenStack networking, to Neutron. We realize that Neutron is relatively mature, relatively production grade; it has the richness of all the solutions and features that are tested, covered by CI, and developed by a large community. We use Neutron and the OpenStack advanced services to implement networking for containers.

You can see here that if we look at the Neutron abstractions and the libnetwork abstractions, they are quite similar. Networking is evolving; we're starting to put applications in the middle, and we can see this for containers and also for VMs. But if you look at it at a high level of abstraction, we are still connecting endpoints to networks, and ports to networks in Neutron; we have the same concepts. So if we are running these mixed environments together, why have to correlate and connect several solutions when we can do it with one?

Kuryr is, of course, fully open source and part of the OpenStack Big Tent. We do everything the open source way. We have a weekly IRC meeting, at alternating times, since we have people in Japan, in Europe, and in the US, so you have no excuse: some time slot is good for your time zone. We do everything, from design to implementation, the OpenStack way.
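To ground the comparison Gal draws between the libnetwork and Neutron abstractions, here is a minimal sketch of the Neutron constructs a container network and endpoint can be mapped onto. This is an illustration, not part of the talk: it assumes a Keystone token and the Neutron endpoint URL are already available, and the names and CIDR are made up.

```python
# Minimal sketch: the Neutron constructs Kuryr maps container networking onto.
# Assumes a pre-fetched Keystone token and the Neutron endpoint URL (both
# placeholders here); the names and CIDR are illustrative.
import requests

NEUTRON = "http://controller:9696/v2.0"                 # assumed endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",           # assumed token
           "Content-Type": "application/json"}

# A container "network" becomes a Neutron network plus a subnet...
net = requests.post(f"{NEUTRON}/networks", headers=HEADERS,
                    json={"network": {"name": "container-net-1"}}
                    ).json()["network"]

requests.post(f"{NEUTRON}/subnets", headers=HEADERS,
              json={"subnet": {"network_id": net["id"],
                               "ip_version": 4,
                               "cidr": "10.10.0.0/24"}})

# ...and a container "endpoint" becomes a plain Neutron port on that network,
# with its IP coming from Neutron IPAM.
port = requests.post(f"{NEUTRON}/ports", headers=HEADERS,
                     json={"port": {"network_id": net["id"],
                                    "name": "container-endpoint-1"}}
                     ).json()["port"]
print(port["id"], port["fixed_ips"])
```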
Kuryr works, on one end, with the container communities: Kubernetes, Mesos, and Docker Swarm on one side, and also individual container runtimes like Docker and rkt, bridging all of these to OpenStack. But we are also working on the OpenStack side, on features that are needed for this connection. One example is network tags, which is an important feature for us for attaching to existing networks and so on. And we also work with the other OpenStack container-related projects, like Magnum and Kolla, to bring full integration.

This is a quick overview of the components. We already have full integration with Docker libnetwork and Docker Swarm, including the pluggable IPAM for Docker; we know how to map all of this to Neutron IPAM and Neutron constructs. Tony showed our Kubernetes integration this morning, which I think is quite an exciting design that you should all get familiar with. We are able to enhance container workloads today with Neutron constructs, with advanced services, with security, and all of these things. We are also building features that help deployers and users consume containers. For example, a feature that Mohammad presented this morning lets you attach to existing networks: you have a network with VMs on it, you can attach your container workloads to that same network, and have a mixed environment controlled with Neutron, for VMs, for containers, for nested containers and VMs.

So what are the problems we see in the realm of nested containers? A quick introduction: nested containers are all about isolation. We want tenant isolation between our containers, so we run them inside the tenant's VMs. And these environments, and this is what Fawad mentioned about Magnum, are too complicated right now. First, they require two networking solutions just to connect two containers, as I will show in a moment. They make it very hard to enforce policy constructs, to enforce isolation, to enforce the things we need in networking, because of this complication. And of course there are the usual concerns of performance and management overhead.

Here you can see a quite common setup for nested containers. We have tenant VMs, and the tenant deploys containers inside those VMs. Usually there is a Neutron solution doing the networking between the VMs, and then a whole different solution inside the VMs just to connect the containers. So you can imagine the lifetime of a packet in this environment, going through all of these layers. Performance is one concern, but it can be mitigated; you could use something like Flannel in host-gw mode, and there are other solutions. As I mentioned earlier, to me the most critical problem appears when you look a step further: how do you orchestrate these environments? How do you deploy them? How do you manage and monitor them? I've spent some time monitoring and debugging virtualized environments lately, and it's complicated enough with one solution. Now you have two solutions just to connect two containers, not to mention how you do upgrades and updates of these environments, and things like that. It just makes things complicated. And as Fawad and Tony will soon show, we already have solutions in Neutron that can handle these use cases, that can solve this with one infrastructure and simplify the environment.
Kuryr's mission in this area is to expose all of these solutions to the user and simplify all of this. This is just a quick example showing some of the challenges of enforcing policy in these environments. Here we have three VMs, owned by a single tenant and connected to a single Neutron network. Now think about what happens when the tenant tries to deploy containers from different networks inside those three VMs: we get a mix of networks and a mix of responsibilities, which makes policy enforcement and security a big challenge. I'll hand over to Fawad now, who is going to describe the solution we have with Kuryr.

Thanks, Gal. Let's go over the solution we propose for the problem Gal just described. Before we get there, let's discuss some requirements and use cases in this area. The first requirement is that, since Magnum deploys these containers inside VMs, you want one network connecting your VMs, containers, and bare metal, without having to worry about multiple layers. The other requirement is that policy enforcement has to come from one plane. Another is that, for nested networking, the components providing networking for containers should be the ones already available in OpenStack, not something coming from outside. And there are many other requirements we could go through; that's the requirements point of view.

From the use case point of view, let's go over some of the ones listed here. A container running inside a VM, or on bare metal, should be able to communicate with another container running inside another VM or on bare metal, on the same or a different host, and that communication should work seamlessly over Neutron networks. You shouldn't have to worry about anything other than Neutron to provision networking; no separate networking, no separate planes. The next one is that your containers inside VMs, or your containers on bare metal, should be able to communicate with your virtual machines. The example here is hybrid workloads, where you have some containers and some VMs; we have talked to operators who have lots of use cases in this area, and this is one of the main reasons they want a single networking plane. Another case is that you have physical boxes behind this, maybe managed with Ironic or something else, and you have nested or bare metal containers, and you want communication across both; that's another use case being covered here.

Then Neutron becomes the first-class citizen, the one networking layer for these containers, and you get all the capabilities Neutron provides in this design. As Gal and Tony mentioned in the earlier Kuryr session, Neutron offers so many features that are well tested and running in production with specific backend implementations. We already know they work, and they have production mileage. Why go down a new path? You should leverage something that has been running in production for two years, and that's where you get all the features.
The other benefit, another use case here, is that when new features are added to Neutron, say QoS or Tap-as-a-Service, you will want to use them. How would you use them? You don't have to add something new in Kuryr, or something new for container-specific networking; you just leverage them. All of those benefits carry over as part of these use cases.

Another important point is micro-segmentation, or policies: how do you enforce them? This is where Neutron security groups come into the picture. You don't have to worry about going inside the VMs and maybe setting up iptables or something else to provide your policy, which would be different from your VMs, and different again for the bare metal and virtual machine cases in the three points above. All of that is taken care of with consistent policy enforcement. And of course the advanced networking capabilities that are part of Neutron are also leveraged as part of these use cases. All of these use cases are addressed by the solution we propose here in this session.

Now, let's talk about how this is going to work. In the diagram you're looking at here, we have two Nova instances, both connected to a port, and you see four Neutron networks: network one, network two, three, and four, all connected to a router. You have containers inside both Nova instances: container one and container two in the left Nova instance, and container three and container four in the right Nova instance. Referring back to the slide I showed on Magnum: Magnum provisioned these Nova instances using its APIs and Heat templates, and it provisioned, say, Docker Swarm or Kubernetes inside them. Now these containers are being provisioned, and they need connectivity from Neutron, not from another layer on top of it.

The way this works is that there is a new feature, an extension, being added to Neutron called trunk ports, or VLAN-aware VMs. We leverage that feature to provide networking for nested containers. You have these white ports on the Nova instances; these ports are responsible for providing networking to the Nova instances themselves, and a Nova instance can be a VM, a container, or bare metal. Then you see the little blue ports with VLAN tags associated with them; these are the ones that provide networking to the containers inside. So, as a logical diagram: container one is attached to the port with VLAN 100 on the left instance, container two to another port with VLAN 200 on the left instance, container three to VLAN 400 on the right instance, and container four to VLAN 100 on the right instance. I put VLAN 100 there twice intentionally, to show that the VLAN does not have to be unique across instances; the uniqueness of the VLAN only matters within that particular port. That white port is called a trunk port, and I'll go over the details of what it is and how it really works. And the goal here is that when you provision these ports, they are standard Neutron ports.
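Because these are plain Neutron ports, the regular security-group workflow Fawad mentions for consistent policy enforcement applies to container ports unchanged. A minimal sketch, assuming a Keystone token and the Neutron endpoint are already available and using a placeholder port ID:

```python
# Sketch: a container endpoint is just a Neutron port, so the usual
# security-group API applies to it the same way it does to a VM port.
# Endpoint, token, and the port ID are placeholders/assumptions.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Create a security group and a rule allowing TCP/80 into the containers.
sg = requests.post(f"{NEUTRON}/security-groups", headers=HEADERS,
                   json={"security_group": {"name": "web-containers"}}
                   ).json()["security_group"]

requests.post(f"{NEUTRON}/security-group-rules", headers=HEADERS,
              json={"security_group_rule": {
                  "security_group_id": sg["id"],
                  "direction": "ingress",
                  "protocol": "tcp",
                  "port_range_min": 80,
                  "port_range_max": 80}})

# Apply it to the port backing a container, exactly as you would for a VM.
requests.put(f"{NEUTRON}/ports/<container-port-id>", headers=HEADERS,
             json={"port": {"security_groups": [sg["id"]]}})
```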
To give you a sense of what a standard Neutron port means: with L2 Gateway you provision a port that's connected to a physical box, and that's a Neutron port; with Tap-as-a-Service, the port you send your mirrored traffic to is another Neutron port; you provision a container and you want networking to go into that container, and that's your Neutron port too. So in the end this is a standard Neutron port, which means that for upgrades, or anything else from that perspective, there's nothing new from an operator's point of view. You provision a Neutron port, you associate it through the trunk port API, and boom, your networking for containers is up and running.

If you go to the bottom of the picture, you have these four networks. They were provisioned through the Neutron APIs, or through Kuryr using the libnetwork driver, or, say, through the kubelet with the Kuryr CNI driver. You provision them, they leverage Neutron, and they're up and running. At this point the lower layer, the base networking, is coming from Neutron, and you have this topology up and running. Now let's say these two containers need to communicate with each other. The packet from the container arrives at the VM tagged, in this case, with VLAN 100. The networking implementation figures out that there is a mapping from that VLAN to a particular network, strips off the VLAN, and puts the traffic onto whichever network it's supposed to go to. Say VLAN 100 maps to network one, and VLAN 200 on the left instance also goes to network one: then those containers are able to communicate with each other. If they want to communicate with the Nova instance on the right side, they can do that as well, because network one is connected to the Neutron router, and since everything is just a port at that point, it doesn't matter: everything else is standard and works the same way it does today. As soon as this onboarding through the VLAN has happened, everything else is standard Neutron; there's nothing special and nothing new happening there that isn't happening today. On the right side, similarly, you have VLAN 400 and VLAN 100 for the two respective containers, attached to their Neutron networks, and they are able to communicate.

One thing to note is that the two Nova instances themselves are attached to networks three and four respectively, and networks three and four are again Neutron networks. The dotted lines from the white ports to network three and network four show that those are the trunk ports, while the other ones carry VLAN tags. You could also have the VM port connected to network one, which means there are no restrictions; it's completely flexible from that perspective. Going back to basics, the takeaway from this slide is that this is just a Neutron port: attach it wherever Neutron allows you to, and you'll have connectivity using the design we're proposing as part of the Kuryr, Magnum, and Neutron integration.

With that, let me go over how Neutron trunk ports work. A trunk port is a logical entity; it's not really a separate, special port. Let's take an interesting scenario: you have a Nova VM with a port, say port zero, and it's already up and running.
Now what you do is use the trunk port, or VLAN-aware VMs, Neutron extension and say: I want to make port zero a trunk port for this VM. As soon as you do that, you see this dotted box, this rectangle, appear around it. Now you are able to add additional ports to this trunk port as subports, and you can specify their VLAN ID and other metadata. Right now we have VLAN support there, and maybe in the future more segmentation types will be available as well. So let's say you now have port one and port two also associated with this particular trunk port: port one connected to network one, port zero to network zero, and port two to network two. As you can see, the mapping really happens based on the VLAN. You have one port and one Nova instance; the Nova instance is running containers inside, and the container-to-network communication happens through this mapping, through the trunk port, which carries information like the VLAN ID plus all the information a standard port has: MAC address, IP address, owner, et cetera.

One more thing to note, because some of you must be wondering who owns these ports: normally when you provision a port there is a host binding. The host binding here actually comes from the parent port, which in this case is port zero. Port zero resides on one of the compute nodes, and that host binding is inherited all the way down to the containers, because those containers, in the end, also live on the same host. So to summarize: these ports are combined into one VIF, port zero becomes the trunk, port one and port two become the subports of that trunk, and this is how the communication flows. This is new, so if you have any questions, we'll be happy to take them.
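For reference, the trunk-port workflow just described maps onto the Neutron trunk extension's REST API roughly as follows. This is a sketch, not part of the talk: the port IDs are placeholders, the endpoint and token are assumed, and the VLAN values mirror the slide, with port zero as the parent and port one and port two as subports.

```python
# Sketch of turning an existing VM port into a trunk and adding subports
# via the Neutron trunk extension. Port IDs, endpoint, and token are
# placeholders; VLAN IDs follow the example in the talk.
import requests

NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "<keystone-token>", "Content-Type": "application/json"}

# Make the VM's existing port (port zero) the parent of a trunk.
trunk = requests.post(f"{NEUTRON}/trunks", headers=HEADERS,
                      json={"trunk": {"name": "vm1-trunk",
                                      "port_id": "<port0-id>"}}
                      ).json()["trunk"]

# Add port one and port two as subports, each carried on its own VLAN
# inside the VM; the VLAN only needs to be unique within this trunk.
requests.put(f"{NEUTRON}/trunks/{trunk['id']}/add_subports", headers=HEADERS,
             json={"sub_ports": [
                 {"port_id": "<port1-id>",
                  "segmentation_type": "vlan", "segmentation_id": 100},
                 {"port_id": "<port2-id>",
                  "segmentation_type": "vlan", "segmentation_id": 200}]})
```

In the nested-container case, the calls above are what Kuryr issues on the user's behalf rather than something the tenant drives by hand.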
Now, from an operator's perspective, how does this actually work in the whole picture? You have Magnum, you have Neutron, you have Kuryr, and you deploy, say, Kubernetes or Docker Swarm, and you want to see how it all fits together. Let's say you're the user. As the user you talk to the Magnum client and say: I'm going to provision a bay model of type Docker Swarm, and I'm going to use this bay model to provision a bay on a couple of Nova instances with the Swarm manager and Swarm agents. All of that communication happens and the bay is provisioned on the Nova instances. This is a logical diagram, by the way; I'm showing everything in one box on a single Nova instance. So you provision Docker Swarm, which brings up the Docker daemon. The point is that when Magnum launches these Nova instances, it knows it now needs to provide networking from Kuryr and Neutron, so it provisions the Kuryr agent as part of these Nova instances, and as soon as the Kuryr agent is there, it is capable of doing all the things we talked about in the last two slides.

At that point the Docker daemon, in this case we're using Docker Swarm, so it uses libnetwork, talks to the Kuryr agent through the driver implementation, which is a remote driver. As soon as you go to Docker Swarm and start provisioning containers, or you provision a network first and then start provisioning containers, the Kuryr agent takes the information about the container, knows what IP to assign, and, one more piece of information, it is intelligent enough to figure out which VLANs are available. It allocates one of those VLANs and makes sure that VLAN information is handed to Kuryr, the service running on one of the nodes, which then talks to Neutron using the trunk port extension. Kuryr goes and adds a subport to that particular trunk, so that the traffic for that container starts flowing on that particular Neutron network.

Let me go slower; I'm pretty sure I went too fast there. The thing to note is that the Docker daemon runs here with libnetwork and talks to the Kuryr agent. The Kuryr agent's responsibility is simple: find a VLAN that is available, create a veth pair, and attach one end to the container inside. That's it. At that point it hands the rest of the job over to Kuryr, which talks to Neutron with enough metadata to go back to this instance's trunk and create a subport. Then both ends come together and attach to each other, and the containers running inside the Nova instance are able to talk over Neutron networks. One thing to note is that these VLAN IDs are not meant to be exposed to the user, so you don't have to worry about which VLAN to use or not to use; there will be a VLAN, or segmentation, allocation engine responsible for allocating them internally and making sure there are no conflicts and that they are unique within their scope, which in this case is the trunk port. With all of that, you'll have networking up and running for the containers inside these Nova instances, in a way that's ready for production-grade container networking in OpenStack.
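As a rough illustration of the in-VM plumbing the Kuryr agent is described as doing, find a free VLAN, create a veth pair, and attach one end to the container, here is a hedged sketch using plain iproute2 commands. The interface names, the use of a named network namespace for the container, and eth0 as the trunk NIC are all assumptions, and the wiring between the veth and the VLAN subinterface (a bridge or equivalent) is elided.

```python
# Rough sketch of host-side plumbing inside the VM for one container:
# a veth pair whose peer goes into the container's netns, plus a VLAN
# subinterface on the trunk NIC so traffic leaves the VM tagged with the
# subport's segmentation ID. Names, netns handling, and "eth0" are assumed;
# connecting host_if to the VLAN subinterface is intentionally left out.
import subprocess

def sh(*cmd):
    subprocess.check_call(cmd)

def plug_container(container_netns, vlan_id, trunk_nic="eth0"):
    host_if = f"h-veth{vlan_id}"      # end that stays in the VM
    cont_if = f"c-veth{vlan_id}"      # end pushed into the container

    # veth pair between the VM and the container's network namespace.
    sh("ip", "link", "add", host_if, "type", "veth", "peer", "name", cont_if)
    sh("ip", "link", "set", cont_if, "netns", container_netns)

    # VLAN subinterface on the trunk NIC for this subport's VLAN ID.
    vlan_if = f"{trunk_nic}.{vlan_id}"
    sh("ip", "link", "add", "link", trunk_nic,
       "name", vlan_if, "type", "vlan", "id", str(vlan_id))

    sh("ip", "link", "set", host_if, "up")
    sh("ip", "link", "set", vlan_if, "up")
    return host_if

# Example: the allocator handed out VLAN 100 for this container's subport,
# and "cont1" is an already-created named netns for the container.
# plug_container("cont1", 100)
```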
And with that, I'm going to hand it over to Tony to talk about the capabilities, the considerations, and the next steps.

Yeah, thank you. We covered a lot of ground, so I expect you'll have a few questions, and I'll try not to take too long. Fawad covered the trunk and the subports. One thing you have to take into consideration, when you're given a tool like this, which is quite powerful and allows painless maintenance of the ports your containers get, is that each of our integrations is different: for Swarm, as he detailed, there is a Kuryr agent in the VM that talks to a Kuryr service outside, and so on; the Kubernetes one looks different, and the Mesos one looks different. So for operators who want to provide these different services, it is important that we allocate the segmentation IDs in a consistent way, so that when you want to inspect things, it's easy. The Neutron spec and the first implementation patches give a pretty good idea of how you can consult that information.

If you want to reach out, I can give you more information about that later, but it manages the resources very well for you, so if you need to manually delete something, or you want to manage that yourself instead of with Magnum, that is something you can do. The good thing about this solution is that it gives you further isolation. If you're in a Magnum-managed or manually managed nested environment, the VM may belong to a tenant, and the tenant may have several users for the containers. The tenant VM, which may hold logs or other things you don't want to be seen, no longer needs to be exposed with a floating IP, because floating IPs will be able to be assigned to the containers themselves, in the future maybe even with port forwarding, so that a single floating IP can serve several ports.

Another good thing about the spec is that it gives freedom in terms of the segmentation technology. Right now it's VLAN, but if that were problematic for your environment, or a vendor found a better way that takes fewer resources or gives you more than 4096 segments per port, it should be relatively easy to add: it's just adding an integer, plus the vendor supporting it. There is always a bit of a concern: what happens if some vendor implements one type and not others, am I a bit locked in? What we expect is that the VLAN one is going to be the standard that everybody implements, so that shouldn't be too big of a concern. Another concern with the VLAN usage is that nowadays you could already have different VLANs going out of your VM, and you may say, OK, but I want my tenant VM to be able to keep sending that traffic just like it was doing until now. The answer is that you should be able to define some VLANs that you don't want the allocation to take from you, and the good news is that that VLAN traffic is just going to keep working exactly as it is; that is mandated by the specification.

Now, the limitations; I have to bring a bit of bad news. The policy is applied at the host level right now, which means container-to-container communication inside the same host, even between containers in the same VM, goes out to the host through the tap device and so on; it takes an extra copy through the host kernel, which gives the Neutron agent a chance to do something with it and apply policy, which is what we want. In the future we may use some extra technology, like BPF, to give a lighter-weight policy inside the VM, so that container-to-container traffic within the same VM doesn't need to egress the VM; but of course that depends on how much of a pain this is for the operators, and we would really welcome your feedback on that. As I said before, it's only VLANs for now, though I suspect that when Newton comes around we're going to have something more. Also, since the VM is running the containers: what if a container is broken into? It could change the VLAN that it is plugged into, and that would actually move it to another subport. So that is a concern; even though this gives you more isolation, you should still use container isolation technologies like SELinux or the like. Otherwise, well, at least the networks it can reach will only belong to the same tenant, because creating a subport for a different tenant on your tenant's port is something only the admin can do; but it's something to keep in mind. Finally, the logging.
For the components we're going to have, the Kuryr agent that does the translation and the service that creates the Neutron resources will usually be deployed on an admin-owned VM, and it's a bit of a pain for the admin to have to go into the VMs of each tenant deployment and collect the logs from there. So that's an area where we want some input: would people prefer that we build in support to export those logs, or is that something people prefer to handle themselves? Also, for now this will make you choose an implementation other than the reference implementation, because OVS, as far as I know (if somebody has more current information, please correct me), doesn't support QinQ, which is necessary for doing this. So that's another concern; you would probably still be able to use ML2 with one provider for this and another provider for the rest of your deployment if you really are running OVS.

The current status is that the spec was approved on both sides: Russell Bryant did the one on the Neutron side, and Fawad did the one for the Kuryr side, so everything is ready to go into implementation. The Neutron people have already started, their patches are posted with good reviews, and it's going to be there really soon. The Swarm integration is completed; we usually go first for the bare metal integration, just to get all the basic infrastructure to integrate with the orchestration engine, and then we move that into the nested world. So the nested Swarm part, that split Kuryr agent design that Fawad presented, still needs to be done, but we have already worked out the whole path for the libnetwork driver, and we have that library ready for us to use. Kubernetes, as I showed earlier, already works, but it needs to be upstreamed, and it will probably have a very similar design to the Mesos one. In the design sessions we're going to have later, we're going to cover that and try to come up with a model that can be shared between the different integrations, one that allows you to reason in a very similar way, like using CNI where available and something else in Swarm, and so on, so that it gives you a sense of familiarity.

The next steps, which you can help define by attending the work sessions, are: make sure the Neutron trunk port implementation gets merged so we can start to use it; finish the bare metal integrations, where we're on a good path but we welcome any contribution; and then make more resources available. For example, for the Swarm integration you can now choose existing networks, as we said, but maybe you want that when you create a Docker network it automatically links you up to a specific Neutron router, or that you can tag the containers when you start them with Swarm and say, I want this container to serve an endpoint that I have behind a Neutron load balancer. You should be able to do that, and these are small things, driven by requests we get from the community, that will empower the usage but are currently not there. And we need to make the Magnum pieces, the Heat templates, images, and so on, so that, for example, the Swarm one will have this small Kuryr agent, the Mesos and Kubernetes ones will have the API watcher, and the worker nodes will have the CNI driver. That's basically it; there is also this small piece about the administration VM, about the logging and so on, but I think that should be relatively smaller than the other ones.
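To make the completed Swarm/libnetwork path concrete from the user's side, here is a hedged sketch of creating a Docker network backed by the Kuryr remote driver and launching a container on it, using the Docker Python SDK. It assumes the kuryr-libnetwork driver and its IPAM driver are installed and registered under the name "kuryr"; the network name, subnet, and image are illustrative.

```python
# Sketch of the user-facing flow: a Docker network whose backend is the
# Kuryr remote driver (and so a Neutron network), then a container on it.
# Assumes the "kuryr" libnetwork and IPAM drivers are installed; the
# subnet and image are made-up examples.
import docker

client = docker.from_env()

net = client.networks.create(
    "kuryr-demo-net",
    driver="kuryr",                       # Kuryr remote driver -> Neutron network
    ipam=docker.types.IPAMConfig(
        driver="kuryr",                   # Kuryr IPAM -> Neutron subnet/allocation
        pool_configs=[docker.types.IPAMPool(subnet="10.20.0.0/24")]))

# A container attached to this network is backed by a Neutron port
# (or a trunk subport in the nested case) rather than an in-VM overlay.
container = client.containers.run("nginx", detach=True, network=net.name)
print(container.name, "attached to", net.name)
```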
So you're welcome to ask any question to any of us, and we'll try to answer to the best of our ability.

All right. A couple of questions. One is on performance: every time a container is created you'll have to create Neutron ports and subports, and you don't want to slow down the creation rate. Have you given any thought to the performance?

So first, there is the feature I presented: right now it's only for networks, but we can basically bind to already-created ports. So you could have a pool of ports pre-allocated and pre-created and then only do the binding part. We haven't yet gotten into testing the latency and what it means to create a port in the system, but we'll definitely need to do that. And in the bare metal case, from the Kubernetes experience, we found that it adds almost nothing: yes, the CNI driver has to get some information, so it has to go to the API to request that extra information, but more than the time that takes, the concern for me is that it creates a potential bottleneck, where you might have to add more API nodes than you would otherwise, because you have this extra request coming from the worker nodes. But time-wise it's very negligible.

Okay, I'd like to understand the pre-allocation thing later. Sure, sure.

Any comment on the status of Ironic networking, and is that ready to work with Kuryr?

Well, we haven't explored that area yet, but it's definitely something we want, and I believe that if the Ironic integration with Neutron is good enough, I don't see a reason why it won't work with Kuryr just as well; but it's definitely something we are going to explore. Yeah, first we're targeting bare metal that is not deployed by Ironic, but we need more people who are running Ironic to join us, give us their use cases, and show us the example deployments they are doing, and then we'll work it into our roadmap. Thanks.

Question over there? Two questions. Would this same concept work for nested VMs, if you have VMs running... You mean a VM in a VM? Yeah, it would be very similar. And do you have to use Kuryr and/or Magnum to do this, or can you manually assign these subports to the VMs?

So, Kuryr is for containers. If you're running nested VMs, the good thing is that the Neutron trunk port implementation is agnostic of that; we are just a user of it, so you could actually use it and build your own automation around it. Yeah, just a second: we have people who don't use Magnum and are planning to script it themselves. The API is really nice and easy to use, and it prevents you from doing bad things like deleting a trunk that still has subports.

A couple of questions. Are subnets actually shared with other, non-container networks, or do you have to have an exclusive network and subnet for containers?

There is no restriction. No restriction, so you could use the same network for containers as well as VMs, and they can talk directly to each other. Yes, this is one of the powerful things Kuryr brings, and it's a use case we heard about quite a lot.

Okay, so my other question is, does that also apply to security groups? Can we... Correct, sure, go ahead. All the Neutron resources. Yes.

So one thing we have, for example: in Kubernetes, which is what I'm dealing with the most, the network policy is still being built in. It's not yet released.
So in the meantime, so that people can use security groups and have safer deployments, what we do is allow you to specify a security group as an option. When you start something with docker run, or in a Kubernetes pod definition, you are able to say, I want this security group UUID, and Kuryr takes care of that for you. Okay, thank you.

Yes, about the security groups: are the security groups per port, I mean per subport, or per container?

They are per port, which means that the subport and the trunk can have different security groups. One of the biggest reasons why we're doing this, apart from all the ones that were listed, is that it's important that all the networking endpoints you have, or that your tenants and users have, are something you can target with the Neutron API, so you can add something like a security group to a specific port that belongs to a container; with the current state of things, you just cannot do that.

If there are more questions... that's all, I think we are out of time anyway. For any questions, please reach out to us, and thank you so much for coming. Yeah, thanks a lot. Thank you.