Imagine you walk into a bespoke tailoring boutique to order a suit. The tailor welcomes you in, gets you comfortable, and starts showing you one fabric after another, talking about the advantages of one, the disadvantages of another, telling you how soft this one is, how comfortable that one is, how well it does in the cold, so on and so forth. Dear fellow OpenStackers, open-infra experts, and technologists in general, welcome to our talk on tailoring your Kubernetes suit with the best networking fabric.

Today I have with me a few other tailors, or designers if you will. The first is Dinesh, who is a solution architect with Ericsson, and then I have... Hello, my name is Nishant Kumar. I work for Ericsson, and I am also a developer on the Airship project, and I am sure you have heard about the Airship project a lot. And I am Uday. I work with Emirates NBD in Dubai, and I used to be part of Ericsson earlier as well.

So what are we going to cover? Which fabrics are we going to cover? Flannel, Calico, Canal, Weave Net, and Kuryr, followed by a comparison of what each of them does and does not offer, and we will take it forward from there.

Before we start, an introduction to container networking. There are various networking solutions, and there are various container solutions and orchestration services as well: Kubernetes, rkt, so many terms come to mind. To have all of their implementations standardized is how the CNI came into being. The CNI specification is essentially an interface between the container runtime and the networking implementation: the CNI sits between the container runtime and the network plugin, and each solution, whether it be Weave Net, Calico, Canal, or Romana, has its own CNI implementation based on the specification. What are the basic requirements? IP address management, route advertisement, and being able to communicate and have connectivity across the container network.

So let us look at a sample CNI configuration. This one is for Calico, and you can see the backend, the policies that can be implemented, and so on. If you have attended other talks, some of this may be a repeat for you, but for those who are new, it is worth going and exploring.
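As a point of reference, here is a minimal sketch of what such a CNI configuration file can look like, assuming etcd as Calico's datastore; the endpoints, paths, and file name are illustrative placeholders rather than the exact contents of the slide.

```bash
# A minimal sketch of a Calico CNI configuration; endpoints and paths
# are illustrative placeholders, assuming etcd as the datastore.
cat <<'EOF' > /etc/cni/net.d/10-calico.conf
{
    "name": "calico-k8s-network",
    "cniVersion": "0.3.0",
    "type": "calico",
    "etcd_endpoints": "https://127.0.0.1:2379",
    "ipam": {
        "type": "calico-ipam"
    },
    "policy": {
        "type": "k8s"
    }
}
EOF
```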
To start off with the first fabric, I will ask Nishant to take it forward.

Thank you, Uday. So let's talk about Flannel. Flannel was developed by CoreOS for networking within Kubernetes, but it can also be used as a general SDN solution for other purposes. It is fairly easy to set up an L3 network fabric for Kubernetes using Flannel. The idea is that we create another network over your host network; this network is called the overlay network. The pods that come up belong to this overlay network, and packets are routed within this Flannel network, which is created by the daemon process Flannel runs. Flannel uses etcd as a store for network information. It can also talk directly to the Kubernetes API server, in which case it does not need a separate etcd just for Flannel. And as I said, it runs a daemon process on each node which looks up the Flannel configuration settings, sets up the Flannel network, and routes packets according to the configuration.

Flannel supports different backends. VXLAN is the recommended backend because of its high performance. Apart from VXLAN, you have host-gw and UDP, though UDP is suggested mostly for debugging and should not be used in production. There are some cloud-provider backends as well, such as AWS VPC, GCE, and Ali VPC, but they are in the experimental phase. Flannel does not support network policy.

Let's try to understand the configuration and leasing process within Flannel. While I was talking about the overlay network, the questions that arise are: what is the network our cluster is going to use? Which subnets will the nodes be assigned? How will the pods get their IP addresses, and within what range will Flannel set this up? This is a sample configuration file for Flannel. It is in JSON format, and the most important thing here is the network address: this address becomes your overlay network address. The subnet length is given as 24, though you can override it with any value, and the backend is given as VXLAN.

Suppose we add a new host, host A. It will have a flanneld daemon process running within it, which reads this configuration file and tries to acquire a subnet address, in this case 10.100.5.0/24. When flanneld registers this address in etcd, it becomes a key, and if the registration succeeds, the host gets this lease for 24 hours. An hour before the lease expires, flanneld will try to renew it. If we add another host B, it will likewise take its subnet out of that bigger network address space. This is how Flannel manages things: it gives each host a different subnet address, and that is how it routes packets. And it is Flannel's responsibility to make sure each host holds its subnet lease for the required amount of time.

Now, since each host has a subnet address, each host will have its own public IP address as well. So it is very important for Flannel to keep that mapping, and as we can see below, it stores this mapping in etcd. If we look via the etcdctl command at a subnet such as 10.5.34.0/24, we will see its host's public IP maintained there. Flannel therefore has an overview of the whole cluster: it knows which subnet is associated with which public IP address, and thus it knows how a packet can be routed from one node to another across the complete cluster.
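In command form, the configuration and leasing flow can be sketched like this; the etcd key prefix is flannel's default, and the addresses and outputs are illustrative.

```bash
# Push the Flannel network configuration into etcd (flannel's default
# key prefix); the network and backend values are illustrative.
etcdctl set /coreos.com/network/config \
  '{"Network": "10.100.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

# After flanneld starts on each host, every host holds a /24 lease:
etcdctl ls /coreos.com/network/subnets
#   /coreos.com/network/subnets/10.100.5.0-24

# Each lease maps back to the host's public IP address:
etcdctl get /coreos.com/network/subnets/10.100.5.0-24
#   {"PublicIP": "192.168.56.11"}
```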
Let's try to understand the packet flow between two pods running on two different nodes. There are a few different networks here. One of them is your host network, which is running at 10.0.1.0, and then you have your overlay network, 192.168.0.0, which is the overlay network defined in your Flannel configuration file as I showed earlier. Each host, node one and node two in this case, has its subnet range taken from the bigger address space: node one has 192.168.1.1, and node two has 2.1. So if pod one sends a packet to pod two, what does the flow look like, and how does Flannel forward these packets? From the pod, the usual processing happens: the packet goes via the default container gateway to the docker0 interface. Once the kernel finds out that this packet is meant for another node, there is an interface called flannel0. This interface is created by the flanneld daemon process, and it is a virtual Ethernet device, basically an implementation of VXLAN. To the network it is just another interface that knows how to route these packets. Apart from creating the flannel0 interface, the flanneld daemon process also writes kernel route information. So in this case the kernel finds out that the packet is meant for another node and hands it to the flannel0 interface. When the packet reaches this interface, VXLAN encapsulation happens and a UDP header is added with the source and destination IP addresses. The flannel0 interface then pushes the packet out through a special VXLAN port to the other node. The packet flows out through the host interface eth0 and reaches node two's host interface, and since it arrived on that special VXLAN port, the kernel on node two, the destination node, knows it needs to pass the packet to its flannel0 interface, where de-encapsulation happens. From there the usual process flows, and the packet reaches pod two.

So that was an overview of how Flannel works. Let me show you a demo as well, and I will keep explaining what is happening. First, this is the Docker interface. Here we see that we have the default Docker network, 172.17.0. Once we configure Flannel with a network, this should change, and containers should start using Flannel's network and not the default Docker network. Here we set up the etcd cluster, since Flannel is going to talk to etcd; we set this up on both nodes, node one and node two, and we can check that the cluster health is good. This is the configuration file, config.json; maybe you cannot see it, but the network is 10.0.0.0/8. Once we set this up and run the flanneld daemon process, this overlay network should be used for all the containers. We set this configuration using etcdctl; this is how we use the etcdctl command to push the configuration. Then we start the flanneld daemon process, which interacts with etcd to fetch that configuration. And here we can see a new interface has been created, called flannel.100, and it uses an IP address within the range of the network address we provided in the configuration file. Node one has 10.14.144.0, and similarly node two has 10.13.224.0. This becomes our overlay network, and each node has been assigned its own subnet range. We start the flanneld daemon process on node two as well. And now, using the etcdctl ls command, we look at which subnets have been registered: you can see there are two subnets registered. And if you do a get request, you can see the public mapping that has been made: the 192.168 series is the public IP address, mapped to the subnet range that Flannel uses. So this is the public IP address stored in etcd, and Flannel knows where to route each packet, which subnet belongs to which host.

Now it is time to stop the Docker daemon and start it again, because Flannel knows about the network but Docker does not. We do this via a file called subnet.env, which flanneld writes out when it starts. This file has a key called FLANNEL_SUBNET, which contains the subnet range for that particular node. So we stop Docker and then start the Docker daemon again using this subnet, passing an argument called --bip so that the Docker bridge uses this subnet.
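In command form, and assuming flannel's default file location, that restart sequence looks roughly like the following sketch; the values shown are illustrative, and on older Docker versions the flags may be set through the init script instead of invoking dockerd directly.

```bash
# flanneld writes its lease details to subnet.env (default location):
cat /run/flannel/subnet.env
#   FLANNEL_NETWORK=10.0.0.0/8
#   FLANNEL_SUBNET=10.14.144.1/24
#   FLANNEL_MTU=1450

# Restart Docker so the docker0 bridge uses the Flannel subnet and MTU:
source /run/flannel/subnet.env
sudo systemctl stop docker
sudo dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
```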
So now we start Docker, and if we do an inspect on it again, we find that it is now using the subnet range provided by Flannel, in the ranges of 10.14 and 10.13. So now, whichever container comes up gets an IP address from this range. We have started a container, and it is able to reach the internet. And if we do an ifconfig and check the interface, we will see that it has got an IP address which is again part of the subnet range associated with each node. Here we check again that we can reach Google, and now we try to ping across both hosts, and we can see that the connectivity works: we ping the IP address of the container running on node two, and similarly we can ping the container running on node one, and it is successful. So the Flannel network was set up successfully, and both containers are able to talk via the Flannel network. At this point I will hand over to Uday, who will talk about Calico.

Thank you, Nishant. So unlike all the other solutions, Calico does not use any overlay. It simply uses layer 3 IP routing; it is meant to be simple and to scale, and it uses telco-grade protocols, BGP in particular. How do we achieve this? It is because of Calico's architecture. First and foremost, it uses a layer-3-based routing approach with BGP for route distribution. We do not carry the overhead that comes with the layer 2 VXLAN method; with Calico it is just BGP. And it is policy driven for network security; we will take a look later at how this policy can really help and how it influences other solutions as well.

This is the overall architecture, which is up on the Calico documentation; you can go and visit the site as well, and there are some really good blogs there. The first item, on the left-hand side, is calicoctl. It is essentially a command-line tool through which all of Calico's components can be started, stopped, and so on. There is a key-value store, which can be either etcd or the Kubernetes API datastore. There are various orchestrator plugins: orchestrators such as OpenStack, Kubernetes, and even OpenShift can be used here. The next one is the Calico node. The Calico node is the main workhorse: it sits on every node, monitors the pods, communicates state back, and takes care of the plumbing. We will get further into what the Calico node takes care of in the next slides. The next one is Dikastes, with Envoy. This is essentially a Kubernetes sidecar project, and what it takes care of is workload-to-workload encryption via TLS.

This is the internal working architecture of Calico. The components are: Felix, which again works within the Calico node. It sits on every node, works out the routes and the iptables rules, and communicates back to the etcd database, which stores the complete information. And this entire thing can be orchestrated from an orchestrator plugin, such as OpenStack or OpenShift, as we spoke about. etcd, again, is the key-value store: it has all the route distribution, has all the IPs which are assigned, has all of that information in there.
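To make Felix's role a bit more concrete, here is a hedged sketch of the node state it programs; the addresses and interface names are illustrative.

```bash
# Routes programmed on a Calico node; "proto bird" marks routes learned
# over BGP, and cali* interfaces are the veth ends of local pods.
ip route
#   192.168.2.0/24 via 10.0.1.12 dev eth0 proto bird   # pods on a peer node
#   192.168.1.5 dev cali1a2b3c4d scope link            # a local pod
```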
The BGP client has an important role to play, obviously. Calico supports two methods: one is a full mesh, and the other is a BGP route reflector, which sits on the top-of-rack switches. Now, with a mesh there are some layer 2 scaling issues; again, I would encourage you all to read about that on the blog. With top-of-rack route reflectors, you reduce the number of connections and the amount of switch and inter-port communication that would happen if we used a mesh network for a large-scale deployment. For BGP, Calico uses what is known as BIRD, and it also has the option of using GoBGP, which is essentially a BGP implementation written in Go.

Let us take a look at how Calico works. We have two nodes. On the left-hand side is where the Kubernetes master node is running, so we see various things in blue: the API server, controller manager, scheduler, etcd, and kube-dns. We can see the routes that have been populated in the IP tables on both nodes. When we get Calico started, the Calico controller, etcd, and the Calico node are deployed, and immediately a virtual interface comes up, the orange cali interface you can see. And if we bring up another node, the two Calico nodes communicate with each other via BGP peering. You can see at the bottom that the IP tables have been updated, based on the management interfaces and the Kubernetes network that has been set up, which you can see at the bottom. Now, if we deploy an application, such as a simple busybox, what happens is that the kubelet talks to etcd, which in turn talks to the Calico node. The Calico node works out the routing and the iptables entries and communicates back to the kubelet, which in turn spawns the busybox with its IP assigned; at the same time, the IP tables are updated with the routing information. And we can see on the bottom left-hand side that BIRD is acting as the route reflector. The same goes if we launch an nginx application on the minion node, where a cali interface comes up just as it did on node one.

So we will take a look at a short demo. Essentially, I want to show how the network policies work. We have downloaded calico.yaml, which is the configuration file, and we will take a look at this sample configuration, which suits a very small setup. We can see various configuration parameters have been updated by default, such as the etcd endpoints, the cert files, and the key files. We can see here that the Calico secrets have also been updated, and it is in rolling-update mode. The backend here is nothing but BIRD, but it can also be configured to GoBGP. The etcd settings have been updated, and we can see the IP pool, what has to be accepted and what not, the Felix configuration, the CNI configuration, how the Calico node interfaces with the controller, the ports, and the defaults for the etcd certificates, which have to be updated here as well.

So now let us go ahead and try to launch this. Before that, we have to create a directory, as in the normal Kubernetes cluster creation process, and take ownership of it. First we get etcd installed: using kubectl apply, we have the etcd node launched. Next, we set up RBAC for the Calico configuration; we can see the various Calico Kubernetes controllers have come up, and RBAC has been set up. Finally, we launch Calico using the sample configuration.
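In command form, the launch sequence narrated here is roughly the following sketch; the manifest file names are illustrative stand-ins for the files shown on screen.

```bash
kubectl apply -f calico-etcd.yaml     # the etcd instance Calico uses
kubectl apply -f calico-rbac.yaml     # RBAC roles and bindings
kubectl apply -f calico.yaml          # calico-node DaemonSet and controllers
kubectl get pods -n kube-system -w    # watch the Calico pods come up
```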
We can see everything has come up: etcd, the Calico secrets, the Calico node, and the Calico kube-controllers. So this is how all the pods look. You will see various pods running; some of them are in ContainerCreating, and sometimes they do go into an error state, or into CrashLoopBackOff. But this is normal: Kubernetes works its magic, and they get respawned. So we can see one in an error state; it will come up in a while. It has gone into CrashLoopBackOff, and now we can see it is running; there have been three restarts on one of the nodes, and it has come up. Now we will go ahead and do a simple policy demo. This is up on the Calico documentation as well, and I would encourage all of you to try it. We set up a busybox application and try to ping across, and once the policy is applied, we are not able to do so. So this is it.
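For anyone who wants to reproduce it, here is a minimal sketch along the lines of the simple policy tutorial in the Calico documentation; the namespace and pod names are illustrative.

```bash
# Run nginx in a demo namespace and expose it as a service:
kubectl create namespace policy-demo
kubectl run --namespace=policy-demo nginx --image=nginx
kubectl expose --namespace=policy-demo deployment nginx --port=80

# Apply a default-deny ingress policy to the namespace:
kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: policy-demo
spec:
  podSelector:
    matchLabels: {}
EOF

# A busybox pod that could reach nginx before now times out:
kubectl run --namespace=policy-demo access --rm -ti --image=busybox -- \
  wget -q --timeout=5 nginx -O -
```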
Now, we have spoken about Calico and we have spoken about Flannel. What is this new thing called Canal, and how do we get both of them to work together? If you go onto the Calico website or the documentation itself, you will see a specific entry saying "only with network policy". That is nothing but a reference to the Canal solution. Canal is just a packaging of Calico and Flannel. Essentially, it makes use of Calico's network policy and Flannel's networking, so that we can deploy a very simple VXLAN configuration. It ships manifests to deploy the two together, including one for Tectonic. So, as I said: Flannel for VXLAN networking, and Calico for policies, because Flannel by itself has no capability for network policies. And both Calico and Flannel now support the Kubernetes API datastore as well as etcd. To look at the next solution, which all of us can consider, I will ask Nishant to take over.

Weave Net creates a virtual network that connects Docker containers across multiple hosts and enables automatic discovery. With Weave Net, portable microservices applications can run anywhere: on a single host, across multiple hosts, between different cloud providers, and between different data centers as well. An application sees the Weave Net network as one giant Ethernet switch, which means it again runs an L2 network over VXLAN encapsulation. So your application containers can talk to each other seamlessly without worrying about configuring ports, ambassadors, or links, and they can make use of these services easily and seamlessly. Weave Net uses standard protocols, which means the tools that have been developed over years and decades can be used to configure and monitor your network infrastructure; it is network-operations friendly. Since it uses VXLAN encapsulation between hosts, you can again use your favorite tools, like Wireshark, to inspect and troubleshoot network protocols. It is secure as well: Weave Net supports encryption, so you can encrypt your traffic and even communicate with hosts across an untrusted network. And it can run with anything: it has a Docker plugin and a Kubernetes plugin, and it can run on Mesos and Amazon ECS. So it is pretty much a tool that can run anywhere.

So let's try to understand how Weave Net works. A Weave Net network consists of a number of peers; in Weave Net terms, these peers are nothing but the weave routers. These routers have a unique name, which tends to persist across restarts. The peers interact with each other and share topology information: each peer shares protocol messages saying which peer is connected to which other peer. Weave Net therefore has an overview of the cluster, and it knows how to reach one node from another. Weave Net creates a network bridge on the host, and each container is connected to that bridge via a veth pair. The container side of the veth pair has its own network address, which can be assigned by the Weave Net IP address manager or provided by the user directly.

Weave Net routes packets via two methods: fast datapath and sleeve encapsulation. Let's take a look at each in detail. With sleeve encapsulation, Weave Net creates an overlay network between the Docker hosts. It means that when a packet has to be routed from host one to host two, it has to go up to the weave router process, which is a user-space process. In this case there is a lot of context switching between kernel and user space, so the CPU overhead and latency are higher. It is therefore not the recommended way, and it is not very efficient if you are using it in a large cluster. Weave Net also supports fast datapath, which is the default in the newer versions of Weave Net. In this case it leverages the Linux Open vSwitch datapath kernel module, where the weave router instructs the kernel to process these packets directly. That means encapsulation and decapsulation on the source and destination nodes happen in the kernel itself, so it is much faster, and the CPU overhead and latency are much lower.

Weave Net and Kubernetes: it is as simple as running one command, which installs Weave Net pods on all your running nodes, and after that, whichever pod comes up will use your Weave Net network. It follows the Kubernetes pod network model, which means every pod has its own IP and each pod can talk directly to any other pod in the cluster without any kind of NAT. Weave Net supports network policy as well: it comes with a network policy controller that monitors Kubernetes for any network policy changes and configures iptables rules to allow or block traffic as defined in the network policy rules. And then you have Weave Cloud, a UI where you can pretty much visualize your cluster and see how things are happening.

So let me show you another demo for Weave Net, where we launch two peers on two different nodes and see whether the containers can talk to each other. We start off by installing the Weave Net binary; we do this on both nodes and give it executable permission. Then we run weave setup, which preloads all the images that Weave Net uses; we do this on both nodes as well, and then we can launch Weave Net. It is as simple as running the weave launch command, which creates a peer, though it has no connections as of now. If we do a weave status, we can see that a weave router has come up on this node. Currently there are no connections, just one peer, and it has an IP address range starting at 10.32.0.0. Then we do the same steps on node two. This next command is for setting up your Weave Net environment, and it is most important for Docker: it sets your DOCKER_HOST variable, so that when a container runs, Docker uses the Weave Net network underneath.
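The node-one steps of the demo, in command form, look roughly like this sketch; the weave subcommands are real, while the container name and the address range reflect the defaults mentioned above.

```bash
# Install the weave script and preload the images it needs:
sudo curl -L git.io/weave -o /usr/local/bin/weave
sudo chmod a+x /usr/local/bin/weave
weave setup

weave launch        # start the weave router; no peers yet
weave status        # 1 peer, 0 connections, default range 10.32.0.0/12

eval $(weave env)   # point DOCKER_HOST at the Weave network
docker run -it --name test1 alpine /bin/sh
```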
So now we run another container, and this time it should use your Weave Net network. From node two, we can check that we are able to reach node one. And then we launch another weave router, but this time we give it the address of the weave router running on node one. So this time, if we do a weave status, we see that a connection has been established and there are two peers running. These two nodes can talk to each other, these two routers can talk to each other, and they will be sharing the topology information. And again the same steps on node two, where we set up the Weave Net environment so that Docker uses the Weave Net network, and we launch another container, test two. So we are inside the container, and we try to ping the container running on node one, and we can see that the packets go through to an IP address in the Weave Net network range. We also create a netcat service on the test one container on node one, and we see whether we can transfer some data from the test two container to test one. So we send the data to the test one container, and there it is: we can see that it has been received at the test one container. So that's it.
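The node-two side and the netcat test can be sketched as follows; NODE1 stands for node one's address, the names and port are illustrative, and container names resolve because Weave Net provides DNS between peers.

```bash
# On node two: peer with node one, then start a second container.
weave launch $NODE1   # NODE1 is node one's address (illustrative)
weave status          # now 2 peers, 1 established connection
eval $(weave env)
docker run -it --name test2 alpine /bin/sh

# Inside test1 on node one, listen with netcat:
#   nc -lp 4444
# Inside test2 on node two, ping across the overlay and send data:
ping -c 3 test1
echo "hello from test2" | nc test1 4444
```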
Thank you. And now I will pass it on to Uday, who is going to talk about Kuryr.

Thank you, Nishant. So very often we want VM and pod communication, and there are three ways to go about it. Either you can have Kubernetes running on top of OpenStack; you can have both of them running side by side on bare metal; or, obviously, you can have a hybrid model in which both are running but communicating using some of the networking methods that already exist, which can come with its own overheads. Now how do we solve this, and what are the possible solutions? One is bridges or load balancers, because Kubernetes essentially requires load balancers or annotations to allow pod-to-VM communication; VM to VM or pod to pod is quite possible already. The second option is some other communication solution, such as VXLAN, but that comes with its own overhead and can become very complex: if we have Flannel with VXLAN running on top of it, and we then add the encapsulation that Neutron requires on the OpenStack side, that adds its own complications. Or, the third option, we set it all up manually, which can be quite cumbersome and complicated. Kuryr solves this by bringing native Neutron-based networking to Kubernetes. How does it do it? We will take a look in the subsequent slides, but Kuryr introduces two important components: the controller, along with the CNI daemon. We used to have something known as a CNI driver, which sat on every node; that functionality has now been taken over by the CNI daemon.

There are a few design principles behind Kuryr, and I would again encourage you all to go through the documentation. Loose coupling between the integration components: we do not want the integration components to become a hurdle in the future. Flexible deployment options, obviously. Independent communication paths between the controller and the CNI daemon, because if they were tied together, that could lead to its own complications. No dependence of the CNI daemon on Neutron, the reason being that there can be multiple backends configured behind Neutron, so we do not want any direct dependence. And finally, allow different Neutron backends to bind Kubernetes pods: whatever backends are supported should all be available, so that changing these decisions in the future is much easier.

The key components of Kuryr: the first is the Kuryr controller, which essentially talks to the Kubernetes API server, performs CRUD actions, and talks to Neutron. It is supported by the Kuryr CNI daemon; as I said, there used to be the Kuryr CNI driver, which has been deprecated from Rocky onwards. This is the workhorse: it sits on every node, talks to the kubelet on one side, checks the configurations and annotations, and via OVS passes them on to the Neutron agent and therefore to Neutron, thereby enabling the communication between pods and VMs. We also have a port manager design, the reason being that whenever a VM or a container needs to be spawned, the ports first have to be allocated and then assigned, and it takes a while before the VMs or pods can come up; by keeping a pre-allocated pool of ports available, this entire process can be sped up. Then we have a VIF handler, which takes care of the virtual interfaces and configures which default VIF is to be handled. We have the Kuryr Kubernetes health manager, which looks after the health not only of the various Kubernetes pods but also of the Kuryr controller itself; in case the controller were to go down, it takes care of that and sends notifications. Next is the ingress integration: if some external-facing URLs need to reach the pods, this takes care of assigning them and of that part of the networking. It also integrates with Octavia, the load-balancer project that grew out of Neutron, so it integrates well with that side too.

Kuryr has a lot of use cases because of its close integration between the two big projects, Kubernetes and OpenStack. One of them is integration with Contrail. This is something a lot of people wanted, and it is finally available with OpenContrail, now known as Tungsten Fabric. It takes advantage of Neutron networking: most of us OpenStackers are aware of Neutron networking, and with Kubernetes being the hot thing, we can combine the knowledge of both using Kuryr. It allows telecom-grade networking with various SDN solutions; there are a lot of telco use cases out there that Kuryr can sort out. And again, it is a great solution for running VMs and containers, but also bare metal, through Dragonflow; I would encourage you all to check out what Dragonflow is.
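Before moving to the comparison, and purely as an illustration of where these pieces get wired together, here is a heavily hedged sketch of a kuryr.conf fragment; the section names follow the project's oslo.config style, and every value is a placeholder, not a verified default.

```bash
# An illustrative kuryr-kubernetes configuration fragment; all values
# are placeholders for where the API server and Neutron get wired in.
cat <<'EOF' > /etc/kuryr/kuryr.conf
[kubernetes]
api_root = https://127.0.0.1:6443

[neutron]
auth_url = http://127.0.0.1/identity
username = kuryr
project_name = service
EOF
```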
So let's look at a comparison of the various solutions. For some of this we have taken the help of Chris Love, whose blog is a great resource. When it comes to orchestration: Calico supports, as I mentioned earlier, OpenStack, Kubernetes, and OpenShift; Flannel, Tectonic and Kubernetes; Canal, none particularly, but it essentially uses both Calico and Flannel; Weave Net, Kubernetes and what is known as Weave Cloud; Kuryr is OpenStack based; and Romana has kops or kubeadm. Cloud providers: a lot of them support OpenStack, and many of them support AWS and GCE as well. Scale: now, this is somewhat subjective; it is based on our research on the internet, and we do not have use cases of our own that can tell you what scales to what extent, so we have just used what information we could gather. Security: most of them implement micro-segmentation; things like Weave Net also have cryptography and IPsec, and Flannel is coming up with IPsec, but it is experimental currently. Network policies: again, barring Flannel, I think, most of them support network policies; Canal supports them because Calico does. Encryption: again, barring Flannel, Canal, and Romana, the others support it. Network model: layer 3 for Calico, with BGP using BIRD or GoBGP; Flannel, Canal, and Weave Net are all layer 2, with VXLAN, host-gw, and UDP options; Kuryr, layer 3 again; and Romana, layer 3, with OSPF along with BGP. Backend datastore: almost all of them have etcd and the Kubernetes API datastore, barring Weave Net, which does not need an external datastore. And commercial support is available for most of them; for Romana we are not sure: the documentation says Pani Networks supports it, but we could not find much in terms of support details.

Finally, stitching it all together: what if you want multiple solutions to be brought together, multiple solutions to work together? I encourage you all to look at the project called Multus, which is a multi-fabric suit. As the name suggests, it acts as a multi-plugin in Kubernetes and provides multiple network interfaces to the pods. It supports various reference plugins, such as Flannel, DHCP, and macvlan, but also supports third-party integrations, be it Calico, Weave Net, or any of the others that are available. One of its main advantages is that it supports SR-IOV, SR-IOV with DPDK, and OVS-DPDK, for both cloud-native and NFV-based applications in Kubernetes. Now, it serves as the contact point between the container runtime and the various other plugins, and it can call plugins like Flannel or Calico among the third-party plugins I mentioned earlier. It reuses the concept of invoking delegates, which is the method Flannel uses, and it uses a CNI configuration file rather than implementing its own CNI the way the others have done.

Any questions? I think we are almost out of time, so we can take them after the presentation; you all can reach out to us anytime, and these are our contact details. And before I finish off, I want to say that we came across various impediments, including visa issues: one of our colleagues, because of visa issues along with shifting countries, was not able to come, so on and so forth. We really want to thank our organizations, Emirates NBD and Ericsson, and most importantly the OpenStack travel support program; without them, we would not have been able to be here or make this presentation. So I will leave you all with a message: of the community, by the community, for the community. Let's go, OpenStack community. Thank you.