I hope you are having a good summit so far. Good evening to you all, and a very warm welcome to the attendees joining this event virtually. I am Parth Goswami, and today I will highlight some of the open source CNI plugins and their approaches to IP address management and container networking. Before I start my talk, I want to note that networking is one of those aspects of containers, and perhaps of cloud computing in general, that is often overlooked, maybe because it can be a complex topic and people shy away from it for various reasons. So this is one attempt to simplify it a bit, and I am going to give a very high-level overview of the topics and concepts we are going to touch.

The approach I am going to follow today: we are going to talk about adapting open source CNI plugins for IPAM. We will start with a brief understanding of what CNI is, then touch on what exactly plugins are, then try to understand what IPAM is, and finally focus on the need to adopt open source CNI plugins.

The textbook definition given by the Cloud Native Computing Foundation is that CNI is a specification that defines how to configure networking for Linux containers. It does that by providing a set of APIs for networking solutions to integrate with different container runtimes. If you take a look at the picture here, there is the container runtime of your choice, the CNI sits on top of it, and within CNI ship lots of components called plugins.

Now, what exactly are these plugins? Before we look at plugins, let us try to understand a bit more about what CNI does and how it does it. For example, if I need to establish a network within a container, the container needs its own network namespace.
So I create a network namespace. Once I have that, I need to create a bridge between the host network namespace and the container network namespace. Once I have the bridge, I create a virtual Ethernet (veth) pair; virtual, because we are dealing with containers rather than virtual machines or physical hardware, so every networking component we deal with is going to be virtual. I attach one end of the pair to the container's network namespace and the other end to the bridge. Once I have that, I finally assign an IP address to uniquely identify that pod and bring up the interfaces, and this is how the interface comes up and the pod goes live. So this is a sample algorithm: certain steps to achieve a certain desired result.

Now, this exact same requirement exists for almost every container orchestrator: rkt, Docker, Mesos, Kubernetes. So why not create a standardized version of it, a library of it, and ship it with any orchestrator? That standardized, reusable piece is what we call a plugin.

Before we move on to plugins, let's try to understand what our own CNI, if we wanted to create one, must do. What are the absolutely necessary roles it should perform? I don't want to bombard you with a whole lot of theory, so I'll just highlight a few points from these must-have roles. It must be able to create a network namespace, as we just discussed. It must be able to identify the network of the container, and it should be able to deal with the bridge when containers are added or deleted. It must support command line arguments so that we can fire commands and interact with it through a CLI.
It must be able to manage IP addresses, which is the exact topic of this talk. And it must return results in one of the desired output formats: JSON, text, tabular, whatever it is. If a CNI is able to perform most or all of these roles, I think that would be a pretty good CNI.

So far we have seen the textbook definition provided by the CNCF of what CNI is. Basically, a plugin is a collection of programs, a body of code. Here you see a few examples of built-in plugins such as loopback, bridge, ptp, macvlan, and ipvlan, and then there are many third-party plugins as well. Examples of third-party plugins that have been adopted by the Kubernetes ecosystem are Calico, Weave Net, Flannel, and Cilium. To give you a brief introduction to these: Calico is a popular CNI tool focused on network security for cloud native architectures, and it is mostly used in enterprise-level environments. Flannel is simple, lightweight, and very easy to install, but it is mostly preferred for small-scale clusters rather than larger ones. Weave Net provides network automation and observability features, and Cilium is built around identity-based security. These are all open source CNI plugins, and since they are not maintained or developed by the CNCF itself, they are third party.

Now let's try to understand what exactly IPAM is. Here I'm not approaching IPAM from the Kubernetes or container point of view; I'm just talking about plain IP address management. Basically, if you are doing anything that falls under assigning, monitoring, tracking, or managing IPs, you are dealing with IP address management.
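The namespace-and-bridge walk-through from earlier can be sketched as the `ip` commands a plugin would effectively run. This is a minimal illustration: the names (`pod1`, `cni0`, `veth0`/`veth1`) and the address are hypothetical, and the script only prints the commands rather than executing them, since actually running them requires root privileges.

```python
# Sketch of the manual steps a CNI plugin automates: create a network
# namespace, a bridge, and a veth pair, then assign an IP and bring the
# links up. All names and the address below are illustrative.

NETNS = "pod1"            # container network namespace (hypothetical name)
BRIDGE = "cni0"           # host-side bridge
VETH_HOST, VETH_POD = "veth0", "veth1"
POD_IP = "10.244.0.5/24"  # arbitrary example address

steps = [
    f"ip netns add {NETNS}",                                    # 1. namespace
    f"ip link add {BRIDGE} type bridge",                        # 2. bridge
    f"ip link add {VETH_HOST} type veth peer name {VETH_POD}",  # 3. veth pair
    f"ip link set {VETH_POD} netns {NETNS}",                    # 4. one end into the namespace
    f"ip link set {VETH_HOST} master {BRIDGE}",                 # 5. other end onto the bridge
    f"ip netns exec {NETNS} ip addr add {POD_IP} dev {VETH_POD}",  # 6. assign the IP
    f"ip link set {BRIDGE} up",                                 # 7. bring everything up
    f"ip link set {VETH_HOST} up",
    f"ip netns exec {NETNS} ip link set {VETH_POD} up",
]

for cmd in steps:
    print(cmd)
```

Running these nine commands by hand wires one pod onto one host; a CNI plugin is essentially this sequence packaged as a reusable program.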
So it's not that you have a device, virtual or physical, assign it an IP address, and just move on. At an individual level you're dealing with just one device, but at an enterprise level you are dealing with huge clusters that might have thousands of nodes, a mix of virtual and physical. So you need a proper, methodical system that defines how the assigning of IP addresses works, how the tracking of IP addresses works, and, when a device goes out of service, whether its IP address is properly revoked so it is not reused incorrectly. All of these mechanisms need to be properly set and defined. That, basically, is IP address management: an integrated suite of tools, and it also encompasses the concepts of DHCP and DNS.

Next, what is IPAM with respect to Kubernetes? Kubernetes also relies on IPAM. Since Kubernetes works at the cluster level, it needs to manage thousands, even millions, of containers and pods, and it requires that many IP addresses as well. So there is no option for Kubernetes to skip IP address management; it definitely relies on it. In Kubernetes, each pod requires its own IP address to talk to other pods, to services, and to external networks. And if IPs are required for all these reasons, then those addresses need to be managed. Kubernetes has its own built-in machinery around kube-proxy to handle this, but that comes with certain limitations, the main one being that it can't scale. It works fine, it works efficiently, up to a certain extent.
But once you cross its threshold, and I'm not sure exactly what that threshold is, once you move to a very large, complex cluster or a very complex network topology, kube-proxy no longer seems to work efficiently. That is the main limitation of the default IPAM mechanism that Kubernetes has.

Next, there are the challenges presented by IP exhaustion. Back in the 1980s, when the concept of IPv4 came up, the address was divided into four blocks of 8 bits each, 32 bits in total. Back then, everybody thought those roughly four billion IP addresses would be enough. But within 10 to 15 years, by the late 1990s, it was very clear they would not be sufficient, because an IP is not used by a single user, it is used by a single device, and it was clear soon enough that a single person would require multiple devices. In our day-to-day lives, at an individual level, we carry a phone, we connect our laptops to a VPN, we have a number of devices, and each device at any point can hold two or three IP addresses. Just check the interfaces on your own machine and you will see how many IP addresses are consumed at the individual level. Now extrapolate that to a scenario where you are running enterprise-level clusters, and you will understand that the need for IP addresses is a very real issue and something that needs to be managed very efficiently.

This causes IP exhaustion, and IP exhaustion presents some challenges. The first is network congestion: since IPs are getting exhausted, pods in a Kubernetes cluster compete for addresses from the same pool, and that might result in network congestion.
That can mean potential downtime or service interruptions. Then there is security risk: if IP addresses are exhausted, it can be tempting for the network administrator to reuse an IP address. By reuse, what I mean is: there was an IP address assigned to a pod, the pod gets killed, and the IP is released to the pool; but before the IP was properly released, the admin ends up assigning it to a new pod. That might leave live access to the previous pod's sensitive information, which poses a security risk. Next is difficulty in scaling: if my pool only has, for example, 100 IPs and my cluster demands 500 or 1,000 pods, that simply will not work. So if IPs are getting exhausted, my cluster will not scale the way I want it to. And then increased complexity: in the case of IP exhaustion, with a limited pool of IPs, network admins are tempted to go for much more complex solutions, such as NAT or overlay networks. There are more challenges beyond these, so let's see what can be done about it.

This brings us to the main topic: because Kubernetes by default relies on kube-proxy, and kube-proxy comes with its own limitations, there is a need for Kubernetes users to adopt open source CNI plugins. There are various third-party CNI plugins in the market that approach IPAM differently. But first, let's understand the limitations of the built-in IPAM plugins. There are two: host-local and DHCP. The host-local plugin assigns IP addresses from a predefined pool. This approach is good for smaller clusters, but at a bigger scale it will surely cause problems.
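The host-local approach is visible directly in a CNI network config, the kind of file that lives under `/etc/cni/net.d/`. This is a minimal sketch: the network name, bridge name, and subnet are illustrative values, not anything mandated by the spec.

```python
import json

# A minimal CNI config: the bridge plugin handles the namespace/veth
# plumbing, and IPAM is delegated to host-local, which simply hands out
# addresses from a predefined range and records leases on the node's
# local disk. Names and the subnet below are illustrative.
config = {
    "cniVersion": "1.0.0",
    "name": "mynet",           # hypothetical network name
    "type": "bridge",          # main plugin: bridge + veth pair
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",  # IPAM delegated to the host-local plugin
        "ranges": [[{"subnet": "10.22.0.0/16"}]],
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

print(json.dumps(config, indent=2))
```

The key point is that `ranges` is fixed up front: once that predefined pool is consumed on a node, host-local has nowhere else to turn, which is exactly the small-cluster limitation described above.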
Then there is the DHCP IPAM plugin, which assigns IPs to pods by leasing them from a DHCP server. It can be good enough for a large cluster, but it needs extra overhead in the form of a DHCP server, and we don't want such overhead.

Now, one third-party CNI plugin that I have been working with and using is Calico IPAM. It is a completely open source plugin, and it has certain key features that make it one of the most sought-after network plugins. It uses a distributed IPAM architecture, meaning that for every node there is a separate, specific IP block, not at the cluster level but at the node level. This ensures that IPs can be allocated to the pods residing on that node very efficiently and quickly, and it doesn't need a centralized server like the DHCP IPAM plugin does, since IPs are allocated at the node level rather than the cluster level. And if the local block itself gets exhausted, the node can request more IPs from the central pool managed by the Calico IPAM controller.

Next, Calico supports networking using BGP, the Border Gateway Protocol. This allows network admins to segment their network into different subnets. The segmentation can come from a requirement of the application itself, or from security concerns that demand it, and it allows the network admin to enforce traffic-related policies. Calico also has support for network security features such as policy enforcement and encryption of traffic between pods. These features help ensure that the cluster is compliant with industry standards, on top of handling the management of IP addresses.

So that was all about the topic. If you want to learn more about how open source Calico approaches IPAM, or if you wish to contribute to the project, just check out this GitHub repo.
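Calico's distributed approach can be modelled in a few lines: carve the cluster pool into fixed-size blocks, let each node claim blocks on demand, and hand out pod IPs node-locally with no central server on the allocation path. The /26 block size matches Calico's documented default; everything else here (the pool CIDR, class and node names) is illustrative, and real Calico does much more, such as coordinating block ownership through its datastore.

```python
import ipaddress
from collections import defaultdict

# Toy model of Calico-style distributed IPAM: the cluster pool is split
# into /26 blocks, nodes claim whole blocks on demand, and pod IPs are
# then allocated from the node's own blocks without central coordination.
class BlockIPAM:
    def __init__(self, pool="10.244.0.0/16", block_prefix=26):
        # Generator of unclaimed /26 blocks carved from the cluster pool.
        self.blocks = ipaddress.ip_network(pool).subnets(new_prefix=block_prefix)
        self.node_free = defaultdict(list)  # node -> unused IPs it owns

    def _claim_block(self, node):
        block = next(self.blocks)  # raises StopIteration when the pool is gone
        self.node_free[node].extend(block.hosts())

    def allocate(self, node):
        if not self.node_free[node]:
            self._claim_block(node)  # local blocks exhausted: claim another
        return self.node_free[node].pop(0)

    def release(self, node, ip):
        self.node_free[node].append(ip)  # returned IPs rejoin the node's pool

ipam = BlockIPAM()
a = ipam.allocate("node-1")
b = ipam.allocate("node-2")
print(a, b)  # the two nodes draw from different /26 blocks
```

Because each node allocates from blocks it already owns, the common case (a pod scheduled on a node with free addresses) never touches shared state, which is why this scales better than a single cluster-wide pool or a central DHCP server.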
There is also a program run by Calico called Calico Big Cats, where you get to meet the maintainers and developers on a monthly basis, understand more about the project, and get involved at a much deeper level.

So here is the recap: we saw what CNI is, what IPAM is, what plugins exactly are, the current challenges presented by IP exhaustion, how Kubernetes is dealing with IP management and what its limitations are, and what features and solutions third-party open source plugins such as Calico present.

So yeah, that was all. I am Parth Goswami, I work as a customer enablement engineer at Cloudera, and I am also a Calico community ambassador. That is my community platform, where I regularly write blogs and share my open source work. Thank you, that's it.