Hello. OK, let's start this tutorial session. This tutorial is "From CNI Zero to CNI Hero," a Kubernetes networking tutorial using CNI. I'm Tomofumi Hayashi. And I'm Doug Smith.

So I'm Tomofumi Hayashi. I work at Red Hat in OpenShift engineering, and I'm a maintainer of upstream CNI and of Multus CNI as well. I'm also a member of the Network Plumbing Working Group. And I am Doug Smith. I'm also a member of the Network Plumbing Working Group. Tomofumi and I both work together on Multus CNI, which is a CNI plugin that allows you to attach multiple network interfaces to pods in Kubernetes. What Tomofumi didn't mention is that he has also created a number of other CNI plugins, including ones you might take for granted, like the static plugin and a plugin to override routes. And I've also created another CNI plugin called Whereabouts, which does dynamic allocation of IP addresses.

So today, what we're going to look at first is an intro to CNI. Tomofumi is going to walk you through how CNI works. It might feel like a lot, but in the end it's really just a few simple things that you need to grasp; we're simply going to be very comprehensive here. With that in mind, we're also going to walk you through configurations and all of the details of how you actually configure CNI. Tomofumi is then going to take a glance at how to develop CNI plugins, if you have an interest in that. And as he noted, there are a number of slides kept as reference material in the downloadable deck, details we decided were maybe too much for the session but that could be really useful when you go further. With the basics in place, we're going to do a hands-on tutorial that you can follow along with if you'd like. Download the deck; there's a link to a GitHub repository that has the configurations for kind and all of the resources used here. There's also a bit of a troubleshooting aspect, and I'll touch on that. And last but not least, we'll show you how you can get linked up with the CNI community and where all the resources are. With that being said, I'll let Tomofumi kick it off with an intro to CNI.

Thank you, Doug. So let's cover what CNI is first, the important stuff, right? What CNI does for you, simply put in the Kubernetes case, is provide network connectivity to your Kubernetes pod. If you launch a simple pod and then run the Linux ip address command inside it, you can see that a loopback interface and eth0 have been added. That is the CNI plugin's doing: the loopback interface and the eth0 Ethernet interface are created and placed into the pod's network namespace, which I'll describe later. In addition, eth0 has an IPv4 address, or in some deployments an IPv6 address as well. That IP assignment is also done by CNI. And of course, if a network provider does some additional configuration, such as nftables or iptables rules, or interface MTU changes, that also tends to be done by a CNI plugin. So when a pod is added, all of this is done by CNI. And whoever opens the door also closes it: when the pod is torn down, the CNI plugin removes the IP address and the interface and makes them available for the next use.
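As a quick, hedged illustration of that, assuming you have a cluster handy (the pod name and image here are just placeholders), you can see the interfaces CNI set up like this:

```bash
# Peek at what CNI created inside a pod (pod name and image are placeholders).
kubectl run netcheck --image=busybox --restart=Never -- sleep 3600
kubectl exec netcheck -- ip address   # expect a loopback interface plus eth0 with its assigned IP
```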
So here is a rough diagram of a Kubernetes deployment on a worker node. The worker node has the kubelet and a container runtime, like CRI-O or containerd, and inside CRI-O or containerd the libcni code is embedded. The libcni code comes from the CNI GitHub repository. And then we have the pods. Today we'd like to describe the CNI config and the CNI binaries; those are our targets.

But before that, let's focus on the pod. The pod is not a container. Based on the Kubernetes documentation, a pod is a shared context: a set of Linux namespaces, cgroups, and potentially other facets of isolation. A Linux namespace is a partitioning feature in the Linux kernel, and it's what containers are built on. A container uses Linux namespaces, such as the PID namespace and the mount namespace, to isolate its resources from the container host. A pod is a bit different from a container: one pod object can have multiple containers inside it. This means there can be multiple PID namespaces and multiple mount namespaces in a pod, but for networking, the multiple containers share one network namespace. That is a bit unique compared to a usual container under Docker or Podman and so on.

CNI is invoked at mainly two points: pod creation and pod deletion. And as you may already know, CNI is a plugin architecture; each CNI implementation is provided as a plugin. The CNI project under the CNCF provides several plugins as reference implementations, such as macvlan, ipvlan, and the host-local IP address management plugin. Third parties, that is, several vendors, provide CNI plugins as well, such as Calico, Cilium, and others.

So how is a CNI plugin used when a pod is created? This picture shows the flow through the system. Let's imagine you create a pod with kubectl. The pod object goes to the API server, and the kubelet recognizes the pod object. After that, the kubelet sends a RunPodSandbox gRPC call through the CRI, the Container Runtime Interface. The container runtime receives it and starts the pod sandbox, which means the Linux namespaces, including, of course, the network namespace. Then the container runtime calls libcni to create the interface. libcni reads the config from the CNI config directory, which is mainly /etc/cni/net.d. Using this config, libcni invokes the CNI plugin. The CNI plugin gets the information passed down by libcni and creates the network interface. That's the whole picture of how a CNI plugin is invoked.

So let's focus on the CNI config side. First thing: a CNI config is a JSON text file, not a YAML file. If you are used to Kubernetes, where everything is written in YAML, you may wonder why CNI uses JSON and not YAML. This is kind of a historical point: CNI is not under the Kubernetes project; it is a separate project. CNI is pretty independent of Kubernetes, and Kubernetes is just one consumer of CNI. So, yeah, unfortunately, the CNI config is a JSON text file.
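If you want to see those two pieces on a node, the conventional locations look like this (these are the common defaults; runtimes can be configured to use different paths):

```bash
# The conventional locations on a node (common defaults; your runtime may be configured differently).
ls /etc/cni/net.d     # CNI config files read by libcni inside CRI-O / containerd
ls /opt/cni/bin       # CNI plugin binaries that libcni executes
```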
And the file name should end with .conflist, for example 01-foobar.conflist. Sometimes, if you look at an older deployment, you may find files like 01-foobar.conf. But in this presentation I'd like to focus on .conflist, because the .conf form is older and is deprecated as of the latest CNI version. Also, you can put multiple CNI config files in the CNI directory, but containerd and CRI-O take only one file from the config directory: the first file when the names are sorted by ASCII code. Keep in mind that ASCII order is slightly different from plain alphabetical order.

CNI defines the config format as part of the CNI specification. The CNI specification describes how a CNI plugin should work, including the config file format and the other formats, such as the result object format. The CNI community upgrades this specification periodically — 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0 — and now 1.0.0 is the latest, released in 2021, so roughly three years old. The syntax is almost the same but a little bit different among the versions. So if you run into trouble, keep in mind that sometimes your version does not match the configuration style. This presentation focuses on 1.0.0, the latest. Latest is good.

So here is a simple CNI configuration example. JSON gives us a hierarchical structure with nested objects: the parent structure has cniVersion, name, and plugins, and inside plugins we have further objects. The CNI configuration really serves three consumers: config for the CNI runtime, which is libcni; config for the container runtime, for example CRI-O or containerd; and config for the CNI plugins themselves. In this sample configuration, the first three lines are mainly config for the CNI runtime. cniVersion specifies which CNI spec version is used for this CNI config, name is the config name — the config identifier for libcni — and plugins contains each plugin-specific configuration. plugins is plural, of course, so it may have two or more plugin configurations; I will explain that later.

Let's go inside the plugins field. We have type, master, and so on. type is a generic configuration key in the CNI config, which specifies which CNI plugin is used. The value of this field — in this case, ipvlan — should literally match a binary name in the CNI plugins directory. So type ipvlan means the ipvlan plugin is used. That is the common part: the type key and its value. master is an ipvlan-specific configuration parameter; if you're using another CNI plugin, different parameters need to be configured, so please take a look at the documentation for each CNI plugin. The ipam section is slightly different: ipam is itself nested and contains a different CNI plugin's configuration. In this case the host-local plugin is used for IPAM, that is, IP address management. The ipvlan plugin creates only the interface, without an IP address, and then host-local assigns the IP address. This is the delegation mechanism of CNI.
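To make that concrete, here's a minimal sketch of what such a .conflist could look like on disk. The file name, network name, master interface, and subnet are all placeholder assumptions for illustration, not values from the slides:

```bash
# A minimal sketch of an ipvlan + host-local .conflist (values are illustrative;
# adjust the master interface and subnet for your environment).
cat <<'EOF' > /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "ipvlan",
      "master": "eth0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
      }
    }
  ]
}
EOF
```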
So when libcni executes a CNI plugin, the plugin program consumes input and produces output, and this picture shows that input and output. When ipvlan or any other CNI plugin is executed, two kinds of input are consumed. One is environment variables — standard Unix environment variables — which carry several pieces of information. The most important one is CNI_COMMAND, which tells the plugin what it should do. If CNI_COMMAND is ADD, the ipvlan plugin should create the interface; if ipvlan is called with CNI_COMMAND DEL, the ipvlan plugin removes the interface. Then there are CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, and so on; the per-pod parameters are provided through environment variables. The other input is standard input. Standard input contains your actual CNI config: if you provide /etc/cni/net.d/01-foobar.conflist, then the contents of that file go to standard input, and ipvlan does its job for you.

After that, the plugin outputs three kinds of information. One is how it went: the exit code. Exit code zero means success, and if something failed, a non-zero value is returned. Also, for further troubleshooting, the plugin can write error messages to stderr. Stderr is captured by the container runtime and the kubelet, the upper components, and you can see those error messages through journalctl and so on. And if the plugin succeeded, the result is on standard output. It contains the interface name, IP address, MAC address, and other information.

Here is a sample of that output. It's called the CNI result object, and like the config it's in JSON format. Keep in mind that a cniVersion is included, as I mentioned before: the CNI version determines the output format, so different CNI versions may contain different names and values for the interfaces. By the way, this result contains interfaces and ips, because the CNI spec allows returning multiple interfaces. In the ips field, interface: 0 means this IP address is assigned to the interface at index zero; in this case, eth0 has 10.1.1.3. So that's the result object. The result object is read by the container runtime, and in the end the kubelet picks up this information through the CRI pod sandbox status messages.

That's the simple case. Next is a slightly more complicated case: the plugin chain. As I mentioned, the field is plugins, plural, and multiple plugins can be listed in it. In this example we have two CNI configurations: one is ipvlan and the next is tuning. What does that mean? Let me explain. libcni executes the first plugin, ipvlan, first. The ipvlan plugin creates the interface and the IP address is assigned. After that, the second plugin is executed with the information about the previously created interface, in this case eth0. So first ipvlan plus host-local run and create eth0 in the pod, and then tuning does some additional configuration to that eth0, as in the sketch below.
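Here is a rough sketch of what such a chained config could look like. The particular sysctl key and values are illustrative assumptions, not taken from the slides:

```bash
# A sketch of a chained .conflist: ipvlan creates eth0 and assigns the IP,
# then tuning adjusts attributes in the pod's network namespace (sysctl value is illustrative).
cat <<'EOF' > /etc/cni/net.d/10-chained.conflist
{
  "cniVersion": "1.0.0",
  "name": "chained-example",
  "plugins": [
    {
      "type": "ipvlan",
      "master": "eth0",
      "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }
    },
    {
      "type": "tuning",
      "sysctl": {
        "net.core.somaxconn": "512"
      }
    }
  ]
}
EOF
```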
In this configuration, a sysctl is applied for that eth0 interface's network namespace to change attributes such as socket options.

So how are these chained plugins executed? This picture explains it. As I said, in a plugin chain the first plugin is executed and then the second plugin is executed; it is not parallel. The first CNI call is exactly the same as I explained before, but the second CNI call is slightly different: on standard input, the CNI config is dynamically modified by libcni, which injects the previous output. The ipvlan output goes into the prevResult field of the CNI config. The tuning plugin then consumes the /etc/cni/net.d config plus prevResult and identifies which interface is the target to configure. For example, the left side of the slide is the CNI config on disk, and the right side is the dynamically built config that the tuning plugin actually consumes. As the red rectangle on the right shows, prevResult is added dynamically and contains the result object generated by ipvlan. In this case the interface eth0 is found, so the tuning plugin can consume these parameters and knows which interface should be the target of its configuration.

That's the CNI config, mainly, but I also need to explain capabilities and runtimeConfig. This is another feature of the CNI config. You may have seen, in previous deployments, several annotations on the pod, like kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth. These annotations specify the bandwidth for ingress and egress, which means something has to touch the interface, right? So how is that done? It is, of course, done by CNI, and this is where capabilities and runtimeConfig come in. The runtimeConfig and capabilities feature is used by the container runtime and the upper-layer components — in this case the kubelet — to inject additional per-pod parameters into CNI plugins. Calico, Flannel, and several other implementations use this feature, for example with the bandwidth or portmap plugins.

Let's look at one example: a Calico deployment's CNI config. There, the plugins field has three entries, so three CNI plugins are executed. The first is calico, which creates the interface and assigns the IP address. Then portmap and bandwidth run, and each does its own job. You can see capabilities with portMappings set to true on the portmap side, and capabilities with bandwidth set to true on the bandwidth side. So if you create a pod with bandwidth annotations, the kubelet injects those parameters as runtimeConfig when libcni is invoked. libcni matches the runtimeConfig parameters against the capabilities declared in the config file and injects the appropriate runtimeConfig based on those capabilities. For the bandwidth capability case, the right side of the slide shows the CNI config captured just before the bandwidth plugin is executed: first, prevResult contains the generated interface information, and then runtimeConfig contains the bandwidth settings — ingress rate, ingress burst, and so on.
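To make the capability flow concrete, here is a hedged sketch. The plugin list is heavily trimmed for illustration (a real Calico config has many more fields), and the exact injected values are an assumption:

```bash
# A sketch of how capabilities connect pod annotations to a chained plugin
# (plugin entries trimmed for illustration; a real Calico config carries more fields).
cat <<'EOF' > /etc/cni/net.d/10-calico-style.conflist
{
  "cniVersion": "1.0.0",
  "name": "k8s-pod-network",
  "plugins": [
    { "type": "calico",    "ipam": { "type": "calico-ipam" } },
    { "type": "portmap",   "capabilities": { "portMappings": true } },
    { "type": "bandwidth", "capabilities": { "bandwidth": true } }
  ]
}
EOF

# With a pod annotated like this...
#   kubernetes.io/ingress-bandwidth: 1M
#   kubernetes.io/egress-bandwidth: 1M
# ...the kubelet passes the values down, and libcni adds roughly this to the
# bandwidth plugin's stdin config at execution time (field values are illustrative):
#   "runtimeConfig": { "bandwidth": { "ingressRate": 1000000, "ingressBurst": ..., "egressRate": ..., "egressBurst": ... } }
```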
So these additional parameters are dynamically injected based on the pod annotations. Unfortunately, this capability and runtimeConfig feature is not in the CNI specification proper; it lives in an optional document, CONVENTIONS.md. So that's configuration — I've covered most of the CNI config. From the CNI plugin's perspective: the plugin gets the standard environment variables and the CNI config, then creates the interface, or, in a CNI chain, some plugins may change attributes and so on. The CNI plugin then outputs a JSON object as the CNI result, and on failure an error code plus an error message.

That's configuration, so let's move on to developing CNI plugins. The previous part was about how to use them; here is how to create them. As I said before, a CNI plugin's inputs are the standard environment variables and the CNI config; this is the same as the last slide, right? To create a CNI plugin, you just satisfy these requirements: your plugin parses the standard environment values and the CNI config, and those parameters are enough to create the interface in the pod. Your plugin creates the interface via the Linux netlink API or whatever mechanism you choose, then gathers the interface information and outputs the CNI result as JSON. It's simple to say — and of course harder to do — but simply put, that's the whole job.

In addition, there are several things you need to take care of. The first is how to integrate with Kubernetes. If your CNI plugin wants to get the pod object: as I said before, CNI is not integrated with Kubernetes, so CNI plugins have no built-in way to access Kubernetes objects. For such a CNI plugin you need to create a service account, or some other credentials, to reach the Kubernetes API, and you also need to keep those service account certificates renewed before they expire.

Next, CNI has a slightly unusual rule about the DEL command, that is, when the pod is deleted. The CNI specification says the DEL command should not return an error, even if the plugin hits one. So if DEL fails because of a deadlock or whatever, the CNI plugin should not return an error message or a non-zero exit code; container runtimes are currently designed on the assumption that DEL does not fail, so an error there is, in a sense, unacceptable — the container runtime gets confused. On top of that, the container runtime, via CNI, may invoke DEL multiple times for one pod. Imagine it from the plugin's side: libcni invokes DEL several times against the same pod. The first DEL removes the interface successfully and returns success. On the next invocation, from a generic programming point of view you might want to return an error — hey, this pod no longer exists — but that must not happen, because, as I said, DEL does not return errors. That's something you need to keep in mind.

The other thing is the CNI version. As I said before, the cniVersion appears in the CNI config as well as in the CNI result; the minimal sketch below shows the shape of this whole flow.
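Here is a minimal, hedged sketch of a shell-based plugin skeleton, roughly the same shape as the dummy plugin used later in the hands-on. The interface creation is stubbed out and the result is hard-coded for illustration; a real plugin would build the result from what it actually created and adapt it to the requested cniVersion:

```bash
#!/bin/bash
# Minimal CNI plugin sketch (illustrative only): reads CNI_COMMAND and stdin,
# never fails on DEL, and naively echoes the incoming cniVersion back in its result.
set -u

config=$(cat /dev/stdin)                     # the CNI config, provided by libcni on stdin
version=$(echo "$config" | grep -o '"cniVersion": *"[^"]*"' | cut -d'"' -f4)

case "${CNI_COMMAND}" in
  ADD)
    # A real plugin would create an interface named $CNI_IFNAME inside $CNI_NETNS here
    # (e.g. via the ip command or netlink), then report what it actually created.
    cat <<EOF
{
  "cniVersion": "${version}",
  "interfaces": [ { "name": "${CNI_IFNAME}", "sandbox": "${CNI_NETNS}" } ],
  "ips": [ { "address": "192.0.2.22/24", "interface": 0 } ]
}
EOF
    ;;
  DEL)
    # Per the spec's convention: clean up best-effort, but never return an error.
    exit 0
    ;;
  VERSION)
    echo '{ "cniVersion": "1.0.0", "supportedVersions": [ "0.3.1", "0.4.0", "1.0.0" ] }'
    ;;
  *)
    exit 0
    ;;
esac
```

The script would be dropped into the CNI binary directory and marked executable, just like the dummy plugin in the demo later.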
And as I said, the CNI result changes with each version. Here are three CNI result objects, for versions 0.2.0, 0.4.0, and 1.0.0. The interesting thing is that they describe the same IP address and interface information, but in 0.2.0 there are ip4 and ip6 fields, whereas in 0.4.0 there is an ips field instead, and in the next version the per-IP version field is removed. So the expected output for each version is slightly different, and if you are writing a CNI plugin, you need to take care of these version differences. It may seem slightly complicated, but it is required: the CNI plugin should parse the standard environment and the config, create the interface, and output a CNI result that matches the cniVersion. OK, let's go to the hands-on with Doug. Go ahead.

All right, Tomo, thank you very much. At this particular link, there is everything that I'm going to do in this tutorial. And here's the thing: there might be a lot of detail in what Tomo just covered, which is extremely comprehensive, but in order to do what we're going to do at the command line, there are really only two things you need: a kind cluster — that's Kubernetes in Docker, a really easy way to try out Kubernetes and have a development workflow, et cetera — and knowing how to use the kubectl command. There's no special magic, rocket science, or anything else.

What we're going to do in this tutorial, which is at a fairly rapid pace to keep it rolling, is two things primarily. We're going to install a CNI plugin into a cluster. I'm going to use Flannel, which provides pod-to-pod connectivity — as Tomo explained, what does a CNI plugin do? It creates interfaces and gives you connectivity to a network. So we'll do that, and it was also kind of convenient that in this particular scenario Flannel didn't install perfectly cleanly, so we get an opportunity to see how to debug that. That's the first part. The second part is that we're going to create a custom CNI plugin written in Bash. You don't really need any special programming language knowledge for it; it just uses some basic Linux primitives — as Tomo mentioned, it has to read environment variables and it has to read some data from standard in. It's just a couple dozen lines at most, and it's what we call a dummy plugin: it logs some data so you can see what variables and values came into the CNI plugin, writes them to disk, and then outputs some fake information. That being said, I'll move on to the demo. Thank you, Tomo.

All right, cool. The gist here is that we've got three panes. On the left-hand side, what you're looking at is just general commands, where I'm going to use kubectl create and things like that. On the right is as if you were debugging a host — it's Kubernetes in Docker, so debugging the host is really doing a docker exec; that's where I'll put those commands. And on the very bottom is just a kubectl watch to see what pods are there. All of the files and everything that's created are part of the GitHub repo in the link. As a first step, I create the kind cluster, with a config along the lines of the sketch below.
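This is only a sketch — the actual config lives in the linked repo — but a kind config that disables the default CNI looks roughly like this:

```bash
# Sketch of creating a kind cluster with the built-in CNI disabled
# (the tutorial repo has the real config; the cluster name here is illustrative).
cat <<'EOF' > kind-no-cni.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
nodes:
- role: control-plane
- role: worker
EOF

kind create cluster --name cni-tutorial --config kind-no-cni.yaml
```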
In this case, it's already created so that we don't have to wait for it, and the main thing to really pay attention to in this config is that we disabled the default CNI. Typically on a kind cluster this would already be bootstrapped for you, but we want to emulate what would happen on a vanilla cluster. So when the cluster comes up, we look at the pods, and we see that a couple of pods are pending, and it's because the nodes are not ready — when I do kubectl get nodes, those are NotReady. That is down to CNI. To see why, I exec into this Docker container, as if we're on the host, and I look in /etc/cni/net.d, which is the CNI configuration folder. There's nothing there — and the presence of a CNI configuration in that folder acts as a semaphore to mark that the node is ready. The kubelet knows, oh hey, your network's ready, if there's a CNI config there.

So I install Flannel, which I did on the left with kubectl create, and we see the Flannel pods come up at the bottom — but it doesn't actually fix everything. We see the CNI config appear on the right, there it is, but those CoreDNS pods are still in a ContainerCreating state. So even though kubectl get nodes says the nodes are ready, things might not actually be ready, so we've got to figure out what's going on. In this case, you can do a kubectl describe pod and we get Kubernetes events — those were propagated from the kubelet to the Kubernetes API to your kubectl command so we can see what's going on. So I go and describe the pod. Tomo, you can fast-forward once, thanks. Yeah, so here in the kubectl describe pod output we have an error: failed to find plugin "bridge" in path. That's what happened, and it got propagated back up to the events on the pod.

We can fix this, since we know where things are thanks to Tomo's directions: you have a CNI config directory and you have a CNI binary directory. So I list the CNI binaries, and there's no bridge plugin there, which explains the complaint that it failed to find the bridge plugin. There is a set of reference CNI plugins available from the CNI community, and Flannel happens to use one of them, the bridge plugin. Not all CNI plugins work this way — Flannel uses what we call a delegation pattern, where it delegates some of the work to the bridge plugin. So what I do here is install the reference CNI plugins; I've got a reference CNI plugins YAML. Tomo, you can fast-forward once. We see the CNI plugins DaemonSet come up, and then on the right I list the plugins and now we've got a bunch of them, including the bridge plugin. Because of that, those CoreDNS pods down at the bottom are now running. So we've fixed the issue.

Now that we've got a working cluster with a CNI plugin that will let our pods come up, let's customize it — and what we're going to do is create our own CNI plugin. The first thing I do is write a quote-unquote binary onto disk, into the CNI binary directory. Usually you would have compiled applications here, but in this case I'm actually just going to make it this bash script that you see here, which really just does a couple of things.
What it does is: it has a logging function, it logs the environment variables, it reads standard in, and then it outputs a phony response — we just make something up, and you'll be able to see that Kubernetes reacts to it. That's the phony information we give: we just set a static IP. To get this CNI plugin to run, we have to write a configuration for it in the CNI configuration directory — we can do one fast-forward here, and one more, thank you. Oh, and I have to make sure that bash script is executable, or it won't get executed, and we need that CNI config. As Tomo mentioned, in your CNI config directory the ASCII-sorted first config file is used. So I can leave the Flannel configuration there, but I add an additional one that sorts ahead of it — 00 comes before 10 — and we say we want type dummy, because that's the name of the quote-unquote binary on disk.

Now that that's all set up, I spin up a pod, because that's when our CNI plugin executes — on pod creation and pod deletion, CNI is exercised. So we create the pod and it comes up down there at the bottom, and since this plugin executes on the host and writes a file, I can go ahead and just cat this log file, which has all the information. You could take this from the demo, execute it, and see all of those environment variables — you could use that to build a richer plugin that actually did something. Something you'll see down at the bottom, on the third row in the IP column, is the dummy IP address that we output: we output some JSON that said, hey, this resulted in the IP address 192.0.2.22. But that's actually just a big fat lie that we told the kubelet — it was totally fake. What you'll see if you actually exec into the pod is the reality of what happened: the Kubernetes API thinks that's the IP address, but when we do a kubectl exec and issue the ip a command to list the interfaces, we actually don't have another interface there. We just have the default loopback and no actual IP address.

So let's see how this looks with Flannel instead — Flannel is actually going to create an eth0 and actually give you connectivity. I delete this pod, and once I delete it, I remove my dummy configuration. Tomo, you can fast-forward once — yep, thank you, one more, thanks. All right, so I remove that configuration, then I spin up the pod again, and on the next execution Flannel gets executed, and now we can see down there at the bottom that the sample pod has the IP address 10.244.1.5, which is actually true. We'll check it this time by exec'ing into the pod and running ip a, and we actually do have an eth0 now, and that IP address matches what's in the Kubernetes API. So that's kind of the magic, quote unquote, of how that actual IP address ends up there when you do a kubectl exec or get pod. I highly encourage you to give it a shot on your own and run through the steps — it's kind of a template that you could use to flesh out an actual application. Tomo, thank you very much.

All right, so a few tips and tricks for debugging CNI plugins.
So if you have something going on in your cluster that is related to CNI, like I did during the tutorial, when you do kubectl describe pod there's a high likelihood that you're going to get Kubernetes events so you can see what's going on. Once you have some information there, you can dig into where the CNI assets actually live on your host, which will be your CNI binary directory and your CNI config directory — that's where you'll find all the goodies.

One thing that I have certainly seen happen before is that you make some change related to CNI — say you install another CNI plugin, it writes a configuration, and that configuration does not actually wind up alphabetically first, ahead of your other configuration. So you're left wondering, why isn't this behaving the way I thought? Check that CNI config directory; that may be what's going on. It's definitely the first place to look.

Another thing to remember is that the type field is a required field in the CNI configuration, and it's not some magic value — it is literally the name of the binary in your binary directory. If you don't have it exactly right and that binary doesn't exist, it is going to fail. So that's an example of fat-fingering it, or having a hard-to-read typo in there that isn't exact. And one of the most common things, just like in the demo, is that you have a dependency that wasn't apparent from your configuration: the CNI plugin wanted to delegate to some other plugin and it wasn't there, so you actually have to install something else.

Last but not least, it's not always apparent that the readiness state of your nodes is a CNI thing at all. If you have nodes in a NotReady state, definitely check what's going on in your CNI config directory, because that config file is just a marker, and it only conveys so much information to the cluster. As Tomo mentioned in one slide, CNI kind of predates Kubernetes — CNI was intended to work with a bunch of different container orchestration engines, and we now live in a world where Kubernetes is really the king of container orchestration engines. I had a junior guy on my team, and I said, hey, CNI is container-orchestration-engine agnostic, and he said, what's a container orchestration engine? — and this is somebody who worked on Kubernetes every day. I realized his entire career had just been Kubernetes. So there's a little bit of a disconnect in some of these places, and this was one of those things: how do we figure out if the network is actually ready? The decision was made to use that config file as a semaphore. So if you're writing a CNI plugin yourself, you might want to think carefully about when you write that CNI configuration onto the host to mark that the network is ready. Tomo and I work on OpenShift, and in OpenShift land we like to have a really opinionated way about how your cluster is lifecycle-managed, so there's a really delicate dance about how and when the CNI configuration gets written to disk — there are a bunch of checks in the background asking, is your network actually ready, before we write that config. So what happened with Flannel here wouldn't happen in that situation, because we're checking for things like that before we mark the nodes as ready.
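Pulled together, the checks just described look roughly like this on a kind cluster (node and pod names are placeholders; the two directories are the common defaults):

```bash
# Rough debugging checklist for CNI problems on a kind cluster
# (node/pod names are placeholders; /etc/cni/net.d and /opt/cni/bin are the usual defaults).
kubectl get nodes                                  # NotReady nodes often point at CNI
kubectl describe pod <stuck-pod> -n kube-system    # look for CNI errors in the Events section

docker exec -it <kind-node-name> ls -l /etc/cni/net.d   # which config sorts first? does "type" look right?
docker exec -it <kind-node-name> ls -l /opt/cni/bin     # is the binary named in "type" (and any delegate) present?
docker exec -it <kind-node-name> journalctl -u kubelet --no-pager | tail   # plugin errors often surface here
```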
Last but not least, there is, of course, plenty of community surrounding CNI, and I think some of the strengths of CNI really are the community and ecosystem behind it. The first thing you should know about is the cni.dev site, which is a kind of pretty-printed version of all the documentation for CNI. It's all rendered HTML that looks nice — an easy place to find all of the parameters for the community-developed CNI plugins. In Tomo's examples there were lots and lots of parameters; really only three or four of those parameters are required for every plugin, and then each plugin has its own set of parameters that you'd use. So you can check out cni.dev, pick and choose the parameters you like as if from a fast-food menu, mix them up, and make your own CNI meal.

That site is generated from the documentation. If you want to get into the guts of it, you can go right to github.com/containernetworking, which is the CNI community — that's the namespace where everything happens — and there are really two repositories to be aware of. One is the cni repository. It includes a number of things: it has libraries, it has debug tools, it probably even has a couple of small plugins — there's actually a plugin called dummy that will just create a dummy interface, stuff like that. But the most important things there, I think, are the two markdown files, SPEC.md and CONVENTIONS.md. When you bring up the spec for the first time, it's sort of like drinking from a fire hose — wow, there's a lot of stuff here — but I'm hopeful that after coming to this talk you have enough of the jargon and terminology to be able to go through it, because once you understand that the basics are really basic — it's environment variables, it's standard in, you output JSON, you exit with an exit code — you have a lot of room for creativity between those two ends. It really does seem like there's a lot of stuff there, and since it has that history of predating Kubernetes, it's a different kind of API, and I think that sometimes people who are really accustomed to Kubernetes take a look at it and think, wow, that's more than I bargained for. But it's not that hard to get into, and once you start picking it apart and making your own customizations, the specification itself shouldn't be hard to read and is a really good reference.

The other repository is plugins: a bunch of community-maintained plugins that do a lot of utility-type functions. Say you want to connect to a bridge to run your networking through a bridge — that's one way. Tomo used ipvlan, which is a sort of network virtualization, as an example; macvlan is similarly handy. So check those out. There are also a few different ways to do IP address management there, like DHCP or static IP addresses.

And lastly, there is also a CNI community update: Tomo and Casey Callendrello will be presenting it tomorrow at 2 p.m. Go check it out, and you'll see some of the latest and greatest from the CNI community. Thank you a bunch — I appreciate you taking the time to check out this tutorial. Any questions are welcome.

You were in a few places quite explicitly referring to Linux, but I understand that the same applies to Kubernetes on Windows — the same CNI is there.
So I'm basically looking for this type of tutorial, but relating to Windows. There are some corner cases which are different, like Linux paths referenced on Windows and finding the right places, and that creates problems when you have to deal with it by yourself. If there is any documentation, that would be interesting.

That's a great question. I believe all of the basics should still apply. And, one, I'm not super familiar with running Kubernetes on Windows. However, in the community-maintained CNI plugins, for example, there is a build script for Windows, and there are Windows maintainers who are definitely considering what's happening with CNI. I would guess it's probably similar in that you're going to have to — Tomo, let's say you inherited a Kubernetes system and you were trying to figure out where your binary directory and your config directory are; would you look at your kubelet config to figure that out?

Yeah, I think so. First, the difference between Windows and Linux is that the network stack is different. The containernetworking GitHub organization has the plugins repository, and inside it, as I mentioned, macvlan, ipvlan, and so on exist. Several of those are built for Windows as well, but some CNI plugins are Linux-only — as far as I remember, macvlan and ipvlan seem to be Linux-only. So the first thing, from the CNI perspective, is that the supported plugins are different: Windows is more limited, and Linux has the full feature set. In addition, how these plugins are invoked depends a bit on the kubelet side on Windows. As far as I know, the kubelet sends the gRPC call and the Windows container runtime takes care of invoking CNI. And of course journalctl doesn't exist on Windows, so presumably a similar mechanism exists there and you can capture that output in the Windows manner.

One thing I would say to try, if you're brave enough, is to look at the contents of the tutorial, go find where your CNI binary directory and your CNI configuration directory are in your lab Windows cluster, and try running through the same steps — because I kind of think that if you know where those two directories are, it should mostly apply. You might have to convert the script into PowerShell or something like that, but check your kubelet configuration on your Windows cluster to find those paths, and I think it should be pretty similar. Thanks for asking. Yes, please.

Just a quick question. You mentioned CNI runs only on creation and deletion, and I see you have a plugin for DHCP. How do you handle things like lease renewal — things that happen during the lifetime of the container?

OK, thank you for the question. Yes, as you mentioned, DHCP has a different protocol from CNI: there are periodic lease requests, kind of heartbeat-ish messages. In the DHCP case, when you set up the DHCP CNI plugin, a daemon runs on each worker node as the DHCP CNI server, and it takes care of that protocol.
And the DHCP CNI plugin then interworks with that daemon set, sending the requests for a given pod, the leases, and so on. This means each lease or DHCP-side operation is handled by the DHCP CNI plugin's server daemon. That's why DHCP works well. Does that answer your question? Yeah, perfectly, thank you. OK, cool. Questions are welcome — about how to create CNI plugins, or about writing CNI plugins in languages other than Go, anything is welcome. Going once, going twice. OK, let's close this talk. Thank you for your time. I appreciate it.