Okay, thank you for joining. Let's start the CNI recap and update. I'm Tomofumi Hayashi from Red Hat, and I'm Casey Callendrello from Isovalent. Okay, let's go to today's agenda. First we'll quickly introduce what CNI is, and then cover the updates since 2020 and 2021 — that means a recap of CNI spec version 1.0, and then the upcoming CNI 1.1 update. Okay, let's get into it. Yep, as I said, I'm Tomofumi, and I'm Casey Callendrello. I'm a CNI maintainer as well, and I've been working on the project since about 2015. Yep. Okay. So looking back at KubeCon North America and EU, the last CNI update seems to have been the "Introduction to CNI" talk at KubeCon EU in August 2020, which means roughly four years have passed. And what's happened since then? First, CNI spec version 1.0 was released in August 2021, and after that eight plugin releases went out — I mean eight releases of the CNI plugins in the CNCF CNI project, from version 1.0 to version 1.4.1, the last one just released last week. And now we are at March 22nd, 2024. The next step is that some day soon, CNI spec 1.1.0 will be released. Let's see. Before that, let's go through an overview of CNI. Here's a diagram of the whole thing. CNI is applicable to Kubernetes, but CNI itself is independent from Kubernetes — it's more abstract. On the left side we have the container; in the Kubernetes context this would be the pod. Then the runtime — Multus, CRI-O, or containerd — uses libcni to invoke the CNI plugin. Based on the user-provided CNI config JSON file, the CNI plugin creates an interface for you. From a logical-components perspective, CNI provides an attachment to a network. That means the CNI config describes how to attach to the network, not how to create the network.
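To make the libcni invocation described above concrete, here is a minimal Python sketch of the exec-based CNI protocol: parameters go into `CNI_*` environment variables, the network config goes in as JSON on stdin, and the result comes back as JSON on stdout. The function and the paths used are illustrative; real runtimes also pass things like `CNI_ARGS` and do much richer error handling.

```python
import json
import os
import subprocess

def invoke_cni_plugin(plugin, command, container_id, netns, ifname, net_config):
    """Exec a CNI plugin roughly the way libcni does: parameters go in
    environment variables, the network configuration goes in as JSON on
    stdin, and the result (or an error) comes back as JSON on stdout."""
    env = dict(
        os.environ,
        CNI_COMMAND=command,            # ADD, DEL, CHECK, ... per the spec
        CNI_CONTAINERID=container_id,
        CNI_NETNS=netns,                # e.g. /var/run/netns/<name>
        CNI_IFNAME=ifname,              # interface name inside the container
        CNI_PATH=os.path.dirname(plugin) or ".",
    )
    proc = subprocess.run(
        [plugin],
        input=json.dumps(net_config).encode(),
        env=env,
        capture_output=True,
    )
    if proc.returncode != 0:
        # On failure, plugins report a JSON error object on stdout.
        raise RuntimeError(proc.stdout.decode() or proc.stderr.decode())
    return json.loads(proc.stdout) if proc.stdout.strip() else {}
```

The environment variable names (`CNI_COMMAND`, `CNI_CONTAINERID`, `CNI_NETNS`, `CNI_IFNAME`, `CNI_PATH`) are the ones defined by the spec's execution protocol.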
Also, from the specification point of view, the CNI specification allows multiple attachments to a container. And of course, multiple attachments may connect to several different networks, such as network A and network B. The CNI protocol is written in JSON. We have two types of JSON: one is the CNI request — usually we send the CNI config — and the other is the CNI response, which the spec describes as the CNI result. Each config and each result has a cniVersion field to specify which CNI spec version is used. Each CNI version has its own set of fields and formats, so if you want to know more, please take a look at the spec document in the GitHub repo. CNI mainly has verbs — the commands we send. In the spec we have three commands: ADD, DEL, and CHECK. ADD attaches the container to the network; DEL detaches the container from the network. So when you're launching a pod, ADD is used, and when you're removing a pod, DEL is used. CHECK verifies the attachment: it checks interface attributes such as the IP address to verify whether the interface is in the expected state or not. As I said, in August 2021, roughly three years ago, CNI spec version 1.0 was released. This is, of course, the first major version. Compared to the previous version, 0.4.0, the changes are slight, but one is fairly big: the .conf format is deprecated. Previously, at spec version 0.4.0, we supported two types of configuration: one is .conf and the other is .conflist. .conf is a single plugin configuration, as shown on the left side, but spec version 1.0 only supports .conflist, which has a plugins array — what we call the chaining format.
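An illustrative .conflist in the chaining format might look like the following — the network name, bridge name, and rate values here are just examples, and the bridge, host-local, and bandwidth plugins are the reference plugins from the CNI project:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.10.0.0/16" }
    },
    {
      "type": "bandwidth",
      "ingressRate": 1000000,
      "ingressBurst": 1000000
    }
  ]
}
```

Each entry in the plugins array is run in order against the same attachment — that's the chaining.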
So if you're introducing CNI spec version 1.0 to your environment, please check that the configuration file ends with .conflist instead of .conf, and that the plugins field is used to describe each plugin's configuration. Let's go to version 1.1. Go ahead, Casey. Sure. So fast forward three long years, and the CNI project is moving forward. We're about to release version 1.1 of the specification. 1.1 is an incremental improvement. It brings a lot of changes that people have requested, some small, some large. The data types defined in CNI are improved, so the data types are richer. The set of verbs grows by two. And we have version negotiation — so we've reached TLS 1.1 in terms of feature functionality. Let's look at the two new verbs, because I think these will be most useful to the broader ecosystem. I don't know how many of you are familiar with this particular little quirk of Kubernetes, but you've probably all seen "node status not ready, network not ready." Where does the kubelet get that information from? It gets it from the runtime. And where does the runtime get that information from? One bit of information: does there exist a file in a directory — which is kind of awkward. We have APIs now; we can move on from this. So CNI 1.1 introduces the STATUS verb. It has a very specific meaning: is this plugin ready to accept ADD requests? So it replaces this write-a-file-for-readiness dance. It also gets rid of the case where, if you want to transition back to not-ready, you delete the file — and then if you delete the file, you can't delete containers, and if you can't delete containers, you have an unmanageable node. So finally, we have solved an actual problem. This is super, super exciting — that little bit of chop-wood-carry-water that makes Kubernetes incrementally better. The other major verb being added in CNI 1.1 is garbage collection.
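A minimal sketch of the STATUS probe a runtime could run instead of the file check — assuming the exec protocol, where a zero exit code means the plugin is ready for ADD and a non-zero code means it is not (the helper name here is made up):

```python
import json
import os
import subprocess

def plugin_ready(plugin, net_config):
    """Run the plugin with CNI_COMMAND=STATUS and the network config on
    stdin. Exit code 0 means the plugin is ready to accept ADD requests;
    non-zero means the runtime should report the network as not ready —
    no readiness file involved, and DEL keeps working either way."""
    env = dict(os.environ, CNI_COMMAND="STATUS",
               CNI_PATH=os.path.dirname(plugin) or ".")
    proc = subprocess.run([plugin], input=json.dumps(net_config).encode(),
                          env=env, capture_output=True)
    return proc.returncode == 0
```
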
So the runtime — in this case containerd, Multus, or CRI-O — gives the CNI plugin or plugins a list of valid attachments. The plugins can then use this as an opportunity to delete extraneous resources via some sort of mark-and-sweep garbage collection, right? And the extraneous resources that often get left behind are iptables rules or, most critically, IPAM entries. Stale IPAM entries are no fun. Running out of IP addresses is a disastrous event for a node. So, CNI 1.1: new exciting verbs, solving real problems, three years at a time. The next steps for CNI 1.1: we have an RC1 that's been cut. We're working on the rest — CNI is a protocol, and there are two sides to a protocol: you need a client and a provider. For the community plugins provided by the CNI project, like bridge and macvlan, the PR has been filed; we should have 1.1 support for those merged shortly. Likewise, the new verbs also need to be implemented by the runtimes. So those four boxes back on slide three — CRI-O, containerd, Multus, and Mesos — also need to be updated and take a release in order to expose these verbs, right? A more interesting question — okay, that's the path forward, right? — an interesting question, though, is: when can I use the new result fields? We now report MTU. We report more information on routes. We even report a socket path for socket-based devices, if your particular plugin implements that. So when can I use these new result fields? That gets a little more complicated. And it's good that we have all the right people in this room to talk about this, right? CNI is one part of an ecosystem. I use the word ecosystem a lot in this talk, which is a bit of a trope, but here we are anyway. If you look on the screen, there are eight boxes, right? And what do most people create? They create pods, or they even create abstractions on top of a pod.
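The mark-and-sweep idea can be sketched in a few lines. Assuming the shape in the 1.1 draft — the runtime passes the still-valid attachments to the plugin in the GC config under a `cni.dev/valid-attachments` key, each entry identified by container ID and interface name — a plugin's sweep over its IPAM store might look like this (the function and data layout are illustrative, not the actual plugin code):

```python
def gc_sweep(gc_config, allocated):
    """Mark-and-sweep sketch for the GC verb: anything the plugin has
    allocated that is not in the runtime's valid-attachments list is
    stale and can be released. `allocated` maps
    (containerID, ifname) -> resource (e.g. an IPAM lease)."""
    valid = {(a["containerID"], a["ifname"])
             for a in gc_config.get("cni.dev/valid-attachments", [])}
    stale = {key: res for key, res in allocated.items() if key not in valid}
    for key in stale:
        del allocated[key]   # release the stale IPAM entry / firewall rule
    return stale             # report what was cleaned up
```

This is exactly the "stale IPAM entries" case: a pod whose DEL never ran stops leaking an address the moment the next GC pass runs.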
Like — yes, okay, I see some of you saying, "I only see five boxes." Yes, right. These are also boxes. The boxes at the bottom are what? They're types. They're APIs. They're the contract that people speak, right? So we've now added a new field here, which still leaves us one, two, three, four, five, six things that need to take a change. And so this is a call-out to the community, right? It's unfortunate that the information people want — which is to say, richer result types — is buried behind three APIs and five components. That's the world we have, right? It's cool that we have pluggable components that are composable, and that has a really, really serious advantage: it's a rich ecosystem, lots of networking providers. That's how my salary gets paid. But it also comes with a cost, right? Anytime you add an integration point, you've added a standard. And anytime you add a standard, you've added a point of friction. So an open question that I hope the hallway track will resolve — since we're not going to resolve it right now — is: how do we better align CNI with its end users way over here? And vice versa, right? How do we align the users of CNI, which is to say all these components off to the left, with CNI itself? There's no obvious answer. There's no obvious answer here to this question. Probably the most interesting thing that might come out of this talk is: how can we phrase this question better, right? Should the CRI — that's the one in the middle here — more closely match the CNI? How can I use these shiny new fields? One thing that is actively happening is the Kubernetes multi-network working group, which is proposing, among many other things, a richer pod status. Which is like: great, let's do it. Let's not be afraid to change the pod object. It only took us three years. So, great — we're going to ship 1.1 sometime in the next week or two.
What's next? Let's look forward — specifics first. So for v1.2, an almost certainly definite improvement — minor, or maybe not so minor — that's coming in is drop-in directories. If you're experienced, you know CNI supports chaining, which is to say that in the context of a single network, a single network interface, you can provide composable actions on it. A classic example is setting up bandwidth limitations. A more subtle example is setting up iptables rules for sidecar containers for service meshes. And right now, in order to do this, it's a bunch of jq duct tape, which is ugly. systemd has shown us the way: drop-in directories are an obvious improvement here. If you're going to have configuration files, you should also have configuration directories. So there's a PR; after CNI 1.1 is cut, we'll merge it, and 1.2 will have drop-in directories. And then we'll be at systemd-circa-2010 levels of usability. That's definite. A likely candidate for CNI 1.2 — the issue is filed, the PR will follow shortly — is metadata: arbitrary key-value pairs, as opposed to a strict result type. As it stands right now, the CNI result type is rigid; no fields are allowed outside of it. We try to have a low barrier for adding fields, but that is an expensive operation — as I showed a few slides back, there are a lot of components that have to uptake CNI in order to get one little string from here to there. So we try to have a low barrier for adding fields, but the barrier is not zero. And there are legitimate use cases and cool proposals for adding arbitrary key-value pairs to the result type, and I think we're going to have to do it. So this is a quite likely addition to version 1.2. As with all these things, if you have opinions about this, please come and yell at me afterwards, because I really do want to hear them. What's next? Okay, that's 1.2.
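The drop-in idea can be sketched systemd-style: load the base .conflist, then apply fragments from a sibling `.d/` directory in lexical order, each appending plugins to the chain. To be clear, this is just an illustration of the concept — the merge semantics in the actual CNI PR may differ:

```python
import json
import os

def load_with_dropins(conflist_path):
    """systemd-style drop-in sketch: load <name>.conflist, then apply any
    JSON fragments found in <name>.conflist.d/ in sorted (lexical) order,
    letting each fragment append plugins to the chain. This replaces the
    'jq duct tape' of rewriting the conflist in place."""
    with open(conflist_path) as f:
        config = json.load(f)
    dropin_dir = conflist_path + ".d"
    if os.path.isdir(dropin_dir):
        for name in sorted(os.listdir(dropin_dir)):
            with open(os.path.join(dropin_dir, name)) as f:
                fragment = json.load(f)
            # Append the fragment's plugins to the end of the chain.
            config.setdefault("plugins", []).extend(fragment.get("plugins", []))
    return config
```

A service mesh could then ship a `50-redirect.json` drop-in instead of mutating the node's main conflist.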
Probably not much more is going to go into 1.2 than that. What can we do beyond it? One idea is gRPC. This is simultaneously extremely interesting and also not that interesting. It's very interesting because it is now 2024 and we should probably not just exec things as a protocol — and if the API is the exact same shape, then how you execute the plugin is an implementation detail. For a few people this would be very interesting; for the average Kubernetes user, hopefully it's not that interesting, maybe just a bit easier to use. Another thing we would seriously like community feedback on is: do we support dynamic reconfiguration? As it stands right now, the specification is extremely clear: you may only do one ADD for a particular attachment. An ADD is not expected to be idempotent, and so that means the runtime is responsible for generating all edits, as it were. But the spec just says this — it's just words. We could change it. Is this something you're interested in? It has advantages and disadvantages. If so, please reach out, come to the community. We meet weekly and we're engaged on issues. If we don't do dynamic reconfiguration, that opens up an opportunity for another really powerful verb: finalize. CNI is quite explicit that there can be multiple attachments to multiple networks in a single container — or pod, or pod sandbox, pick your noun. There's a pretty strong use case for some sort of meta-cleanup or meta-finalization verb that is not allowed to create interfaces but is allowed to modify the state of the networking inside the container or on the host — some sort of finalize. An example of finalization would be, say, an absolute route-resolution cleanup phase where you make sure the routing table is exactly what you would like it to be.
Or, if your chained plugin is manipulating things not in the context of an interface — say, for example, you're attaching a service mesh — then you would find this interesting, because you want to finalize and add your iptables rules at the absolute end; you don't care about the interfaces that were created, and you're not really manipulating an in-progress interface creation. So finalize would be a huge win, and we probably should do it. But if we say at some point in time that the network is finished being configured, that closes out the possibility of doing dynamic reconfiguration. Given that Kubernetes pods are mostly immutable — except for all the mutability that they have — pods are mostly immutable, so finalize is probably the more natural fit. But it is a debate; it is an open question for the community to decide. And then the last idea we've been talking about, which is extremely loose, is some notion of config auto-generation — or another way of shaping this, some sort of registration. For dynamic plugin registration, there is precedent in Kubernetes: both devices and storage register themselves with the kubelet. CNI could work this way as well. The problem as it stands right now is that network configuration is explicitly not a part of Kubernetes. So what happens if you have multiple possible plugins? Then you have some sort of registration race, and the corner cases quickly fall into utter disarray. This is probably not going to happen unless we as a community decide that network attachment and network ownership is something Kubernetes should own. That's a huge community question to resolve. It is a possibility — and that's why we're here, to talk about possibilities. So, Kubernetes does not define network configuration. The hallway track at this conference has been extremely dynamic, and I hope we resolve this, because this is an exciting, dynamic time. So that is actually it for the presentation.
Some closing thoughts. CNI is a small part of a big world. It's a small part of a big conference. It's pretty cool that it has no single vendor driving it. You cannot buy CNI Enterprise Edition — or you can buy, like, me, I guess. CNI is also extraordinarily simple. It's really easy to improve, and I'm honestly embarrassed to come up here and say that it has taken us three years to ship two new verbs. That's it. There is no enterprise CNI; there is no CNI Enterprise Edition. So things move at the pace of open source. So reach out, join the community. Let's make things more fun, let's make things more exciting, let's solve real problems. Reach out to us: we're on GitHub, and there's a CNCF Slack where we have two channels — not sure why. We have a weekly meeting which is open to everybody; I'll make sure the details are posted on our homepage. CNI is all of us. There you go. Thank you very much.