My name is Dan Williams and I'm a member of Rashid's team, which just presented before; that's the networking services team. This presentation is about multi-networking Kubernetes containers with CNI.

I'd like to give a special shout-out to Doug Smith, from whom a lot of these slides are adapted. Part of this was also a presentation we gave at KubeCon Seattle back in December, so thanks a lot to Doug for helping out not just with the main event of this presentation, which is Multus, but with a lot of the slides too. Beyond Doug, I'd also like to thank Tomofumi Hayashi from Red Hat on the NFV partner engineering team, and Kural from Intel, who is a member of the upstream Multus community and has done a lot of work on the Multus project and in the Network Plumbing Working Group as well. And then of course there's the upstream Multus community itself. It's an open-source project on GitHub with a lot of collaborators, and if you're interested, join the community and help out.

With that said, here's what we're going to cover today. We'll go through exactly what all the acronyms I'm using mean. We'll talk a little bit about the Network Plumbing Working Group, which is a group we started upstream in Kubernetes, and why it's relevant here. Then we'll look at how we take what the Network Plumbing Working Group developed and bring it into practice in Kubernetes. Finally we'll talk a little about what's next for the Plumbing Working Group, CNI, and Kubernetes networking itself.

So, the acronyms and other terms we'll use today. Hopefully everybody knows what Kubernetes is: a container orchestration system that's fairly popular today. When you have a container orchestration system like Kubernetes, you probably want the containers to talk to each other and to the outside world, and that's where something like CNI, the Container Network Interface, comes into play. CNI is basically a specification and a set of reference plugins that allow network plugins to set up and tear down container networking, and it's an integral part of today's presentation because the CNI interface is how all of these pieces fit together. A pod is Kubernetes terminology for a set of related containers that share a network configuration, network namespace, and setup. A CRD, or Custom Resource Definition, is another Kubernetes term: it's basically a way to describe an object in the Kubernetes API that anybody can create. It's not an official part of the Kubernetes API, but anybody can create one, and it's one of the ways Kubernetes enables extensibility of the system. CRDs only showed up fairly recently, in the last year or two, but they've already created a huge explosion in how third parties and other components of the ecosystem that aren't officially part of the Kubernetes project interact with Kubernetes itself and the rest of the ecosystem. And finally, CRI: the Container Runtime Interface is an abstraction layer that Kubernetes developed between Kubernetes itself and whatever actually runs the containers and sets up the network namespace for them. One of those runtimes you might have heard of is Docker.
There is a CRI implementation for Docker, but up and coming is also the CRI-O project, and there is a CRI implementation for CRI-O as well. In fact, CRI-O stands for Container Runtime Interface; I forget what the O stands for, but there are a number of talks about CRI-O this weekend as well. So if you're interested in more of the details and the guts of container networking, especially in Kubernetes land, check out those talks.

So what's the general problem? Why are we even talking about this today? The first problem is that Kubernetes really only gives a container one network interface, one network it can be connected to. That's perfectly fine for a whole ton of use cases: if you're just running nginx as a web server, this is great, it works fine. However, that's not the only use case people have. Over the past couple of years we've seen a lot of use cases, from Red Hat customers but also upstream, for more flexible networking for containers, things that don't really fit well into the Kubernetes model. Very high bandwidth, like media streaming applications that need to push tens of gigabits per second out of a container. Specific latency requirements: a lot of the default networking plugins for Kubernetes don't really have strict guarantees, or the ability to provide guarantees, for those things. And segregated and legacy networks: maybe you have old legacy things, like databases, that don't fit a containerized model. Those might sit over on some network with a particular IP address, segregated due to privacy or legal concerns. You want your container to be able to talk to that resource, but you can't hook that network up to the rest of your container cluster, so maybe you have a physically segregated network that you have to connect containers to in order to reach it. Those are a few of the things that don't fit quite as well into the Kubernetes microservices-style networking model as simple web servers, databases, and web apps do.

So, what's the Network Plumbing Working Group? That's a group we formed to tackle some of these problems. We worked with a number of other upstream partners and groups, Intel among them. Red Hat helped form this group a little over a year and a half ago. Its focus is on enabling some of these use cases that might require multiple network attachments per pod, but at least initially in a way that does not officially modify the Kubernetes API. There had been a lot of discussion in the Kubernetes Network Special Interest Group around how to enable this, and some POCs and things like that, but there was some resistance upstream, for some good reasons. So the Network Plumbing Working Group was formed as a forum to talk about these things, prototype them, and figure out what we need to do to solve these problems before proposing something upstream that might get rejected or need a lot of work. And it turned out there was a lot more work than we thought.
So the Plumbing Working Group focused on creating a specification, based on CNI, that anybody could implement to provide multiple networks per pod. We did some POCs, we refined them, and we learned a lot from them. We developed a specification over about a year, refined it, did a first release mid to late last year, and we've also been working on a reference implementation of it, which is Multus; we'll talk about that in depth quite soon. If you're interested in helping out with or joining this group, the link to the community is right there, and it includes things like the meeting times, the purpose, and some of the things being worked on. There's also a link in the slide deck to meeting recordings on YouTube. All of our meetings are public, as are all the meetings of Kubernetes SIG Network, so it's a very inclusive community. Feel free to join; we like everybody's ideas and we'd love to have you. I'm going to skip that slide for time reasons.

So, the spec, version 1. Again, this is a short-term solution. There are other groups exploring much longer-term solutions, like Network Service Mesh, if any of you are familiar with that. But the Plumbing Working Group focused on what we can do to enable some of those use cases sooner rather than later, without changing the Kubernetes API, because that's very hard to do for various reasons that I won't get into unless you really want to know. It's basically a lightweight standard that anybody can fairly easily implement. It does use CNI, but while developing the specification we found there were people who didn't necessarily want to use CNI plugins, so we worked with them and tried to make sure the spec would also work for plugins that don't use CNI. We also want to coordinate with the Resource Management Working Group, which focuses on things like scarce hardware. If you have SR-IOV NICs on your nodes, you only have a certain number of them, and they only have certain capabilities, so you need to make sure you don't oversubscribe a node with pods that require that capability; otherwise things will fail, and you want to stop that before it actually happens. So we're working with them to figure out how best to do resource management on the nodes and prevent these problems before they occur.

Now we'll quickly go over the specification. It has a couple of parts. The first one is an annotation. In Kubernetes you usually define everything through YAML files, and you add an annotation to the pod object when you create it that says: I want to attach this pod to network A, or in this case, network foobar. When the pod is created, the node will attach that pod to the cluster-wide default network, which is the normal Kubernetes network, but then also to foobar. And then the implementation, for example Multus, takes all the information about MAC addresses, IP addresses, and other characteristics and publishes it back to the Kubernetes API. Currently the only thing the Kube API reports about a pod's networking is its IP address, and it turns out that isn't sufficient for a lot of these cases. So you can see here that a number of pieces of information get published.
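To make that concrete, here is a rough sketch of what that first part looks like on a pod. The selection annotation key is the one the v1 spec defines, and the network name foobar matches the example above; the status annotation name and its exact fields have shifted a bit between spec revisions and Multus versions, and the values shown are made up for illustration, so treat this as a sketch rather than something to copy verbatim.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    # What the user writes: attach this pod to "foobar" in addition to the
    # cluster-wide default network.
    k8s.v1.cni.cncf.io/networks: foobar
    # What an implementation such as Multus publishes back after setup
    # (illustrative values; it is a JSON list with one entry per attachment).
    k8s.v1.cni.cncf.io/network-status: |
      [
        { "name": "", "interface": "eth0",
          "ips": ["10.244.1.4"], "default": true },
        { "name": "foobar", "interface": "net1",
          "ips": ["192.168.12.5"], "mac": "4a:f2:92:7d:3c:1a" }
      ]
spec:
  containers:
  - name: web
    image: nginx
```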
Another thing you can see is that pods can have multiple IP addresses. That's something Kubernetes upstream is itself only just starting to deal with, and only because of IPv4/IPv6 dual stack. So we tried to incorporate those kinds of things into the specification already, so that it would be compatible with future versions of Kubernetes. That's the second part.

The third part is that the specification defines a custom resource definition, which we talked about earlier. The custom resource just says: this is the network that should be set up for the pods, and here are the properties this network should use when a pod is connected to it. And you do that through the Kube API. There are some additional components. For example, you might not want every single pod in your cluster to be able to attach to a given network, so we need access control for these networks. There are also admission controllers for validation. An admission controller is simply a component in Kubernetes that allows validation and access control before things get added to Kubernetes itself: your request gets routed through the admission controller, and if the admission controller says yes, it's allowed into the Kubernetes object store and ecosystem. There's also some upcoming work to help other implementations.

So, let's talk about Multus. We call Multus a CNI meta-plugin, because essentially it's a shim between Kubernetes and a number of other network plugins. It multiplexes things, which is roughly where the name Multus comes from. It allows you to attach more than one network to any given pod in Kubernetes, and it understands the Network Plumbing Working Group specification, which is what lets you do all of these things.

Again, the problem, just to recap: each pod only has one network interface in normal Kubernetes. That's not particularly dynamic, you only get one thing, and we need a bit more flexibility. How does Multus help provide that flexibility? Well, you define the CRDs that describe all of the networks for your cluster. Multus reads those; when your pod is born on a particular node, it figures out which networks it needs to attach that pod to, looks up those network definitions, and actually makes that happen, which we'll go into in a bit more detail. In this example you can see how Multus attaches two different networks to the same pod. You get, say, a macvlan interface, and the second network can be any CNI plugin, it doesn't really matter. So it's fairly open, and it's fairly easy to specify what kinds of networks you want beyond the default one.

Key concepts. Let's back up a second: Kubernetes requires a cluster-wide default network. It has certain guarantees, certain things it expects out of a network plugin, and the specification calls that the cluster-wide default network. That provides the backwards compatibility between what Multus does with multiple networks and what Kubernetes expects. Multus always attaches the pod to the cluster-wide default network, and all of the additional ones are secondary networks. What that means is that they're always additional: you always have the default one, plus zero or more of these secondary networks.
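Each of those secondary networks is defined by one of the custom resources mentioned above. As a rough sketch, a macvlan definition might look like the following; the name macvlan-conf, the master interface, and the subnet are my own example values and would need adjusting for a real cluster.

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  # The body is just a CNI configuration; Multus hands it to the named
  # CNI plugin (here macvlan with host-local IPAM) when a pod selects it.
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
```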
The secondary networks don't have quite the same guarantees as the default cluster-wide network. For example, you don't get Kubernetes services on them, and you don't get any kind of network policy on those networks. We're going to work on adding that in the future and explore how to do it, but at the moment these secondary networks are very targeted and very focused.

Custom resource definitions: basically, you say "this is a description of my object", in our case these secondary networks, and you tell Kubernetes what this particular object looks like, how to define it, how to validate it. You add that to the Kubernetes API, and then anybody can later create objects of that type, using that description. For the Network Plumbing Working Group specification, you can see in the example what the pod annotation looks like to select multiple networks. This is an annotation defined by the working group, and it has a name for each network. So you can say: I want to attach this pod to the control-plane network, and I want to attach it to the data-plane network. These names in the pod specification map down to the actual object you've defined for that network, and that object has a couple of properties as well; it's basically the CNI configuration for that network, describing how you're actually going to attach pods to it.

Here's a slightly more detailed look at how this works. This is the object we've been talking about, the network attachment definition, and this is what you create the CRD for. The CRD tells Kubernetes how to interpret this particular object once you've added it to the Kube API. You add this once, and then every single node in the cluster can see the CNI configuration and can create pods that attach to this network.

So how do you start a pod with one of these additional interfaces? Pretty easy: you use an annotation, and you say "this is the network I want to attach to." That name maps back to what you were just looking at, the object that describes that macvlan network, which would be here. There are a couple of formats for this annotation. You can use the short format, which is a lot more user-friendly and just gives the name. But there's also another format that lets you describe things like: what MAC address do I want this interface to have? What IP address do I want it to have? What should the network interface name inside the pod be, so that it's not completely random and your application inside the pod can expect a certain interface name?

Then, of course, after you attach this pod to a number of different networks, how do you even get those results back? The specification defines, and Multus implements, a way to publish this information back to the Kubernetes API so that you can inspect it from whatever other applications you want. In the information here you'll see the secondary network interface, that's the macvlan one, and I guess we'll show the status in a second.
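Here is a rough sketch of that longer annotation format I just described. The keys are the ones the spec defines for a network selection element; the values and the network name macvlan-conf are made-up examples, the CIDR suffix on the address is one of the later spec refinements I'll mention in a minute, and pinning IPs or MACs like this only works if the delegated plugin and its IPAM actually support it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod-long-form
  annotations:
    # Long form: a JSON list where each entry can request a specific MAC,
    # specific IP address(es), and the interface name inside the pod.
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "macvlan-conf",
          "ips": ["192.168.1.205/24"],
          "mac": "c2:b0:57:49:47:f1",
          "interface": "net1"
        }
      ]
spec:
  containers:
  - name: web
    image: nginx
```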
But now we have demo time, so we'll do a really quick demo of how this works. You can see here I have a small Kubernetes, or rather OpenShift, cluster with two nodes in it at the moment: there's a master and there's a second node right there. And this is just showing that Multus is running in that cluster and managing the network configuration. Yep. Is that better? Okay. Good point. What we're going to do first is create a pod; we'll just use nginx, it's really simple. So we create the nginx pod and run it. It's creating itself there, give it a second. Sorry about the wait; I had actually pulled this image before, but apparently that's not the case anymore. Anyway, we'll come back to that and hopefully it will be where it needs to be.

So while we wait for that, what is next for the Network Plumbing Working Group? We have some minor specification updates. Obviously not everything is perfect the first time around, so we found some changes we need to make, some errors in the specification, a couple of small problems to address. For example: what if you want to specify multiple static IPs? We had to add multiple static IPs, but with a network prefix, so if you want a /24 or a /16 or whatever on your static IP; we found that wasn't in the specification. We added it, and we also found it wasn't possible with CNI due to some of CNI's conventions, so we had to update CNI as well. So there has been some cross-pollination between CNI and the Plumbing Working Group; we work pretty well together, and some of us in the Plumbing Working Group are maintainers of CNI, so it's very easy to make these changes back and forth. Some of the other minor spec updates are about the capabilities Kubernetes allows, and making sure those are expressed in the specification. Kubernetes allows things like port mapping and bandwidth or QoS-type settings and pushes those through to the network plugin, but we also needed to make sure the specification allowed passing those through to something like Multus and on to the delegated plugins.

Also, as I said before, these secondary networks are not really full citizens yet, and that can be a problem. For example, if you want to run a media streaming service that needs high performance, you might have a second network interface dedicated to media streaming, but you want to have a Kubernetes service on that particular network so that clients don't have to connect to a particular IP address; they can just use a domain name and let Kubernetes figure out which pods it goes to. That's not currently possible, because the Plumbing Working Group is attempting not to change the Kubernetes API yet. So what we need to do is look into POCs and do a little research to figure out what's possible there. One of the problems is: if you have a second network but you expose things like the pod's IP address and the service virtual IP to the Kube API, how does something that's reading the Kube API and trying to talk to the pod know that it has to use this completely separate physical network to reach it? We have to solve those kinds of problems. We also have to look at network policy on these secondary interfaces, because network policy is about whether pod A can talk to pod B, whether anything in this namespace can talk to the project or namespace over there, which, at least in OpenShift, is one of the ways we implement multi-tenancy.
That's not so easy, because if the pod is on two networks at once, how do you know what can talk to what? How do you know that these things over here are supposed to be able to access that network? What happens if you can't actually talk between the two networks at the physical level? So that's an area of research we're trying to work on.

There's also dynamic interface attachment. Right now, Kubernetes expects that when you start a pod it gets your cluster-wide default network, and through Multus and the specification you get these additional networks, but you can't add and remove them on demand, because Kubernetes really does not expect that. Well, it turns out that some people want this. There's a lot of interest upstream in being able to change the pod definition after it has started and have those networks automatically attach and detach. And because we have this shim, for example Multus, between Kubernetes and the pods themselves, the shim can sit there, watch the Kubernetes API, and decide: oh, I noticed that this network is now present in the pod specification, let's add it to the pod. Or, for example, remove it. One particular use case somebody is very interested in is dynamic routing: they're building an architecture where some of the routing logic actually lives in the pods, but to do that you might need to add and remove network interfaces from the pod to dynamically update that system. So we're going to work on that; it's actually not that hard, because it's the same exact operation, just at a different time. And because not many people have done this type of thing before, we're going to try to figure out whether there are implications for Kubernetes. There might be, we'll see; with a lot of this stuff you really don't know until you try it, and not a lot of people have tried this kind of thing before.

For CNI itself: there are two parts to CNI. The first is the specification that anybody can implement, and the second is a set of reference plugins. Next up, we're going to release a new CNI specification in the next couple of weeks, or maybe a month or so, and as part of that it adds things like CHECK support, which is network health checking. Previously, Kubernetes hasn't really had the ability to ask: is this pod's network actually healthy, does this network actually work? It has higher-level health checking, where it queries the service inside the pod; if it's a web server, it will actually query the web server and ask whether it's still healthy. But there's no way to ask whether the network the pod is attached to is actually working. So that's something we're adding to CNI, and eventually we'll also add to Kubernetes the ability to call that CNI functionality, so that when the network is unhealthy, Kubernetes will kill the pod and restart it somewhere else, or maybe on the same node, it doesn't matter.
Finally, caching results in the helper library. CNI has a helper library that Kubernetes or any other runtime can use, and currently what happens is that Kubernetes calls the ADD request for the pod, gets back the IP address, and throws everything else away. That doesn't work very well in some cases, for example when you want to check the network health again, so we added support for caching the result from the pod network setup so that it can be used later and there's more information Kubernetes can use, because right now it only stores the IP address and that's not really sufficient.

We also have some more reference plugins. There's a firewall plugin that works with iptables and also with firewalld; in some cases you need to punch holes through the firewall to do certain things, and the firewall plugin allows that fairly easily. There's also a new source-based routing plugin that helps with VRF, which I think is virtual routing and forwarding; that was contributed upstream and recently merged.

Multus itself: what's next for Multus? Like I said before, we want to figure out how these secondary networks are actually going to work with services and network policy, so there are going to be some POCs about that. We also want to do enhanced security. Currently, the way access control works is that if a pod is part of a namespace, you restrict the network definition to that namespace too, and if you're not in the same namespace you can't actually attach to that network. That's not sufficient, so we're going to investigate how to make it more fine-grained, so you can give specific users, or perhaps specific cluster roles, access to certain networks. There will also be refinements to the Network Plumbing Working Group specification; like I said, there are small fixes, and we have to make sure the Multus reference implementation is updated for them. We also want a conformance test framework for the specification, because Multus is only one implementation and there actually are others out there, so we want conformance tests so we know that plugins actually implement the specification, and implement it correctly. That also helps Multus, because we've found in the past that Multus itself didn't correctly implement the specification, so it would be useful all around.

And then, again, we'll continue working with the device management group on things like SR-IOV: if your NIC can only expose 32 virtual functions, don't start 33 pods that each require a virtual function in their network namespace, because clearly that's not going to work and unhappiness results (there's a quick sketch of how a pod requests one of those virtual functions just below). The other thing is that we might want to make Multus a library, because the functionality isn't earth-shatteringly complex. If we make it a library, it could potentially be integrated into some of the Kubernetes container runtimes, like CRI-O, and then you wouldn't necessarily need this shim, because CRI-O would automatically understand that if a pod spec has a couple of networks it should be attached to, it should just go off and do that; basically folding the shim layer into the container runtime itself. It's totally possible. I'm not sure whether that's actually going to happen or not; we're just exploring and thinking that maybe that's a direction we could go.
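To make that device-management point concrete, here is a hypothetical sketch of how the device plugin side usually keeps the scheduler from oversubscribing: the plugin advertises the virtual functions as an extended resource on the node, the pod requests one unit of it, and the scheduler will only place as many such pods on a node as there are advertised VFs. The resource name intel.com/sriov and the network name sriov-net are illustrative; the real names depend on how the device plugin and the attachment definition are configured.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sriov-example-pod
  annotations:
    # Hypothetical secondary network backed by an SR-IOV virtual function.
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: app
    image: nginx
    resources:
      # Extended resource advertised by the SR-IOV device plugin; the
      # scheduler counts these, so a node with 32 VFs takes at most 32 such pods.
      requests:
        intel.com/sriov: "1"
      limits:
        intel.com/sriov: "1"
```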
The last thing to talk about is Network Service Mesh. That's an upcoming attempt at solving this problem in a much grander way; if you're interested in Network Service Mesh, I'm happy to talk a little more about it after this particular presentation. So let's actually go back and check out the demo and see if it got where it needed to be. Nope, it sure didn't. So unfortunately we will not have the quick demo, but if you're interested in any of these topics, we would love to have your help and your input. I've put the link for the Plumbing Working Group community right there; we have meetings every other week, and we welcome any kind of help or input. So with that, and minus the demo, I'd like to open it up to questions.

"Jerry: you said that there are some problems with getting the work upstream. What are the problems?" Right, yes. Jerry asked about the problems with getting some of these features upstream into Kubernetes: what are those problems, and why, after two years or more, aren't these sorts of things upstream? I'm assuming most people in this room are familiar with networking, at least at a basic level. Well, it's very complex. One of the problems is that everybody needs something different out of networking, and the ways you meet those needs are different: which interface type do you need, do you need a particular vendor's hardware and software combination to get your cluster networking working, what routing methods do you need, is it going to be layer 3 or do you need layer 2, all these kinds of things. Kubernetes really does not want to be in the business of defining a certain set of capabilities that networks should have for your containers. Because of the complexity, they kind of want to wash their hands of it and push the things you need to do off to custom resource definitions like we've described here, and off to the network plugins themselves. That's partly why the CNI layer was added to Kubernetes a couple of years ago: to get Kubernetes out of the business of defining the properties of the network and move it all into a simple "add this container to the network, remove this container from the network" call. So all that stuff is pushed down, and the problem is that when you encode those kinds of ideas about what a network is and how a network works into the Kubernetes API, it's API: there are stability guarantees, there are backwards compatibility guarantees, and because it's so complex you don't want to have to formalize that stuff into the Kube API. So we're left with: how do we make these things happen outside Kubernetes, and maybe take some of the things we learn and bring them back into Kubernetes, but in a much more generic way?

One of the thoughts there is: what if you describe the things your application needs from Kubernetes? Does this application need a ton of bandwidth, what is its minimum bandwidth requirement, what is the minimum QoS guarantee, what are the minimum isolation guarantees it needs, should this pod's network talk to these other networks or should it not? Then, based on those kinds of generic properties or requirements of the container, maybe the Kubernetes ecosystem would figure out which actual back-end network to attach the pod to, without having to say "this needs IP address 10.1.1.3 on this network, with this particular MAC address, on this particular card that can deliver 40 gigabits per second."
So that's kind of one of the reasons why, and how, we're going to approach that problem. But yes, it's a very long road, and everybody needs something different from networking; I'm sure you all do as well.

"In developing this, in trying to make secondary networks first-class citizens, do you have in mind also networks that want to stay second class, like those two networks?" Yeah, they could. I don't think those things are mutually exclusive. You don't have to define services on those secondary networks if you don't want to; it's just that we want to allow that possibility, because there are some use cases that want it. Sorry, the question was: the networks that Multus attaches beyond the default network are secondary; when we talk about trying to make those secondary networks first-class citizens, are we also accounting for the fact that maybe some of those networks don't want to be part of Kubernetes in general, or don't want to use the constructs that Kubernetes gives networks by default? Does that answer the question? Yes. Any other questions?

So the first question was: how do we, as the Multus community, CNI, and so on, interact with the Kubernetes Network Special Interest Group, what's the cooperation there, and what's the timeline, maybe, for getting these kinds of improvements upstream into Kubernetes? The second question was: have we played with the hardware side of things, like SR-IOV and that kind of stuff?

For the first question: I'm also a co-chair of Kubernetes SIG Network, so I'm in both meetings, and there are a lot of other people who are also in both meetings, so there's a large degree of cross-pollination between those two groups just because the membership overlaps in a lot of cases. Beyond that, we regularly bring specific issues back to SIG Network so we can talk about them and get feedback on those proposals and ideas. It's also a forum that a lot of us in the Plumbing Working Group use to keep track of what's happening in Kubernetes in general. I'll give you one example: there's been a lot of discussion back and forth about how Kubernetes should deal with IPv6, and it was decided that yes, Kubernetes should deal with IPv6, because it's kind of important. Well, that means pods might need multiple IP addresses, because you'll have a v4 address and one or more v6 addresses, so there was input back and forth. Standing over here as a Plumbing Working Group representative, you want to make sure that when Kubernetes makes changes to its API they don't adversely impact you; standing over here as a SIG Network representative, I want to make sure that what gets added to Kubernetes is generic enough that anybody can use it and that it's worthwhile for everybody. So it's about taking those things and making sure they both happen. It's a little bit challenging, but it actually works out fairly well, and in the IPv4/IPv6 case we've been able to chart that path and make sure it will be useful for the Plumbing Working Group and others, but also useful in general for Kubernetes. So that's one specific example of the cross-pollination we have.

For the second question, SR-IOV: Kural from Intel, for example, is working pretty heavily on SR-IOV, and they have a device plugin. Without getting too far down the rat hole of device plugins:
they're basically built for managing hardware resources on a node, things like SR-IOV and InfiniBand, things that aren't normal network interfaces and that have some finite resource. A device plugin is what knows how to configure that particular piece of hardware and how much of that hardware exists, and it's what Kubernetes talks to in order to bring that hardware up at pod creation time. The device plugin is something that Kural and others at Intel have been working on, and they have software that handles the SR-IOV parts and interacts with Multus to make sure you can do SR-IOV with Multus in a Kubernetes cluster. It should be available already, and it works. There are some Git repos at this link right here; not the community link, but if you just take off the community part you'll get to the Network Plumbing Working Group repos, and there's an SR-IOV device plugin and an SR-IOV admission controller. If you grab those, they also publish images, because all of these things actually run as containers in the Kubernetes ecosystem, so you can use them today. It may only work with Intel cards at the moment, but there are others: I think Mellanox is also working on device plugins for their hardware, and NVIDIA also has a device plugin, more on the InfiniBand side. So does that answer the question? Yes.

"When you have a service in front of a bunch of pods which have several interfaces, how is the endpoint list calculated?" So currently the endpoint list only includes IP addresses from the cluster-wide default network, so you do not have endpoints on the secondary networks yet, and the problem there is that you have to keep the guarantees of Kubernetes API compatibility. Sorry, the question again was: when you have services and you have pods on multiple networks, how are the endpoints calculated, and what does that endpoint list look like? It does not include any of the IPs on secondary interfaces at this time, because something reading the Kubernetes API sees a list of the endpoints and currently assumes it can reach every single one of them. But if some of those endpoints are on a physically separate, or even logically separate, network, something reading the Kube API won't necessarily be able to talk to them. Now, you can get around this with proxies between the different networks, or some other kind of connection between those networks, but at the moment that is not possible, and we are not going to jump into that mud pit just yet. It is something we are looking at trying to solve. There are some ideas around a fully connected cluster, where you essentially have two separate networks but every single node is connected to both of them; that use case is a lot more easily solved than the one where some machines are hooked up to this physical network, some machines are hooked up to that physical network, and they can't actually talk to each other. So not quite yet, but hopefully over the next year or so that might happen.

Yes? Sorry, I missed the middle part of your question, could you repeat that? It's a little noisy.
[The question, which was partly inaudible, is about streaming servers that need to expose a port range, which is currently hard to configure, and whether we're thinking about making that easier; also whether, beyond TCP and UDP, other low-level protocols will be supported.]

Yeah, so also about protocols, not just TCP and UDP but other protocols: SCTP support was recently added to the Kube API, so that may or may not be interesting to you, and that's one example. We know there have also been requests, at least on the OpenShift side, to have port ranges, so again, another example there. No, Kubernetes does not make that easily available, but we certainly want to make it easier; we know those use cases exist. I don't think there's a plan specifically for it, but if this is something that's interesting to you and that you need, I'd say get involved in SIG Network or the Network Plumbing Working Group; let's figure out what that use case is and how to address it upstream.

Okay, last question. Can you explain a little bit about the SCTP support? On the SCTP side? Yeah, yes. Okay, the question was whether I can talk a little more about SCTP support in Kubernetes. Kubernetes only really cared about TCP and UDP as protocols in the API. It was recently updated, so the Kubernetes API objects now allow SCTP, but of course that requires implementing that support in the proxy layer and potentially in your network plugins as well. So just because Kube allows it as part of the API now, there's a bit of lag before the network plugins and the proxy actually end up supporting it. I believe kube-proxy does support SCTP now, but not every network plugin actually uses kube-proxy. Yes, exactly. So it's going to be a little while before some of those plugins support it, and some plugins might never support it. If there are certain plugins you're interested in, you might need to contact that project or that vendor and ask them to add support for it. I think it was the Kubernetes 1.12 release when it was added, so that was only mid-last year; it's still pretty recent. Any other questions? All right. Thank you very much and let's see if we...