Awesome. Thanks for coming, everyone. We're just going to run through some introductions. My name is Andrew Stoikis. I'm a member of the Office of the CTO at Red Hat, and I currently maintain the SIG Network Policy API repo housing the new AdminNetworkPolicy API. Hi, I'm Bowie. I've been contributing to SIG Network for the past five years, working at Google. And I'm Rob, and I've also been working at Google contributing to SIG Network for a few years, though it hasn't been quite as long. You can blame or thank me for EndpointSlice, Ingress getting to GA, and some topology stuff, but more recently I've been doing a lot of Gateway API work, so I'm one of the maintainers of the Gateway API project. Hey, everyone. I'm Surya. I work at Red Hat on the OpenShift networking team. I'm relatively new to SIG Network and Kubernetes — not a veteran like these folks here — but I'm trying to get involved in admin network policy and kube-proxy. So yeah, hit me up if you want to know anything about those things.

Cool. I'm going to be going through the first part of this, and that's an overview of the APIs. We don't have much time — I think we've got a 35-minute slot and we're going to try to do both an intro and a deep dive, which means we really can't get that deep in any one area. To start, we're going to go through an overview of the APIs that SIG Network owns, beginning with Service. How many of you have a Service in your cluster? Okay, that should be all of you. Great.

So Service is really the foundation — I think the central resource inside SIG Network. It enables grouping pods together behind some kind of network concept, and you can see here that that's done with a service selector. In this case, pods labeled app=store are going to be selected by this Service. Services are usually assigned IP addresses that they can be reached on, and requests to those addresses will be routed to one of the associated pods. Most of you are probably familiar with the concept of a service type, and there are really four service types here, three of which are very closely related: we've got the ClusterIP service type, which is a subset of the NodePort service type, which is a subset of the LoadBalancer service type. And then way over on the other side you've got ExternalName, which is a completely different thing — it's basically a CNAME that does some DNS resolution for you.

Reaching a service is fairly straightforward. Each Service is assigned one or more cluster IPs, depending on whether you're using multiple IP families. So in this case, you can take that cluster IP and make a request to it, and you'll be routed to one of the pods backing that Service. We also have a DNS specification in Kubernetes that lets you, instead of knowing the IP of the service, just know the name of that service and make a curl request using that name — the service name, the namespace it's in, and then .svc.cluster.local. Within Kubernetes, we go one step further and do some automatic lookups for you. So instead of typing all that out, if you're making a request from within the same namespace, you can just curl "store" and we'll do a few lookups and eventually translate that to store.prod.svc.cluster.local.
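(For reference, a minimal sketch of the kind of Service being described — the store name and prod namespace come from the example above; the port numbers are illustrative assumptions.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: store
  namespace: prod
spec:
  # type defaults to ClusterIP; NodePort and LoadBalancer build on top of it
  selector:
    app: store          # group pods labeled app=store behind this Service
  ports:
  - port: 80            # port the cluster IP listens on (illustrative)
    targetPort: 8080    # port on the backing pods (illustrative)
```

From another pod in the prod namespace, `curl store` would resolve to store.prod.svc.cluster.local and land on one of the selected pods.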
Now, Endpoints and EndpointSlices are really how this works behind the scenes. Remember that the Service is the thing that selects pods, but we need something behind the scenes to actually track all the IPs and ports associated with those pods and make them make sense. The Endpoints resource is the one that's existed since very, very early on in Kubernetes, and it's just one big Kubernetes resource that tracks all the IPs and ports for the pods in your service. EndpointSlice is a newer resource, and it's basically sharded Endpoints. There came a time when services started to get really big — we saw services with, say, 10,000 endpoints — and it just didn't work very well to shove all that information into a single Kubernetes resource. So to address that kind of scale, we split it out into several EndpointSlices per service. Most new features in Kubernetes are going to EndpointSlices, so although the Endpoints API is stable and will continue to exist forever, as far as I know, the EndpointSlice API has the new features and is more broadly used at this point. Things like dual-stack, topology-aware routing, and terminating endpoints have all been unique to EndpointSlices.

Now, Gateway and Ingress — something I'm very familiar with. First, Ingress is an API that many of you may be familiar with. Just quickly, how many of you have used the Ingress API? Okay, that's a lot. Great. The Ingress API is really L7 load-balancing configuration inside your Kubernetes cluster. It's existed for nearly five years, maybe more than five years — that's forever in Kubernetes, or at least from my perspective. It allows you to forward to a service, configure path matching, and do some very, very basic TLS configuration. It's been stable for a long time, and it's great, but it has a very small feature set. With Gateway API, we're trying to do the next generation, to enable much more expressive and portable configuration, and much, much more. There have been several talks on Gateway API here, and I can't go into everything, but really Gateway API represents the next generation of Kubernetes load balancing and routing APIs. It's designed to be extensible and expressive, and one of the key things here is a role-oriented resource model. If you're familiar with Ingress, there's a single Ingress resource, and that just didn't scale to every workload in the cluster. Some organizations may want to configure their load-balancing infrastructure separately from their routing infrastructure or their applications, and this API is designed with that in mind, splitting those up into different resources. GatewayClass, if you're familiar with IngressClass, is nearly identical to that. HTTPRoute is nearly identical to the Ingress resource. And Gateway is really a new resource, a new concept in Kubernetes, that represents the entry point to your system — a cloud load balancer, an instance of a proxy in your cluster. There's lots more that we don't have time to get into right now, but I'll just show you really quickly the difference between an Ingress resource and an HTTPRoute resource.
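(A rough reconstruction of the two resources being compared — the nginx, auth, and port 8080 details follow the description below, so treat this as a sketch rather than the exact slide; the HTTPRoute is shown at the beta API version current at the time of the talk.)

```yaml
# Ingress: ingressClassName says which implementation should handle this resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: auth
            port:
              number: 8080
---
# HTTPRoute: parentRefs attaches the route to one or more Gateways
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: auth
spec:
  parentRefs:
  - name: nginx
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /login
    backendRefs:
    - name: auth
      port: 8080
```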
Now, if you start at the very top here, you'll see that the ingressClassName in the Ingress means that the nginx ingress controller should implement that Ingress resource. In HTTPRoute it's a similar concept, but we have a parentRefs field, and that points up to a Gateway: it says I'm going to attach this HTTPRoute to the Gateway named nginx. You could attach to any number of Gateways, and they'd all implement the same HTTPRoute. Both of these do exactly the same thing — they do a prefix match on /login and forward traffic to the auth service on port 8080.

There's a ton of work going on in Gateway API. It's a SIG Network subproject, and I'm always, always looking for more people to get involved. We have so many things going on — we actually have two meetings every week, so there is no shortage of opportunities to get involved and help us out. Focus areas right now: the API is currently most stable for ingress, but we're taking the same concepts and applying them to mesh, and we have lots of exciting work going on there. There was a talk earlier this morning showing how this works — GAMMA is the subproject within Gateway API that's focused on mesh — and that's really moving forward quickly. That group meets every Tuesday. The main project as a whole, which focuses on ingress and everything else, meets every Monday, and we have so much work to get to, whether it's mesh, L4 routing, or L7, and moving that up to GA from beta. We're working a lot on conformance tests — there's so much to get involved with. Just this week we introduced a new tool called Ingress to Gateway that allows you to take Ingress resources in your cluster and convert them to Gateway API resources. Lots of places to jump in and help out if you're interested, and as always, we appreciate contributors from all backgrounds. You can find us on our website — that's what the QR code goes to — or on Slack or on GitHub. And if you're a user, we really appreciate you showing up to the meetings, not just the maintainers.

So next up we have network policies. How many of you have used network policies or know of them? Yeah, that's expected, because it's been around for a really long time — five years, I think. It's a stable, core v1 API, and it's a powerful API that actually allows app owners to regulate their network traffic. You can specify constructs like "I want my back-end pods to be reachable only from my front-end pods" or "I want my databases to be reachable only from my back-end pods." So it really allows app owners to define what they want their multi-tiered apps' communication to look like. It can also be for security reasons, or for enforcing different levels of policy on your apps, but it's mainly developed for app owners. As you can see in the example over here, you can express these kinds of constructs in an easy way using the NetworkPolicy API. A sample YAML for a NetworkPolicy object would look something like this.
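(A hedged sketch of the kind of object being walked through below — the backend/frontend labels and the prod namespace are illustrative assumptions, not the exact slide content.)

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: prod              # NetworkPolicy is namespace scoped
spec:
  podSelector:                 # mandatory: which pods the policy applies to
    matchLabels:
      app: backend
  policyTypes:
  - Ingress                    # only ingress is restricted here; egress stays unrestricted
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: prod
      podSelector:
        matchLabels:
          app: frontend        # only frontend pods may reach the backend pods
```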
It has kind NetworkPolicy. These are namespace scoped, so you have to define the namespace you want the policy to live in. In the spec you specify a podSelector, which is a mandatory field, and using matchLabels you say which pods in that namespace you want the policy applied to. You'll usually see Ingress or Egress as the policy types, and they're independent of each other. In our case we wanted a rule that says my back ends should only receive traffic from my front ends, right? So you express that by saying: I'm going to apply the policy to my back end, and if the ingress traffic is coming from my front end, which I've expressed using match labels and namespace selectors, I allow it.

Looks simple, but one catch here is that the API is quite implicit in nature. You get a baseline default deny: everything works until you create that policy, and then nothing else works, right? A default deny rule effectively gets created, and the allow rule you're specifying here gets created on top of that. So it's an allow-list mechanism — something to be careful about if you're using network policies for the first time. And like I said, ingress and egress are separate, so if you have an object that defines only ingress rules, egress is unrestricted. The default deny only applies to the type of rules you have in your object.

So this is namespace scoped, defined for namespace owners and app owners. You might be wondering, as a cluster admin, how can I enforce policies at a more cluster-scoped or cluster-wide level? That's where admin network policies come into play. It's a relatively new API that's under active development, and it's defined for cluster admins — it's cluster scoped. You might have use cases like the one in the diagram here: you have a security-sensitive namespace, and you want to express that all other namespaces in the cluster should not be able to talk to it. That can be achieved using ANP.

Here's what a typical ANP object looks like: kind AdminNetworkPolicy. In the spec you can define a priority field — this is a bit different from NetworkPolicy. Every object has a priority, and all the rules inside the object get that same priority. You can have more than one rule in the object, which is similar to network policies. The subject is the set of objects you want to apply the policy to — it can be namespaces, it can be pods — and they're expressed using match labels, the same as network policies, so you should be familiar with that. What's really unique here compared to network policies is that the rules are explicit in nature. If you look at the ingress rule, we're defining the action Deny: we're explicitly saying I want to deny, or drop, traffic that's coming from namespaces that are not myself. So in this case, for the sensitive-ns namespace, traffic from any namespace that isn't sensitive-ns gets dropped. It's not something implicit happening behind the scenes; we're defining it in the actions. That's helpful for admins, because it's like a traditional firewall — you literally get what you're asking for.
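(A rough sketch of the kind of object just walked through. The API is still evolving, so take the exact field names with a grain of salt — the priority value, label keys, and the sensitive-ns name are illustrative assumptions.)

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: protect-sensitive-ns
spec:
  priority: 10                  # every rule in this object shares this priority
  subject:                      # which namespaces/pods the policy applies to
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: sensitive-ns
  ingress:
  - name: deny-from-everywhere-else
    action: Deny                # explicit action, unlike NetworkPolicy
    from:
    - namespaces:
        matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values: ["sensitive-ns"]
```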
The API has two kinds of objects: AdminNetworkPolicy, which you saw over here, and a new one called BaselineAdminNetworkPolicy, which is a bit unique. I won't go into the details of that, but please do check out our documentation for more. So let me end this section on policies by saying that it's a new API — it's v1alpha1 right now — and we are supporting east-west traffic for this version of the API. North-south traffic is very much in play; it's a work in progress. And if you have use cases — for example, you want all the namespaces in your cluster to be able to receive traffic coming from the monitoring namespace, because you have metric pods that want to scrape everything, and you don't want the namespace owners in the cluster to be able to override these rules — that's when you have to use ANPs, because ANPs are at the top of the chain when policies are evaluated, and namespace owners and app owners cannot override those rules. Or you want to do tenant isolation, for example. Or maybe, as an admin, you want to say: if my traffic matches a specific set of pods, I don't know what to do with it, so I'm just going to pass it over to network policies. You can also express that in ANP — it's an interesting use case. That's how ANP interacts with NetworkPolicy: you can delegate the power to the app owners in some scenarios if that's what you'd like.

So what are the next steps here? Implementations — that's where we're at. We're trying to get some initial implementations of this API out. We have a SIG Network Policy API Slack channel, and we also have bi-weekly community meetings, so please join them. We need feedback from all of you. If there's something new that you'd like to see in addition to what we already have, or some use case we haven't considered, please reach out to us, and we would love to include it in v1alpha2 of the API.

So the next area: multi-network. I get to present all the cool new things, I guess, because this is a really new area — we have a new subgroup as well. This effort is mainly focused on being able to express more complex networks than the traditional pod networking we have in Kubernetes. You might want to express things like: I want my workloads to connect to isolated secondary networks through performance-efficient interfaces like SR-IOV. Or you might want to say: I want my apps to be able to talk to on-prem networks. In order to express these, we are trying to come up with an API. It's very much in the design phase — it's a completely new area — so I have a QR code here, which points to the design doc that's in play right now. We also have a sig-network-multi-network Slack channel — that's a lot of networks, but I think that's the goal of this effort — so please go join it. We also have bi-weekly community meetings here. So yeah, welcome — we would love to have contributions from all fields. I'll hand it over to Andrew.

Sweet. Thanks, Surya. So we've talked a lot about APIs. We love APIs — it's a lot of fun to develop an API, but it can get a little exhausting for developers past a certain point. So let's talk about some of the actual networking components that SIG Network owns currently. The first one is kube-proxy. Who here uses kube-proxy? Awesome, a lot of people — that's what we like to see. Most folks might know this, but kube-proxy is basically an in-tree controller that takes the Services and Endpoints APIs that Rob talked about earlier and converts them into per-node data-path networking rules. It's a per-node agent, and it basically allows us to direct traffic on a node basis. It's pretty stable — it's been in tree for a long time, and we have a pretty dedicated group of maintainers who make sure it stays up and running. So let's talk a little bit more about it. Like I said, it's implemented in core Kubernetes, and we have — well, we had — three different modes: iptables, IPVS, and userspace. Userspace has been deprecated, and we are currently only maintaining iptables and IPVS.
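(If you're curious how the mode gets picked, a minimal sketch of a kube-proxy configuration file — KubeProxyConfiguration is one common way kube-proxy is configured, for example via a ConfigMap in kubeadm clusters, though the exact setup and defaults vary by distribution.)

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# "iptables" and "ipvs" are the maintained modes; "userspace" is deprecated
mode: "ipvs"
```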
So this diagram shows a typical workflow that kube-proxy handles. As you can see, a user comes in and creates a service. Awesome. So what does kube-proxy do? It sees that service and makes sure it sets up — in this case — the correct iptables rules to direct traffic for the cluster IP of the service all the way to the backend pod. And these are a really abbreviated version of the really complicated set of iptables rules that kube-proxy is maintaining on every single node. Obviously it's really important, because we have a lot of people using it, so it's super stable and we want to keep it that way. But at the same time, we also want to keep an eye on the future, and we've been really thinking about what's next, right? How do we implement new backends? How do we do some exciting new things?

So enter KPNG — "kay-ping," Kube Proxy Next Generation, whatever you want to call it. We like "kay-ping" just because it sounds like we're pinging off something. It's kind of where we're going next. Basically, KPNG provides two major value adds today. It's very early in its iteration cycle, and it's being done outside of core Kubernetes, in the kubernetes-sigs organization. First, it provides a fundamental separation between the Kubernetes API and backend implementations. This allows new contributors to come in and write these backends really easily and really quickly, which is really exciting — I know a bunch of folks have wanted to do that for a while. Second, it provides an extremely flexible deployment model. As you can see in this diagram, in this specific deployment model, rather than having a per-node agent, we have a single KPNG daemon that serves an API that the backends implement, and they can then write whatever networking rules they want to. As I said before, this is all still out of tree, so development is really fast-paced, and we're still trying to figure out exactly what's going to fit back in tree, and whether we want it to fit back in tree. But we know that the next generation of backends is going to be done via KPNG, and not really in the standard kube-proxy methodology that we've been following before.

These are some of the current backends that are already available — super exciting, super new. As I said before, userspace was deprecated from the in-tree kube-proxy, but we've picked it up here, and KPNG can be used as kind of a drop-in replacement. We have no intention of removing kube-proxy anytime soon, or ever — kind of like Endpoints, right? — but we want to allow folks to move forward as well. So this is where we've come to: today we have an iptables backend, nftables, IPVS, userspace as I mentioned, and Windows — SIG Windows is actually helping train maintainers to maintain the Windows backend for KPNG at the moment. We also have a really basic eBPF POC, going off some Cilium principles — so a huge shout-out to Cilium for giving us some ideas there. And way more to come from everyone here, right? We want to hear what kind of backends y'all are looking for, and we want to see what you think should happen next. That's kind of the goal.

Another thing I really want to talk about with KPNG, which has been really fun for me personally, is that it's an extremely new project, so contribution is not weighed down by a ton of core Kubernetes bureaucracy. It's moving fast. We have a lot of contributors who maybe have never pushed any code before, and they're getting to work in this ecosystem, and the community has been amazing. So please, please, please get involved. Let's hear from you.
These slides are going to be attached to our Sched link, so you can definitely find these links there, but I attached three just to get you started. Dan Winship, who's in the crowd, wrote a really good document on service proxying in general — I would start there; it's a great place to start. Then you can also get involved in kube-proxy itself, which is a little more stable, so there are fewer good first issues, but dive in — we can all work there together. And with KPNG there are a lot of good first issues, so check those out, get involved, reach out, and yeah, let's have fun and build together.

Okay. Now, last but not least, we have some features that have been in development and are reaching either the beta or the GA stage. First we have topology-aware hints. I think everyone, probably, if you're on a cloud provider, has experienced this problem: you have a deployment that spans multiple zones, and then when you create a service from it, all of those endpoints get swizzled together into a single pool of endpoints spread across a bunch of EndpointSlices. What topology-aware hints does is use the topology — the zone information — to separate these, so you can control where your traffic is going. I know that in a lot of cloud providers there's a charge when you send traffic across zones, so this feature lets you keep your traffic within zones as much as possible. One thing you might be wondering is why it's called hints. It's that there's a trade-off between reliability and localizing your traffic, and in some ways the hints give the system a bit of flexibility in how to make that trade-off for you. Rob, what's the status? It's currently beta, heading to GA. Okay — so you should be able to use it because it's a beta feature, and hopefully it'll graduate to GA soon and you'll be able to use it in all of your distributions.

The next feature is, I think, a pretty critical one. Originally, the behavior of Kubernetes and kube-proxy with externalTrafficPolicy: Local was that when a pod goes into a terminating state, it disappears from the endpoints, so kube-proxy basically says, hey, the pod is not there — and any traffic that lands on the node during the interval before the pod actually goes away gets dropped. We noticed that if the pod is still running on the node, we should probably still send it traffic if we're receiving traffic for it. So we added an extra state, terminating, so that the pod has some amount of time while it's still running to serve traffic. You might as well give the traffic to the pod — if it rejects it, it will be dropped anyway. That's what this proposal does: it adds the terminating state, which means that while your pods are shutting down, if some traffic lands on the node, we deliver it instead of simply dropping it. We give the pod an extra chance to serve the traffic before dropping it. I think a lot of people who have used externalTrafficPolicy: Local have experienced this black-hole state — for example, when there's a long quiescence time between the pod going away and the endpoints getting updated.
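(Both of these features surface as fields on EndpointSlice. A hedged sketch of what that looks like — the names, zones, and addresses are made up for illustration.)

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: store-abc12                      # hypothetical slice name
  namespace: prod
  labels:
    kubernetes.io/service-name: store    # ties the slice back to its Service
addressType: IPv4
ports:
- port: 8080
  protocol: TCP
endpoints:
- addresses: ["10.1.2.3"]
  zone: us-east1-b
  conditions:
    ready: true
    serving: true
    terminating: false
  hints:
    forZones:                            # topology-aware hint: keep traffic in-zone
    - name: us-east1-b
- addresses: ["10.1.2.4"]
  zone: us-east1-c
  conditions:
    ready: false                         # shutting down, but still able to serve
    serving: true
    terminating: true
```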
Another feature that's coming is NetworkPolicy status. NetworkPolicy, up until recently, did not have a status field, and what we're finding is that as we add features to NetworkPolicy, some providers may not actually implement everything — some features are optional. What this does is allow your implementation to indicate, for example, that you're using a feature it may not necessarily support, or to give you more information about what's going on with your network policy. I think the primary driver of this was the port range feature, which may or may not be supported right now by all implementations; this lets the implementation say, when you use a configuration with port ranges, hey, I actually don't support this — just be aware.

Finally, the last feature that's reaching the next stage of graduation is internal traffic policy. There's a very, very typical use case where you have a DaemonSet that you deploy everywhere and you want it to receive the local traffic from the node. Internal traffic policy lets you say: for this service, send traffic to the node-local daemon rather than spraying it across your cluster. As you may know, if you've been around for a while, this was originally covered by the first version of the topology API proposal, but since then we've said, hey, this is a very distinct use case — we should target it with a targeted API rather than a super generic one.
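(A minimal sketch of that, assuming a hypothetical node-local DaemonSet called node-agent — the key part is the internalTrafficPolicy field on the Service.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-agent                 # hypothetical DaemonSet-backed service
spec:
  selector:
    app: node-agent
  ports:
  - port: 9090
  internalTrafficPolicy: Local     # route in-cluster traffic only to endpoints on the same node
```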
Finally, I think this is the end of our presentation, so thank you so much for coming. If you take away one thing — if all the slides suddenly disappear from your mind — it's that we really, really appreciate feedback from users and participation in the SIG. It really helps us shape what we're looking at: are we looking at the right things, are we looking at the wrong things? All of those QR codes — you can visit them if you're interested in any of the APIs or features, show up on the Slack, or show up at the SIG meeting and make your voice heard. Thank you. Thanks to all my presenters. So we will be running around with microphones for questions. Here — the mics. Whoa. Sorry. And to conclude, I will drop the mic. Any questions? Bring it on, people.

Hi. So I have a question about topology awareness. It was introduced to replace the previous feature, but right now it's really in a completely unusable state in some topologies, because it decides for the user what an imbalance within the cluster is, and does not give the user an opportunity to say whether that's okay or not. Let's say, for example, that I have a workload in my cluster, and I have an instance group or node group that's running on spot. I don't have much control over the zones where it runs, but I don't care, because the services that I want to load-balance are not affected by those ASGs. So the feature will simply never put hints on my services, regardless of whether they're running there or not. There was some change in 1.24 so I can exclude the control plane nodes by placing a filter on the label, but there's no way I can configure a label so that other nodes are actually excluded as well. So are you planning to make that a little bit more configurable before it reaches the stable release? Because right now, in most of the clusters that I have in production, it just cannot be used unless I scale the pods serving the service by a factor of 30 or 40 so that the calculation on the hard-coded value is skewed.

That's a really good question. We've had lots and lots of conversations about topology; we've gone back and forth with a few different approaches. It's really hard to get right, and I appreciate your feedback. We're seeing a lot that the current approach really struggles without a large number of endpoints. One of the common feature requests we've had is: can you just prefer the same zone — if there are any endpoints in that same zone, can you route there? That seems to work for a number of use cases, but the reason the default approach doesn't do that is because it's very scary: the idea that you can overload a specific zone if all your traffic is originating from one zone, you only have one endpoint there, and all the new endpoints are getting scheduled elsewhere. This is really a cross-SIG problem where we also need to work with autoscaling, scheduling, and so on, to make this a little bit easier, so that if all your traffic is coming from one place, new pods spin up in that place. It's a complex thing. We are working on improvements, and we really, really value additional use cases. If it's not working like this, I'm trying to gather all the use cases that we're missing right now and figure out how we represent them and how we solve those problems.

Just a quick thing to add: the defaults are perfectly sensible — all the decisions make sense as they're put in place. I think the only thing that's lacking is that it's not a default, it's a hard requirement, because there's no way to configure it or change the behavior.

You're right. It's always this balance of trying to do what's best for many use cases and not give too many knobs. We were hoping we could do this without any knobs, and I think we probably need some, but we're trying to limit the number we provide — making this as minimal configuration as possible while still working for most use cases. That's a tough balance, but understanding those use cases helps a lot. Do you have a GitHub issue that describes what you're looking for? Probably not? That would be great.

More questions — who's got some? When I create a load balancer, and my backend for this load balancer is inside the same cluster, the traffic gets short-circuited, but I want it to go through the load balancer. How can I do that?

I think this depends on the CNI and the environment. As I recall — Antonio? — that's implementation specific. In kube-proxy you get that behavior, so the traffic short-circuits. Because when a pod in the cluster wants to reach the service, you can decide either to send it out and have the load balancer send it back into the cluster, or to say, I know that service is in the cluster, and send it there directly. The problem is that there's no single solution — there are some load balancers where this doesn't work — and this was a decision made in kube-proxy.

Right, but why do you want that behavior? kube-proxy will load-balance it itself. For example, I want to terminate HTTPS — I don't want a shortcut to my service, which has no termination.

There is a KEP or something that was opened that started this work and then got sort of abandoned. I picked it up, but I've been a little busy to finish it, so it is something in progress. The way we've described it is an IP mode for load balancers, so you can say this IP is a virtual IP, which means you really do have to short-circuit it, or this IP is a proxy, which you don't want to short-circuit. So: work in progress, but not done.
With the network policy stuff, I'm really excited to see the cluster-wide network policies coming. Coming from a more network security background, one of the things I really struggled with with the API generally is getting a complete understanding of all the policies that apply to something, and trying to reason about that. I feel like that's only going to get more complicated now that I've got different tiers of policy. You were talking about how the cluster ones rank first, which I guess makes sense, but then you need exceptions for specific things, and it's going to become a mess. So do you expect that to be something that external tools try to solve? Is that something you think about? What's your thinking on that?

That's a really, really great question. We actually did a talk fully on admin network policy as part of the contributor summit, so please go check that out — it'll answer some of those questions. The short gist of it is: yeah, we're making it really complicated, right? We're going to have fewer users with these admin policies, so that will be easier, but how they interact with existing NetworkPolicy, and how those two new objects — AdminNetworkPolicy and BaselineAdminNetworkPolicy — interact with each other, is complicated. So as part of our KEP and as part of our charter, we've put in that we are going to be building developer tooling — it is key — and we're going to be building conformance testing and developer tooling, because we don't want this to roll out and just be confusing for everyone. That was one of our main problems with NetworkPolicy in the first place — no hate, it was an amazing API for developers. So it's definitely on the roadmap. Please come and help; we need help.

Hello, related to ANP: currently we have some applications or operators which are cluster scoped or namespace scoped, right? A cluster-scoped one can listen to all the namespaces, it can talk to all the namespaces. And with admin network policies, you're saying namespaces can talk to each other, right?

It's very flexible. AdminNetworkPolicy is a new object that will allow you to select everything in the cluster as a whole, or pods in a namespace, and you'll also still be able to use NetworkPolicy like you always have — you'll just be able to define distinct levels of enforcement.

Right, so for those applications there's no need for cluster scope — we can just deploy as regular and then apply these ANPs, so that the operator running in namespace A can reach out to all the namespaces? Yes, you should be able to express something like that. So that will be slowly going away then? Yep. And we have a list of use cases on our website, which the QR code was for, and in our talk from the contributor summit there's a list of use cases for v1alpha2 — so get involved; again, we need to hear from you.

Just to be clear, it's not making NetworkPolicy go away; it's answering a different question. The example I like to use best: I have a cluster and I have two tenants — there's Coke and there's Pepsi, right? Coke has a bunch of namespaces, Pepsi has a bunch of namespaces. Admin network policy lets me say all the namespaces for tenant Coke can talk to themselves, and all the namespaces for tenant Pepsi can talk to themselves, but they can never talk to each other, no matter what the user does. And then within those namespaces, they can use NetworkPolicy to define the correct behavior of their applications. So there are two different questions, two different roles.
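(As a rough illustration of that tenant-isolation idea — the tenant label and priority are made up, the peer field names follow one draft of the v1alpha1 API and may differ from the current spec, and you'd create a mirrored policy for the other tenant.)

```yaml
apiVersion: policy.networking.k8s.io/v1alpha1
kind: AdminNetworkPolicy
metadata:
  name: isolate-tenant-coke
spec:
  priority: 20
  subject:
    namespaces:
      matchLabels:
        tenant: coke            # applies to every namespace labeled for this tenant
  ingress:
  - name: deny-from-other-tenants
    action: Deny                # app owners cannot override this with NetworkPolicy
    from:
    - namespaces:
        matchExpressions:
        - key: tenant
          operator: NotIn
          values: ["coke"]
```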
Hi, so I have a quick question on kube-proxy and kube-proxy NG, basically about the improvements that come with it. From my understanding it looks really promising in terms of new backends that can happen, but what about existing backends? Let's say I'm running kube-proxy with iptables and I'm happy with iptables — is there any reason for me to run kube-proxy NG with iptables right now?

It provides a more flexible deployment model today, if you want it. kube-proxy is a DaemonSet, right — it's a per-node agent — and with KPNG today you can run one central KPNG agent that talks to many nodes. So in really high-scale scenarios it allows you to take some pressure off the API server. That would be the main selling point, I would say. Okay, thank you. You won't have all of the kube-proxies connecting to the API server and watching all of the Endpoints and EndpointSlices — even though EndpointSlices solve part of this problem — so you have just one control plane, and that's all. Yep.

I also had a question about KPNG — kping, whatever you want to call it. We're in process; I like "kay-ping." Yeah, it's the new kubectl discussion. How does it relate to — is there any relation between, say, supporting eBPF and network policy maybe getting some enhancements out of that, like being able to write layer-7 policies or something?

I don't think the KPNG architecture necessarily relates to a new type of network policy. That is another thing — if you want to work on a new network policy, come talk to us; again, we need help. It's more about making backends easier to write. So one day it could apply to network policy backends, maybe, but it's not really about new policy types.

Hi, I don't have a use case for this, but does Gateway API support arbitrary layer-4 protocols?

Rob, go ahead. So Gateway for L4 is very much still experimental, but by definition we support TCP and UDP today. One of the key things with this API — you may have noticed that we have a few different route types by default, but anyone can plug and play their own entirely custom protocol as a different route type using the same pattern, and the rest of the API just plugs in perfectly. So if you want entirely new route types and protocols, that would work. What kind of L4 protocols were you thinking of? I don't know, I really don't have a use case for this yet, but isn't there some stuff with HTTP — HTTP/3 — where you can even replace the layer 4? I don't know, there's something that's still in my mind. There are a number of tunneling protocols. So right now, in terms of the upstream project, we have TLS, UDP, and TCP on the L4 side, and we're actually trying hard to get experience with those so we can graduate them from experimental to beta. And then of course HTTP is the most common one, so that's the one we know the most about and the one that's moved the furthest along. But if you're thinking of, say, MQTT, you're free to implement that. SCTP? Okay, SCTP. Purely selfishly, I'm hoping the L4 gateways make the Service API less important, because a lot of the Service API is crufty and old and broken and should be taken out and retired.

And now we're done with questions, actually — we're out of time, sorry. If you want to stick around, come talk; there are a lot of SIG Network folks here, and we'll hang out for a little while if you want to chat. Thanks, everybody. Thank you.