Well, welcome. Thank you to everyone who made it in here, and sorry to those who didn't; we are streaming online as well. Today we're talking about Gateway API and our path to beta and GA. I'm Rob, and this is Nick. Good day, everyone. We are both Gateway API maintainers.

Before we get any further, I want to ask a few questions. First off, how many of you have used Kubernetes Services to create load balancers? Anyone? Yeah, I expected that would be most of you. Anyone used Kubernetes Ingresses before? Okay, great. How many of you have tried out Gateway API? Okay, a few of you. That's good, but hopefully by the end of this talk we'll have even more of you wanting to try it out. So let's work on that.

For those of you who haven't tried it out yet, you may be wondering: what is this thing? Well, I look at it as the next generation of Kubernetes load balancing and routing APIs. Why would we build a new API? This is a very familiar problem; we already have an Ingress API, we already have a Service API. Why start a new one? Well, the Ingress API simply wasn't able to express the advanced configurations that so many people wanted. The failures of that API led to lots of custom annotations and other things, which worked, and were the best option available, but we wanted to provide a better API in Kubernetes that didn't require all those annotations. And for those of you familiar with the Service API, you've seen new fields just get stuffed in over and over again; the combinations of those fields may not make sense, and it's very hard to test, to work with, to implement all those things. So a new API hopefully offers a solution to this. We can unify load balancing configuration across L4, which was the Service API, and L7, which was Ingress.
We can allow custom components to easily be added on and plugged into core resources, so extensibility is a huge part of this API. We recognize that anything we build into the core Kubernetes API is not going to be sufficient for every use case, so we want to make it easy to extend. And finally, we want to enable some additional use cases. We're not sure what all of these look like yet, but there have been discussions around mesh, around egress, around policy; lots of things this API could potentially be used for. Don't forget my nutty ideas about GRE tunnels and VPNs. Yeah, lots of fun ideas coming.

So with that, let's talk about our timeline and how we got to where we are. The last KubeCon I was at in person was KubeCon San Diego, and you can see some of us there; many of us ended up becoming maintainers. Daneyon's in the audience, I think, and maybe some others as well. The initial work really began at KubeCon San Diego. We had some initial brainstorming sessions, and then it took us a year to get v1alpha1 out the door; we had so many different prototypes and many different structures of the API that we experimented with. There were a few more minor releases and some additions, and then at KubeCon EU, which was all virtual, we presented our first overview of Gateway API at a KubeCon. That was around a year ago, and this talk is really going to focus on everything that's changed since that point.

In the past year, v1alpha2 was released. That was a huge change for us; we saw it as really the beginning of our stable API. I know it still said alpha, but we thought this was the API we could take to beta, and that largely ended up being true. Since then we've really been focusing on conformance tests, some new experimental features, and some patch releases, just stabilizing what we have, and we're finally at a point where we're confident to go to beta. We'll be talking about that, but first let's take a few steps back and describe the API itself.

If you haven't seen Gateway API before, it all starts with a GatewayClass. A GatewayClass describes a kind of load balancing or proxy infrastructure. For example, I work for Google, and for GKE we're going to bundle with GKE clusters an XLB GatewayClass and an ILB GatewayClass for L7, with more to come. This is a way to describe load balancing infrastructure; every implementation of Gateway API will have different GatewayClasses that it supports.

Second, we have Gateways. These represent some kind of entry point to your system. It could be a load balancer, it could be some kind of proxy configuration, but they represent some kind of entry point to your system.

But really, even though it's called Gateway API, the most important part is the routes. We have a lot of different routes, and the routes are where the vast majority of the functionality comes from. In this case I'm showing HTTPRoute, because that's our most stable, most full-featured route, with a lot of advanced configuration in it. But you could also attach a TCPRoute, TLSRoute, or UDPRoute, and GRPCRoute is coming soon. We've got a lot of different routes for different protocols, and they define routing configuration that is protocol specific. So you attach HTTPRoutes to your L7 load balancer; you might have an L4 load balancer that you attach TCP routing configuration to.

Yeah, one thing I want to add there: I think the distinction between them is that HTTPRoute is for terminated HTTP traffic, TLSRoute is for traffic where you're passing through TLS, and then TCPRoute and UDPRoute are for proxying TCP and UDP traffic; not routing, proxying. That distinction is important. Yeah, good point.

Okay, so this is a huge feature list. v0.5.0 refers to our upcoming release; I'd say hopefully a couple of weeks from now. There's a lot in here.
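To make the GatewayClass, Gateway, and route hierarchy just described concrete, here is a minimal sketch. All names here (the class name, the controller string, the service name) are illustrative, not from the talk:

```yaml
# GatewayClass: describes a kind of load balancing or proxy infrastructure.
# The controllerName is a placeholder for a real implementation's controller.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: example-lb
spec:
  controllerName: example.com/gateway-controller
---
# Gateway: an entry point into the system, instantiating the class above.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: prod-web-gateway
spec:
  gatewayClassName: example-lb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute: protocol-specific routing, attached to the Gateway via parentRefs.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: prod-web-gateway
  rules:
  - backendRefs:
    - name: app-service
      port: 8080
```

An L4 setup would look similar, but with a TCPRoute or UDPRoute attached to a listener of the matching protocol instead of an HTTPRoute.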
I'm not going to cover all of it, but suffice it to say, many of these things have not been seen in other Kubernetes APIs. This is an official Kubernetes API that supports all of this and more, so this is a big thing for Kubernetes. But you may look at that slide, see all these things, and ask: how on earth can this be portable? We're trying to build a portable API; this is a Kubernetes API. How can you promise all those features and then say it's portable? Well, unlike Ingress, unlike previous Kubernetes APIs, we've built in this concept of conformance levels.

We start with a conformance level called Core, and that's something we expect every implementation to support and handle in a consistent way. A good example of this is HTTP prefix path matching; just about everyone can support that basic capability.

We also have an Extended level of conformance. That's something we expect to be supported consistently when it can be supported, but we recognize that not every implementation can support something like header modification. A lot of implementations can, but not quite everyone. So that's an example of an Extended feature.

And finally we have something called Custom.
Custom is not that common in the API; I think regex matching is the only thing it exists for. If you're familiar with regular expressions, you know there are lots of different variations of regular expression implementations. We can't write conformance tests that expect a specific regex to match in a specific way when the underlying implementations may support different flavors of regex. So we say: okay, this is a concept that is widely understood, but we recognize there may be some variation in implementation here. Those are our conformance levels.

With all that background, let's spend a little bit of time talking about what's changed in the past year, and there's a lot. First, we went to v1alpha2, as I mentioned, but maybe most important, we went to the gateway.networking.k8s.io API group. What that means is we're an official Kubernetes API. That's a big distinction; previously we were just an experimental Kubernetes API. This status in the k8s.io API group means that not only are Gateway API maintainers reviewing every API change, so are upstream API reviewers. So a change goes through Nick, myself, and a few other Gateway API maintainers, and then it goes up to Tim and some other upstream API reviewers. That's actually the state we're in right now for v0.5.0; we're in that final round of review. Hopefully we can get a release out soon.

I'm really proud of this: we have more than 70 people who have already contributed to this API, and that number keeps going up. And again, if you're interested, we are always looking for more contributors and maintainers. Right now, I think this number is a little out of date, but we have around 15 implementations. You may have heard earlier on in KubeCon that Envoy Gateway was announced.
That's based on Gateway API. There are a lot of very cool implementations, and I think more on the way. And what I'm excited about: we already have conformance tests in place. Yeah, that is cool. Some of the graphics... I wish you could see my screen; I'm not trying to do a Contour pitch. There are about twelve logos on my screen; it's a second click. There they are.

Anyway, many implementations are fully passing conformance tests now. We're really excited about that. The fact that we already have a pretty robust conformance suite means, I think, we're going to have a pretty consistent experience across implementations.

Okay, let's talk about what's changed. We did a thing with route-to-Gateway binding that we hope will simplify things for any of you that may have looked at v1alpha1 before. The idea then was that Gateways select routes. If you've used Kubernetes, you're familiar with the idea of a label selector; so you'd say, I want to select routes that have a specific label. What we ended up seeing is that users would just use this in a creative way: they'd say, I want to select routes that have this gateway label on them that says prod-web-gateway. So, basically the Gateway name; a weird way of attaching to a Gateway. We tried to simplify that by instead having the routes reference the Gateway. The routes have this parentRefs field that points up to the Gateway, and the Gateway just says: here are the namespaces I trust routes from. And actually, that's kind of an important detail: a Gateway can trust routes from multiple namespaces, if you've ever wanted to have your load balancing infrastructure in one namespace and your routing configuration in another namespace.
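As a sketch of that split, with the Gateway in an infrastructure namespace trusting routes from labeled namespaces (the label key and all names here are illustrative):

```yaml
# Gateway in an infra namespace declares which namespaces it trusts routes from.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: prod-web-gateway
  namespace: infra
spec:
  gatewayClassName: example-lb
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"
---
# HTTPRoute in an app namespace points up at that Gateway via parentRefs.
# The app-team Namespace object must carry the label the Gateway selects on.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app-team
spec:
  parentRefs:
  - name: prod-web-gateway
    namespace: infra
  rules:
  - backendRefs:
    - name: app-service
      port: 8080
```

`allowedRoutes.namespaces.from` can also be `Same` (the default) or `All`, for a Gateway that only trusts its own namespace or trusts every namespace.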
That's possible with this API. Yes, thank you.

So we're really excited about that. Another neat thing about that parentRefs field: right now it's used to refer to Gateways, but there are some other interesting use cases here. There's a GEP; if you've heard of Kubernetes Enhancement Proposals, we have Gateway Enhancement Proposals, and we have one right now that's talking about route inclusion, where that parent could refer to another route. This is still in the proposal phase, but there's a lot of really interesting stuff happening here.

All right, let's move on to ReferencePolicy. We were just talking about crossing namespace boundaries; this is a slightly different way to do it, for some very specific cases. If you've ever had this idea in your head: I have my routing configuration here, and I want to forward to a backend in some other namespace. So all my routing configuration is in a routing namespace, but my service, my app, is in my app namespace. This is what that's intended for. Or if you have your TLS certificates in a separate, secure namespace but want to allow them to be referenced from somewhere else, this allows for that.

At its core, ReferencePolicy is a resource that exists in a namespace that I own, and it says: I trust references from this other namespace. In this case, this ReferencePolicy says I trust references from HTTPRoutes in the prod namespace to Services in my namespace. We ran this pattern by SIG Auth, SIG Architecture, and SIG API Machinery. I think it's a good pattern, but we're very much interested in feedback here. I should point out this little note here: we've called this ReferencePolicy, but there's a decent chance it's going to get renamed, because we have this whole other concept, a little bit newer, called policy, and policy is such a generic term.
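The example just described, trusting references from HTTPRoutes in the prod namespace to Services in my namespace, looks roughly like this; the namespace and policy names are illustrative:

```yaml
# ReferencePolicy lives in the namespace being referenced (the one I own)
# and grants references coming from another namespace.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy
metadata:
  name: allow-prod-routes
  namespace: my-namespace
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: prod
  to:
  - group: ""          # core API group, i.e. Services
    kind: Service
```

The TLS certificate case works the same way, with `kind: Gateway` in `from` and `kind: Secret` in `to`.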
We're thinking we may rename it to something like ReferenceGrant. So don't get tied to that name; the concept, I think, is here to stay, but the name may change.

Okay, moving on. Let's talk about policy attachment. This is that thing I mentioned, policy, and it has a really broad scope. We really want this API to be extensible. We want users to be able to plug in and build new stuff on top of our API. As much as we want to include everything we can in the API, we recognize there are things that just don't fit. One of the things we really struggled with is something as simple as timeouts. You would not believe the variation in timeout configuration; whether you're talking about NGINX, Envoy, or some cloud, they all support different versions of timeouts, and trying to build portable configuration for that was very challenging. So instead, for these kinds of things, we're suggesting that implementations, or groups of implementations, decide on a common policy that works for them. In this case, you might have a retry policy that you could attach to a route, and a health check policy that you could attach to a specific Service. There's a lot in here, but this is really defining a common pattern for extending the API.

Yeah, there's a lot in here. Can I just add one more thing? I would say that, at the moment, this is a framework for building extensions to the API; it's not extensions yet, right? We need people to build some examples of this. We haven't even fully built the timeout example yet.
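Since, as noted, no concrete policies have been built yet, here is a purely hypothetical sketch of what a policy following the attachment pattern could look like. RetryPolicy, its API group, and its fields are invented for illustration; only the targetRef shape is from the pattern itself:

```yaml
# Hypothetical CRD following the policy attachment pattern.
# The key idea is the targetRef, which attaches the policy to an
# existing Gateway API (or core) resource.
apiVersion: example.com/v1alpha1
kind: RetryPolicy
metadata:
  name: app-retries
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: app-route
  retries: 3            # invented field: retry up to 3 times
  retryOn:
  - "5xx"               # invented field: retry on 5xx responses
```

A health check policy would look the same, with a targetRef pointing at a Service instead of an HTTPRoute.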
So this is very early, but the idea here is that you can have settings that can be overridden or defaulted, and that lets you make settings at a higher or a lower level. There's a whole document; it's very long, and it took us a long time to write. I encourage you to read it if you're interested in this at all, because it's quite complicated, and there are some serious subtleties that can be tricky to explain that we just do not have time for here. Yeah, and I'll give a concrete example: for GKE, we're building an LB policy that includes some of our load-balancer-specific configuration that just doesn't fit in the API itself, and I imagine other vendors will have similar things.

All right, let's talk about graduating to beta; that's a big part of this talk. We finally are making it to beta, but only part of it. We have defined graduation criteria that involve a number of things. One of them is that we wanted a robust conformance test suite in place for the resources that graduate. We want them to be widely implemented and widely used, and we wanted to feel confident about the state of the APIs, that they weren't going to need any kind of breaking changes. We feel like GatewayClass, Gateway, and HTTPRoute all meet those criteria. Unfortunately, some other resources are not quite there yet: ReferencePolicy, TCPRoute, TLSRoute, and UDPRoute. We're working to get them there, but they're not quite there. So we are expecting that GatewayClass, Gateway, and HTTPRoute are going to make it to beta in the next couple of weeks.

There's one more thing I haven't mentioned much: this API is built on CRDs. We live outside of the Kubernetes release cycle.
Nick's going to talk a lot more about what that means, but one key thing is that we don't have feature gates. If you've used upstream Kubernetes before, you may be familiar with feature gates: when you add a new field, it starts in alpha, you can opt in with a specific feature gate, and then it goes to beta, and then GA. We don't have that concept; we have CRDs. After talking it through with a few different SIGs, this is what we came up with: release channels.

The stable channel is what you'd expect: the resources that have graduated to beta, and all fields that are not considered experimental. Experimental is that playground where we start with new things. It has alpha resources, and new fields that we're adding that we think are promising but aren't quite confident enough to graduate to stable yet. Everything starts in experimental; some of it graduates to stable, and some of it disappears from experimental. So if you want the most stable experience, just install the stable CRDs. But if you want a new feature, if you want to test out some new thing, you can install the experimental set of CRDs, which will always be a superset of what's in stable.

Can you go back to that slide? There are a couple of things I wanted to add that I just thought of. Notably, you probably noticed that ReferencePolicy is still alpha. This is all as of the next version, v0.5.0: ReferencePolicy is still alpha, which means you won't be able to make cross-namespace backend references, or refer to Secrets that are not in the same namespace as an HTTPRoute. Those both require ReferencePolicy to be done safely, and because ReferencePolicy will still be alpha, if you're only using stable, you just won't be able to do it. There's no mechanism to do it, right?
So I just wanted to call that out. Gateway API with the resources we moved to beta is still perfectly functional; it's just a bit less, and you can't do some of the more complicated stuff. We are hoping to get ReferencePolicy in really soon, but that's going to be another round of API review, and we didn't want to block on having to figure out a new name. I'm sure all of you have tried to name something before; it's really hard. It's even harder when you're writing an open spec that a million different people have to implement; the bikeshedding is crazy. So it's really important that we get that right, but at the same time, we didn't want to block some resources going to beta for it. Yeah.

Yeah, and I should call out that ReferencePolicy is far and away the closest to graduating of the ones that aren't there yet. It already has conformance tests in place; we're in a pretty good spot. It just didn't quite make the cut, because of the rename thing.

So let's swap over. Look, as Rob mentioned, we're really the first official Kubernetes API to be built with CRDs. We're the first ones to be doing this API review process with CRDs, as opposed to just doing it in core and using feature gates. That's why we've got all of the fancy versioning and the conformance levels and a bunch of other stuff: we're trying to make sure that this feels as close as we can manage to the experience of using upstream, right? But it's delivered with CRDs, so there's just stuff that's got to be different.

So here are some of the things that are good; let's go through pros and cons. First, we're not tied to the Kubernetes release cycle, right?
We can cut a release any time we like. We don't have to wait for the next round of everything, and, more importantly, we're not tying up upstream resources talking about these APIs when we don't need to. Also, you don't need to know all of Kubernetes: it's a set of CRDs and an associated webhook, so you don't need to know as much about the rest of Kubernetes to be able to contribute. So, like we said, please come contribute. You should all be pretty familiar with how you go about installing CRDs and using them and all that sort of stuff.

And we are making sure that the webhook validation will be required for you to pass conformance. Currently we don't have those conformance tests yet, but we will be adding tests that check that the things we validate in the webhook get rejected. There's some stuff in the webhook that we're using to protect you against serious safety errors and some security problems, so it's really important that it gets implemented.

The big part here is that we're part of the gateway.networking.k8s.io API group. That's intended to signal to all of you that this is a real thing; this is an upstream API. Once it has reached your desired stability level, it's safe to use. It's not going anywhere; it's not some company doing it. This is an upstream thing.

However, this has made some things really hard. We're the first people to do this, so there's no prior art.
No one else has done this before; we've got to figure out all of this stuff, and hopefully do it in a way that other people will be able to copy, and make life easier for other people. We've had to reinvent all the functionality from upstream release engineering around feature flags: how we actually do our features and cut bundles and all that sort of stuff. Because we don't have feature flags, and the API version, v1alpha1 and so on, is just too coarse-grained for what we're trying to do, we've had to add conformance levels on a per-field basis, plus the stable and experimental tracks, in order to slice things up as finely as we need to.

And lastly, designing an API to cover this enormous set of use cases is really, really hard. It's taken us two years just to get even close to beta for some resources. The reason we've taken so long is that we really want to make sure we're getting this right. We only get one shot at doing this. We don't want to end up in a situation like Ingress, where it's just not specific enough; so we're erring on the side of over-specifying rather than under-specifying.

Okay, so I want to run through a few quick examples. I'll try to keep this super quick so we have some time for questions. This is a simple HTTPRoute. Most of this should look pretty familiar if you've done Ingress before: you've got a hostname, you've got some rules, you've got some matches, you've got various types of paths, and then you've got backends. So this is basically Ingress, pretty much, with some extra stuff; some extra magic, maybe, is a better way to say it.

And that extra magic is stuff like this, the advanced bit. This is just the rules section from that object, but you can see you can do header matching, query matching, path matching, and method matching, and these are ANDed together. That means my service-2 is only going to get requests that have exactly this header set, exactly this query parameter set, exactly the right path, and the method GET. It's an AND set, so you can get hyper-specific about what goes where. A lot of this was possible with Ingress via different implementations: they either had their own CRD (that's what we did on Contour), or you had to put a million annotations on your Ingress and hope they were relatively portable. The whole point here is that this config is portable. Any implementation that implements HTTPRoute, including the extended bits of HTTPRoute, will work; you should be able to pick this up and move it across to another implementation, and it'll "just work", TM. Well, it should; that's what the conformance tests are for. The whole idea is that if both are conformant implementations, then they pass the tests, and you can be confident that everything will work.

Also, we've got filters, and some of these are extended. Request mirror is actually an extended HTTP filter, and request header modifier, I think, is extended too. One of the other ones is core: redirect. Yeah, redirect is core, that's right. So this one is a pretty simple thing: you can modify headers.
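A rule like the ANDed-matches example above, with a header-modifier filter added, might look roughly like this in an HTTPRoute spec; the service, path, header, and query parameter names are illustrative:

```yaml
# One HTTPRoute rule: all conditions inside a single match are ANDed,
# so my-service-2 only receives GET requests with exactly this path,
# header, and query parameter.
rules:
- matches:
  - path:
      type: Exact
      value: /v2/orders
    method: GET
    headers:
    - type: Exact
      name: env
      value: canary
    queryParams:
    - type: Exact
      name: version
      value: "2"
  filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      add:
      - name: x-canary      # header added before forwarding
        value: "true"
  backendRefs:
  - name: my-service-2
    port: 8080
```

Multiple entries in the `matches` list, by contrast, are ORed: a request matching any one of them is sent to the rule's backends.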
You can add, set, or remove headers. You can send a mirror of the traffic off to another service. And if none of those match, you go to the standard backend.

Okay, so this is again what Rob was talking about; this is a little more detail about the cross-namespace thing. I wanted to talk more about this because the cross-namespace thing is a super tricky problem to solve. So many things in Kubernetes are built around the idea that namespaces are isolated. Creating references between namespaces that you are not extremely careful with is a huge, gaping security hole. This is why Contour doesn't support ExternalName Services, for this exact reason: it allows you to work around namespace isolation. There have been other problems, with Endpoints and things like that, that allow you to work around namespace isolation too. So we wanted to be super careful here to keep namespaces isolated, except insofar as there has been agreement between the two parties that a reference should be allowed.

That's what this aims to do. The person who owns the Gateway has said: I will allow routes to attach from this selector. They could just say, I don't care, any namespace, no problem, it's all fine; but they have the option. And then the owner of the HTTPRoute is the one who selects which Gateway they want to attach to. We chose not to use a label selector or something like that there, so there is a definite action: you, as the HTTPRoute owner, have to pick which Gateways, and it's a list. So if you want to have your HTTPRoute owned by two Gateways, that's fine.
If you're moving your HTTPRoute between Gateways, say you want to try out some fancy new Gateway implementation, you just point it at two Gateways, and both of those implementations should take the same config and end up with the same result. But the important part here is that it's a two-party transaction: one party says I'll allow it, and one party says I want to do it.

And it's the same with ReferencePolicy; that's why we have it. In the case where you're doing the backendRef thing, we do have a field in there where you can specify a namespace. But unless you have a ReferencePolicy, your implementation must reject a cross-namespace reference like that, because otherwise I could use it to refer to some super-secure service that I just happened to know about in some other namespace, and bada bing, bada boom, I am now impersonating the billing system from some 100%-not-the-billing-system HTTPRoute that I own. So again, there's a two-way handshake: the HTTPRoute owner has said, I want to point this at a Service, and then the Service owner has to say, that is okay, I allow it, by saying Services are allowed to be pointed to by HTTPRoutes in the prod namespace. Notably, you don't have to specify both sides so precisely: you can say all HTTPRoutes in a particular namespace, or any object in particular namespaces, or all routes in all namespaces.

So the two use cases we've talked about: one is, as it says, the backendRef case, and the other is the classic one, where you want to keep your key pairs in a namespace where only the people who own the key pair for www.mycompany.com should be able to see the private key.
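The backendRef case of that two-way handshake might be sketched like this (all names illustrative). Without the ReferencePolicy half, a conformant implementation must reject the cross-namespace reference:

```yaml
# The route owner's half: an HTTPRoute in prod pointing at a Service
# in another namespace.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: billing-route
  namespace: prod
spec:
  parentRefs:
  - name: prod-web-gateway
  rules:
  - backendRefs:
    - name: billing
      namespace: billing-system   # cross-namespace backend reference
      port: 8080
---
# The Service owner's half: a ReferencePolicy in the target namespace
# allowing HTTPRoutes from prod to reference Services here.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy
metadata:
  name: allow-prod-httproutes
  namespace: billing-system
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: prod
  to:
  - group: ""
    kind: Service
```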
So that's what this is all aimed at doing. And obviously, like I said, this is not going to be in the v0.5.0 stable channel, because it won't be beta; it will be in the experimental track.

Okay, so we talked a little bit about the graduation criteria, and this is what they are. We've got conformance test coverage; we have multiple conformant implementations, that is, multiple implementations that are passing the conformance tests; and there are multiple implementations actually being used. There's no point in us saying "hey, you pass the conformance tests" if no one's using it. And we need to leave it for at least six months as a beta API before we go to GA. I think six months is probably going to turn out to be a little aggressive; we're probably going to want to leave it longer than that. But what we're really aiming to do is make 100% sure that there's nothing more we need to add, because once we go to GA, that is our commitment to all of you that this thing is never going to change, ever. GA is forever. So if it's going to be forever, we really want to make sure it's what we want.

Well, this is all the stuff we're working on: more conformance tests, and improved UX through fixing up status. Right now, status can be a little tricky to understand. It can be a bit weird.
It can be a bit weird There's some edge cases that aren't handled very well Um, so we're really trying to just make that really consistent and really straightforward for you all to understand My my personal goal is that if you are a htp route owner You should only need to look at your htp route to know if something if everything is working Like you shouldn't need to go and find out what gateway you're pointing to and check that the gateway is okay You should just be able to look at the gateway at your route and that's the that's where you need That's the only thing you need to know We've got some new features as rob mentioned route delegation That's you we did some work on this in contour and other people have done similar ideas That's the idea where you can break up your route configuration and have a route sort of Allow another route to attach so in the same way that a gateway Allows routes to attach your route can include instead of a back end and a loud route stands itself It's the sort of the rough design for now Something like that and the idea there is that just lets you Break apart things that have different teams own different parts of the config In the same way that a different team could own the route to the gateway. Exactly the same way grpc route is being implemented the get The design for this has been we approved it. Um, yeah, we're in the process of implementing it Yeah, we really want to do more layer 4 stuff. 
So for those of you who are doing TCP stuff and UDP stuff, we really, really need use cases, and to talk about it. Most implementations that we're talking to at the moment are proxies, not true load balancers; they're actually terminating sessions and restarting them, and that has big implications for TCPRoute and UDPRoute and how viable they are. I would really like to talk to people who are okay with that, and to people who are not okay with that, about what they want to see. We really desperately need use cases for those layer 4 things. That's one of the reasons they're still in alpha: we just don't have enough information about what's really needed there.

And we're looking at some wilder use cases, like having a Mesh object instead of a Gateway object. In that case your routes are describing what happens east-west instead of north-south, and the Mesh object would describe how everything fits together, not how you get in and out of the cluster. Egress use cases: we've started talking about this, and we know people need it, but I'm worried that the constructs we have at the moment don't fit well for egress, so I really want to spend some time with the people who need it to talk about the use cases. Again, there's that theme: we always need more use cases. And ClusterIP. It's a real stretch, but it was once mentioned that maybe we could have a ClusterIP GatewayClass, so I had to add it to the slide. We'll see.
Just throwing it out there in case that interests anyone. So if you do want any of this stuff, come talk to us about it, log an issue, and we can talk about it. Most things are on the table at the moment because it's such early days, and if you've got use cases you need addressed, then we would love to do them.

Lastly, like I've been saying: please get involved. The weekly community meetings are on Mondays, US Pacific time — I am in Australia, so it's my Tuesday morning. I can't do the time zone math in my head, sorry, to tell you what time that would be in Central European Time. We do one meeting a week at the moment; if you all need it, we can talk about standing up alternative meetings and things like that, but right now we just don't have enough people to need it. Contributors from all backgrounds are welcome. Like I said, we desperately need use cases, so if you are interested in contributing to an upstream thing that you're probably going to end up using, please come and tell us about what you're trying to do. If you are an implementer implementing this API, please come and tell us what your experience was like implementing it. We need to know: what have we missed, what's hard to implement, what was easy — the plus-ones are nice too. This is our website, gateway-api.sigs.k8s.io. We are at #sig-network-gateway-api in Kubernetes Slack, and on GitHub right there. That's it. Do we have any time for questions? How am I doing — two minutes? I knew you were on track. Do we have any virtual questions? Can we get the tech crew? They don't seem to be in there. Okay, we'll work on that. Oh, cool. All right, there we go. Okay, perfect.
Thank you. All right, so, questions in the room — and do wait for the mic, because we want them on the live stream.

I'm not sure I understand: will this be a part of Kubernetes? If I have a Kubernetes cluster on the latest version, does HTTPRoute come included? And if yes, does it come to replace Ingress — is the future plan that Ingress will be deprecated and there is no resource named Ingress in Kubernetes?

That's a great question. Ingress is around to stay. Ingress is the simple API; it works for a large portion of the community. This is the advanced API. Ingress will be around indefinitely — it is a GA API, and as Nick said earlier, GA is forever. Gateway API is a new API, and as Nick mentioned, it's delivered as CRDs; it's not dependent on the latest version of Kubernetes. You could take these CRDs and install them in a 1.16 cluster. So it is definitely not included as part of your Kubernetes install — you will need to install the CRD manifests plus the webhook to get the functionality.

Is there a use case where I should use Ingress and not HTTPRoute? Because I don't see one.

If you don't need the advanced functionality that we talked about with HTTPRoute, yes, you absolutely should use Ingress.

I want to just use the Gateway API and not use Ingress any more.

That's great, good to hear. I think the reason to use the Gateway API is that you have advanced functionality that is not addressed by Ingress. And we're hoping that we can make the API so good that you don't want to use Ingress — that it's simple enough and easy enough that you just want to use the Gateway API instead. Thank you. And we're at time.
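To make the "when to use which" answer concrete, here is a minimal Ingress and a rough HTTPRoute equivalent. If the Ingress form already covers your needs, Ingress remains the right choice; all names here are placeholders:

```yaml
# A minimal Ingress...
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# ...and a rough HTTPRoute equivalent. HTTPRoute only pays off once you
# need features Ingress can't express (header matching, traffic splits,
# cross-team role separation, and so on).
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: example-gateway   # a Gateway owned by the cluster operator
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```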
So if you want to leave, please do so quietly, but we'll continue taking a few questions.

Hi, thank you for the conference. I have two questions. I find the use case very interesting, but also very challenging — attracting the traffic from the CNI, because it's also CNI-dependent. That's one question. And the second question: could you elaborate on what you meant with ClusterIP, please?

I didn't hear the second part — yeah, what did you mean with ClusterIP, what is the use case you're looking at? I can cover that one. This is way pie-in-the-sky theoretical; we don't know what it would look like. It's just: could we use these API constructs to represent the ClusterIP routing we have today with Service, and maybe Service becomes a little simpler? I don't know — this is years-in-the-future theoretical.

And so you were saying that you're worried about getting traffic from the CNI with ingress. Right — we actually kind of expect that people will eventually use a sort of stacked gateway system. If you have an ingress controller that is taking HTTPRoutes, it may actually use another Gateway that has TCPRoutes to provide the functionality that a Service of type LoadBalancer does today. Right now, if you install an ingress controller, a lot of the time it's actually using a Service of type LoadBalancer under the hood to create a load balancer that exposes the thing doing the ingress for you. We anticipate that that'll stick around, but eventually people will probably end up using an ingress-level Gateway plus a sort of Layer 4-level Gateway. Does that answer your question?
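The "stacked gateway" idea mentioned here could be sketched as two Gateways with different classes — one provisioning the external Layer 4 load balancer, one doing in-cluster HTTP. To be clear, this is speculative: the class names and the wiring between the two layers are assumptions, since this pattern was not a defined API at the time:

```yaml
# SPECULATIVE sketch of stacked gateways; class names are placeholders,
# and how the two layers reference each other was not yet designed.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: edge-l4
spec:
  gatewayClassName: cloud-tcp-lb     # provisions the external load balancer,
  listeners:                         # replacing Service type=LoadBalancer
  - name: tls-passthrough
    protocol: TCP
    port: 443
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: http-proxy
spec:
  gatewayClassName: in-cluster-proxy # the ingress controller, sitting behind
  listeners:                         # the L4 gateway above
  - name: http
    protocol: HTTP
    port: 80
```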
Okay — so we've got another question over here, and while I get to you: the only question online is "where can I get the PowerPoint from?" We will upload it right after this.

Hello. So you're the first project to use CRDs like this, and obviously you've had the teething problems, the first-project problems. Do you see this becoming a more common thing? I quite like the idea of the sort of modular element of implementing things in Kubernetes, and getting involved in it.

No, go ahead. Sorry — the admin network policy API is one that is following in our footsteps: another project under SIG Network that is working towards a CRD-based API. That's a great question. It's not all perfect yet, but we are trying to blaze a path that makes it easier for others to come. We are trying to pull things out of core and make it more modular, exactly like you're saying. There's an ongoing effort to pull things out of the tree of main Kubernetes, so that ideally everything in main Kubernetes is GA and not changing very often. That will make all of our lives easier when it comes time to upgrade — the more we can pull out of the tree, the better off we will be. Awesome. I believe there was one more question.

Okay, yes, that'll be our last question for now. So you talked about Envoy, right? So the way this is going to work with Istio is that the Istio control plane will translate the Gateway API — does Envoy itself understand it, or does something translate it and Envoy will then implement it?
Yeah, so Istio is one of the implementations that supports this. Istio takes the Gateway API objects and turns them into config that it passes to its Envoys. Envoy itself is not modified to understand the Gateway API — Envoy doesn't need to be changed, because Envoy is configured with the xDS APIs. Envoy Gateway is all about writing a separate thing that doesn't need Istio, that can take Kubernetes Gateway API objects, turn them into Envoy config, and configure your Envoys. Other implementations — ingress controllers — use NGINX, and other load balancers, I think HAProxy and MetalLB, have talked about supporting the Gateway API. The idea is that anybody can use these constructs to describe any level of that functionality, and it will work. And then if you move to someone else who supports a similar level of functionality, you can just switch your config over and it should just work. Yes, that's the whole idea of the API. And yes, Istio already supports the Gateway API. Actually, I said that would be the last one, but we got one more online, so I'll throw it in here: do you have a reference implementation, like Kubernetes NGINX ingress?
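The portability point can be made concrete: switching implementations is, ideally, just a change of `gatewayClassName`, with all attached routes left untouched. `istio` is Istio's actual GatewayClass name; the alternative shown in the comment is a placeholder:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: istio   # swap to another installed class, e.g. "example-lb",
  listeners:                # and the HTTPRoutes attached below stay unchanged
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```

This works because the contract lives in the Gateway API resources themselves, not in implementation-specific annotations — which is exactly where Ingress portability tended to break down.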
So, we do not have a reference implementation, and we don't have any plans to have a single standard implementation. There may be one in the future that is widely supported in the community — like NGINX ingress — that kind of becomes the de facto one that we reference, but that does not exist today. And we do not ever plan to anoint one as the one true Gateway API implementation. The idea is that the conformance tests are the source of truth: if you pass the conformance tests, you're a conformant Gateway API implementation, no matter who you are. And we are planning to have a bit of a feedback loop, where you will be able to take your implementation, run the conformance tests against it, and pass the results back to us, and we will be able to publish a canonical list: these implementations passed the conformance tests on this date, at this level, and so on — the same as you do for upstream Kubernetes conformance. You get a sticker that says you are conformant; it's the same idea. All righty, thank you.