Hey everyone, welcome. Thank you everybody for making it. My name is Rob Scott. I work at Google on GKE Networking, and this is... Hi, I'm Nick Young. I work at Isovalent on Cilium Service Mesh, among other things. And we are both Gateway API maintainers, and we were both here four years ago, well, actually in San Diego, when this project started. And we're really excited to talk about the path that got us from there to here, which is Gateway API hitting GA and becoming what we think is one of the most collaborative APIs in Kubernetes history. And a little bit of housekeeping: as I see more folks coming in, there are still seats spread out through the audience. Those of you in the audience, if you could shuffle in, if you can. We're all friends here. And yeah, everyone try and find a spot as we keep going. Yeah, for sure. So let's first talk about how we got here. And the recurring theme throughout this presentation is that it took a lot of help. The community behind this has been amazing, and many of the community members are here today. So thank you to everybody who's helped get us to this point. To go back in time, this really all starts with the Ingress API. Just a show of hands, how many of you have used Ingress before? OK, yeah, very popular API. That's great to see. Basically everyone, basically everybody. So that's great. But what you may not remember is that a while back, Ingress was just kind of stuck in beta. It became what we called a perma-beta API. It had been in beta for around five years at that point, which in Kubernetes years is basically forever. And so we needed to move on. There was this push that we needed to take this API and other beta APIs on to GA. And this became a little bit of an experiment for what Gateway API, or Ingress v2, would become. So we learned a few things. One, it's really difficult to collaborate on an API if you're building it inside the Kubernetes tree.
There's a lot of release process and cycle time that makes it very hard to work together on something. Ultimately, changes come in from one PR, from one person. You can collaborate on a fork, and that is what we tried, but it was very challenging. And also, we really only had around five people showing up to meetings, and a couple of us actually got the API from v1beta1 to v1. Now, along the way, the feedback loop was really painfully long. You can imagine that once you get something into Kubernetes, it takes anywhere between four and eight months for an implementation to pick it up and do something with it. And then from there, users have to install that latest version of the implementation, test it out, and give feedback. So that feedback loop made it really difficult to get any kind of meaningful feedback. Then finally, conformance tests. We tried. We said, well, hey, we don't have any conformance tests, Ingress implementations are each doing whatever they possibly can, and there's not really true portability, so what if we add some conformance tests? We added them way too late. Unfortunately, despite our best efforts, they were mostly ignored, and there was not the portability we wanted. So fast forward a bit, and this is the timeline we're going to be working through for Gateway API. As we mentioned, Gateway API has been a project that's existed for around four years. It didn't always have the name Gateway API, we'll get there, but KubeCon San Diego in 2019 is where it began. And we're going to walk through this timeline and go into a little bit more detail on how we got to where we are today. So the beginning really started at KubeCon San Diego. Now, you may recognize some of those faces. Some of them are here today. Bowei is the person who proposed this API, and we'll walk through his proposal a little bit. You can see Nick in that picture. I'm not in that picture. I promise I was there, though. Yeah, Daneyon, right over there.
There are a few people that were there from the beginning, and many of them became maintainers of the project early on. But with that said, this was the very first brainstorming session for what was Ingress v2 at the time, the next generation of Ingress. Bowei's proposal here was that this API would be not just Ingress v2, but the next generation of Service type LoadBalancer as well. Pretty broad goals here. But you may also recognize these goals: if you have heard any Gateway API talk or looked at our docs, you may recognize these words. Generic, expressive, extensible, role-oriented. Still on the main page of our docs, these goals have stuck with the project since the very beginning. Now, let me pull a couple of slides from Bowei's initial proposal. First up, he proposed that we have this kind of three-layer architecture: you have an infrastructure layer, you have a routing layer, and you have a backend layer. Then, similarly, he said, well, what if these resources were roughly called Gateway, VirtualHost (that one didn't really stick), and some backends? So again, you're seeing some really strong similarities to the API we ended up with today at that very beginning. And one of the key ideas here is that this was going to be a role-oriented API. Ingress was all stuck in a single resource, which made it really difficult to scale and distribute. You couldn't say, okay, this role in my organization handles TLS, this other role handles routing; everyone had to share a single resource. The other, maybe most important, part of Bowei's proposal was that of support levels. And the idea was that, unlike Ingress, where everybody had to support everything, we'd have a core API, which every implementation would support to the full extent, but we'd also have extended features, which meant that we could add features that not quite everyone could support.
And then finally, we'd have custom support, and we'd really build extension points throughout the API so that people could extend and build their own custom things on top of this API, instead of being stuck with the annotations we all know and maybe don't love from Ingress. So that gets us out of 2019; KubeCon San Diego was really at the end of the year. And so 2020 left us with a whole lot of questions. We formed a working group, we had weekly meetings, and we just had loads and loads of questions. What even is the scope of this API? When you're building a new API, that's a pretty key question. What are the roles we want to support? How does that translate to the resources? What belongs in each resource? How do these resources even connect? This was the first time we'd really had a multi-layer API like this. How do we balance the simplicity we want with all the advanced use cases we want to support? How do we even release a CRD-based API? If this is going to be a CRD-based API, that's kind of a new concept for an official Kubernetes API implemented by many different implementations. And then, related to that, who on earth is going to implement this? You know, this is this brand new experimental thing. How do you convince somebody to implement something that you know is going to change? Right, you need people to sign up to implement, and then not just implement, but get feedback from users. So we needed a lot of people to buy into this idea, and fortunately, that happened. So let's talk about the resource model we launched with in v1alpha1 in 2020. It looks really similar to what we have today, but you're going to see that alongside every resource, we had an associated role: an infrastructure provider, a cluster operator, and an application developer. This resource model and these roles have stayed intact throughout the lifetime of this API.
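To make that role split concrete, here's a minimal sketch of the model, shown in today's v1 shape (the original v1alpha1 used the networking.x-k8s.io API group and a different binding direction). All the names here, like example-class, prod-gateway, and my-service, are made up for illustration:

```yaml
# Infrastructure provider: declares which controller implements
# Gateways of this class.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
---
# Cluster operator: a Gateway of that class with an HTTP listener.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# Application developer: an HTTPRoute that attaches to the Gateway
# and sends traffic to their Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
  - name: prod-gateway
  rules:
  - backendRefs:
    - name: my-service
      port: 8080
```

Each resource can be owned by a different role, which is exactly the separation Ingress couldn't provide.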
There is a bit of a difference, though, in the sense that in this model, you see Gateway selects HTTPRoutes. So the Gateway pointed to the routes. That changed after some very useful feedback from users in that v1alpha1 period, and Nick will talk more about what it changed to shortly. Then also, how on earth did we come up with the name Gateway? Well, you'll see this a lot: we did polls and polls. People came up with ideas. Those are the names we could have had, but we ended up with Gateway because that's what people liked the most. And the rest is history. We're not renaming again. You heard it here first. Yeah, no way, no how. Yeah. So with that, hand over to Nick. Yeah, so in 2021, the problems that had surfaced with Gateway selecting routes really came back, and the feedback we got was very strong. We had the first implementations, Contour, GKE, Istio, Kong, Kuma, even Traefik, all sort of saying, hey, when users are using this, the actual application developers want to be the ones who own their routes, and when the Gateway had to select the routes, that wasn't the case. The infrastructure people needed to know what routes were going to be there ahead of time, because they needed to select them. And so we refactored the gateway-to-route binding so that the routes are the ones that select the gateway. That puts the control back into the hands of the application developer, who, at the end of the day, is the proper owner here. The other stuff that we changed this year was that we added the GEPs, Gateway Enhancement Proposals, lifting substantially from upstream KEPs. The intent there was to give us a record and have a bit more of a formal process to actually get these things through. ReferencePolicy was another thing that came in this year, which is about making cross-namespace references safer.
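As a sketch of what that looks like today, here is a ReferencePolicy in its current form (it was later renamed ReferenceGrant, which is why the kind below says that). The namespaces and names are made up for illustration:

```yaml
# Lives in the target namespace and allows HTTPRoutes in the
# "frontend" namespace to reference Services in "backend".
# This is the receiving side's half of the two-way handshake.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-frontend-routes
  namespace: backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: frontend
  to:
  - group: ""
    kind: Service
```

Without this grant in place, a cross-namespace backendRef is simply rejected by conformant implementations.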
One of the key realizations we had about the reference between Gateway and route was that often it does need to cross namespaces, and that if you're going to cross namespaces in Kubernetes, the only safe way to do that is for both sides of the handshake to agree. So there actually needs to be a two-way handshake, not just a one-directional thing. And ReferencePolicy is a way to extend that to other resources that aren't part of Gateway API. We renamed the project from Service APIs to Gateway API. The use of the word 'service' was a bit confusing, and we decided that Gateway was the resource that everything spun around, so it was best to keep the name straightforward. And we also added policy attachment. So this is what the reference model ended up looking like: Gateways have GatewayClasses, and HTTPRoutes have parentRefs, which in the core implementation are Gateways. But they can be other kinds too, most importantly for later. So policy attachment is a way to attach structured config to various points in this hierarchy that we've created, which then allows that config to flow down, and we defined a lot of stuff about how this could work. Now, this pattern has changed significantly as we've been going on, but I think the key idea here is that, as part of extensibility, we wanted to have this extra thing you could use so that if there was something you wanted to do that you couldn't fit into the core, you didn't have to rely on annotations. There were other ways you could do those things. Yeah, and then in 2022, we felt confident that we had gotten the core parts of the design right. So we moved Gateway, GatewayClass, and HTTPRoute to beta, and as part of that, we started working on conformance tests to really meet this goal of portability.
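To illustrate the policy attachment idea, here's what an implementation-specific policy might look like. Note that RetryPolicy and its fields are entirely made up for this example; the real part is the pattern of a targetRef pointing at a resource in the hierarchy:

```yaml
# A hypothetical vendor policy attached to a Gateway. The config it
# carries would flow down to everything routed through that Gateway.
apiVersion: example.com/v1alpha1
kind: RetryPolicy
metadata:
  name: default-retries
  namespace: infra
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-gateway
  retries: 3
```

The point is that vendors get a structured, discoverable extension mechanism instead of a pile of free-form annotations.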
We wanted to make sure that every implementation behaves in the same way for the core and extended features, and the only way you can guarantee that is to have tests that say you must behave in this way. We also had GAMMA start at this time. Props to the GAMMA folks for really driving a lot of the work here on having Gateway serve mesh use cases. We renamed ReferencePolicy to ReferenceGrant. We realized that calling something ReferencePolicy, when we had a pattern for making other things called policies, was probably pretty confusing, so we changed that name. And thanks to some great work from the gRPC team at Google, we had GRPCRoute introduced as well. So yeah, the GAMMA folks really poked us and reminded us that a lot of what we were building is very portable between both north-south and east-west routing. We had kept the routing APIs very intentionally separate, and that worked really well for being able to handle mesh use cases as well. SMI transitioned to GAMMA alongside the Gateway API beta announcement, and as of today, Istio, Kuma, and Linkerd all support the experimental GAMMA prototype. So yeah, props to all of them. The other thing we introduced in 2022 was release channels. Now, this is one of the questions Rob mentioned before: how do you distribute an API like this, and have people be confident about how this thing works and how we can add new fields and remove them, in the absence of feature gates? In core Kubernetes, if you add a new field, you also have to add a feature gate that defaults off, which people can enable to try out the new functionality without it being enabled for everybody. An important part of further stabilizing Kubernetes has been formalizing the transition process of how feature gates get defaulted on and then eventually removed when we're confident the code is stable.
So we wanted a way to approximate that with CRDs. And the way that we came up with, and I think this was mainly you, Rob, was this idea of channels of CRDs. So even today in Gateway API, there are two versions of the CRDs: the experimental CRDs and the standard CRDs. The experimental CRDs include resources and fields that are experimental. The standard ones include only resources and fields that are stable. Importantly, you can have stable resources with experimental fields. So as of today, now that we've got to GA, the Gateway has experimental fields, like the new infrastructure field, that are not present if you install the standard versions of the CRDs. We've carefully designed the CRDs, though, so that if you switch from standard to experimental, or even hopefully from experimental back to standard, maybe, maybe, things shouldn't blow up too badly. And that's all you can do when you're doing a lot of these really hairy API changes: you do your best to make sure that things should work as you anticipate, but as we all know, it's never 100%. So yeah, and for this last year, I'm gonna hand over to Rob. Cool, yeah, this has been a huge year for Gateway API. As you likely all know, we made it to v1, we made it to GA, it's been huge. We used to, at these presentations, have a wall of logos of implementations of the API. I've given up; there are too many at this point. If you have a networking implementation that you care about, chances are they already support Gateway API or they're working on it. We have 26 implementations of the API, with more coming every week, it feels like now. I've lost track, but thank you to the community for stepping up and really buying in on this API. Also huge this year: mesh support graduated to Experimental over the summer. That means that, like Nick mentioned, Istio, Linkerd, Kuma, and I think some others are already conformant with the mesh spec for Gateway API.
And then we've really been focusing on the UX here, right? So ingress2gateway is an attempt to make it easier to migrate from the Ingress API to Gateway; we have a new tool to help with that. We're seeing more and more people making that transition, making that upgrade, so we're trying to make it a little bit easier. And then also, by the very nature of working with CRDs, there are some unfortunate bits, like describe output not being as helpful as we'd like. There are some other things we're trying to smooth over in terms of Gateway UX too. So we've built a new tool, gwctl, to make it a little bit easier to work with this API. If you're interested in more about moving from Ingress to Gateway, there's a recent blog post on kubernetes.io that shows everything about that tool and walks through what you might need to do to migrate or upgrade from the Ingress API to Gateway API. Really, really great contributors there have owned that and pushed it forward to release just a couple of weeks ago. And then I have to at least mention that we have a logo now. We had an open issue for a while, and many different options were proposed, but the one that really stuck started from this sketch. And I have to call them out: there's Pierre-Louis right in the audience there. There you go. This sketch came from him, and it really stuck. So that translated into, what do you know, another poll. And this is the result of that poll. We had a few options. My wife actually drew out some of these options from that sketch. And there were some other interesting options, like a black hole, that didn't end up getting selected but were also popular. I kind of felt that our logo being a black hole was maybe a slightly bad message to send, personally. So our final logo is this right here. And we are trying to symbolize both that this is still very much a Kubernetes project, a Kubernetes API.
So it looks very similar to the Kubernetes logo, but we're trying to show that this is a routing API; that's the arrows, north-south for ingress and east-west for mesh. So that's the meaning of our Gateway API logo. But the key thing in all of this is that we wouldn't be here without the amazing set of contributors that made this a reality. Unlike any Kubernetes API I'm familiar with, we had hundreds of contributors that made this API what it is today. We have not been able to include absolutely everyone, but we had 170-plus people who pushed a commit, and there are so many more ways to contribute, whether it's through a logo, commenting on issues, or providing feedback; so many different people contributed. Maybe just a show of hands: anyone in this room contribute to Gateway API, push a commit, join a meeting? A few people. Yeah, thanks everybody, round of applause please. And then of course I want to call out some of the key contributors that really took this to the next level. I want to call out Shane right here, an active maintainer, and Daneyon, also in the front row, an emeritus maintainer. Lots of people took us to where we are. There are also the GAMMA leads, who pushed mesh support forward. I think I see a few of you here; hands up, GAMMA leads. Anyone? Okay, a few in the back row. All the cool kids at the back. Yeah. And then so many more. I'll hand it off. Yeah, so we also made a big effort this year to split out some of our roles, because, you know, we were struggling to keep up and we were worried about velocity stalling. So we've made a big effort to split out some of our roles. We've got conformance approvers and reviewers now. So yeah, if you are one of the conformance approvers or reviewers, hands up please. Yeah, thanks very much everybody. Round of applause for them. They're really working hard on making sure that all the conformance tests keep moving. And we also added two new GEP reviewers, Candice and Grant.
And yeah, are either of Candice or Grant here? I think so. No? Okay. But yeah, thanks a lot. I know I personally have really appreciated having some other folks do the initial parts of GEP reviews, because we have a lot of GEPs in flight. Some might say too many. I'm looking at you, Shane. It's been really hard to keep the velocity up with only the three of us, and so having extra people has been really, really helpful. Yeah, thanks. Applause for everybody who has really helped out. Thanks. Yeah, and we've got a few, we've got the contributors for ingress2gateway. Yeah, hands up if you are here, ingress2gateway people. Yeah. Okay, awesome. Matee is here. Thanks, Matee. And Gaurav has been doing great work on gwctl as well. Yeah, and we're now at 30 contributing organizations. These logos are the top 10 contributing organizations by CNCF DevStats. Thanks to all the organizations for allowing us to work on this API and make it happen. Yeah, and we really, really appreciate all the resources all these folks have put in. So, I think a pretty important question for us to answer is: what does GA actually mean for a CRD-based API? This slide summarizes a lot of it. The v1 APIs have the same guarantees as any other v1 API in Kubernetes. They are stable. The APIs will not change in any breaking ways for the foreseeable future. The APIs will not be removed for the foreseeable future. If any of that changes, there will be a long deprecation period and lots and lots of communication about it, but I do not see that happening. Implementations themselves can make their own calls about when they will call their implementation stable, but now they have a stable API to build off of. Yeah, and as I said, we're only gonna make compatible changes going forward. There's been a lot of work done in core Kubernetes about exactly what constitutes compatible changes.
And, I mean, I have those two documents on my bookmarks bar because I read them multiple times per day. New fields will be added in that experimental channel that I talked about before. New objects will start with a v1alpha2 version in the experimental channel, and objects will graduate from v1alpha2 to v1 directly, with no beta. Rob's gonna talk more about that, and about when they graduate to the standard channel. Yeah, so there have been quite a few updates and clarifications in 1.0. One of the big ones was listener isolation. Thanks, Arko, for helping call this one out. We've made it clear that at most one listener should match a request, and only the routes attached to that listener should be used for routing. Some implementations don't do this yet, so we are starting with this as a recommendation only, but we're going to add a supported feature in conformance tests for this, and I personally would like to see us make this mandatory eventually. Also, we did a bunch of work on ensuring that parentRefs are unique. This is a bit of a subtle point, but it turned out to have really big implications for how conformance tests work. I think, next up, we're on to one of Rob's favorite topics, CEL validation. All right, so show of hands, how many of you like working with validating webhooks? Okay, a few brave people out there. I appreciate that. We did not, so we did everything we could to move away from them as quickly as we could. And thankfully, Gaurav came through and added CEL validation to our CRDs in the 0.8 release of Gateway API, which came out this summer. That was a huge change for us. It enabled, yes, round of applause for Gaurav on that one. This has been such a great addition to the CRD API in Kubernetes: the ability to add complex validation logic directly to the CRDs instead of needing a validating webhook. This means we don't need to maintain the validating webhook through GA; it was deprecated in 0.8.
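As an illustration of what CEL validation looks like in a CRD schema, here's a made-up rule (this is not one of Gateway API's actual rules, just the mechanism):

```yaml
# Hypothetical CRD schema fragment. The CEL expression in "rule"
# is evaluated by the API server itself on create and update,
# with no webhook involved.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
      - rule: "self.port >= 1 && self.port <= 65535"
        message: "port must be between 1 and 65535"
      properties:
        port:
          type: integer
```

Because the rules ship inside the CRD manifest, validation works out of the box with nothing extra to deploy or keep running.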
It was still included in 1.0, which was just released, but it's optional and not recommended. And beyond that, it's gone. So, goodbye, validating webhook. We won't miss you. Then also, I have to call out that for a long time we've been doing docs on our own website, because kubernetes.io docs are actually versioned with Kubernetes, and Gateway API has its own versioning, its own release schedule, et cetera. But thanks to Daneyon, also front row right there. Yeah, thanks, Daneyon. He took this and said, hey, we should have some docs on kubernetes.io so you can discover this right beside the Ingress and Service APIs. Those were added maybe a week ago, but thank you for getting those in. So, if you're ever looking for Gateway docs, they're much easier to find now. Now, it's not a good KubeCon talk without at least one controversial thing. So, our opinion is that beta just is not worth it, and we're getting rid of it. At least in Gateway API, beta is going away. We took a hard look at this. Many of us in the community are very familiar with the pain that the Ingress beta-to-GA transition caused. It was very painful, I'm sorry. We are trying to avoid that going forward. So, one way to do that is to just not have beta. We've gone to two levels of versioning. Here's a long list; I can't go through everything here, but there are significant costs for maintainers, implementers, and users with every new API version you add. And anytime you're migrating between API versions, maybe some of you remember the Ingress transition, it can be especially painful. So, with that in mind, we've said, well, maybe beta just isn't worth it, so we should get rid of it. The value of a beta API version just does not add up. So, we're going all in on our release channel concept instead. What that means is, if you want to try something new, maybe not in production, use the experimental channel. You can get all the new features with all our CRDs just by using that.
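In practice, picking a channel is just a matter of which release manifest you apply (v1.0.0 shown here; substitute whichever release you want):

```shell
# Standard channel: stable resources and fields only.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

# Experimental channel: also includes experimental resources and fields.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/experimental-install.yaml
```

Both manifests install CRDs for the same resource versions, so switching channels swaps which fields the API server accepts rather than introducing a new API version.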
On the other hand, if you want something production-ready, use our standard channel, and you'll get standard channel, production-ready CRDs. So, that's it. Only two levels. We don't think that intermediate level added enough value to justify the cost, so we're getting rid of it. There's a lot more nuance here, as you might expect. We've been talking about this in the community for around a year now. If you want to see the original proposal doc, there's a QR code there; there's a lot more to this, but at a high level, beta is going away in Gateway API. Now, we're not doing this lightly. We have very stringent graduation criteria for anything that goes from experimental to standard. Everything needs to have full conformance test coverage. We need to have multiple implementations that show that they're passing those conformance tests. We need to see widespread implementation and usage of those features, and at least six months of soak time in the experimental channel before a feature can even graduate. And then, maybe most critically, no significant changes: before something graduates to GA, or to the standard channel, the API has to sit unchanged for a period of three months and at least one release. And then finally, the last line of defense here: approval from subproject owners, Nick, myself, Shane, and others. Yeah, we have to approve it, but then we also have another line of API reviewers that also have to approve it. So we really are trying to make sure that everything that makes it through to the standard channel really is stable and production-ready. Okay, yeah, so let's talk about some of the new experimental features that have come out in 1.0. Candice has really pushed through some great work here on introducing BackendTLSPolicy. And this is the first policy that we've actually included in the base manifests.
This policy attaches to a Service and says to Gateway API controllers: when you're connecting to this Service, you should use these TLS details. So usually a gateway has some sort of proxy that is actually performing the gateway duties, and this tells that gateway, hey, you should be a TLS client when connecting to this back-end. So this enables use cases sometimes called re-encrypt, or, you know, TLS to the back-end. And it's been a pretty big deal to finally get this in; it's been a big thing that we've been missing. We have passthrough in TLSRoute, which has been experimental, but this one lets you terminate TLS and then do TLS again from the gateway to the back-ends. So yeah, thanks very much for this, Candice. We've also had HTTPRoute timeouts added. This lets you specify two types of timeouts at the HTTPRoute level. This, again, is an experimental field. In order to do this, we needed to introduce a duration format that's a subset of the Go duration format, but that is now a standard part of Gateway API as well. The intent here is that, once implementations pick this up, you can specify both a request timeout, which is roughly the total time for the request to go to the back-end and come back through the gateway, and a back-end request timeout for a single request from the gateway to the back-end. So thanks to Frank and Simone for pushing this one through. The last one I'm gonna talk about is infrastructure labels. Thanks to John for getting this one to happen. The idea here is to specify labels or annotations that are then propagated down to any resources that the Gateway controls. One of the key use cases is that if your Gateway, say, provisions a Service of type LoadBalancer in order to make the actual traffic flow, now you're gonna be able to put annotations on that.
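Backing up to the timeouts for a moment, here's roughly what they look like on a route. The names are made up, this requires the experimental channel, and the duration values use that restricted Go-style format (whole units like 10s or 500ms):

```yaml
# Experimental as of 1.0: per-rule timeouts on an HTTPRoute.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: timeout-route
spec:
  parentRefs:
  - name: prod-gateway          # hypothetical Gateway
  rules:
  - timeouts:
      request: 10s              # total time for the whole request
      backendRequest: 2s        # time for a single gateway-to-backend request
    backendRefs:
    - name: my-service
      port: 8080
```

The backendRequest timeout can be shorter than the request timeout, which leaves room for an implementation to retry the back-end within the overall budget.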
So if you are running in one of the cloud providers and you need to put annotations on a Service to make it select a type of load balancer or other config, this functionality is gonna enable those use cases. Last one. Dave, thanks very much for this, man. Dave has done a lot of work to nail down the interaction with the appProtocol field on the Service. This will let you say, hey, if you set h2c or WebSockets on your Service to say this port does WebSockets, Gateway API implementations can actually be aware of that and make sure that protocol works. Yeah, okay, and then, yeah, you wanna take gwctl? Sure, yeah, I can do that. gwctl is, as I mentioned earlier, trying to get around some of the UX issues we've encountered and gotten feedback about with Gateway API. Right now, when you're working with kubectl and CRDs, there's no way to customize the describe output, and it's near impossible to describe nested content, and if you're familiar with Gateway API, we have our share of nested content. So unfortunately, a lot of people interacting with Gateway API are using kubectl get -o yaml, our favorite tool, but that's not a great UX. So gwctl is designed to be both a standalone CLI and a kubectl plugin, and it has many lofty goals. One: there's that policy attachment model Nick called out earlier. This is our go-to model for extending the API; many implementations are building policies that attach to different parts of Gateway API to extend it. This plugin can already compute the effective policy and say, look, this implementation-specific policy is attached to your resource here, and it's having this effect on it. Similarly, we're hoping to improve get and describe output and surface potential problems that we might see via status, or otherwise make it just a little bit easier to work with Gateway API. Now, there's a lot more coming in Gateway API after 1.0.
You may say, well, 1.0, GA, we must be done. No, we're just getting started. There's so much more ahead. As I just mentioned, we're trying to improve the UX and the upgrade flow, but we also have so many more things in progress. We have 16 GEPs that are either in an experimental or provisional state. That's an absolutely massive number, and there are more ideas coming in. So we need to get a handle on this, and we're saying that before we add more things to experimental, we need to remove some of the things we have in there. That means they can either graduate up to standard or they can go away. We may remove them, they may fail, I don't know, but the key thing is we need to make room in the experimental channel so it doesn't get too bloated and big. Then finally, we're focusing on predictable and collaborative prioritization. We want everyone in the community to have an understanding of what's coming next and how we're making decisions. So for an idea for a new feature: what's the demand? What's the complexity of the change? How many implementations are gonna support it immediately? That can help us prioritize what's coming next; we can gauge the incoming demand and complexity of any new feature. Now, there are absolutely tons of opportunities to get involved in this API. We already have so many great contributors, but we can always, always use more. 100%. Yes, for example, ingress2gateway and gwctl are great opportunities to jump in. Our docs have grown organically, you could say. Yeah, that's the best way to describe it, organic growth. So we could definitely use some help there as well. Choose your own adventure is also an option. You can find our website, you can find us in Slack. We're also having office hours with maintainers and anyone else, every day at KubeCon at 2 p.m. Join the Slack and we'll announce where we are an hour ahead of time, basically. Yeah, when we find out where we're gonna be. Yeah, basically.
So, with that said, thank you so much everybody. Yeah, thanks very much everyone. We do have like less than a minute left so we could probably do one question if anyone wants but if everyone's good, then we're good too. Thanks very much.