Hi everybody. Welcome to Gateway API: Beyond GA. My name is Nick Young. I'm a staff engineer at Isovalent and a maintainer on Gateway API. And I've got some other folks here on the stage. Mattia?

Yeah, hello. I'm Mattia Lavacca. I'm a software engineer working at Kong on the Kubernetes team, and I'm a Gateway API contributor.

Hey, everyone. I'm Surya, and I'm an engineer working on the OpenShift networking team at Red Hat. I contribute to the Network Policy API project. Happy to be here at KubeCon today with all of you.

Yeah, and I'm Leo. I'm an engineer at Google, specifically working on GCE, Google Compute Engine, so a little further down the stack. I'm also an ingress2gateway maintainer and Gateway API contributor.

Okay, so let's get started. What we're going to do today is run through some Gateway API updates. Then these folks have got some cool projects we've been working on to tell you about, and I'll round it out at the end.

So we are hard at work on v1.1. We released 1.0 just over six months ago, and in this release we've been working on lots of quality-of-life and tooling improvements: conformance profiles, which Mattia and Surya are going to tell you about; some user experience improvements, which Leo is going to tell you about; and a policy attachment update, which is me.

We've also made a few changes to the GEP process. For those of you who don't know, GEPs are the Gateway Enhancement Proposals; they're how Gateway API handles feature requests. We have a process for this, and it's reasonably complicated, but we've added a new state: Memorandum. This is a bit more like a traditional RFC. Our current GEP process has us run through a series of stages that eventually ends with the GEP graduating to Standard, which means it's ready to use and fully implemented in the API.
The Memorandum GEP is a way to record that the community has talked about something and we're now in agreement, and so that agreement is recorded. Also, GEP files now include some metadata so we can build some tooling around them.

More importantly, for those of you who have been watching closely, we've added a limit on how many GEPs can be in Experimental. Experimental is the phase before Standard: it's when we're prototyping the thing, implementations have started implementing it, but we're not 100% sure we've got everything right. We currently have 17 GEPs in Experimental, and we don't want to add any more until we start moving some through to Standard, so we can keep the stability level of the API overall rising.

Okay, so we're hoping to have three GEPs transition to Standard in v1.1.0: conformance profiles, GRPCRoute, and route port matching. Additionally, we've got the following GEPs changing levels. gwctl: that one's done already; it has moved to the Memorandum state, and we'll talk a bit more about that later. Session persistence behavior: we're going to move that from the Provisional state to Implementable, which means implementations will actually be able to start prototyping session persistence. And the same for client certificate verification for gateway listeners; that's for when you want your Gateway to require a client certificate from the end user's client. Please check out the GEP board for more info. That QR code is our publicly available GEP board that keeps track of which GEPs are at which stage.

Okay, let's hit some specifics. GRPCRoute: as I said, we're going to move this to Standard as part of 1.1. This is an additional route type on top of HTTPRoute that lets you route gRPC by method and a few other gRPC-specific things, very handy if you're doing a lot of gRPC. Conformance tests are merged, and we've got five implementations passing those tests, so it's ready to graduate to Standard.
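To make the GRPCRoute description concrete, here is a minimal sketch of a route that matches on gRPC service and method. All names here are invented for illustration, and at the time of the talk the resource was still served at v1alpha2 (the move to v1 is described next):

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: echo-route
spec:
  parentRefs:
  - name: my-gateway          # the Gateway this route attaches to
  rules:
  - matches:
    - method:
        service: echo.EchoService   # gRPC service name
        method: Echo                # gRPC method name
    backendRefs:
    - name: echo-backend
      port: 9000
```

The shape deliberately mirrors HTTPRoute, with gRPC-specific matchers in place of HTTP path matching.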
The only thing we've got left to do is actually make the change to move the API from v1alpha2 to v1. It'll still be optional, so implementations don't have to support it, but if they do, there's a required behavior.

Okay, route port matching. This adds a port as an additional field in any type of route's parentRef. If you haven't used Gateway API before: routes, whether they're HTTPRoutes, GRPCRoutes, or TCP or UDP routes, all attach to a Gateway using a parentRef. What we're saying here is that you can put a port in the parentRef, which means that route will only attach to listeners in the Gateway that match the same port. It's just another nice convenience to make sure your config isn't going anywhere you don't mean it to. The conformance test PR is open, two implementations already support it, and it's going in at Extended support. One of the other things we've started adding more recently is this idea of feature names; that's going to be relevant later in some of the stuff the others are going to talk about.

This one describes gwctl. We are building a tool, gwctl, which is a command-line tool that lets you introspect Gateway API resources. It knows a lot more about the Gateway API resource model than plain kubectl, and can therefore tell you more about the resources and how they interrelate. It knows about policy attachment, it knows about how routes attach to gateways, and a bunch of other stuff like that. Really cool, great initiative; Gaurav is absolutely killing it here. The Memorandum GEP is merged, and the next steps are just to keep building and iterating on gwctl, which is already happening. So if you're interested in this, it's a good place to get started with Gateway API contribution.

Session persistence. This one has been going a long time.
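As a sketch of the route port matching described above, here is an HTTPRoute whose parentRef carries a port. The names are made up; the point is that the route will only bind to listeners on the matching port:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: my-gateway
    port: 8080          # only attach to listeners on port 8080
  rules:
  - backendRefs:
    - name: app-backend
      port: 80
```

Without the `port` field, the route would attach to every compatible listener on `my-gateway`; with it, the attachment is scoped down.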
It's been a pretty contentious one. We had one PR, I think, that got up to nearly 400 comments before it got merged, and we've had a couple of other PRs with multiple hundreds of comments, but it is in progress for moving to Implementable. We're going to be adding a BackendLBPolicy object, which is a direct attached policy that attaches to a Service. What that does is direct implementations: this is how you route traffic to this Service. That lets you set things like session persistence behavior, so you can say the session persistence behavior for this Service is this, no matter where or what type of route it's attached to. We're also going to add the ability to configure this inside route rules, but that one is still coming.

Grant Spence has been leading this one, and it has been amazing work. Like I said, when you get that much discussion, actually pulling people together and getting them moving in the same direction is really tough, and Grant has been doing amazing work here. There's only one item left on the checklist before it can move to Implementable, and that's to finish designing the route rule API. So look out for this one to move, hopefully, to Implementable in the next few weeks. I think I forgot to actually say out loud for the recording that we're hoping to release 1.1 very soon after KubeCon.

Okay, client certificate verification for gateways. As I said before, this is the ability to configure a gateway listener to require a client certificate when people connect to it. So if you're terminating TLS, this says you can't connect to this listener unless you present a client certificate that's trusted by the same CA chain. The proposal itself isn't controversial, but, in a classic computer science problem, we're struggling with naming.
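To sketch what client certificate verification could look like: one proposed shape attaches CA certificate references to the listener's TLS config. Everything below is illustrative only; the field names (notably `frontendValidation`) were exactly what was still being debated at the time, and all resource names are invented:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: server-cert        # certificate the gateway serves
      frontendValidation:        # proposed name, still under discussion
        caCertificateRefs:
        - group: ""
          kind: ConfigMap
          name: client-ca        # CA bundle client certs must chain to
```

With something like this in place, the listener would reject any connection whose client certificate doesn't validate against the referenced CA bundle.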
We want to make sure we get all of these names right, so that we have consistent naming for all the TLS things across the whole API. So that's delayed things a bit. This is mainly waiting on review bandwidth; that's me and others.

Okay. The big one. This is the big one I've been working on, and it's one of the reasons I haven't had review bandwidth. Policy attachment is a pattern that describes how you attach metaresources to other resources. Those metaresources change the behavior of the resources they attach to somehow. This has been described for a long time in GEP-713, metaresources and policy attachment. That GEP is being moved to a Memorandum, and all of the implementation details are split out into two specific GEPs for two different types of policy attachment: GEP-2648 is direct policy attachment, and GEP-2649 is inherited policy attachment.

Direct policy attachment is much simpler, by design. The whole point is that we want to get some usages of this pattern to Standard as quickly as we can. Inherited is a way, way harder problem to solve. With inherited policy that can flow across multiple objects, it's really hard to have a standard way for users to know that their object has been affected by a policy. If you think about it: if you're a user, someone else has put something on the Gateway, you've got a route, and its config has changed in a way you don't know about, that is a really bad user experience. We really have a lot of work to do there, but it's happening. I especially wanted to shout out the folks at Kuadrant, who are doing amazing work with policy. This is a really big PR, so the review is taking a while. I think I have changes outstanding that I haven't been able to get to because it's KubeCon, but this should hopefully merge real soon too.
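As a concrete illustration of direct policy attachment, the BackendLBPolicy mentioned earlier follows exactly this shape: a standalone policy object whose targetRef points at one other object (here, a Service). This is a hedged sketch of the proposed API; the field names were still in flux at the time, and all names are invented:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: BackendLBPolicy
metadata:
  name: checkout-persistence
spec:
  targetRefs:                 # direct attachment: points at the affected object
  - group: ""
    kind: Service
    name: checkout
  sessionPersistence:
    sessionName: checkout-cookie
    type: Cookie
    absoluteTimeout: 1h
```

Because the policy targets the Service directly and nothing inherits it, a user can answer "what affects this Service?" by listing policies whose targetRefs name it, which is what makes the direct pattern so much easier to standardize than the inherited one.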
If you want more features, and who doesn't want more features in Gateway API, we currently have 34 GEPs open, with plenty of requested features not yet covered by one. Common features we hear about, like CORS, authentication, and rate limiting, all have GEPs open, but those are currently waiting for someone to come and pick them up. If you really want to see these things: we, we being the maintainers and the other leadership, are fully committed with what we've got on our plates at the moment. This is open source: PRs welcome, contributions gratefully accepted. Please follow the process, though, and start a discussion. Don't go and do a whole big PR, because there's a good chance you've missed something if you haven't been involved in the conversations already. Start a discussion, talk about what you would like to see, gather use cases, and then kick off the process. That QR code is the GEP overview page, which has the overview of the process with all of the different states. I've specifically not addressed all the different states because we don't have that much time and I am running out of it, but please use the QR codes and have a look at those links. We would really love to see more movement on more GEPs, but we need more people. With that, over to you two.

Thank you, Nick. So, he set a high bar for the rest of us. You may have noticed my personal profile is a little bit of a misfit for this talk compared to the other three profiles here, because I work on the Network Policy API project; that's where I spend a lot of my time, not on Gateway API. But I assure you I'm here to talk about conformance profiles, and this profile is very much a fit, because this is one of those areas where both our communities have collaborated and worked together to solve a common pain point.
So let's look into why we wanted to do conformance profiles in the first place. We've had plenty of pain points in the past as API developers and API maintainers: building a community and staying in touch with the implementers and the end users. We wanted a collaborative way to do this. Both of these projects have core APIs, the network policy API and the Ingress API, which are stable and have been around for a really long time. There's an extensive list of implementations out there, and as the API maintainers, we probably don't even know the full list, because we never really had a proper tracking mechanism to begin with.

Without a tracking mechanism, there was also a lack of a consistent feedback loop between the implementers of those APIs and the API designers and maintainers. Feedback is really important, especially in the alpha and beta stages, because that's when you can work iteratively and, with constructive feedback, make changes to the API before it stabilizes. At that stage it's pretty much done, and it's pretty hard to make changes without breaking someone. The lack of feedback also means we were unable to give conformance badges and acknowledge that implementations have done the hard work and become conformant to our APIs.

So those were the three pain points: we didn't have a way to track implementations, we didn't have enough feedback coming from them, and we didn't have a way to acknowledge their conformance. Those are what we were trying to solve. The Ingress API, which has been around for a long time and which all of you probably know about already, had a conformance workflow, but it became defunct because we couldn't get enough implementations collaborating to make use of it.
And it's somewhat similar with the network policy API, because we have end-to-end tests running in the core Kubernetes k/k repo, but they run against one specific implementation, which can lead to implementation bias. So in general, we had a real gap: we didn't have a generic enough conformance-test-and-report framework that could solve all these key problems for us.

That's where conformance profiles come in. It's pretty cool, and I'm giving all the credit to the Gateway API folks right here, because the network policy API project basically just adapted and adopted this for our needs. Conformance tests: every API has a set of features and a set of fields, and we want an extensive set of tests covering them. This is pretty basic. So we have a generic test suite that you can just pull in and run. If you're an implementation and you're conformant to a feature, it means you pass all the tests for that feature in this suite.

Profiles are, in one word, just an abstraction layer on top. A profile is a way to group a set of conformance tests together. As an implementer or an end user, you can opt in to these profiles and just run them to see what the implementations are actually supporting: which features are supported under which profiles, and so on. Examples of profiles are HTTP and Mesh on the Gateway side, and AdminNetworkPolicy on the network policy API side. Mattia will talk more in depth about what profiles are.

Yeah. So the API basically defines multiple sets of features: there are Core, Extended, and implementation-specific features, and both the Core and Extended features are tested by an exhaustive conformance test suite. Basically, each feature is a collection of conformance tests, and each profile can be seen as a collection of features.
Some features can be Core, some others can be Extended, and everything is about implementation support: the level of support an implementation provides. An implementation can be Core-conformant with a specific profile of the Gateway API, and this means all the tests related to the Core features must be successfully run by the implementation itself. Extended conformance, on the other hand, is claimed per feature. Let's say implementation X wants to claim conformance with a specific Extended feature; that means all the conformance tests for that specific feature are successfully run by that implementation.

The conformance test suite produces the conformance report; the conformance report is basically the output of this process. Here is the first part of such a report. This is just a regular Kubernetes API object, so there is some metadata. Then, in the snippet, you can see some implementation details: the contact, organization, and project, basic information about the implementation. As I said, the implementation is expected to run the conformance test suite, the report is created by the suite, and the report has to be uploaded to the Gateway API GitHub repository by the implementation itself. The Gateway API then gives a conformance badge to the implementation.

On the right is a screenshot of the folder structure we have for the reports in the Gateway API repository. For example, there is version 1.0.0 of the Gateway API, and as you can see, a bunch of implementations provided conformance reports for that version. So the conformance reports are broken down by Gateway API version and implementation version. Here is the second part of the conformance report, and it's the important one: this is about profiles.
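Since the slide itself isn't reproduced here, a trimmed sketch of what such a ConformanceReport looks like may help. The implementation details and numbers below are invented, and the exact field names may differ slightly between versions, so treat this as an approximation of the shape rather than the authoritative schema:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha1
kind: ConformanceReport
implementation:
  organization: example-org
  project: example-gateway
  url: https://github.com/example-org/example-gateway
  version: v2.3.0
  contact:
  - "@example-maintainer"
gatewayAPIVersion: v1.0.0
gatewayAPIChannel: standard
mode: default
profiles:
- name: HTTP
  core:
    result: success        # all Core HTTP conformance tests passed
    statistics:
      Passed: 29
      Skipped: 0
      Failed: 0
```

The top half is the implementation metadata discussed above; the `profiles` list is the second part of the report, described next.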
The profiles field is just a list of potentially many different profiles. For example, here we have only one profile, named HTTP, which is about HTTP features. The main part of this profile is the core section, which, as I said before, relates to Core features. In this case, the implementation is successfully running all 29 conformance tests related to the HTTP Core features.

On the right is another tool that is very important for a user who wants to pick a specific Gateway API implementation, because everything is about the level of support: I, as a user, want to select the Gateway API implementation that is best for me. Here is an example related to the Kong Ingress Controller. The Kong Ingress Controller, for Gateway API version 1, has submitted two different reports for two different versions. There is a table of contents in the README, nothing very interesting, just some pointers to reach the reports easily, and then there are the two "reproduce" sections. The reproduce section is very important because, as I said, the implementation has to upload the conformance report, and to build trust between the implementation and the Gateway API community, the implementation itself has to fill in this section with the details of how to reproduce that same conformance report. So you, as a user, can just go to this page and run the same steps needed to produce the same conformance report the implementation uploaded.

So, like Mattia mentioned, we adapted the same ConformanceReport CRD for the network policy API project, and this is how it looks on our side; it's basically the same thing. We have our own profiles, and whenever we have new implementations, we encourage them to run our conformance tests and then upload their report into our repository, the network policy API one. But the key takeaway, and I think what I really like about this feature, is the fact that there's a lot of collaboration here, right?
It encourages implementers to open PRs and contribute to the community, so it's not just a bunch of us doing APIs on our own. That, I think, is really valuable for us here.

So what's in it for an implementer? How do you leverage this framework we have here for you? There are two methods implementers can use. One of them is the Golang library, which you can just import; you get the Go structs you see here on the screen, where you fill in your implementation details, the version of your implementation, and which profiles you want to opt in to. That's the most important part, because you might be supporting some of them and not the others. Fill in all the information, just run the suite, and it generates the report for you, which you can then upload. And Mattia will talk about the second half of this.

Yeah. So let's say you are an implementation and you don't want to use the Golang library, because for whatever reason, maybe your project isn't written in Go. Gateway API provides a CLI as well. The CLI is just a wrapper around the conformance test machinery, and everything can easily be configured by setting the proper command-line flags. So there is also that possibility.

And this is the second part of the conformance report that I started talking about before: the extended section. Here is the differentiator between implementations, because an implementation can pick and implement specific Extended features, and this is the place where you can discover which features are supported by which implementation. So this is a very useful tool for a user to understand which implementation best fits their needs.

Likewise, right? I'm not going to repeat what Mattia said, but what I want to highlight here is how similar the reports look, including for the extended tests.
So you can know which CNI you want to pick for your policy API implementation, because you know exactly what that CNI supports and what it doesn't, and as an end user you benefit a lot from that.

And what's in it for the community? For the API maintainers, this is really important, because they know exactly what their implementations are supporting: what features are being used and what features are not. That means we can maybe decide to drop some of them before they graduate to the standard channel, because we just don't have enough implementations. So this framework of profiles and testing has become an important way to give that signal back to the API maintainers.

The other aspect: you saw how similar both frameworks are for these two projects, which means we just duplicated the code. We could in fact have one common library and put it all out there. So, thinking bigger and more generically, it could be beneficial to anybody building CRDs and APIs out there. We encourage you to reach out to us if you're writing APIs and you're stuck on "oh no, I need to do the testing, how do I do it?" You can work with us and we can build a new library project that's generic enough for everybody to adopt according to their needs.

So yeah, what's next? First point, most likely what Surya just talked about: make this a common library that any project in the Kubernetes landscape can use. Second is improvements at the UI/UX level. We would like a mechanism to automatically parse all the conformance reports uploaded by implementations and fill in a support matrix, so a user can get a quick grasp of the implementations and the supported features and choose the best one for them. There is an ongoing effort on this, and we'll see. The third one is automation.
Everything is manual today, and many steps could be automated somehow; this is also something we would like to improve in the conformance test suite. And the last one is improvements to the conformance pages. At the beginning, I said that a conformance badge is given by Gateway API to an implementation when it provides a new conformance report, and we could do something better in that regard. So this is something we would like to work on.

Right. So I'm going to talk about the user experience efforts since the GA release last October. There are two main focus areas. One is the migration experience: everything regarding the user experience of migrating from Ingress and other APIs to Gateway. The other one is discoverability.

On the migration side, we have ingress2gateway. ingress2gateway is essentially your buddy to kick off the migration process. It's a simple CLI tool that reads Ingress and implementation-specific configuration from a cluster or a file and outputs the corresponding Gateway API resources. And just a note about the name: it's called ingress2gateway, but that doesn't mean the Ingress API only. It's "ingress" in the general sense of ingress configuration, so it also supports CRDs and custom resources. That leads me to the next point: it's extensible. Every implementation can add its support to the tool, and it's relatively easy, so if anyone in the crowd is an implementer, I encourage you to add support; we're getting constant feature requests from users saying it would be useful for them. And here are a few implementations that already support the tool: Istio, ingress-nginx, Kong, and Apache APISIX.

Now, our second focus area is discoverability, and within it, I want to start by talking about supported features again.
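To make the ingress2gateway conversion concrete before moving on, here is a rough before-and-after sketch. The input is a plain Ingress; the output shown is an approximation of the HTTPRoute the tool would emit (it also generates a Gateway, not shown), and exact output depends on the provider and tool version. All names are invented:

```yaml
# Input: a plain Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# Approximate output: the corresponding HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: web                 # the generated Gateway
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```

The mapping is mostly mechanical for plain Ingress; the interesting work is in the provider-specific converters that translate annotations and CRDs.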
We're essentially adding a new supportedFeatures field to the GatewayClass status to improve the discoverability and accessibility of the features an implementation supports within the cluster. As you can see here, we also have semantics for those features. For example, if an implementation publishes that it supports HTTPRoute, that means "I support all Core features of HTTPRoute." They can optionally also publish the Extended and implementation-specific features they support, by providing the full feature name. The details are in GEP-2162 if you're interested. This field has been Experimental for a while, and I think two or three implementations already support it. We're still in a soak time right now, about six months, so we're hoping to graduate it soon.

Essentially, this elaborates on the conformance reports by providing direct, machine-readable access to those features from within the cluster. It also serves as the foundation for automation, such as warning when unsupported features are used, and it clearly defines which conformance tests are applicable to run. The conformance reports we were just talking about need to know which tests to run, so the plan is that each implementation that claims to support a set of features will read those features from the GatewayClass, find the corresponding conformance tests it needs to run, and later use the results in the conformance reports page.

Now, the second focus point in discoverability is related resources. We're also looking to improve the discoverability of the resources related to the implementation in the cluster. This includes any CRD the implementation cares about, including the policies Nick talked about earlier. To better present the motivation, I want to frame it as two questions. As a user, how do I find which policies and extension filters I should care about?
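To make the supportedFeatures field described above concrete, here is a hedged sketch of a GatewayClass publishing its features. The names are invented, and since the field was Experimental at the time, the exact shape may differ from what eventually ships:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: example-class
spec:
  controllerName: example.com/gateway-controller
status:
  supportedFeatures:
  - HTTPRoute                      # all Core HTTPRoute features
  - HTTPRouteQueryParamMatching    # an Extended feature, by full name
```

A user or a tool can read this status directly from the cluster instead of going to the conformance reports repository.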
And additionally, how do I discover other implementation-related resources, things like the parametersRef in the GatewayClass? And as Gateway API authors, how and where should we put all this information? The plan is to add another field to the GatewayClass status, something like resourceRepository or relatedResources; again, we're debating the name. This would essentially include every resource the implementation cares about, and that I, as a user, might care about when I'm debugging or trying to solve Gateway API-related issues. By doing this, we essentially make GatewayClass a one-stop shop for discovering implementation-related information.

Lastly, I want to say a few words about gwctl. gwctl, as Nick also pointed out earlier, is, as the name suggests, a tailored command-line tool for interacting with Gateway API resources. The main motivation behind it is to empower users to comprehend the resource relationships in the cluster. The API is very modular, right? We have Gateways, we have HTTPRoutes, and the HTTPRoutes attach to Gateways. We have policies; policies can target Gateways or target routes. We have extensions. So it's pretty hard to comprehend what happens in the cluster, to comprehend those relationships, and, more importantly, to comprehend how those relationships affect the traffic flow, the request flow, in the cluster. We want to make gateway deployments easier to configure, easier to troubleshoot, and easier to optimize.

Now, why not kubectl? Because kubectl's support wasn't enough for us; the main limitations are complex JSONPath lookups and cross-resource lookups. Lastly, if you want to get involved, there is GEP-2722, which Gaurav from Google is leading. It outlines the plan and a lot of other opportunities to get involved, so you can check it out now. And over to Nick to conclude. That's all from me.
Thanks, mate. Okay. I just wanted to take a couple of minutes to let you all see who the leadership of Gateway API is. Obviously, an important part of Gateway API is everyone here who uses or is interested in the API, but in a more concrete way, these folks are responsible for making Gateway API actually happen. There are the three maintainers: me, Shane, and Rob. We have two GEP reviewers, Candace and Grant. We have leads for the GAMMA initiative: John, Keith, and Flynn. We've got conformance approvers, Mattia being one of them, plus Arko and Sanjay, and conformance reviewers, Leo and Michael. And Gaurav is currently in charge of gwctl. I just wanted to do a really big shout-out and thank you to everyone involved here. Thank you very much. Being leadership in open source can be hard work, but getting a bit of applause can really help out. So thank you.

If you would like to get involved, there are tons of opportunities. Like I said, heaps of GEPs need work. ingress2gateway needs help, gwctl needs help, we've got a million docs updates that need doing, and there's lots of other stuff. So please come and check out the website, gateway-api.sigs.k8s.io. It has a community page that lists the meetings and all the other channels where you can get involved, but the big ones are the SIG Network Gateway API channel on Kubernetes Slack, and of course the repo itself. With that, thanks. If anyone has questions, we have a few minutes, so please feel free; there's a microphone just here to run up and grab. Otherwise, feel free to catch any of us around. Thank you very much, everybody.

Oh, there's a question. Yeah, sorry, man.

Hi, thanks very much for the update. Unlike the last couple of updates, I noticed there was a conspicuous absence of anything about GAMMA. Is that because it's not a priority at the moment, or am I just reading too much into it?
So there was actually another talk at KubeCon by the GAMMA leads about what's going on with GAMMA, so we didn't want to duplicate the content. But just to be 100% clear for the recording: GAMMA is absolutely still happening. It is still moving forward, the GAMMA leads are pushing things forward there, and it is definitely still a thing.

Great, sorry. Thanks. Yeah, no worries.

Hello, thank you for your talk. I'm just wondering when the CLI will be available, or is it testable right now?

Which one, though? gwctl?

Yes, gwctl.

So gwctl is available to download from the repo and compile yourself right now. We don't have a formal release process yet; that's one of the to-do list items. But you can download and use it today.

Thank you. No problem. Okay, look, that's the end of our questions. Thanks everyone. No, one more? Sorry.

It's not a question, just a bit of feedback. You mentioned session persistence, and it put me completely on the wrong foot, because from an application perspective, it's more about persisting your session, like an HTTP session.

So yeah, that's what this is about: it's about routing your traffic to the correct backend.

Yeah, like session affinity, you mean.

There is a whole other talk to be had about session persistence and session affinity; they are related but slightly distinct concepts. This is one of the things we had 400 comments on the PR about.

But just to be sure: if you say persistence, you don't mean to literally save it to, let's say, disk?

No, no, no. We mean routing to the same backend, and the cookie.

Yeah, stickiness is a better word.

Right. My feedback would be that I think lots of people would be put on the wrong foot like I was.

Okay, thank you, great feedback. We'll remember, but yeah, you know naming is, what was it, the second hardest thing to do?
Yeah, like everyone always says, there are two hard problems in computer science: naming things, cache invalidation, and off-by-one errors. Okay. Thank you very much, everybody. It's been great.