Great, everybody. Welcome to another OpenShift Commons briefing. Today, as we like to do on Mondays, we preview upstream projects, and I'm really thrilled to have Daneyon Hansen back. He's now a principal software engineer at OpenShift, and he's going to talk about a feature that's coming in OpenShift 4.8, the Gateway API, with a little background, I think, on Contour as well, a project that it springs from or is related to. So I'm going to let Daneyon introduce himself, walk us through this, maybe do a demo or two, and then we'll have a live conversation in Q&A at the end. So ask your questions in the chat, and we'll get rockin' and rollin'. Take it away, Daneyon.

Thanks, Diane. As Diane mentioned, my name's Daneyon Hansen. I'm a principal software engineer with OpenShift, and I'm going to spend some time today going through a dev preview feature that's coming in OpenShift 4.8, called Gateway API. A little background on Gateway API: if we look at what we have today, we have the Ingress resource for Kubernetes. It's been around for a while, and it actually recently went GA, but there have been challenges with it. If you're familiar with OpenShift, the route resource is primarily used.
We support Ingress and routes, but what we actually do is translate an Ingress to a route resource. The route resource was actually created even before the Ingress resource, because we at OpenShift needed a way to express how to route traffic into the cluster. The Ingress resource came along later as a simple way to provide ingress, and as it evolved, what we started to see was that it just wasn't expressive enough to meet complex routing use cases. Implementers started exposing this additional configuration through annotations, which has become pretty difficult to manage. So that's where we're at with the Ingress resource. Then we also look at the Service resource, which has become a dumping ground for all sorts of Kubernetes service modeling, so it's becoming quite bloated.

And if we look here, this is actually a picture from when we first got together. It looks pretty strange these days. Well, maybe not so much anymore, depending where you're from, but it looks kind of strange with the pandemic. This was November of 2019, when we got together at KubeCon North America in San Diego to really start talking through this and then formalize a group. We created a working group to come up with a solution, and what we called it at the time was Service APIs. That name stuck around until about three months ago, when we renamed the project Gateway API. After the group was formed, it took us about a year to get to the point where we felt comfortable cutting a release, and we cut the v1alpha1 release back in November of 2020. Through this process, we at Red Hat made a decision that we were going to implement Gateway API in Contour, as opposed to OpenShift Router.
There were things happening in the industry, the uptake of Envoy, as well as Contour now being a CNCF project. OpenShift Router has been good to us, but we wanted to move forward with an implementation that had a diverse, established upstream community in the CNCF. Those were big drivers for us, and they ultimately led us to using Contour to implement Gateway API.

So let's talk a little bit about the API itself. One of the first areas to point out is that Gateway API is a collection of resources, and these resources are modeled off of how clusters are managed and operated. You have these different groups: a group that provides the infrastructure, a group that operates the infrastructure, and then the users, which in our case are developers who want to expose their applications. On the left-hand side of the diagram, you see those different personas and how they align to the resources that make up Gateway API. We have a GatewayClass, which, if you're familiar with storage classes or ingress classes, is just a way to define a set of configuration or capabilities; in a Gateway API sense, those capabilities are around expressing a Gateway. A simple example that may help establish a mental model: say we have two different GatewayClasses, one we call external and one we call internal. The external GatewayClass creates an external cloud load balancer, while the internal GatewayClass creates an internal cloud load balancer. Those are two very simple use cases, but hopefully that helps you understand what we're trying to accomplish by classifying Gateways using the GatewayClass resource. Then we have the Gateway resource, which instantiates the infrastructure. The GatewayClass, which isn't represented in this diagram, will typically reference some kind of custom resource.
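The external/internal split described here could be sketched as two GatewayClasses using the v1alpha1 API from this timeframe. The class names and controller string below are illustrative, not taken from the talk:

```yaml
# Two hypothetical GatewayClasses (Gateway API v1alpha1).
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: external
spec:
  controller: example.com/gateway-controller
  # parametersRef would point at an implementation-specific resource that
  # says "provision an external (Internet-facing) cloud load balancer".
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: internal
spec:
  controller: example.com/gateway-controller
  # ...and this one would reference parameters requesting an internal LB.
```

A Gateway then picks one of these classes by name, and the controller behind that class decides what infrastructure to provision.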
When we get to the demo, I'll show you this in more detail, but typically the GatewayClass is going to reference some kind of custom resource that expresses all of the detailed configuration. That's what the custom resource is used for, and it allows Gateway API to be portable: we're not putting implementation-specific configuration parameters in the GatewayClass itself. Think of that custom resource, as well as the GatewayClass, as configuration snippets that live in the cluster. Nothing really happens, no infrastructure provisioning goes on, until a Gateway is instantiated. Typically a cluster operator is going to create the Gateway, and that causes the controller, the implementation, to take action on it. It sees that a cluster operator wants to create a Gateway, validates it, and once it's valid, starts acting on it: looking at the GatewayClass and that custom resource and creating the infrastructure that's being requested. Further down the chain you have HTTPRoute. There are different route types specified by Gateway API, and they're protocol-specific: there's TLSRoute, there's HTTPRoute, and there are even layer-4 abstractions, TCPRoute and UDPRoute. The route types are where our developers interface. So we have these developers who have each created an HTTPRoute to expose one of their application services that resides in the cluster.

Beyond just the Gateway API model and how it's designed around these roles, it's also designed to be extensible. I talked a little bit in the previous slide about this custom resource that gets referenced by the GatewayClass.
Well, that's not the only reference provided as a way to expose implementation-specific configuration. Throughout the API, we as a maintainer team, along with others, really tried to think through different use cases, from the simple to the very complex, and figure out where in these resources is the best place to expose additional customization while keeping the core of the protocol 100% portable. So there are these different layers of functionality, from core to extended to custom, and the key is that the core is 100% portable. We can create a Gateway and GatewayClass using 100% core API features, move between providers and implementations, and everything's all good. More than likely, though, you're going to get to a point where you want to dip into some of the extended or custom features of a particular implementation. You just need to be mindful of which extended and custom features you're using if you do decide to move these resources around between implementations. The other key point on the slide is the gravitational pull toward core. Because all these different implementations, from proxies to load balancers, have so many different capabilities, we can't put all of that in the core, which is one of the reasons we went with this three-tiered design. But as this market matures and more of these pieces of functionality become common across the industry, our hope is that we bring those features from custom and extended into core and really drive the value of the core features of the API. I talked at the beginning of the presentation about Ingress and the challenge that Ingress is very simple, that the way additional functionality gets expressed is through annotations, and that this becomes very challenging to support over a long period of time.
So that is one of the areas we tackle with Gateway API: finding the balance between making the core features portable while also making Gateway API extensible, so that we can be expressive without having to use annotations. Here is a simple example of traffic splitting based on weights. Whether it's traffic splitting, mirroring, or routing to different types of resources (not necessarily a Service resource; it could be some kind of custom resource, an S3 bucket, any kind of resource) we can support it. We're not locking the design into a specific type of resource for the backend.

I talked about portability; here are the current implementations. These implementations are either in the works or at an alpha feature level, but I'm pretty impressed with the diversity of the community. Given that v1alpha1 was cut just in November, we already have some implementations that are really progressing.

So where are we at today? We at Red Hat established maintainership in the Gateway API and Contour communities. That was really important for us, to make sure we're invested in these communities, not only for ourselves but for OpenShift and for our customers. We developed an upstream operator for managing Contour. That's really important: as Sherburne knows here, functionality within OpenShift typically needs to be managed by an operator. So we worked with the Contour community to establish an operator, and we're very happy with the progress of that project. The operator is released in sync with Contour, and I think this is now the third or fourth release where we've done that. We've got a roadmap, and things are working really well with that operator. Not only having the operator, but having it upstream, living with Contour, is very important to us.
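As a sketch of the weighted traffic splitting mentioned above, an HTTPRoute in the v1alpha1 API can list multiple backends with weights. The service names and ports here are made up for illustration:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: app-split
spec:
  rules:
  - forwardTo:
    # roughly 90% of matching requests go to v1, 10% to v2
    - serviceName: app-v1
      port: 8080
      weight: 90
    - serviceName: app-v2
      port: 8080
      weight: 10
```

This is the kind of configuration that, with the Ingress resource, would typically require implementation-specific annotations.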
We added Gateway API support to Contour and Contour Operator in v1.13, which was just about six weeks ago, and we improved that support in Contour v1.14, released about a week and a half ago. We're working hard within that community to keep improving the support. We still have a ways to go, but we're very happy with where we're at, and we're working hard to keep moving Gateway API support in a positive direction. And for OpenShift, the heart of what we're talking about today, we're providing a dev preview of Gateway API, Contour, and Contour Operator in 4.8. We're really excited about this dev preview. For anyone interested, again, keep in mind it is a dev preview, but we're really hoping to have users kick the tires on the solution, give us feedback, and work with us to make this the very best feature it can be going forward. To do this, we really want that partnership with our customers, and we're looking forward to hearing from users.

Let me take a few minutes and run through a demo. While I do that, let me throw this link in the chat window for others to save. Diane, I put a link to this guide in Slack; if you don't mind posting it to the chat window here, I'd appreciate that. What I'm sharing with you is documentation I've put together on running Gateway API on OpenShift. You can see the version I've tested on, which is a 4.8 nightly build, using upstream 1.14.0 of Contour and the Contour Operator. And just to stress: there's no OpenShift-specific integration here. We're not forking anything from upstream. The dev preview basically says, here's how you take this upstream project and run it on OpenShift.
That is what we're using as a baseline, which is very good in the sense that we're starting this feature from upstream, not a fork. We start with the upstream operator, upstream Contour, upstream Gateway API, and then evolve the support from there. But that will always be our baseline, to make sure we're in lockstep with upstream, and it's why we felt it was critical that all the work we've done up to this point is really about getting upstream right, so it can be right in OpenShift. Take a look at this documentation; it will be used for the official product documentation, along with some other documentation we'll develop. It's essentially a quick start: how do I get Gateway API up and running in my OpenShift cluster?

So let's walk through it. The first thing we're going to do is run the Contour Operator. I'll jump over to my terminal. I have an OpenShift 4.8 cluster running, I've configured my oc client to talk to the cluster, and you can see all my cluster operators are reporting the expected status conditions, so everything's looking good. Let's provision the Contour Operator. You see that we create a namespace for the operator to run in, and we install a bunch of CRDs. Some of these CRDs, like GatewayClass, are from the upstream Gateway API project. Other CRDs are from Contour Operator; for example, the operator watches Contour custom resources and then performs actions based on them. And some of the CRDs are for Contour itself: HTTPProxies, TLS certificate delegations, and such. We set up all the RBAC needed for the operator and Contour, we create a service for the operator's metrics endpoint, and then we use a Deployment resource to manage the operator.
So let's see what the status of the Contour Operator deployment is. All right, it's available. I'm going to tail the logs, too. You can see that the operator is running, that it tells us what image of Contour it will use along with what image of Envoy proxy, that it starts the metrics server and creates a metrics endpoint, and that it starts the controllers for the different resources it's going to manage: the Gateway controller, the Contour controller, the GatewayClass controller. You'll notice there are no HTTPRoute, UDPRoute, or TLSRoute controllers. That's because Contour itself, the Contour controller, will manage those resources.

I meant to cover this earlier and didn't have it in the presentation, so let me go back for a second and talk about Contour. Contour is a control plane for Envoy. Whether for Gateway API or just using Contour on its own, Contour is a control plane for managing Envoy proxies, and the Envoy proxies are the data plane. So when you create a Gateway, an Ingress, or an HTTPProxy (HTTPProxy being the custom resource the Contour community created to get around the Ingress resource limitations I talked about at the beginning of the presentation), Contour watches any of those resources and then instantiates and manages your Envoy proxy fleet. Essentially it takes those resource configurations, translates them into an Envoy configuration, and Envoy handles the proxying.

Let's go back here. All right, so with that said, we now have the operator up and running; I'll keep the logs there. Let's go down through here. I mentioned that for the dev preview we don't really have any OpenShift-specific integration at this point.
Take a look at issue 112 on the operator repo, where we'd like to create an abstraction that allows Contour and Contour Operator to perform management for certain platforms. I won't dive too deep into the details, but look at issue 112 if you'd like to know more. What we need to do here is associate the contour and contour-certgen service accounts with the nonroot SCC. Let me do that for the contour service account, and let's also do it for certgen. The key point is the schema of this command. There we go: system:serviceaccount, then projectcontour, then contour or contour-certgen. In this schema, the first portion is the namespace and the second is the name of the service account; you see contour and contour-certgen, both in the projectcontour namespace. The key here is that this has to be the namespace of our gateway. So keep that in mind: if you create your own gateway and put it in namespace foo, make sure that when you add the contour and contour-certgen service accounts to the nonroot SCC, you specify the gateway's namespace here.

So that's good to go. Let's provision our gateway. Now, I say "a gateway," but this is actually going to be multiple resources; let's take a look at what they are. Give it a second. This is unchanged because we already have the contour-operator namespace, but we create the projectcontour namespace. Remember, right up here, our service accounts match this namespace, so we create the namespace to hold these resources. The Contour will be created in the operator's namespace, and the GatewayClass resource is cluster-scoped, so it doesn't matter what namespace it's in.
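The SCC step above can be summarized as follows. The oc commands mirror the walkthrough; the snippet itself just constructs and prints the user strings to make the schema explicit:

```shell
# From the walkthrough: grant the nonroot SCC to the two service accounts.
#   oc adm policy add-scc-to-user nonroot system:serviceaccount:projectcontour:contour
#   oc adm policy add-scc-to-user nonroot system:serviceaccount:projectcontour:contour-certgen
#
# The user string schema is system:serviceaccount:<namespace>:<name>,
# where <namespace> must be the namespace your Gateway lives in.
ns="projectcontour"   # swap in your gateway's namespace, e.g. "foo"
contour_user="system:serviceaccount:${ns}:contour"
certgen_user="system:serviceaccount:${ns}:contour-certgen"
echo "${contour_user}"
echo "${certgen_user}"
```

If your gateway lives in a different namespace, only the `ns` value changes; the service account names stay the same.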
The Gateway itself is created in the projectcontour namespace, and we'll dive into each of these resources in a little more detail. Let's see here. The first thing we're going to look at is this custom resource called a Contour, and our Contour is named contour-gateway-sample. Again, this has been created in the contour-operator namespace, the same namespace as the operator, which is required. We see that it's ready and that it's been admitted by the gateway class. Let's take a closer look at the details. You see that it references the sample gateway class. There's a bidirectional binding that occurs between this Contour custom resource and the gateway class it's bound to, because the gateway class, as we'll see in a second, actually references this Contour resource back. This field here is actually ignored when a gateway class reference is specified, so we can skip that. But you see here a lot of the details that are not meant to be expressed through Gateway API, because different implementations will have different configuration settings and so forth. The network publishing field in the Contour custom resource lets us specify the container ports and port numbers that Envoy will use, and the type of load balancer: we're going to create an external AWS load balancer, and then a certain number of replicas for the Contour control plane. It also gives us some status, letting us know how many Contour and Envoy pods are available, along with some status conditions. All of this looks really good. That gives you a little more background on what configuration we're expressing through the Contour custom resource. Now let's take a look at the gateway class and dive into its details.
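The Contour custom resource we just walked through looks roughly like the following. I'm reconstructing this sketch from the discussion, so treat the exact field names as approximate and check the contour-operator API reference for the real v1alpha1 schema:

```yaml
apiVersion: operator.projectcontour.io/v1alpha1  # operator's CRD group (approximate)
kind: Contour
metadata:
  name: contour-gateway-sample
  namespace: contour-operator     # must live in the operator's namespace
spec:
  replicas: 2                     # size of the Contour control plane
  networkPublishing:
    envoy:
      type: LoadBalancerService   # have the operator provision a cloud LB for Envoy
      loadBalancer:
        scope: External           # external AWS load balancer, vs. Internal
```

The status stanza the operator writes back (available Contour and Envoy pods, conditions) is what the demo checks to confirm the resource is ready.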
So remember, the Contour references this gateway class, and now we see the gateway class reference that Contour back. That's done in the gateway class via the parameters ref field, where we say: for this gateway class, use these parameters. Going back to the earlier example, this could be our external gateway class instead of, what did I call it, the sample gateway class. Based on the configuration we saw in the Contour, this could very well be our external gateway class, because any gateways of this class will create an external AWS load balancer. So that's the workflow and the linkage between these different resources. The other key piece is the controller field. Contour Operator watches gateway classes, and one of the first things it does is check: does this gateway class specify the controller string that's required for me to manage it? We use that string to tell Contour Operator to manage gateway classes. This allows clusters to run multiple implementations, similar to ingress controllers: you could have a cluster with different ingress controllers, and it's the same with gateway classes and gateways. We may have multiple gateways, but we may want those to have different implementations, and the controller field is what's used for that. We see that this gateway class is admitted and that it's owned by Contour Operator. Looking good so far.

Let's take a look at the gateway. A couple of things here, starting from the top down. We see this linkage again: we use the gateway class name field to tell this gateway which gateway class it's part of. And then the gateway has multiple listeners; these are the network endpoints the gateway will be listening on.
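Putting the two linkage fields together, the gateway class just examined has roughly this shape in the v1alpha1 API (the controller string and names are illustrative, not verbatim from the demo):

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: sample-gatewayclass
spec:
  # Only a controller that recognizes this string will manage the class.
  controller: projectcontour.io/contour-operator   # illustrative value
  # parametersRef closes the bidirectional binding back to the Contour CR,
  # which in turn references this class by name.
  parametersRef:
    group: operator.projectcontour.io
    kind: Contour
    name: contour-gateway-sample
```

Renaming this class "external" and pointing it at a Contour configured for an external load balancer gives exactly the external/internal pattern from earlier in the talk.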
Each listener specifies the protocol and the port it will listen on. Then we get into this routes field, and this is something to really understand. Along the same lines of the linkage we're seeing throughout the API, the routes field is what allows us to link routes. One of the next areas of the API we'll dive into is the actual routes, the routing logic: now that traffic is hitting a gateway, how do we route that traffic to the backend resources, like the Service resource we want? The routes field expresses what type of routes the listener should bind to, and from which namespaces. Do we only want to bind routes that are in the same namespace as the gateway? Do we want to allow routes across all namespaces? Or we can use selectors to be very specific about which routes we're binding to. So we've got a lot of flexibility in creating that binding with routes, and the same logic applies to our HTTPS listener. Then we have our status conditions; you see the gateway is ready to serve routes, and it's good to go.

The next step is to actually create a route, so let's do that. Optionally, too, you could take a look at the infrastructure that was created by the gateway. When we instantiated this gateway, the operator took action and did a bunch of stuff for us: it created a Deployment to manage our control plane, a DaemonSet to establish our data plane, ConfigMaps, service accounts, all the pieces to make Contour and Envoy work in harmony. But let's create a sample workload. We're going to use kuard, and you can get a little background on the kuard app if you'd like.
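The listener and route-binding discussion above can be sketched as a Gateway like this, again in the v1alpha1 shape with illustrative names:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: sample-gatewayclass  # links this Gateway to its class
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      namespaces:
        from: Same       # only bind routes in the Gateway's own namespace
      # a label selector here could narrow the binding to specific routes
```

Creating this resource is the trigger: the operator sees it and starts provisioning the control plane, data plane, and load balancer described in the demo.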
We'll go ahead and provision kuard: create a Deployment, the Service, and then you see this is our HTTPRoute, the route the gateway is going to bind to. Let's take a look at the status of these. They're running; things are looking good. And you know what I wanted to show you as well: there was a reason I had the logs up for the operator. As I mentioned, when the operator sees the Contour custom resource and the GatewayClass resource, it reconciles them. It makes sure they're valid, that they reference one another, all that good stuff. And then the key resource is the Gateway. When the operator sees the Gateway resource, it starts doing a whole bunch of work for us: the RBAC for Contour, the ConfigMap, the DaemonSet to manage the Envoy proxies, all of that. I just wanted to show you that quickly. Back to our example workload: we see the pods are running, front-ended by a ClusterIP service, and we've created an HTTPRoute. Let's go back to our documentation and actually take a look at what this route looks like. Again, with the theme of bidirectional relationships: we've specified for our gateway that we're going to allow routes from the same namespace. So if our gateway were not in the projectcontour namespace, which is the namespace of this HTTPRoute, we would not bind to the gateway; both sides need to agree on the same configuration. Fortunately, we do. Let me see if the gateway is still back here. Right, here's the gateway configuration that says we're going to allow routes from the same namespace, and here, on the route side, we're saying allow gateways from the same namespace.
With the route, we say what hostname we're going to use for routing this traffic. This aligns with the Host header: any request that hits the gateway with a Host header of local.projectcontour.io will match this route. So the gateway selects this route not just because of its gateway binding policy, but because the request comes in with the appropriate hostname header. And then I've got some rules: I'm going to forward requests to endpoints associated with the kuard service on this particular port. And we have a match here. Before we forward, we match not only the hostname but also the path, so any request hitting this hostname on the root path, or really any subpath, gets forwarded to the endpoints of the kuard service. We also have status conditions, so we can see which gateways this route is bound to; this is the gateway we created and went through in detail. And this route is admitted, because it's valid: it's passed validation, it's associated with a gateway, and it's admitted. Admitted being true is the key status condition we're looking for.

Everything's looking good so far, so let's go back to the documentation and test connectivity through our gateway. Because I don't have a DNS name created for this, I have to supply the Host header here. I get the gateway address from the hostname, or, depending on your cloud provider, as I note in these directions, you may need to swap the hostname out for an IP. This cluster is running on AWS, which uses hostnames for load balancer ingress. Anyway, we'll test by curling with a Host header that matches the hostname of our HTTPRoute, hitting the gateway address, and we should get a 200 back. So we've established infrastructure using Gateway API.
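Putting the pieces of the route together, the HTTPRoute described above is roughly the following. The hostname matches the demo; the service name and port are as I recall them from the kuard walkthrough, so treat them as illustrative:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: kuard
  namespace: projectcontour     # must match the Gateway's namespace
spec:
  gateways:
    allow: SameNamespace        # the route's half of the binding agreement
  hostnames:
  - local.projectcontour.io     # matched against the request's Host header
  rules:
  - matches:
    - path:
        type: Prefix
        value: /                # root path and any subpath
    forwardTo:
    - serviceName: kuard
      port: 8080
```

With no DNS record for the hostname, the connectivity test supplies the Host header by hand, something like `curl -H "Host: local.projectcontour.io" http://<gateway-address>/`, expecting a 200 back.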
Again, with Gateway API the implementation here is Contour, and Contour manages Envoy, so Contour and Envoy, all upstream, running on OpenShift. We've verified connectivity from my client here on my laptop all the way through my OpenShift cluster on AWS to the kuard application. What we can do now is verify that those requests went through our Envoy proxy fleet, so let's look at the logs of the Envoy DaemonSet. You see that it found three pods and it's tailing one of them. Since we have three, let's see here: the request did not go through this particular pod in the DaemonSet, so let's check the logs of another one. All right, 9MMVR, that would be this one. Let's try a different Envoy proxy in our fleet. Nope, didn't go through that one. Let's try this one. There it is: that's the curl request I sent, and it went through this particular Envoy proxy. And last but not least, let's take a look at the deployment. We'll have to do the same thing: get into a particular pod of the kuard application, and we can see the request actually hitting it. We see no requests coming in here, so we'll do the same thing with our kuard endpoints, of which we have three. All right, the first one. Which one did we look at? Let's try this one. Nope. Let's try this one. There it is, you see the GET request. You may ask yourself, what IP is this? If you do a get with -o wide, that shows the IP of the Envoy proxy that serviced the request. We see that it came in on .223, and there's .223, right? And when we looked at those different Envoy proxy logs, it should be SRR75 where we saw the request come in, and that's SRR75 right there. That stitches together the whole request workflow through the infrastructure that was provisioned by Gateway API using Contour. So let's go ahead.
We've got some time here, so I'll stop and hand it back to Diane and anyone who has questions.

Well, I think that was really a great introduction to the Gateway API, so thanks very much: calm, cool, and collected, with all of your resource links there. I'm thrilled with that, and anyone should be able to follow along with the quick start. There were a couple of questions in the chat, and Mark Currie has joined us as well, who is the PM for some of this, along with a few other folks here that are working with Mika and others, and I'll unmute you. In the chat, and I think you answered a lot of them, one question early on was: what are the alternatives to Contour? I pointed people to Project Contour's FAQ, and you did cover that, so I think we're good there. And then Noel was asking: could you explain how all of this fits in with service mesh, and is Contour intended to replace Envoy? I think you covered that one, but Mark's here if he wants to go a little deeper on it.

Let me, before Mark jumps in, just talk about service mesh. In one of the slides I shared, you see the different implementations, one of those being Istio. Whether it's Istio, Knative, or OpenShift with our route resource, these are the issues Gateway API is trying to tackle. Most of these projects started off with Ingress. I remember early on, Istio started off using standard Kubernetes Ingress, and then pretty much all of these projects get to a point where it's like: you know what, Ingress is just not expressive enough to meet my needs; I need to go create a custom resource. Istio goes out and creates its own Gateway and VirtualService, and each project does this, even Contour with the HTTPProxy resource. So what Gateway API is trying to do is create a common abstraction.
A common API that Knative, Istio, or any implementation can use. And the positive thing about that is that we reached out to these different community and project leaders. We've worked very closely with Istio, which is why they're on that list of supported implementations, right? Same with Knative. And I talked about the route resource; it's why we're at the table. It's really meant to be, and I like how Mark has put it in the past, something that unifies ingress. So it's like, hey, I don't care whether I want to provide ingress into my cluster for a standard Kubernetes service, which would typically be done with Ingress, or for OpenShift, where it would be a Route, or for Istio, where I'd create a Gateway and a VirtualService, or for Knative. It doesn't matter. The cluster operator doesn't need to know 20 different APIs that all basically provide ingress. They can now start to say, hey, the path forward for most of these projects, if not all of them, is to converge on Gateway API, so we have a unified way of expressing ingress no matter what the backend is, whether it's, again, a standard Kubernetes service, some custom backend, serverless, or service mesh.

Diane, you're on mute. I muted myself; I never do that. Mark, did you want to add anything to that?

No, I think Dane covered it very well. We definitely want to unify, as Dane was saying, across some of the different layered products of OpenShift. We have Service Mesh, we have OpenShift Virtualization, we have the 3scale products, and we want to provide a mechanism that works equally well across all of those, to simplify the decision and configuration process as well as to unify efforts. As Dane was saying, this ultimately represents unifying efforts. For me as a product manager, that's probably the biggest outcome of this.
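As a concrete sketch of that unified abstraction, a minimal Gateway plus HTTPRoute in the v1alpha1 API discussed here might look something like the following. The names, namespace, GatewayClass, and labels are all illustrative, and the field layout has evolved in later Gateway API releases, so treat this as a shape of the idea rather than a current reference.

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: contour-gateway          # illustrative name
  namespace: projectcontour
spec:
  gatewayClassName: contour-class   # must match a GatewayClass the controller owns
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          app: kuard             # bind HTTPRoutes carrying this label
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  rules:
  - forwardTo:
    - serviceName: kuard         # an ordinary Kubernetes Service backend
      port: 80
```

The same Gateway/HTTPRoute pair works regardless of which implementation (Contour, Istio, Knative's networking layer, and so on) reconciles it; only the GatewayClass changes.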
And it seems like a lot of the questions coming in are about Contour instead of this, or Contour with this or that. It's really about getting alignment across all of the gateways and the APIs, which I think is an amazing effort, and it's something that's been worked through the SIG and the CNCF community as well. So this has been a pretty interesting collaboration, and hopefully it resolves itself as all of these projects mature. So that's great.

Yeah, users don't want to have to consider how to get their traffic into the cluster differently for one type of traffic versus another. It should be simpler than that, and this is one step in that direction.

And this is all coming on the heels of the OpenShift 4.8 release, correct me if I'm wrong? Right, in 4.8 we're targeting a developer preview. As Dane was saying earlier, this is something users can kick the tires on, so to speak, give us feedback, and get a heads-up on the direction we're going.

And if people want to get involved in this project, whether it's Contour or the Gateway API, where's the best place for them to connect with you? It's on the Slack channel. Of course, Contour is a CNCF project, but let me actually just pop a link to the community page here in the chat window; give me one second. This is a really good resource to reference. Yep. And interestingly enough, Gateway API has the same kind of resource as well. So those are the community links for both projects, and yeah, we'd appreciate more new happy faces coming in, even if it's just to ask questions. We're always looking for use cases, right? For Gateway API, we've got maintainers from Red Hat (myself, from Project Contour), from Kong, from all these different implementations, and of course from Google as well.
And so yeah, we'd appreciate it, again, even if it's just questions or use cases, and we're looking forward to getting more people involved.

And did you, or anyone from the Contour community, get a talk at KubeCon EU? Is there anything on the schedule there where we can look forward to deeper details?

Yeah, Diane, thanks for bringing that up; I forgot to mention it. At the upcoming KubeCon, we've got presentations and live meetings for both projects. Look for the SIG Network meeting, along with presentations on Gateway API. That's going to be really interesting, because Rob Scott from Google is going to do a few different demonstrations to really show the different implementations. He's going to highlight Contour alongside some of the other implementations, and also demonstrate an advanced use case, where we worked with SIG Multicluster pretty early on. So don't think about Gateway API just in the context of a single cluster doing traffic splitting across two different service backends; take that same idea and bring it up a level: what if I had multiple clusters and I wanted to load-share incoming requests across those clusters? He's going to be demonstrating some of the advanced features there with multi-cluster traffic splitting. And then for Project Contour, we recorded a briefing and we have a couple of meet-the-maintainer live sessions, and I'm definitely looking forward to a nice open discussion with people who are interested.

So besides the SIG Network meetings and the community meetings, and besides the Red Hat Summit April event and the 4.8 release cycle, KubeCon EU is probably the next juncture where any new features or use cases will get showcased.
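The single-cluster traffic-splitting idea that the multi-cluster demo builds on can be expressed with weighted backends on an HTTPRoute. In v1alpha1 terms it might look like this; the service names, label, and weights are made up for illustration:

```yaml
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: kuard-split
  labels:
    app: kuard               # matched by the Gateway listener's route selector
spec:
  rules:
  - forwardTo:
    - serviceName: kuard-v1  # hypothetical stable backend
      port: 80
      weight: 90             # roughly 90% of requests
    - serviceName: kuard-v2  # hypothetical canary backend
      port: 80
      weight: 10             # roughly 10% of requests
```

The multi-cluster work takes this same weighted-split shape and applies it to backends living in different clusters rather than to two Services in one cluster.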
And please, if you're listening to this before or after, reach out to Danian, to Mark, and to the other folks. We'd love to get your feedback on using the developer preview in OpenShift and to hear what you're doing and how you're using it. So definitely reach out to us, and we look forward to getting an update post-KubeCon on what new features and functionality get added as the project matures. Well done. I know a lot of work went on in the background, a lot of collaboration across communities and upstream projects, so this is a really nice way to showcase the amazing cross-community collaboration that happens behind the scenes on some of these CNCF projects. Thank you very much for today, and we'll hopefully do this again in another couple of months, see where we're at then, and get your feedback. Especially if someone is using this in production, rolls it out in 4.8, and wants to talk about their experience, I'd love to hear that as well. That would be awesome. So thanks, Danian, it's always a pleasure, and Mark, awesome work shepherding this all through. I'm not seeing any other questions in the chat, so I think you all get four minutes back in your day. Go grab a cup of coffee and enjoy the rest of your week. Thanks, Diane. Thanks for having us; we look forward to coming back. All right, take care. Thanks, Dan.