Welcome to the Contour project update. We're going to be talking about Project Contour, what's been happening recently, and particularly highlighting some of the really great community contributions that we've had lately. So let's get started. My name is Sanjay Bhatia. I'm one of the maintainers of Contour and an engineer at VMware. And my name's Nigel. I'm a developer advocate at Intuit, and I focus on community things around Contour. All right, so let's start off with a little bit of background for anyone who's new to this space. So what is Contour? Contour is a Kubernetes ingress controller that uses Envoy. If you're not super familiar with Contour, or with ingress controllers in general, here's a little diagram of how Contour is typically set up in a Kubernetes cluster. Contour reads configuration from Kubernetes resources. Envoy and Contour are set up to securely communicate over the xDS protocol. Web clients (it can be other types of traffic as well) connect to a single IP address, typically using a service of type LoadBalancer, to get access to your backend services. And then the configuration that Contour sends to Envoy routes traffic to your apps based on different paths and different host names. I'm sure people have seen, in various talks at EnvoyCon and here at KubeCon on Monday, that Envoy is a super powerful tool. It's become the de facto standard for proxying and service proxying, in Kubernetes especially. So here are some reasons why we use Envoy as a data plane: it's a well-maintained, well-tested, performant, observable proxy that Contour has been built around since its inception. And we're super happy to be involved with the Envoy community and to build a product around it. Another reason Contour is a great tool to use is that it's not really a new project; it's being used at production scale at VMware and other companies.
It's been a CNCF incubating project since 2020, and it's got a robust feature set around it and active maintainers to help drive the project forward. A little bit about the history: the first release of Contour was back in 2017. As we've gone through time, Gateway API conformance was a big moment for us, in July of last year. And just last month, we had our v1.27 release. We've been implementing a lot of Gateway API features and are more and more conformant with Gateway API as time passes. Contour has been GA since November of 2019 and is being used at scale in many production environments. The last three minor releases are supported, with nine months of support for each, plus backports for CVEs that come through and for high-severity bug fixes. Some of the features that Contour has you can see on the screen: path-, header-, and query-based routing, TLS termination, and a lot of other advanced features that are implemented in most API gateway implementations. Earlier I alluded to Contour reading resources from a Kubernetes cluster to allow you to configure your routing and ingress. Contour supports multiple configuration APIs: our own CRD, HTTPProxy, and we support Gateway API, as Nigel mentioned. Gateway API is the next generation of Kubernetes service networking APIs. We're fully conformant with all the core Gateway API features and many of the extended features. And of course, we also support Kubernetes Ingress. Ingress is a great way to get started, and if that's all you need, nothing more advanced than path routing and a little bit of TLS, then it's a great way to go. So, Contour is a community-driven project, and we want to talk about how you all can get involved and also just highlight some of the awesome things that members of our community have been doing.
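To make those configuration APIs a bit more concrete, here's a rough sketch of what Contour's own HTTPProxy CRD looks like. The hostname, secret name, and service names below are made up for illustration; they're not from the talk.

```yaml
# A minimal HTTPProxy, Contour's own CRD (hostnames, secret,
# and service names here are hypothetical examples).
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.com
    tls:
      secretName: app-tls     # TLS termination using this Secret
  routes:
    - conditions:
        - prefix: /api        # path-based routing
      services:
        - name: api-svc       # backend Kubernetes Service
          port: 80
```

The same routing intent could also be expressed with a standard Kubernetes Ingress or a Gateway API HTTPRoute; HTTPProxy is where Contour exposes its more advanced, Contour-specific features.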
So all of the features that you see up here were proposed by community members, and the implementations came from the same folks who submitted the feature requests. We've got a public roadmap, and we're open to hearing how you all want to move the project forward. We wanted to call out a few of our contributors here: the HTTPProxy IP filtering support, and exporting request tracing data to OpenTelemetry. The OpenTelemetry feature request was made pretty early on, and then it was implemented by a community member. And there were other features for HTTPProxy and Gateway API GRPCRoute support. So thank you to our contributors. This was in v1.25, which we talked about back at KubeCon in Amsterdam in April. But in our most recent release, we have some other community contributions that we want to call out again, or call out for the first time: a lot of Gateway API support from some of our community members, improvements for listeners and clusters, more HTTPProxy work, and then the big Kubernetes 1.28 update support. I just want to say a great thank-you to all the community members. We've seen a really great increase in the quality and the size of community contributions lately, and we're really excited about that: people taking features all the way from an issue saying they need a new feature in Contour, to a really well-written design document, all the way through implementation and the review process. We've seen an uptick in that in the last year, I feel like. So we're really happy with the direction the community is going, and we hope we're giving people the latitude to do the work that they think is important for Contour. We're always open to these kinds of new changes, big changes, and to supporting the community. So if anyone has feedback on how we can do things better, that would be beautiful to hear.
But yeah, I just wanted to call out that the community has been really great in pushing Contour forward. So, highlighting some of the features that were driven in Contour 1.26: a big one is in Gateway API support. Previously we only supported two ports per Gateway, so basically you could only configure an HTTP and an HTTPS listener on a Gateway. Your Gateway listeners may have looked something like this: HTTP on port 80, HTTPS on port 443, kind of standard HTTP/HTTPS traffic. But now, and this is actually a requirement for Gateway API conformance, you can configure many ports on a Gateway's listeners, so you can interleave the different things that you might want to support. We also added support for TCPRoute, which is a generic TCP proxying mechanism that Gateway API provides. It's been a longstanding feature request in Contour: people just want to be able to say, hey, give me a port, I want to forward TCP traffic through Envoy and get all the nice metrics and stats that Envoy can provide about that. So that was a big feature in Contour 1.26. And this next one is a feature that was provided by a community member: HTTPRoute regex matching, which is kind of an interesting thing that didn't exist before in Contour. It was maybe a little bit surprising that matching on paths and headers with regexes was not supported, but a community member had a use case that needed this feature, and they were able to show up and contribute it. So it was really great to see. All of these features are also documented in the Contour release notes; we push contributors to write detailed release notes so that these kinds of things can be self-documenting for other users as well. You can find more details about these features in the Contour release notes on the GitHub page. Now to the most recent Contour release, just calling out some of the contributors and what they've brought to the table in Contour 1.27.
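As a sketch of the multi-listener Gateway described above, here's what mixing HTTP, HTTPS, and a generic TCP listener on one Gateway might look like. All the names and ports below are illustrative assumptions, not taken from the talk's slides.

```yaml
# Hypothetical Gateway with multiple listeners on different ports,
# including a plain TCP listener (names and ports are illustrative).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: projectcontour
spec:
  gatewayClassName: contour
  listeners:
    - name: http
      protocol: HTTP
      port: 80
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        certificateRefs:
          - name: example-cert   # Secret holding the TLS cert
    - name: tcp
      protocol: TCP              # generic TCP proxying; attach a TCPRoute
      port: 9000
```

A TCPRoute can then reference the `tcp` listener by name via `parentRefs` to forward raw TCP traffic through Envoy.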
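And here's a rough sketch of the HTTPRoute regex matching mentioned above, using Gateway API's `RegularExpression` path match type. The route, gateway, and service names are hypothetical.

```yaml
# Hypothetical HTTPRoute using regex path matching
# (route, gateway, and service names are made up).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: regex-route
spec:
  parentRefs:
    - name: example-gateway
  rules:
    - matches:
        - path:
            type: RegularExpression
            value: /users/[0-9]+/profile   # matches e.g. /users/42/profile
      backendRefs:
        - name: users-svc
          port: 80
```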
So we had a pretty important bug fix from Soteris from Reddit, and a kind of table-stakes thing that hadn't been implemented before, EndpointSlice support, from Clayton as well. I'm sure people have also heard about the recent HTTP/2 Rapid Reset CVE, so we implemented some fixes for that, including using the features that Envoy provided to help mitigate that CVE. But quite an interesting contribution from a community member is the improved cache warm-up logic that Contour now supports. It takes advantage of some of the improvements in client-go that were developed recently: informer events and the event handlers internal to the Contour controllers can now actually propagate when those events have been fully handled. So we can more accurately start up the Contour xDS server and start building configuration only once the existing state of the cluster has already been warmed into Contour's internal cache. This is something that many controllers in Kubernetes have, for a long time, probably been technically getting wrong. So big shout-out to really AK12, and also to, I believe it might have been, John Howard and others who worked on contributing to client-go to help fix this. Again, we wanted to give a huge shout-out to our community, because this doesn't work without you. In all of our release notes, for every PR that's merged, for every contributor that submits bug fixes or features to the project, we give them a shout-out with a link to their PR. But we also just wanted to take a second to put some names on the screen of some of our other contributors, to say thanks for all of the help with Contour. This list may not be fully fleshed out; these are the ones scraped from the release notes, for significant changes that warranted a release note.
But there may be others, and maybe other people in this room, I don't know, who contributed other things, and we just want to say thank you. All right. And yeah, just a quick chat about upcoming stuff in Contour. You can see the plans for the Contour 1.28 release in the GitHub milestone on our repository. But here's a highlight of some of the things that are currently planned for the release. We're planning on supporting Gateway API v1.0; many of you may have seen that announcement here at KubeCon and on GitHub as well. We're planning on implementing some of the more extended features in Gateway API: HTTP timeouts; backend protocols, that is, being able to configure which backend protocols to use via Service fields and having that work properly with Gateway API; and also, most likely, BackendTLSPolicy. Any and all of these are open to the community to contribute, so you can get your feet wet with Gateway API and with Contour. It'd be great to have community contributions for any of them. We're also planning to migrate to using EndpointSlices by default in this release, while still giving you an option to revert back to Endpoints support if you absolutely need it, if you encounter any issues. And here's some of the more general future roadmap stuff that we'd like to talk about as well. Again, like I was saying, this is a community-driven project, and we're continuing those initiatives. We are also working diligently to expand maintainership, and we have all of the explanation of that in our governance doc in the repository. You don't have to start writing code to contribute; we're looking for reviewers as well. We want to hear your opinions.
We have community support in Slack, which is a good way, if you are experienced with Contour and you want to help other people out and reduce the support burden on the maintainers, to get yourself on that path to maintainership; that would be awesome as well. We're also working on overhauling the website architecture, and on working with partners who are running Contour in production. If you are running it in prod and we don't know about it, please let us know. We want to work with you to help improve the efficiency, improve the observability, improve your experience with using Contour. And then, again, we're continuing to be fully conformant with Gateway API, and to get some parity between the HTTPProxy CRD and the Gateway API offerings. We have been doing ad hoc community meetings: as an issue arises that benefits from, you know, communicating synchronously, we hold meetings like that, and as they become more frequent, we'll have a regular meeting cadence again. But one of the things that we ask is that if you want to chat with us, let us know. We're very happy to set up a meeting. And we're looking for community help to put together learning paths and content for general networking, and networking with Kubernetes especially, so people can start understanding more about how ingress works and what Contour, Envoy, and even Kubernetes do. So yeah, that's my big ask: please get involved. Some stats about Contour: we've been around for a while, a lot of image pulls, a lot of releases. We've got Twitter, GitHub, a YouTube channel. So if you need to reach us, we're very easy to find. I am @Nigel on the Kubernetes Slack; feel free to ping me. And I think I'm @SanjayBhatia; I forget what my handle is, but yeah. In the contour channel on Slack, we love to have people ask questions and also answer each other's questions, right?
We have a few maintainers, but it's really great to see when community members are answering each other's questions. That's one of the great things we want to see. GitHub, Slack: if you're interested in contributing, please drop us a line. Thanks so much for coming to the update for Contour. If you have any questions, we have some time to take them. And if you have feedback, there's a QR code. Any questions?

Hi, I'm Cannon Palms from InfluxData. We're using Contour in production. One of our biggest pain points so far has been with the, at least as documented, four-phase upgrade process. We only recently began using Contour in production, but I noticed that there was an operator that was then deprecated, which I imagine had, I don't know if it actually did solve some of these problems, but at least had a chance to solve some of these problems. Is the upgrade process as documented on the website still the best and sort of only way to get through these upgrades, of, you know, apply the CRDs, then apply the certgen job, then apply the new, and so on and so forth? Or is there a better way, and is that something that we'd consider as an upcoming roadmap item for next year?

Yeah, so on the operator: that has fallen a little bit by the wayside, just because the community contributors to it stepped away. So it's been deprecated, also because the Contour Gateway Provisioner component is, I guess, replacing it, you could say, for dynamically provisioning Contour.
So for upgrades in general, yes, installing Contour standalone with the YAML as documented is currently the recommended approach, I guess. But with the Contour Gateway Provisioner, we have, kind of like you've alluded to with the operator, the chance to improve that experience and do a bit more operator-like work with that component. And it can be used not just with Gateway API; it can be used with just HTTPProxy or Ingress as well, if you're interested in that. So yeah, I think that's definitely something we can work on improving in the Gateway Provisioner component, for sure, and if you're interested, let us know and we can try to prioritize that.

Yeah, what I would say is, let's have a discussion: if you open an issue in the repository, then we can talk to other folks about their upgrade pains. I know this is a big conversation in the Kubernetes space right now, especially with Working Group LTS spinning back up. But if there's any sort of friction that you have with upgrading, let us know and we'll see what we can do about improving that process. Please open an issue and we'll have a discussion from there. Thank you.

Hey folks, first off, thanks for all the work you do on Contour. We use it heavily at Reddit. It serves production traffic every day. We love it; you're doing a great job. One feature I was kind of curious about, whether you get much feedback on or desire to expand, that I've used for years now, is the ability to get a dot output of the internal graph in Contour. And then, you know, I think the docs probably still have you pipe it through Graphviz to get some kind of visualization. It's really cool and really helpful in trying to understand where links might be missing in regards to downstream, or I guess in proxy terms, upstream dependencies. Do y'all get much desire or feedback on desire to expand that out?
Have you seen cool stuff in the community that tries to scrape that data and provide more live, up-to-date visualizations? Just kind of curious what you've seen.

I don't believe that I've personally seen too much of people doing a lot with that, but I think it's an area that would be interesting to explore, because it's not the easiest at the moment, I would say, to debug why your routing may not be working, right? We have status on HTTPProxy, we have status on the Gateway API resources, so we can show you, okay, your service is misconfigured, you've typoed this cert secret name, or something like that. But for the precedence rules of how routes are ordered in Envoy configuration, you kind of have to go look at the Envoy config dump at the moment, or parse out the DAG output yourself. I think that's an area, actually. We talked with some folks from Reddit who are interested in changing, or turning on different options for, how routes are sorted in the Contour DAG and in the output to Envoy. So that sort of operability is something that we'd be interested in, just to make Contour easy to operate, easy to reason about, and also to give different kinds of personas the ability to reason about what's going on with the routing, right? Like, we could have a Contour operator, the administrator of Contour: they have RBAC access, they have access to the Contour pods, possibly all this stuff. But an individual HTTPProxy owner or an individual site owner doesn't really have that access to debug. So that's definitely an area we can improve, and we'd love to work with y'all on that.

Yeah, and if you have any recommended visualization tools or any recommended workflows, let us know; we'd definitely want to take a look at that. I think that could be fun. And definitely, if you're not super familiar with Kubernetes controllers but you're interested in other areas, right?
This is kind of one of those things where you can get involved with the community without having a deep understanding of how Contour works. A high-level understanding of how Contour and ingress work will help, but this is an area where you can get your feet wet starting with a different technology, maybe, than a Kubernetes controller. So yeah, thanks for the question, Josh. Do we have any... oh yeah.

Yeah, this is more of a basic question. I'm stuck on an older version of Kubernetes. In order to use the Gateway API with Contour, I need at least Contour 1.22 and then Kubernetes 1.24, is that right?

Yeah, I forget the exact version compatibility of Gateway API nowadays for the latest releases, but I believe the latest releases of Gateway API are supported up to Kubernetes 1.25. But we can send you or show you a link to the compatibility matrix that we have on our website. It should show the compatible versions of things.

All right, thank you.

All right, is that all? Thank you all so much for coming. Oh wait, we've got one more. Sorry, you clapped prematurely.

I just wanted it to be on record: thank you to the contributor who added endpoint slice support. It's awesome, thank you.

Yeah, thank you. I think it was Clayton at Reddit that did that, so thank you very much. Okay, you don't have to clap again, but thanks so much for coming. Yeah, thank you all for attending, and we hope to talk to you. Enjoy the end of KubeCon, take care y'all, bye.