Hey everyone, welcome to the Contour Maintainer track. Today we have all the Contour maintainers here doing a presentation on Contour. So let's just go around the room first and introduce ourselves. I'm Alex Xu, I'm a Product Manager at VMware in the Cloud Native Business Unit. I'll hand it off to Nick. Hi everybody, my name's Nick Young. I am the Technical Lead on Contour and I'm a Staff Engineer at VMware in the same Business Unit as Alex. I'll hand it over to Steve Kriss. Yeah, hi everyone, I'm Steve Kriss. I'm an Engineer at VMware and I'm a maintainer on Contour. And I'll pass it over to Steve Sloka. Hi everyone, Steve Sloka. I've been working on Contour for a while. I'm also an Engineer at VMware. Off to you, Sunjay. Hi everyone, my name is Sunjay. I'm a member of the Contour maintainer team and an Engineer at VMware as well. All right, yeah, thanks everyone. So I'll start with: what is Contour? Contour is a Kubernetes ingress controller. If you have no experience with Ingress, this is basically a way to bring external traffic into your Kubernetes cluster. Ingress is part of the Kubernetes API, but in order to use Ingress, to leverage Ingress, you need an ingress controller, which is a piece of software that you install into your cluster, and it controls how traffic hits your cluster from the outside and how it gets routed to different services within it. Contour works by packaging and deploying Envoy, which is another open source project, a really popular one, and that functions as a reverse proxy and layer 7 load balancer deployed within your Kubernetes cluster. So basically Contour is a control plane for Envoy, and Envoy is the reverse proxy, which exposes HTTP or HTTPS routes from the outside to your upstream services within the cluster. Project Contour supports TLS termination and TLS pass-through.
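For anyone new to Ingress, a minimal Ingress resource that routes external traffic to an in-cluster service might look roughly like this (the hostname, service name, and class name are illustrative, not from the talk):

```yaml
# Minimal Ingress sketch: route all HTTP traffic for example.com
# to a Service named "web" on port 80. All names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: contour   # tells the cluster which ingress controller handles this
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```

On its own this object does nothing; an ingress controller like Contour has to be running in the cluster to act on it.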
We have various load balancing options for controlling this traffic, or balancing this traffic between your different backends as you scale up and down. We can take the request headers and do some basic manipulation, and then use those as the input for determining exactly which backend to send the traffic to. We support authenticated requests and unauthenticated requests, and you can add rate limiting to your requests. There's also sticky sessions, or what we call session affinity, and much, much more. And there are lots of telemetry and tracing capabilities on top. Next slide, please. So this is a basic overview of the project. Since its inception at Heptio in 2017, it was donated to CNCF at the incubation level. If you don't know what that is, basically every project in CNCF goes through several stages, right? They go from sandbox to incubation to graduation. And the stage that you're at is basically an indication of the maturity level of the project from a community health perspective, right, in terms of user adoption, in terms of external contributions and usage or deployments in the wild, right? Production deployments of Contour. So we've done a couple of these talks before and I just wanna cover what we've been doing since the last talk about a couple of months ago. So a couple of highlights here. The first one is that we're still doing really well in terms of supporting Gateway API. Gateway API is an upstream project from SIG Network that's about doing service networking in Kubernetes. And Contour is probably the furthest along in terms of adopting Gateway API and also helping design some of the upstream API. Our maintainer here, Nick Young, is actually a maintainer of the Gateway API project itself as well. So right now, we're just trying to bring some of the features in Gateway API up to parity with what we already have in Contour today.
And I think we just had a big release with v1alpha2, and that's the version we're gonna be using in the upcoming Contour release as well. The second highlight is we're exploring a new model where Contour is basically provisioning and managing the fleet of Envoys in your Kubernetes cluster, and this configuration is specified in the new configuration CRD. And a lot of the design here was really motivated by our support for Gateway API and thinking about how a cluster operator might want to deploy this, right? And how Envoys should be configured and how statuses and errors should be reflected, and we'll be going over those in detail in later slides. The last thing I wanna mention is that Contour is moving to quarterly releases from the monthly releases that we have today. And this is supposed to allow us to align better with the upstream Envoy releases as well as Kubernetes releases. If you head over to the Contour GitHub there are a lot of issues and discussions happening on why that is and what this allows us to do in terms of just removing some of the overhead of the release engineering and really focusing on feature delivery. Also it allows us to support multiple releases, so three releases at a time instead of just the latest release. Next slide please. All right, so I'll go over briefly what Project Contour is really doing when you deploy it into your Kubernetes cluster. So you have traffic coming from the external world, right? It hits your external load balancer and then hits your Envoy. And the external load balancer could be any number of options depending on how your Kubernetes cluster is deployed. So whether it's deployed on public cloud or bare metal or on some kind of a managed service, right, this load balancer is brought up differently. So if we look at this deployment in the context of public cloud, the cluster is basically invoking the cloud APIs to instantiate a cloud load balancer, right?
And then traffic hits your Envoy proxy, which is sitting at the edge of your Kubernetes cluster, and then traffic gets routed to the different applications that you have in your cluster, right? So we have a web application and then we have a blog application, and typically this routing behavior is defined using Ingress YAMLs, right? And so you specify a path and how that maps to a service in the cluster. So you can use plain Ingress YAMLs, and you can also use the CRD that we have here called HTTPProxy. It's just a much more prescriptive way of defining the routing behavior. And Gateway API has another option called HTTPRoute, which we're implementing today. So this is basically what the data path looks like: going to the external load balancer, going to Envoy, and then going to your application. And you can see that Contour is sitting here, on the side, talking to Envoy almost like a sidecar. We know that there's a Kubernetes API server in your cluster somewhere. And so what Contour is doing is basically it's watching the Kubernetes API server for changes to any relevant resources that Envoy might be interested in, right? So changes to your applications, to your services and their endpoints. Changes to Ingress resources or HTTPProxy resources. And it's reflecting all these changes back to Envoy instantaneously and dynamically. This is one of the value propositions of building on Envoy: it allows itself to be configured dynamically. In the absence of Contour, Envoy cannot directly consume the Kubernetes API. So Contour is acting as that translation layer, reflecting these changes to Envoy so Envoy can reconfigure itself. And so hopefully this gives a good overview of what happens when you deploy Contour into your cluster. And with that I'll hand it over to Steve. Hey everyone. Or Sunjay, sorry. I wanted to just touch on again what Alex mentioned about what's new in Contour.
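As a sketch of the web/blog routing just described, an HTTPProxy splitting traffic between the two apps might look like this (the hostname and service names are illustrative):

```yaml
# HTTPProxy sketch: send /blog to the blog service, everything else
# to the web service. Hostname and service names are illustrative.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: site
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
  - conditions:
    - prefix: /blog   # requests under /blog go to the blog app
    services:
    - name: blog
      port: 80
  - conditions:
    - prefix: /       # everything else goes to the web app
    services:
    - name: web
      port: 80
```

Contour watches for objects like this and translates them into Envoy configuration on the fly.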
So we've been hard at work on the Gateway API implementation. As Alex mentioned, we are pretty far along and should be one of the most advanced implementations of the API. The Gateway API has recently released the v1alpha2 version of the API, and we're working on implementing that. That should be coming up in the next release or so of Contour. Currently Contour supports the v1alpha1 release, and Steve Kriss will be demoing that in a few minutes. We're also moving to quarterly releases. We announced it recently in the community call that we have, and we've outlined how we're going to perform this support and how we're going to support Contour with an increased window. And we have a new config CRD that's going to allow us to more dynamically respond to configuration changes, give more feedback to users about Contour configuration, and hopefully make Contour a bit more operationally mature. That's also combined with the Gateway API work and moving towards a version of Contour where we will manage our deployment of Envoy, because currently you actually deploy Envoy as a separate Deployment or DaemonSet from your Contour deployment. In the future, we're hoping to have Contour manage Envoy itself, and we see some advantages in terms of interoperability with Gateway API and dynamic changes, and being able to continually become more operationally mature. Next slide please. So talking a little bit more in detail about Gateway API: Gateway API is a project of the SIG Network community aimed at kind of transforming how ingress is represented in Kubernetes. Formerly known as Ingress v2 or Service APIs, it's now known as Gateway API because the concept of a gateway, if you look into some of the documentation on the site that's linked here, is kind of an important idea. And it's represented in a collection of resources that can be targeted at different personas, in an operational sense and in a deployment of applications.
So you may have your infrastructure provider, your cluster operator, and application developers that manage different resources at different levels of the API. We introduced support for Gateway API first in Contour version 1.13. And as I mentioned before, we currently implement v1alpha1 of the API, but there are big changes that have come in v1alpha2 of the Gateway API. So we will be implementing those soon and we should be able to use those shortly. So for a quick demo of Gateway API in Contour, here we go with Steve Kriss. All right, thanks Sunjay. So let's switch over to the VS Code window here. So I have a kind cluster deployed here on my local laptop. It's got Contour installed in it. And so I'm gonna do a quick walkthrough of how to get up and running with Gateway API and Contour. So the first thing we need to do is tell Contour through its config map that it should watch for a GatewayClass and a Gateway, and tell it specifically which GatewayClass and Gateway to watch for. So the way that's done is by specifying a controller name here under the gateway stanza in the config map. So I'm gonna go ahead and apply this file. Okay, so we've updated our config map. Now we need to restart Contour to pick up the changes. This sort of alludes to some of the changes we've talked about as far as using a CRD for config and making these things more dynamic, but for now we need to restart the deployment. So restart Contour and let's wait for it to be up. Okay, so we've got a new pod coming up. So I'll move on and that should be up and running by the time we get to the next step. So the next thing we need to do is create a GatewayClass that has a controller string matching the value that we put in the config file. And so here I have a file that's defining a GatewayClass. This is in the Gateway API group. And as Sunjay and others have mentioned, we're currently using v1alpha1 but we'll be moving to v1alpha2 shortly.
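The GatewayClass being applied here might look roughly like this under Gateway API v1alpha1 (the resource name and controller string are illustrative; check your Contour version's docs for the exact value it expects):

```yaml
# Gateway API v1alpha1 GatewayClass sketch. The controller string must
# match what Contour was configured to watch for; this value is illustrative.
apiVersion: networking.x-k8s.io/v1alpha1
kind: GatewayClass
metadata:
  name: contour-class
spec:
  controller: projectcontour.io/projectcontour/contour
```

Note that in v1alpha2 the API group and several field names changed, so this shape is specific to the v1alpha1 release being demoed.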
So I'm gonna go ahead and apply this file. And so now we have a GatewayClass. I can list the GatewayClasses here and we can also describe it. And at the bottom here, you can see in the status that we've got a condition added to the GatewayClass. It says it's valid and it's been admitted with a status of true. And so this means Contour has picked it up and seen that it should process it. So the next thing we need to do is define a Gateway for that GatewayClass. So this file contains a Gateway spec. We can see that it uses the GatewayClass that I showed in the previous file. And this is just a very simple Gateway that has a single listener listening on port 80 for HTTP. And it can select any HTTPRoute across all namespaces. So there's a bunch of configurability here as far as what kinds of routes each Gateway should match, what listeners it should have, and which namespaces it should connect to. But for the purposes of this demo, we'll just stick with a basic configuration here. So I'll apply this file as well. We've got a Gateway. This one was created in the projectcontour namespace. So we can take a quick look at the Gateways in that namespace. And you can see here that we've also got a condition here indicating that this Gateway is ready and valid, and so this means Contour has picked it up and is now ready to pick up some routes for it. So just before I define the routes: in the default namespace here, I've got a couple of sample workloads, S1 and S2, that we'll be routing some traffic to. So let's take a look at the route that we have. So this is an HTTPRoute, again in the Gateway API group. And this one will pick up traffic that's directed at local.projectcontour.io. And to start, I'm gonna have just a simple rule that says any request with a path prefix of slash, meaning it matches any request, should be forwarded to S1. So I'll go ahead and apply this file. Okay, we've got a route.
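The Gateway and the initial catch-all HTTPRoute described here might look roughly like this under v1alpha1 (the resource names are illustrative, and v1alpha2 restructured several of these fields):

```yaml
# Gateway API v1alpha1 sketch: one HTTP listener on port 80 that selects
# HTTPRoutes from all namespaces, plus a catch-all route to service s1.
# Names are illustrative; v1alpha2 changed several of these field shapes.
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: contour-class
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      namespaces:
        from: All        # accept routes from any namespace
---
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: echo
  namespace: default
spec:
  hostnames:
  - local.projectcontour.io
  rules:
  - matches:
    - path:
        type: Prefix
        value: /         # prefix "/" matches every request
    forwardTo:
    - serviceName: s1
      port: 80
```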
So now if I just curl local.projectcontour.io, we get a response from echo server one, which was that first service. And regardless of the path I put here, we get routed to S1. So we know we have two services in here. So let's actually set up a couple of rules to route to the different services. So I'll change this first rule to say that if there's a prefix on the path of /s1, we should route to service S1. And I'll uncomment this section here. And this says that if I have a request with a prefix of /s2, we should route to S2. So I'll reapply this route. And so now if we route to /s1, we still get echo server one. If we use a path of /s2... it looks like that didn't actually go through. I think I forgot to save the file, thanks Nick, good call. Okay, so let's try that one more time. So you go to /s1, it's echo server one; go to /s2, it's echo server two. And then we can do a few more complex things with the matching rules here. So instead of using this one, let's use this one down here. In this case, let's say that we only want to route traffic to service S2 if the request has a prefix of /s2 and the request also has this header specified in the request. And this configuration allows you to do that. So I'm gonna save this file again and reapply it. And so now if we add that header to the request, so I'm specifying the header that was defined as a match, we'll get traffic routed to echo server two. But if I take that header out, we don't get routed anywhere. And so this is a simple way to add additional conditions to your routing. Now you might have an alternate scenario where you actually want to say, I want any traffic that has the /s2 prefix to be routed to S2, or I want any traffic that has this header to be routed to S2. And so the way you do that is to create a separate match within your route rule. And this treats them as logical ORs rather than logical ANDs. So I'll reapply this route one more time.
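The OR variant being applied here might look roughly like this as a v1alpha1 rule fragment (the header name and value are illustrative): separate entries in the matches list are ORed, while multiple criteria inside one entry are ANDed.

```yaml
# v1alpha1 HTTPRoute rules sketch; header name and value are illustrative.
# Separate entries in "matches" are ORed; criteria within one entry are ANDed.
rules:
- matches:
  - path:              # entry 1: path prefix /s2 ...
      type: Prefix
      value: /s2
  - headers:           # ... OR entry 2: the header, regardless of path
      type: Exact
      values:
        my-header: foo
  forwardTo:
  - serviceName: s2
    port: 80
```

Moving the `headers` block under the same list item as `path` would give the earlier AND behavior, where both the prefix and the header have to be present.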
And now if we simply curl the path without the header, we get routed to echo server two. And then, actually let me, if I add back the header and change the path prefix to something other than /s2, we can see that we still get routed to echo server two. So that shows you a couple of different ways that you can set up your routing rules and get traffic to different backends. So hopefully that gives you a good sense of the basics of using Gateway API with Contour. There's a lot more to dig into in the API here, and certain things will definitely be changing with v1alpha2, but we plan to stay on top of the changes and continue to build out our support here. So with that, I'll go back to the slides and we'll turn it over to Nick. Hi everyone. Okay. My job today is to talk a little bit about the quarterly releases. So yeah, up to now Contour has been releasing monthly using a release train model, where around about the end of the month, the release train pulls out of the station and everything that is merged into the repo is what ends up in the release. As of our October release, we're going to be moving to quarterly releases though. So it's still going to be pretty much a release train model. Whatever is ready at about the time we're due to release will be what gets cut. But given that we're going to have three months instead of one month, we're hoping that we should be able to do a bit better planning and be able to tell you more about what's going to be in the release before the release actually comes out, and to be able to plan a little bit more ahead and say that a feature will make it into release X. The other thing that's really important is that we're going to be moving to supporting three Contour releases. Now there's some fiddly bits there in the slide, but I just wanted to talk briefly about the reason why we're doing this. And the reason why we're doing this is that Contour is getting to be more mature.
We're not making as big of changes every time. We don't have a huge number of features left to hit feature parity with a lot of the other ingress controllers on the market. And so I think we're hoping here to make the life of the Contour user easier, so that you don't have to update every month to get the latest security fixes or to get the latest fixes and patches and stuff like that. You shouldn't need to be updating as often as we have been. Previously Contour has been updating very frequently because we've been changing things very frequently. But as we sort of start to round out some of those things and need to work on bigger features that take a longer time to release, it's starting to mean that we end up with a much smaller set of features in each release. And so we're hoping to make your life easier by, as I said, lengthening these releases and making the releases a bit more meaty. So how are we going to move to supporting three releases? Well, this is the plan. We're going to start with 1.20, which is currently scheduled for October. All of these dates here are very tentative. Obviously we haven't done quarterly releases before, so this is the rough plan as of today. But this may change, obviously, as we have a crack at this and find things that are easier or harder than we expected them to be. So for 1.20, which is scheduled for October, the only release that will be supported is 1.20. That's no change from today, only one version. However, when 1.21 comes out, which is three months from then, sort of end of January 2022, we will support 1.20 and 1.21. And then as we go to 1.22, which is about the end of April, then it'll be 1.20, 1.21, 1.22. And then once we get to 1.23, which is July 2022, that's when we'll drop support for 1.20. So then we'll support 1.21, 1.22, 1.23. And what does support mean? Support means in this case that critical fixes like security fixes or other bug fixes will be backported to multiple releases.
So if someone finds a severe bug in some feature that we built, then we will backport that fix to all of the releases that have that feature available. But we're not going to be doing feature backporting. So if we add some new nifty feature X, we're not going to backport that to previous releases. It's only going to be fixes. That's just to make sure that we're not spending all our time just doing backporting and not actually getting to do any feature work. So yeah, what that means at the end of the day is that any particular version, once we get to 1.22, will get nine months of security and other urgent fixes. So we're hoping that that should be much better for you. Obviously nine months is a slightly weird cadence. Kubernetes has specifically moved away from doing that by changing their support window to a year. But we're hoping that this will be good enough, and we will see how we go once we've run this three-quarterly-release support for a while, and see how much effort it is and how much value everybody's getting out of it. So yeah, very interested to hear what everyone thinks about this. And hopefully it should be more useful for you. Now I can't remember who's up next. Steve S? Thanks. Steve S, over to you for the config CRD discussion. Well, thanks Nick. Yeah, so we're looking at moving Contour's config into a CRD. So right now, Steve, if you wanna pop down a couple of clicks. Yeah, right now we store configuration options in a config map. So there's a config map you edit. You saw Steve do that in the demo, where he had to edit that config map and then restart Contour for Contour to pick it up. That's where things live today. And there are also command-line flags that you can pass. So for the contour serve command in the Contour pod, there are different flags you can pass that overlap as well with the config map.
So right now we've got this sort of mishmash where the config map overrides the environment variables, which get overridden by the command-line flags, and we're sort of in a weird spot. So our goal here is to help make this simpler and solve some problems here. So I think there are four different problems in this slide, and I shouldn't have put transitions in here. But right now the operator that we have, the Contour operator, has to translate from a CRD into the config map. And the CRD that the operator is managing is different from what Contour manages. So the goal here is to centralize this and have this live as one type. So it's one configuration, one source of truth, in a sense. We also didn't have any way to surface errors other than Contour's logs. So if you configured something wrong, Contour would crash loop, and you'd have to go tail the logs and see what was wrong. We can also implement some simple CRD validation. So if a field should be a number and you put a string in there, we can throw that out before you try to apply that CRD to the cluster. And then eventually we'd like to have dynamic restarts of Contour. So if you make a config change, Contour will pick that up and just restart itself automatically. All right, next slide, Steve. Cool. So the implementation kind of looks like this. It's like there was something on the left there, but maybe not. So the configuration file has been simplified. We've taken all those little bits and put them together. Actually there may be some transitions here as well, so if you click through, yeah. So all the fields now are camel case. Before, we had a mix of camel case and hyphenated values. All the non-required startup command flags are now in this file. So again, you can have a simple file that's portable across deployments. And then we've grouped things together.
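A hypothetical sketch of what a config CRD along these lines could look like follows; the kind, group, and every field name here are illustrative guesses based on the talk, not the final design, so check the design doc in the repo for the real shape:

```yaml
# Hypothetical Contour configuration CRD sketch: camelCase fields grouped
# by concern (xDS server, debug, health). All names here are illustrative.
apiVersion: projectcontour.io/v1alpha1
kind: ContourConfiguration
metadata:
  name: contour
  namespace: projectcontour
spec:
  xdsServer:          # settings for the server Envoy connects to
    address: 0.0.0.0
    port: 8001
  debug:              # debug-related settings grouped together
    logLevel: info
  health:             # health and metrics endpoints grouped together
    address: 0.0.0.0
    port: 8000
```

Because this is a typed CRD rather than a free-form config map, the API server can reject a badly typed field (say, a string where a port number belongs) at apply time.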
So things that deal with the xDS server are all put together; things that are debug related, or health and metrics, again have a proper grouping now, which we didn't have before. There's a design doc you can check out for more information on this, down there at the bottom. It's in our design directory in the Contour repo. And the next section here is gonna talk about managed Envoy, which is the next part of this. So Contour is gonna move to managing Envoy itself. Envoy is the critical data path component that has to go with Contour; Contour is the configuration server for Envoy. So the goal here is to have Contour read the spec that we just wrote. In that new configuration CRD, there's a new managed option under Envoy. And this will tell Contour, hey, I'm gonna manage Envoy. And then there are the details of how that Envoy DaemonSet or Deployment should look or work in your cluster. That'll define how it's published to the outside of the cluster, which nodes to put it on, and various things. This is obviously a very simplified view of this, but essentially you'll just apply Contour and then Contour will manage the Envoy fleet for you. When you upgrade Contour and change this configuration, Contour will manage rolling out that Envoy fleet for you. And in the future, looking at the operator, we will have a spec that the operator will read to help you manage Contour. So everything will work kind of as it is today, but we're hoping that this layered approach will help you manage your environment in the way that best fits your world. So again, all these design docs are out in our repo in the design folder; please check them out and give us some feedback. So I think, Nick, you are next with the roadmap. Thanks very much, Steve. Yeah, so, I mean, if I would summarize this roadmap in a very short number of words, it would be: yeah, the main thing we're gonna do is do what we just talked about.
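To make the managed option concrete, a purely hypothetical fragment of such a spec might look like this; every field name below is an illustrative guess at the design direction described in the talk, not the actual schema:

```yaml
# Hypothetical fragment: telling Contour to manage the Envoy fleet itself.
# Every field name below is an illustrative guess, not the real schema.
spec:
  envoy:
    managed: true
    workloadType: DaemonSet      # or Deployment
    networkPublishing:
      type: LoadBalancerService  # how Envoy is exposed outside the cluster
    nodePlacement:
      nodeSelector:
        kubernetes.io/os: linux  # which nodes the Envoy pods land on
```

The idea is that instead of maintaining a separate Envoy DaemonSet or Deployment yourself, you'd apply Contour with a spec like this and let Contour create and roll out the Envoy workload.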
So, to just run down: yeah, we're gonna finish up the release cadence and support window change. We're gonna migrate the config to the CRD. One of the things that Steve has done a great job with designing, but that maybe got missed a little bit there, is that we're gonna make sure this is gonna be a nice, slow process. Everybody's gonna have plenty of chances to move across. We're not gonna drop this on you so that you have to change quickly. There's gonna be plenty of time for you to move across to this safely. And the same goes for managed Envoy: we're gonna bring this stuff on board and let everybody have a chance to try it out before we do anything about mandating it. As for Gateway API support, a lot of these changes exist to support being able to implement Gateway API in the best way we can imagine. And as Gateway API moves through its versions, we're gonna try our best to keep up. Now, as Alex mentioned earlier, I'm also a maintainer on the Gateway API project. One of the things we're trying to do with the current release of v1alpha2 is to make sure that all of the breaking changes that we need to make are in v1alpha2. So as the Gateway API moves forward, implementers of the API like us, like Contour, won't have to make as big of changes in order to support it. So hopefully as Gateway API moves to beta and then to GA, there will be minimal changes that we'll need to make to our Gateway API process and code. That's the plan. The Gateway API is really the future of ingress in Kubernetes. It's gonna be the way to describe things: anything more complicated than sort of the really basic ingress use cases that are handled already by the basic Ingress object is gonna be pushed towards the Gateway API. I think that nobody really thinks that the place Ingress has ended up, with lots of annotations, is the best place.
And the Gateway API is very much a response to that, and a response to being able to make this stuff more exposed and declarative and understandable. And so that's the reason why Contour is very involved and very bullish about the prospects of the Gateway API. So the last couple of things we're gonna be looking forward to: Contour has been around for quite a few years now, and the state of the art for xDS control planes has moved on. And so some of our xDS code is getting a bit long in the tooth, and there's a bunch of optimizations that we can do there. So we've got sort of a longer-running stream of work that has kicked off, but we're hoping it will deliver some value over the next while, to modernize our xDS code and enable some of the more advanced ways in which xDS works. So hopefully it should mean that Contour will be more efficient talking on the wire to Envoy and vice versa. And lastly, we're aiming to push for CNCF graduation in 2022. We've done a lot of the sort of basic background work that we've got to do for that, but a lot of what's required is community interaction. So we're looking for contributors. We'd love to have new people come and contribute. We've got a bunch of stuff labeled as good first issue and help wanted. We have a tech docs working group. So if you're not a coder but you want to contribute to Contour, there are definitely ways to do that. And we'd really love to welcome you into our community. With that said as well, we're always looking for more maintainers. And so if you have bandwidth to really throw yourself into Contour and would like to be a maintainer, then talk to us and we can direct you to places in which you can contribute, so you can start climbing that contribution ladder. One of the requirements for CNCF graduation is that we need to have a more diverse maintainer pool across every axis.
And I for one would really like to see more people be maintainers, and for us to have better diversity in thinking, in companies, in who people are, so that we can build a better product that serves more people more easily. Obviously our public roadmap, which is linked down the bottom and lives in our community repo, is the canonical place to talk about the sort of lower-level roadmap features. And so that's the place to go to find out all the detail on that. I think with that I'll flip over to Alex to finish up. Thanks, Nick. So, thanks for sticking with us; you've made it to the last slide. This is the state of the world of the Contour community. So you have some statistics here. We have six maintainers. But again, like Nick said, we would love for more people to join us, and you don't have to join as a maintainer, right? There are other ways to contribute. Just coming to these talks and coming to our community meetings is really beneficial to all of us. So we have these community meetings. So feel free to drop by, and you can ask any questions or just watch. Watch us discuss releases and PRs, talk about the roadmap, things like that. All the community meetings are recorded, so you can watch them on YouTube. I think we've also been exploring an idea of making the maintainer meetings public as well, probably through a Zoom webinar of some kind, so anyone can come and attend and follow along, while still allowing us to stick to our agenda. So yeah, I wanna thank everyone who has attended in the past and given us really great feedback. And all the things that you've been hearing us discuss just now, like the managed Envoy and the Gateway API support that we're doing, all these things are still in flight. So we definitely appreciate anyone coming in and sharing your perspective. So that's it for me. That's it from all of us. I wanna thank you for attending this talk.
And then we also have a couple of office hours during this KubeCon that you can drop by to ask any questions you like, or deep dive on any issue or any feature requests. So that's it. Thank you.