Hello, everybody, and welcome to our presentation. Today we're going to be talking about Contour, a high-performance, multi-tenant ingress controller for Kubernetes. First, let's introduce the team working on Contour. My name is Michael Michael. I'm the product lead and a director of product management at VMware. Also part of our team, but not present here, are Steve Kriss, a staff engineer at VMware, and Tong Liu, an engineering manager, also at VMware. Now I'm going to pass it on to Nick. Hi, everyone. I'm Nick Young. I'm the technical lead for Contour and a staff engineer at VMware, and I'm in charge of the overall direction and architecture of Contour. Hi, everyone. I'm Steve Sloka. I'm a senior member of technical staff here at VMware. I work a lot on the Envoy integrations in Contour. Hi, everyone. I'm James Peach. I'm a staff engineer at VMware. I've been working on Contour for about a year, and I contribute in most of the different areas. Cool, thank you all. We have our Slack IDs on the bottom, so if you need to reach us, we're all available on the Kubernetes Slack as well. So let's take a look at what Contour is today. Contour is an open source Kubernetes ingress controller that provides a control plane for the Envoy edge and service proxy. Our mission is to be the best ingress controller for Kubernetes. Contour was accepted into the CNCF last summer as an incubating project, so we plan to bring together a lot of the cloud-native community within the CNCF landscape and deliver an amazing ingress controller that meets your needs. We support dynamic configuration updates and multi-team ingress delegation out of the box while maintaining a lightweight profile. Now I'm going to switch over to Nick Young, our technical lead, to go through some of the highlights of Contour. Yeah, hi, everyone. So Contour was started in September 2017 at Heptio.
It was started by Dave Cheney and Steve Sloka, the original members of the team. At the time, Contour was aiming to be, as Michael said, a high-performance ingress controller with a focus on multi-tenancy, simplicity, and being able to source your ingress config from a number of objects, whether that's Ingress, our HTTPProxy CRD, or the new Service APIs. A really key part of Contour's philosophy and goals all along has been to keep the Unix philosophy in mind and to be a tool that does one thing well, and that thing is Kubernetes ingress. We've worked really hard on Contour. I've been on here for just over a year and a half now, I think, and we released version 1.0 in November 2019. As we've kept going, as Michael said, we were donated to the CNCF at incubation level in July, which was amazing. Really great; I'm really glad to have been a part of that. And we've recently released version 1.10, which has a bunch of awesome new features that we'll go into later. I'm really excited to be a part of this project. It's been a great journey for me, being able to work on and then lead an open source project like this, and I'm really looking forward to where we go next. All right. So I'm going to walk through a quick architecture overview of how Contour is implemented and deployed. You'll see here we've got traffic from the internet, which comes in from outside your cluster. It typically targets a load balancer, and that load balancer's job is to send traffic to a fleet of Envoys running in your cluster. Contour uses Envoy as its data path component, so all traffic routes through Envoy. Envoy is another CNCF project, and it's the reverse proxy that Contour manages. So Contour in this architecture is the configuration server for Envoy: it implements Envoy's xDS server.
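To make the config-source idea concrete before we go further, here's a rough sketch of the kind of minimal HTTPProxy object Contour consumes. This isn't a manifest from the talk; the hostname and backend Service name are illustrative:

```yaml
# Minimal HTTPProxy sketch (illustrative names, not from the demo)
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: basic
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com      # hostname this proxy serves (assumed)
  routes:
    - conditions:
        - prefix: /
      services:
        - name: my-service     # backend Kubernetes Service (assumed)
          port: 80
```

Contour compiles objects like this (or plain Ingress) into Envoy configuration and streams it over xDS.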
So Contour's job is to watch the cluster for things like Services, Endpoints, Ingress objects, and Secrets. When it sees any of those things change, Contour builds a new configuration in memory and streams those changes down to Envoy. Envoy can then take those changes without reloading and serve traffic properly within your cluster. So again: the traffic comes in from the internet, hits the load balancer, routes through Envoy, and then on to your backend applications. Thank you, Steve. And we're going to see a demo later on of some of the next-generation work on Contour here. So let's switch over and talk about some of the latest developments in Contour. James, you've got the baton. Thanks, Michael. So as Nick pointed out, we've done maybe ten releases in the last year, so we're actually adding features quite frequently. Some of the things we've been working on recently: the big one is external authorization support, which we'll talk about in more detail soon. We've also added CORS support, which is really useful for anyone doing web applications or single-page apps in JavaScript. We're working on Envoy xDS v3 support; Envoy has done a major new version of its API, and we're taking full advantage of that. One of the things we've been doing across these releases is improving the integration of Contour with other projects in the ecosystem; load balancer IP status is one of the features in that area. We now support client TLS certificates for the Envoy proxy, as well as TLS client certificate validation. We're working on ARM support, so you can deploy Contour ARM images on ARM platforms in the cloud. There's a project in the Contour ecosystem called net-contour, which allows Knative Serving to use Contour for its ingress support. And finally, we're working on a Contour operator, which will make it easy to deploy Contour and Envoy, and also to have multiple Contour deployments. Can you go to the next slide, Michael?
So external authorization is one of the earliest user-requested features for Contour. What people want here is a way for the platform and the Contour deployment to provide a single authorization mechanism that authorizes and authenticates users, so that individual applications don't have to worry about implementing it. You get a consistent authorization experience across all your applications in the cluster. Some people will use OIDC, some people will use LDAP, but as far as Contour is concerned, it doesn't matter what you do: you can build your own server with your own custom authorization mechanism, or you can pull one from the broader Envoy ecosystem. For example, Istio's authservice can be used with Contour and everything will work fine. The general architecture for authentication is shown here in this diagram. A client makes a GET request to Envoy. There is an ExtensionService CRD, which the operator has already deployed; the ExtensionService CRD represents a binding between Envoy and an external authorization service. Contour will configure Envoy to send every request that comes from the client to the external authorization service. The external authorization service will either accept or reject the request, and if it rejects the request, it can redirect the client so that the client can obtain whatever authorization information it needs. Then, when the client returns, the extension will authorize that request and Envoy will forward it to the upstream service. Thank you, James. So let's take a look at all of this in action. A lot of exciting work has gone into enabling external authorization in Contour, and this is actually one of the most-requested features for Contour. Next, I'm going to stop my sharing here, and Steve Sloka is going to share his screen and walk you through what it looks like in real life. Steve. Cool. Hey, thanks, Michael.
Thanks, James, for that intro. So here we're going to walk through actually implementing what James just described on the screen. What I have here is a cluster running with Contour deployed to it. So I can go ahead and get all the HTTPProxies that I have in my cluster. Here I've got three of them, and this is a typical site that you might see in your cluster. My domain name here is stevesloka.dev, and I've got a TLS secret on here that comes from Let's Encrypt via cert-manager. So these are all Let's Encrypt certificates that were dynamically configured. There are a couple of other proxies here for the marketing team and the blog sites, but we won't worry about those for this demo. Just to prove that this works, we can come in here and check out the site, and what you'll see is a simple echo server. It tells you, hey, this is the default site, here's the request, and here are the headers. We'll use this site to validate how this authorization stuff gets implemented. The first bit we're going to do — again, this is the same diagram that James just talked about — is deploy some prerequisite bits for this external auth service. It's something that we deploy in the cluster. First, we're going to create a namespace for all of this; we're going to put everything into a namespace called projectcontour-auth. In that namespace, we're going to create a service account, and this is going to be used by our authorization server to look for secrets in the cluster. It's going to use these secrets to pull out passwords. In this demo, we're going to use basic auth, which is very simple, but we can extend it to other types, as James mentioned.
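The root proxy Steve describes looks roughly like the sketch below, with example.com standing in for the demo domain and the secret and Service names assumed rather than taken from the demo manifests:

```yaml
# Root HTTPProxy with TLS (sketch; names are illustrative)
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: default
spec:
  virtualhost:
    fqdn: example.com                # the demo uses stevesloka.dev
    tls:
      secretName: example-com-tls    # TLS secret issued by Let's Encrypt via cert-manager (assumed name)
  routes:
    - services:
        - name: echo                 # the echo server backend (assumed name)
          port: 80
```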
So once we have this cluster role, we'll create a binding, and then we'll also create a quick little cluster issuer here. This is going to let us create some self-signed certificates for the auth service inside the cluster, so that communication between Envoy and the service is all secured. Cool. Let's go ahead and deploy this first bit. So we'll send this one over. All right, so we're going to create all of those. Now we should have a namespace called projectcontour-auth down here; you can see it's seven seconds old. Next, we're going to deploy our actual auth server, and that's this external auth service bit right here. Here's the service we're going to use for it; it's called htpasswd. And then here's the deployment. Down here a little further, you'll see the image. This is actually a project that the Contour team has in its contour-authserver repo. There are a couple of different auth servers that we provide for you: there's a simple one that's used for testing, and then this one is used for basic auth, which, again, is what we're going to demo today with htpasswd. So the image here is v2. We're going to run the htpasswd auth server over port 9443, and we'll reference our certificates, which come from secrets, and you'll see those secrets are volumed in here. The source of the secret here is a certificate: in the previous step, we created that self-signed cluster issuer for cert-manager, and here we're actually going to request the certificate from cert-manager so we can secure that communication. So let's go ahead and deploy this bit. OK, so we created the service, the deployment, and the certificate. If we get pods in the namespace projectcontour-auth, what you'll see is that I have this auth server now running. Similarly, I should have a secret in there, and it's called htpasswd.
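The self-signed issuer and the certificate request for the auth server look something like the following sketch. The API version and all names here reflect our reading of cert-manager's API, not the exact demo manifests:

```yaml
# Self-signed issuer plus a Certificate for the auth server (sketch; names assumed)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: htpasswd
  namespace: projectcontour-auth
spec:
  secretName: htpasswd             # secret the auth server mounts for TLS
  dnsNames:
    - htpasswd.projectcontour-auth.svc.cluster.local
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
```

cert-manager fulfills the Certificate by writing the key pair into the named secret, which is why the secret shows up in the namespace a few seconds after the apply.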
Again, that matches the certificate that we requested, and you can see that one's 14 seconds old. OK, so back to our diagram: we've got this auth server bit deployed. Next, we're going to deploy some secrets for it. What we need is some sort of user database. In here, this secret — and it's base64-encoded — is where, with htpasswd, we actually created a user called user1 with the password password1, and we created user2 and user3. That's all done up front here in this secret. Whenever the auth server spins up, it's going to look for the secret and use it as the source user database for our users to log in. So we'll apply this one. OK, so now we have that. Next, we're going to create this ExtensionService — again, back to our diagram, we have this bit all deployed now. As James mentioned, the goal of this is to integrate the auth service into our Contour infrastructure, so that we can reference this auth service from our HTTPProxy. If you look at this, again, this is called ExtensionService, and we're going to reference the service we created in step two here, this deployment. So htpasswd is the service of our auth server, again deployed here in this orange box. Let's go ahead and create this. OK, so now we have our ExtensionService created. Finally, that's all the bits. Now we just have to tell our HTTPProxy that we want to turn auth on, right? And we'll do that here. This is that root proxy that we saw originally, and what we're going to do is add these little bits here. This authorization block within the virtual host lets us reference an ExtensionService. Here you can see it referencing htpasswd — again, that was created in step four here, and it's in the namespace projectcontour-auth.
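Sketched out, the user-database Secret and the ExtensionService binding look roughly like this. The auth-type label and the htpasswd entry format are assumptions based on how the contour-authserver htpasswd backend is documented; the hash is a placeholder, not real data:

```yaml
# User database for the htpasswd backend (sketch)
apiVersion: v1
kind: Secret
metadata:
  name: passwords
  namespace: projectcontour-auth
  labels:
    projectcontour.io/auth-type: basic   # tells the auth server to watch this secret (assumed label)
stringData:
  auth: |
    user1:$apr1$placeholderhash          # htpasswd-format entry; placeholder hash
---
# ExtensionService binding Envoy to the auth server
apiVersion: projectcontour.io/v1alpha1
kind: ExtensionService
metadata:
  name: htpasswd
  namespace: projectcontour-auth
spec:
  protocol: h2
  services:
    - name: htpasswd     # the auth server's Service from the earlier step
      port: 9443
```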
So now, once we apply this, Contour will tell Envoy that we want new requests coming in to be authorized. Envoy will forward each request out to our auth server, the auth server will basically give a thumbs up or thumbs down depending on what it receives, and then the request will either be denied or allowed through into your application. Let's go ahead and apply this one. So we configured our root. Real quick, we can check for errors: we'll get our proxies again, and you can see we have no errors here based on the status information. What we can do now is curl for this. We'll do a quick curl for /secure, and you'll see that I get a 401 response. This 401 tells me that I'm not authenticated, because I haven't passed any credentials in. So again, based on our authorization flow, I'm not allowed in. But if we grab the second one here and actually pass a user — user1 with password1 — now you see the request comes through. The only downside to what we've done here is that the /secure path requires auth, which is what we want, but so does every other path. It looks like I already stepped ahead of myself and configured that for us. So, in your routes, you can apply some auth conditions, and from there you can say: I want to disable my auth policy for this route. What you'll see is that on /secure we have auth, but if I hit /dev or /, I still get the 401. So we'll go ahead and apply this auth policy to our route, which will allow our root path, which is /, to not require authorization. We'll save that and apply the route again. Cool. Now when I curl for the root, I'm not required to have authorization, but if I add /secure, you can see that is still secured, so I'm required to log in before I do anything there. So hopefully that's a quick overview of authorization in Contour.
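Putting it together, the authorization block and the per-route opt-out that Steve applies look roughly like this sketch. The hostname, secret name, and backend Service are illustrative stand-ins for the demo's values:

```yaml
# HTTPProxy with external authorization plus a per-route opt-out (sketch)
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: default
spec:
  virtualhost:
    fqdn: example.com                  # the demo uses stevesloka.dev
    tls:
      secretName: example-com-tls      # authorization requires a TLS virtual host
    authorization:
      extensionRef:
        name: htpasswd                 # the ExtensionService created earlier
        namespace: projectcontour-auth
  routes:
    - conditions:
        - prefix: /
      authPolicy:
        disabled: true                 # root path opts out of authorization
      services:
        - name: echo
          port: 80
    - conditions:
        - prefix: /secure              # inherits the virtual host's auth requirement
      services:
        - name: echo
          port: 80
```

With this applied, a bare curl to /secure gets a 401, while passing user1's credentials lets the request through, and / needs no credentials at all.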
There's a nice guide on the Contour website, if you're looking for more information, which walks you through what we just talked about, with some flow diagrams and everything to help explain this a little better. That's all I have; back to you, Michael. Absolutely. Thank you, Steve. Thank you, team, for working on this. And obviously, as users, if you want to implement your own authorization providers and extensions for Contour, come to the Contour channel, show up at our community meetings and office hours, and tell us what it is you're building. We might be able to redirect you to an existing solution or help you out. We're here to make sure that you're successful with your ingress needs. Next, I'm going to switch to Nick Young, who's going to talk to you about the roadmap and where we're going. Hi, everyone, again. So, as a public open source project, we obviously have a public roadmap. To start with, I probably should say that this is our current direction based on our current priorities. This is an open roadmap, so it's really open for you to come and say, hey, I really need this thing straight away and I want to build it, or, I really need this thing very urgently and I need you to build it. If you need either of those things, please come and talk to us. This roadmap is not set in stone; it's a living document, and we're happy to change it within reason. That said, the current things on this slide are some of the things that we're scheduled to do, hopefully by the end of this calendar year. The first one is rate limiting: being able to plug in an external service for rate limiting, in a similar way to what we did with external auth, so that you can rate limit particular routes or services.
In the past it's been very difficult to figure out a design for how we were going to do this, but luckily, in order to do external auth, we needed to build the same plumbing, which was that ExtensionService CRD. We're also hoping to get some more deployment support done by the end of the year. Right now we've got projects running to create an operator and to have an official Helm chart for Contour. There is currently an unofficial chart, and we're working to bring that one in and make it official. The purpose of both of those is to make it easier to get Contour installed, but also to make it easier to install Contour in a variety of scenarios. Currently, our install method is that we provide you with an example deployment that will get you up and running, but because it's supplied as a series of YAMLs, there's no way for us to give you options — and both of those things are a way for us to give you options. The third one here is a bit more complicated to explain. Historically, when the project started, we were a very small team — two, then three people when I joined — and we had a lot of features that we needed to build, so we needed to stay focused on the ones that were the most important. One of the things we didn't put much focus on at the time was configurability. I think we all missed, to begin with, the real importance, for a centralized platform team, of being able to tweak a whole bunch of options about your requests — timeouts and a bunch of other details. Since I became the tech lead, one of the focuses for me has been really opening up and finding sustainable ways to let people configure all those things about Envoy that you need to configure in a centralized place, which we haven't allowed you to do in the past.
And so we're really hoping to close this out and make adding new Envoy config just business as usual, something that we do all the time in a relatively standard way. That's what this roadmap item means. The last one, and probably one of the most exciting ones for the future of Contour, is Service APIs support. The Service APIs are a project of Kubernetes SIG Network that is about building a new set of APIs for describing services inside the cluster and how you get traffic to them from outside the cluster. It's kind of a replacement, plus more, for both Ingress and the Service of type LoadBalancer that you have in Kubernetes right now. It's a huge project with a huge scope to cover, and if you're interested in this area, I'd certainly encourage you to get involved in that project; there's a lot of work to do. For Contour, the thing that we want to look at doing is making sure that, once the Service APIs are actually released, we're able to take those objects as a source of config in the same way that we can currently take Ingress objects or our own HTTPProxy CRD. The exciting part is that this is eventually going to be the new way that you configure ingress. Ingress v1 is going to stay around for the foreseeable future, but this is going to be the way that you'll get all the nifty bells and whistles that are very hard to do in Ingress v1. So that's most of our roadmap for the end of the year. We've got a few other maintenance things on there; the big one, as Michael said earlier, is the xDS v3 change, since xDS v2 support is being removed from Envoy at the end of the year, so it's really important to finish that one. But those are probably the biggest things until the end of this calendar year. Further out than that, we do have a few things on the list, but the roadmap is pretty open.
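For a flavor of what consuming the Service APIs might look like as a config source, here's a sketch against the early v1alpha1 draft of those APIs. Field names changed as the project evolved (it later became the Gateway API), so treat everything here as illustrative, not as Contour's final supported form:

```yaml
# Early Service APIs (v1alpha1) HTTPRoute sketch; fields and versions are assumptions
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: echo
spec:
  hostnames:
    - example.com              # illustrative hostname
  rules:
    - forwardTo:
        - serviceName: echo    # illustrative backend Service
          port: 80
```

The idea is that Contour would watch objects like this alongside Ingress and HTTPProxy and compile them all into the same Envoy configuration.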
Please come and talk to us about other features that you want. And with that, I'll hand back to Michael to wrap it up. Thank you. Thank you, Nick. So we've taken a lot of time so far talking about what's going on in Contour from a technical standpoint and some of our features. But we want you to come and help us shape what Contour looks like tomorrow. We're a vibrant community, and we have multiple ways that you can engage with us; I'm going to go through some of them right now. We're on the Kubernetes Slack in the #contour channel. We have a mailing list where you can get updates about the project and new releases, and also ask questions. You can interact with us on Twitter at @projectcontour. We have our GitHub repos, where you can come and see all the work that's happening in real time in Contour. And we also have a playlist on YouTube of all the community meetings that we have, so if you want to know about something that happened in the past or missed a meeting, go ahead and view that. Overall, the Contour community is super vibrant, our ecosystem is super engaging, and we're welcoming to new users. But let's go through some of the project statistics: we have 2,500 GitHub stars, and that number is growing; 430 contributors; 400 forks; five maintainers; 55 releases since we started; 130 contributing companies; 731 Slack members; and more — blogs, KubeCon talks, Twitter followers. We want you to be part of this community. Come and help us deliver Contour. We have about 10 minutes left for Q&A, so feel free to ask anything that's near and dear to your heart. Thank you all from the Contour community. Thanks, everybody.