We're due, so I'd like to get started. Thank you for joining us today. Welcome to today's CNCF webinar, Contour: High Performance Ingress Controller for Kubernetes. I'm Chris Short, a principal technical marketing manager at Red Hat and a Cloud Native Ambassador. I'll be moderating today's webinar. A few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee, I'm sorry. However, there is a Q&A box available to you where you can ask questions of Steve or myself. And if you have other questions that you feel like the chat might be able to help with, feel free to drop comments in the chat. Also, this is an official CNCF webinar and is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, be respectful of all your fellow participants and presenters. So with that, we would like to welcome our presenter today, Steve Sloka, senior member of technical staff at VMware. Take it away, Steve.

OK. Hey, thanks, Chris. So good morning, good evening, good afternoon, everyone. Again, I'm Steve Sloka, and today we're going to talk about Contour. Contour is an Ingress controller for Kubernetes. So a quick look at the agenda for today. We're going to review what Ingress is to help level set your background, just so we're on the same playing field. Then we're going to look at Contour: what is it, how does it work, what are the components that make it up? Then we'll dig into this CRD that we wrote called HTTPProxy, and we'll look at why we built it and some of the reasons you might want to use it versus using Ingress today. And then we'll do a bunch of demos. I've got a whole slew of demos we can dig into; I think the majority of today's talk will be live coding on a server, so it should be fun. So let's dig in and figure out what Ingress is, right?
So Ingress itself, when you think of that generically, means incoming, right? I can talk about ingress traffic coming into my network. And this is the same for Kubernetes. Ingress in Kubernetes land is the ability to route traffic from outside the cluster, or from the internet, into your cluster and route it to a service. And Ingress works at L7 of the application stack, so it's able to route requests based on the host header and path combination.

So I like to look at Ingress and then think about, well, why would I use something else, right? Let's look at alternatives real quick and then we'll see why Ingress may be a good fit for what we're trying to do. First, I could use a thing called node ports. And I put node ports and load balancers together because they kind of fit hand in hand. A NodePort-type service in Kubernetes will expose a random port on your cluster, in that 30,000 range by default. Whenever I expose that service on that port, if I hit any node in the cluster, traffic will route to my application. So this works. The problem here is that I've got a whole bunch of random ports now, depending on how many applications I have, and I've got to manage the lifecycle of all those nodes, right? If a node goes unhealthy in my cluster, I've got to take it out of service. Load balancers kind of work hand in hand with this. If you're in an environment that has a cloud provider or something like that, you can create a service of type LoadBalancer. Essentially, behind the scenes, that creates a NodePort service, but it also provisions a load balancer outside of the cluster, and this load balancer will then send traffic to your cluster. And this solves the problem easily. The issue is, how many applications are gonna live in your cluster, right? There's a quantity of those applications.
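As a rough sketch of what was just described, a NodePort service looks something like this (all names here are illustrative, not from the demo):

```yaml
# Hypothetical NodePort service: Kubernetes exposes the app on a
# random high port (30000-32767 by default) on every node.
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: NodePort          # change to LoadBalancer to also provision
                          # an external load balancer via the cloud provider
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

Hitting any node's IP on the assigned node port routes to the service, which is why you end up managing a pile of random ports and node lifecycles.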
There's also a cost, whether you're in an on-premises environment or a cloud environment, some sort of cost associated with this, as well as complexity. Let alone having all of these ports opened up in your cluster, so your security team may not be happy with all of these different entry points. Another thing you could use is host ports. Host ports basically map a pod in your cluster to a port on the node. And this, again, would be another solution. But the problem is you can only have one per node, right? Only one application can map port 80. So basically, unless you're gonna deploy one application per node, there's a lot of overhead in that.

So Ingress gets around all of this by only having a single entry point, while being able to handle multiple applications and multiple paths into your cluster. This diagram here is an overview of how Contour looks. And you can also apply this to basically any Ingress controller, in a sense. Again, we have our traffic coming in from outside the cluster and we're gonna hit some sort of load balancer. That load balancer's job is to send traffic out across the cluster. And again, I only need one of these for our Ingress. Once the traffic gets to, in our case, Envoy for Contour, Envoy is gonna take that request, inspect it, and then route it to the right place within the cluster. And you notice here that Envoy is the data path component in Contour. It's the one handling all of the traffic, all of the network routing for us. Contour in this scenario is just the configuration server for Envoy. Contour's job is to watch the cluster. It's gonna look for services, endpoints, Ingress objects, these HTTPProxy objects, as well as secrets. And when any of those things changes in the cluster, Contour will rebuild its configuration and stream that change down to Envoy. And again, Envoy is gonna handle all of the traffic.
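For comparison, the host-port approach mentioned above might be sketched like this (image and names are illustrative):

```yaml
# Hypothetical pod using hostPort: binds port 80 on the node itself,
# so only one such pod can run per node on that port.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: web
    image: nginx          # illustrative image
    ports:
    - containerPort: 80
      hostPort: 80        # maps the node's port 80 straight to this pod
```

This is why it doesn't scale past roughly one app per node per port.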
And again, we said this is an L7 proxy, layer seven. So that means that in this example, xyz.com is my host and /blog is my path. Envoy is able to inspect that layer of the stack and make decisions on routing. Oh, and as I mentioned, this could work for any Ingress controller. If you replace this box here, you could probably swap in any kind of different Ingress controller for Contour, and it should work in a similar vein, depending on the different components. But again, the concept is the same in terms of how Ingress controllers are implemented.

Here's another look at the components of Contour. Again, we said that Contour is the configuration server and it uses Envoy as the data plane. We're deploying this to Kubernetes, so Kubernetes is the underlying infrastructure we're gonna deploy this to. And we also expose Prometheus metrics in Envoy as well as Contour. You can use Grafana or any other tool that can consume Prometheus, depending on your environment, so feel free to do that. But again, this is just giving you the overview of all the components that make up Contour.

Let's dig into some more highlights of Contour. Again, Contour is an Ingress controller. We're using Envoy as the data plane component. Contour can dynamically update its configuration without dropping connections. So whenever those services change, or endpoints change, or your users push configuration changes, all that happens in real time, or near real time. Those changes are pushed down to Envoy and connections aren't dropped, right? There's no reload, there's no issue with receiving that update. Contour can safely support Ingress in multi-team clusters. We do this through our CRD called HTTPProxy, and we'll dig into what that looks and feels like here in a little bit. That multi-team pattern is enabled through this thing called delegation, so we can delegate a path and header combination down to namespaces.
And by carving off these different pieces of the path, we're able to implement this multi-team or multi-user cluster in a safe way. In addition to having a CRD, we can now place things that are typically annotations in Ingress today, things like service weights, load balancing strategies, prefix rewrite rules, all those things that you've got to deal with today that maybe aren't described in the Ingress spec. We can describe them in a proper place within our CRD, so you can do that without having annotations all over your objects. We're also able to support multiple upstreams, right? So if you ever wanted to route traffic to multiple services within your cluster, this is helpful for doing things like blue-green deployments or maybe a canary deployment. You can implement those in a simpler pattern by being able to have multiple services to route traffic to.

Another cool thing is we can do this thing called TLS certificate delegation. In Ingress today, your secret has to live in the same namespace as your Ingress object, right? And if you have that secret copied across multiple namespaces, one, that can be a pain to manage and deal with getting deployed out. Two, if you've got to rotate it, you've got to update it in multiple places. And three, your users have access to those private keys, which may not be what you'd want. So with Contour, we can tuck our secrets away in an admin namespace and then use delegation to pass that permission off to other teams, right? Basically, they can still use the certificates as if they were in their namespace, but they don't have to have access to them. It's a good separation of concerns.

So why Envoy? Why did we choose Envoy when we started building Contour? Well, it's dynamic configuration via an API. We mentioned this as one of the first wins of Contour. Envoy exposes a gRPC endpoint, and it's their xDS protocol.
So basically, Contour is the xDS server for Envoy, and we're gonna stream those changes down through that gRPC connection. We've got this rich connection that we can push configuration changes down, and again, not have to reload Envoy for it to accept them. Envoy provides first-class support for HTTP/2 as well as gRPC. So again, looking to the future and what our users are looking to consume, these two protocols are first-class citizens of Envoy. And it's battle tested in production. Envoy came out of the engineers at Lyft, and they use it all over the place, and it has become a well-known component in lots of architectures today.

So I'm gonna walk through a situation and we're gonna look at what Ingress looks like today. I have two Ingress documents right here, one on the left and one on the right, and you'll see one is for team A and one is for team B. So I'm splitting across two different namespaces here. Each team has created their own object, and they both have defined the host projectcontour.io, and they both define the same path, /blog. But because they're in different namespaces, they're able to set different services to route traffic to. So in this case, team A is routing to wordpress-blog and team B is routing to service-new. Well, the question is, when we go to process these documents together, what happens, right? What is the result of this merge, because we basically have conflicting descriptions of what this should look like? Really, it's undefined, right? And this could be dangerous. Depending on how your controller processes these, it could be the last one in wins, the first one in wins, or maybe none of them win and your users just get a 404 error. So this is one of the things we looked to solve with Contour.
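The conflicting pair described above might look roughly like this (written in today's networking.k8s.io/v1 syntax rather than the older API of the time; names are taken from the talk, ports are assumptions):

```yaml
# Team A's Ingress in namespace team-a
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  namespace: team-a
spec:
  rules:
  - host: projectcontour.io
    http:
      paths:
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: wordpress-blog
            port:
              number: 80
---
# Team B's Ingress in namespace team-b: same host, same path,
# different backend. Which one wins is undefined.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
  namespace: team-b
spec:
  rules:
  - host: projectcontour.io
    http:
      paths:
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: service-new
            port:
              number: 80
```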
We wanted to come up with a way that we could allow users to still self-manage their own Ingress objects in their namespace, but not have to deal with the sort of issues that can come up. And this is a common scenario: maybe this is your production application here, and then you go and deploy a new application, the new version of it, and someone just accidentally deploys the same Ingress object, and then you've taken down your production. So we'll look here at how we can solve that with Contour.

Contour can do that with this thing called HTTPProxy. If you're familiar with Contour in the past, we used to have a CRD called IngressRoute. IngressRoute supported a path prefix, and our new proxy CRD supports a path prefix as well as headers and additional information. Our users over the last year gave us a lot of feedback as to what they wanted us to do with that, and we can do a lot more now, which is why we have the new proxy CRD. The same goals are still there, though: being able to support multi-team clusters and finding good homes for all these parameters.

So let's look at how this delegation can work. Generically, from a high level, this works on the idea that someone owns a domain name. In my case here, it's projectcontour.io, and this is defined in what we sometimes call a root proxy, where there's this top-level root item. So it has ownership over this domain. Another team comes along and says, hey, I want to deploy my blog site. So this is /blog, and we want to give them authority to do that. They'll create a proxy and define it to be /blog, and we'll implement this with an include, and I'll show you how this works with real code here in a second. This root can then say, hey, this child object has authority to manage /blog. So in the case that someone else comes along and they say, you know what?
I'm the dev site, I'm gonna go create a path of /blog. What happens is Contour will throw this out, right? Because this proxy here does not have authority to manage that path, /blog, because this one already has it. And you can apply this to paths as well as headers and other things. So if you deploy this out over a large cluster, you're able to safely let users self-manage themselves, again, without having to deal with the overhead of the fear of breaking your routes. So we can take a quick pause here in case there are any questions. That was all the slides that I had. Next I'd like to just dig through a bunch of code and actually deploy some of this and look at it, if that works.

No questions yet, Steve, but I suspect as we dive in, there might be more.

Okay, sounds good. So what I'm gonna start with is just a simple hello world, right? Everyone starts with a hello world app, so this is no different. We're gonna deploy a simple proxy here that basically just says anything on slash is gonna route to an application. What I have here is some default apps that I wrote. I built this little simple echo server, and what the server does is it spits out some text that I've defined. Again, this is just to help us understand which application is getting the traffic. And it spits out some other information about the headers as well as the path, right? Again, just to help us debug and see what's going on. So let's go ahead and deploy that. We'll apply this 01 app. That'll create us some services and some deployments. And then we'll also deploy the proxy. Great. And let's dig into this proxy real quick and see what it looks like. You'll see here I have my version. Again, this is our CRD, so this is projectcontour.io/v1, and the kind is HTTPProxy. We've got a name and namespace. But here's where the fun stuff happens. Here we define this fully qualified domain name.
So this is the incoming host that Envoy is gonna look for. In my case, it's demo.projectcontour.io. And then I'm defining this TLS struct. Here I'm saying I wanna terminate TLS, and I'm gonna use this certificate, right? So if I go and look at my secrets, you'll see I have a corresponding secret that's called tls-wild. And this is using Let's Encrypt through cert-manager from the Jetstack folks. So this went out and got me a wildcard certificate, and I'm gonna just attach that and use it. Now that we're gonna terminate TLS, we'll look at the routes on here. Here we have a set of conditions, and conditions in a proxy tell you what conditions need to be met for this route to match. In this case, I have one that's just slash, right? So this is the generic default route: slash is gonna route to this root app application. So we can verify that we have our proxy in here. And we do; I got all my proxies and it's healthy and it's valid.

So let's go ahead and curl that and we'll see how this looks. Let's make this a little smaller here. If I do a curl on demo.projectcontour.io, you'll see this application returns. This is our default app site here, and it's slash. And here you can see the headers, and we'll look at that here in a second. Now, one of the cool things that you might or might not have noticed is that here I inherently said HTTPS. Because we defined this TLS secret here in Contour, Contour is gonna be secure by default. Other times you might have had to annotate your object to say, hey, do a 301 redirect from the insecure port to the secure port. But Contour will do that for you automatically, because it sees we have TLS here. So here, if I do a curl on the insecure port, what you'll see is Envoy will return a 301 redirect. And this is saying, hey, make sure you go to the TLS version. There are ways around this.
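The hello-world proxy being walked through would look roughly like this (the service name and namespace are assumptions; the fqdn and secret match the demo):

```yaml
# Sketch of the demo's root HTTPProxy: terminate TLS with the
# wildcard cert, route everything on / to the default app.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root            # illustrative name
  namespace: default    # illustrative namespace
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
    tls:
      secretName: tls-wild   # the cert-manager wildcard certificate
  routes:
  - conditions:
    - prefix: /
    services:
    - name: rootapp          # illustrative service name
      port: 80
```

With `kubectl get httpproxy` you can check the status column Contour writes back (valid, orphaned, and so on).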
If you're interested in using the insecure port, you can say permitInsecure: true. And if I go ahead and apply this one, now when I curl the insecure port, it'll work, because we're overriding that default secure behavior. So we'll take that off and then we'll chug along with some more things.

All right, so let's go ahead and add a second route here. Right now we have this route one. I have another service here in my namespace and it's called secure app. Let's do a little demo with that. If I go ahead and add a prefix of /secure, what's gonna happen now is I have two different routes. I have a route for slash, which is gonna send traffic to the root app, and I have a route for /secure, which is going to the secure app. So if we go ahead and apply that, and we do a curl again on the root, you'll see, whoops, need the TLS version, we get the default app, which is what we wanted. I do it on /secure, you'll see now I get the secure site, right? So that's cool.

Let's make this a little more interesting. I mentioned before that conditions are all the requirements to make this route match, but I can do more now with Contour than just the prefixing from previously. What I can do is add a header to this, and I can say the header has a name of x-header, and we'll say it contains the value abc, right? So now we're gonna route on the existence of /secure in the path, as well as this header on the request. So if I go ahead and apply, and if I do the same curl that I just did a minute ago, if we curl for /secure, what happens is I get the default site, right? And this is true because we've required that the /secure route have the header existing on the request, and it's not there. So this other route is matching that condition, right? Because it's matching slash, which is true.
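At this point in the demo, the routes section would look roughly like this (service names are assumptions):

```yaml
# Sketch of the two routes plus the header condition just described.
  routes:
  - conditions:
    - prefix: /
    # permitInsecure: true    # uncomment to allow plain HTTP on this route
    services:
    - name: rootapp
      port: 80
  - conditions:
    - prefix: /secure
    - header:
        name: x-header
        contains: abc        # route only matches when this header is present
    services:
    - name: secureapp
      port: 80
```

A request to /secure without the header falls through to the / route, which is why the default site comes back.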
But if I do the same request and I add a header of x-header: abc, what you'll see is the request goes to the secure app, which is what we've defined here. And down here in the request, if you see here at the bottom, we have the x-header of abc, right? So again, these all get appended together, and they all have to match for the proper request to work. Something else we can do, maybe a more realistic thing, would be a user agent. So maybe I wanna say the user agent is Chrome, right? Let's take this one off, maybe. So if I go ahead and apply this one again and I come over to Chrome, what I see is if I go to demo, I get the default site, and if I hit /secure, I get the app, right? Because I'm actually in Chrome. But if we switch browsers and go to Firefox, you see where I'm going, I get the default site, right? I got the default one here because I'm asking for only Chrome in the user agent.

Headers are neat. I can also do the reverse, so I can say notcontains. So I could say, hey, I don't want Chrome and I don't want Firefox, right? If I apply that one now, and we hop back to Firefox, you'll see I get the default site, because we don't want Firefox. If I switch to Chrome, you'll see that you get the default one too. But if we come over here to Safari, you'll see that it works, right? Because again, this is not matching Chrome or Firefox; it's matching Safari. So that's how conditions can work and function in your app.

So let's take this one step further and look at delegation. I'm just gonna take these off for now and we'll have just the paths. What I wanna do is go deploy another app, right? I have a marketing team, and again, the scenario we talked about in the slides where we wanted to have the marketing team have /blog. So what I can do is go ahead and deploy a marketing app. If I do a kubectl apply, we'll do marketing 01.
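The negated match just demoed might be sketched like this (again, service names are illustrative):

```yaml
# Sketch of notcontains: route only when the User-Agent is
# neither Chrome nor Firefox (so Safari matches).
  - conditions:
    - prefix: /secure
    - header:
        name: user-agent
        notcontains: Chrome
    - header:
        name: user-agent
        notcontains: Firefox
    services:
    - name: secureapp
      port: 80
```

All conditions in the list are ANDed together, which is why both browsers have to be excluded explicitly.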
So we'll create a namespace for our team and then we'll deploy some apps. Again, I have a couple of sample apps that are gonna help us deploy this thing. All right, cool. So now if I get my pods in the namespace marketing, you see I've got an application there. And I also should have a proxy, or two proxies, right? I still have the root one, which we deployed just a second ago. And now I have this new one in the marketing namespace, and it's called blog site. And the status right now is orphaned, right? Because we haven't defined a linkage between the root one and this other proxy. We haven't linked them together. Let's go ahead and do that.

So what I'll do is I'll go ahead and bring up my root proxy, which is here. And then this one is my child one. Let's put these together; maybe this will help to see them. Again, this is my root one here on the left, and this is the child one here living in the marketing namespace. What I wanna do is go ahead and add this thing called an include. And includes in Contour work just like they would in a programming language. If I say, hey, I wanna include a header file, what happens is before it's compiled, that header file gets dumped in at the top of the code, and then the whole thing is compiled. The same thing happens here, right? So if I have an include on an HTTPProxy, you'll see that I can add conditions onto this one before it's processed. Let's explain this. From the root one, I'm gonna define an include of a proxy called blog site, which is in the namespace marketing. And then I'm gonna give it some conditions. By doing this, I'm gonna link these two together; I'm gonna add these conditions to here. So let's go ahead and apply our root proxy, right? And then if we go ahead and get our proxies again, you'll see that now they should both be valid.
And they're valid because now we have a proper linkage, or a proper delegation, from this root one over here to my child, which is in the marketing namespace. And the result of what's gonna happen here is as if, on this one, we had defined a set of conditions like this, right? When Contour goes to process these, these conditions will exist here automatically, but we don't have to define them there, right? Because we're passing down these permissions automatically. Let's go ahead and test this out. So let's go back to here and we'll check out our root. Our root still works. Now if I hit /blog, what I'm expecting to see is the blog site. Here we go, I've got that. But the cool part is that, for one, this child proxy doesn't have to deal with TLS. We've done that off in the root site, in a different namespace. And right now they don't even have to deal with the prefix, because that's been passed off to them. They have authority to manage /blog. Again, if someone else made another path prefix of /blog, Contour would throw that out, because it doesn't have permissions. Cool.

So now we can dig deeper, right? We added a header before, and the same thing happens here with conditions on includes. The sum of all those conditions gets passed off to your child. So if I add a header condition here, and I say the name is user-agent and it contains Chrome, do Chrome again. The result here is that I have /blog as the prefix, and now the condition is I have to have a user agent of Chrome, right? Because all of the conditions, as a sum, are passed off to this other proxy. So we'll go ahead and apply this root one again. Applied, good. So now in Chrome, this should still work. And it does; we get the blog site here. But if we hop over to Firefox and hit this path, you'll see now I get the default site. Again, because our child /blog requires that we have both the path /blog as well as the user agent Chrome.
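The delegation pair being demoed would look roughly like this (names and namespaces are partly assumptions; the include mechanics are the point):

```yaml
# Sketch of the root proxy: it owns the vhost and TLS, and
# delegates /blog (plus the header condition) to the child.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: default        # illustrative
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
    tls:
      secretName: tls-wild
  includes:
  - name: blogsite          # the child proxy's name
    namespace: marketing
    conditions:
    - prefix: /blog
    - header:
        name: user-agent
        contains: Chrome
---
# Sketch of the child: no virtualhost, no TLS, no /blog prefix.
# It inherits all of those from the include above.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: blogsite
  namespace: marketing
spec:
  routes:
  - services:
    - name: blog            # illustrative service name
      port: 80
```

If no root includes a child, that child shows up as orphaned in the proxy status, which is exactly what the demo showed before the include was added.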
And that doesn't match, so that's why it's not working. So you can see where this is kind of powerful. If you had to have a bunch of things exist on a request, you can define them up here, and again, the sum of that rolls down. Cool.

So another demo we can do real quick: the marketing team wants to deploy an info site, right? Within their own namespace. Let's do a self-delegation; let's have them delegate to themselves and see how this functions. What I have here is two more apps; I've got these info sites. So if we go ahead and apply marketing 03, we have the info deployment as well as the info proxy. So now, again, if we look at our proxies, I should have three of them. The blog site's valid, the root's valid, but this new one here, this info one, is orphaned. Again, because we haven't done any delegations to it. Let's go wire that one up and see how this looks.

So let's add an include, and now we're gonna add an include to our child, right? If we add this include here, it acts as adding additional conditions to any of its children. So the root owns projectcontour.io and it passes off /blog to this proxy here. Now this proxy is gonna add additional conditions and pass it off to another child, right? It has another child of its own, and that's the /info one, and this should all wire up. So let's go ahead and save this child one here in marketing; we'll apply marketing 02 proxy. Great. So now if we go ahead and get our proxies here, we'll see they're all valid, again, because we've tied them all together. But what happens now is that this third child, the /info one, gets everything appended together. So the result should be /blog/info as well as the user agent of Chrome, right? Because all these conditions are gonna get applied down the chain because of the includes. Let's go validate that this works.
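The self-delegation step might be sketched like this (proxy and service names are assumptions):

```yaml
# Sketch of the child delegating further: the marketing blogsite
# proxy includes its own child for /info, so the grandchild ends
# up matching /blog/info plus the inherited header condition.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: blogsite
  namespace: marketing
spec:
  includes:
  - name: infosite          # illustrative grandchild proxy
    namespace: marketing    # self-delegation within the same namespace
    conditions:
    - prefix: /info         # appended to the /blog it inherited
  routes:
  - services:
    - name: blog
      port: 80
```

Conditions accumulate down the include chain, which is why the grandchild effectively matches /blog/info without ever stating it.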
So if we go over to Chrome, /blog should still be the blog site. If I hit /blog/info, what you'll see here now is I get the info site, which is what we wanted, right? But if I paste this into Chrome, I'm sorry, Firefox, this should be the default site, right? And it is, because again of our conditions. But you can see here how this is kind of powerful, right? I can carve off pieces of my path, pieces of headers, whatever I'd like, whatever I need to do, and users don't have to deal with all of that logic; they can just focus on managing their own infrastructure within their own namespace. Cool. So that is delegation.

Let's go ahead and do a quick blue-green and canary deployment, and we'll poke through that real quick. What I'll do is go deploy two more things. Here I've got a blue deployment and a green deployment, and they basically just spit out "this is the green site" or "this is the blue site". Again, that's just to help us visualize this a little easier. So let's go ahead and apply our marketing 04 green as well as 04 blue. All right, cool. So now if I get my services in marketing, we'll see I've got four of them now; I've got a blue and a green. All right, so let's go ahead and update the /blog here and we'll change this one to blue. Right, so now when we hit /blog, I'd expect to see the blue deployment respond. We'll go ahead and apply our proxy. Right, so we deployed that child there, and just to verify this, we'll go ahead and refresh /blog. You see, we get the blue site.

Let's go ahead and look at doing a canary deployment. A canary deployment works where you have two services: you've got your current version and your new version. And what you wanna do is slowly move traffic over to the new version, or the green version. So your production is blue and your new version is green.
And we're gonna slowly turn the knobs and send traffic from one version to the next, but do it in a paced environment. This can be done easily with Contour now, because we can do multiple upstreams. So what I can do is copy this and make this one green, right? So I can have multiple services on an Ingress resource. Now what I can also do is add a weight. So I can say, hey, blue gets a hundred percent, or a hundred points, and the green one gets zero. So if we go ahead and apply this, let's do this real quick. Let's do a while loop on this, here we go. So every half a second, we'll curl that site, right? This will help us see it.

So you can see here, oh, here's the thing: I'm getting the default site because my root is requiring the user agent of Chrome, and I don't have that because I'm using curl. So let's go ahead and change that real quick and take that requirement out. Actually, we can just do it here and just say curl. Now we should get blue. There we go. Okay, so what I wanna see now is, excuse me, I have, again, two upstreams: one blue service and one green service. Right now I have a hundred percent of the traffic going to blue, and you can see I'm getting the blue site over and over. If I want, I can switch this around and say, let's send 90 here and 10 here, right? So now if we apply this one, what I should get is mostly blue, but I'll get some green every now and again. There it is. Cool. So what we can do now is watch our metrics, watch our performance, see how this is performing. If we want, we can go 50-50 and split the traffic evenly. We can do this as fast or slow as you'd like. Now we're getting blue and green, pretty much 50-50. Once we're happy with that, we'll switch to 10-90. Apply that one, and we should get mostly the green site now. Cool, mostly the green site; every now and again we should get a blue. There we go. And once we're happy, we can shift it all.
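One step of the weighted canary being demoed might look like this (service names are illustrative):

```yaml
# Sketch of a weighted canary step on the child proxy:
# 90% of /blog traffic to blue, 10% to green.
  routes:
  - services:
    - name: blue
      port: 80
      weight: 90
    - name: green
      port: 80
      weight: 10
```

The watch loop from the demo is roughly `while true; do curl -s https://demo.projectcontour.io/blog; sleep 0.5; done`; reapplying the proxy with new weights shifts the blue/green mix in the output without dropping connections.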
So we'll make this zero and make this 100. We'll apply that, and now we've done a successful canary deployment from blue to green, right? So that's canary. A blue-green deployment works in the same vein, but instead of shifting traffic slowly, we're gonna shift traffic all at once, right? And this may be because you have, maybe, a requirement on a database that can't handle both versions, or maybe your application just can't live with having two versions at the same time, which is a totally reasonable thing. We can do it a couple of ways. We can solve it with weights, right? Just like this: right now I could switch back to blue by making this one 100 and making this one zero. So now I've given blue 100% and given green zero. If I apply this, we'll see an instant switch to blue. That's one way to do it. Another way we could do it here is to just switch the service. So right now we're at blue; I could just make this green, right? And if I apply this one, we'll do an instant switch from blue to green. And the last thing that we could do would be to use delegation, right? So right now I have an include sending traffic here. I could create a whole new set of proxies and then swap those delegation chains, and that would be an easy way to do it as well. The nice thing about that one is that you're not gonna be breaking any existing infrastructure, right? You leave everything kind of sitting as is. You swap the proxies, and if you see an issue, you can roll back seamlessly.

Cool, so I think that is all the demo that I had. Which is a lot, so thanks to everyone for sticking around. So after all that, I just wanna give you a quick overview of the roadmap of Contour. Contour is gonna hit 1.0 here in early November. Tomorrow we're gonna ship an RC1, and in two weeks we'll ship an RC2. And then come hang out with us at KubeCon. Much of the team will be there; I'm speaking at EnvoyCon about some of these things, and we'll be all over the place.
So come hang out with us and chat. Very interested to meet all of you and see your use cases. And then here's how, if you wanna help, to get started in Contour. You can check us out at projectcontour.io. We have a Slack channel in the Kubernetes Slack, that's #contour. You can follow us on Twitter. We have community meetings every third Tuesday, so if you wanna hop on a call, we can chat about anything that's going on or anything that we're doing. And we also have a bunch of issues labeled with good first issue and help wanted. So if you wanna help contribute, happy to help you jump in and help out. That's all I have. So if there's any questions, we can go work on those. Wow, great presentation, Steve. Thank you so much. So we do have quite a few questions. Let's see. So there were questions in chat, and I tried to answer them, but basically: can Contour work across multi-zone clusters, multi-region clusters? Could it work across federated clusters too, I'm assuming? Yeah, so Contour is meant to work within a single cluster. So Contour itself is just an Ingress controller for one cluster. So its boundary, I think, is gonna be that one cluster. If you had to do a multi-cluster type thing, there's something that we have called Gimbal, which aims to solve that. So the idea there is you can have multiple clusters, and this could be Kubernetes or OpenStack or anything else, and its goal is to go scrape those clusters and then basically turn a cluster into a routing cluster. And you use the power of Contour here, with delegation and everything, to then send traffic to multiple clusters. But any of those multi-cloud, multi-cluster things needs a conversation, I think, to chat about how you're gonna implement it and how it's gonna look and work. Yes. But inherently, Contour itself out of the box is only a single-cluster solution. Cool, all right, thank you. Does Contour support TCP traffic? It does, yeah, so you can expose an L4 service.
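As a rough sketch of that L4 support (assuming the HTTPProxy tcpproxy field; the fqdn, secret, and service names are illustrative), Contour matches the connection by SNI, terminates TLS, and then forwards the raw TCP stream instead of routing HTTP:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: tcp-echo
  namespace: default
spec:
  virtualhost:
    fqdn: tcp.example.com        # matched via SNI on the TLS handshake
    tls:
      secretName: tcp-echo-cert  # TLS is required for the SNI match
  tcpproxy:                      # L4 forwarding instead of HTTP routes
    services:
      - name: tcp-echo
        port: 9000
```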
We do that over SNI, so you still have to have it be a TLS endpoint, but we can route TCP on there through SNI, yes. Cool. Let's see, does Contour address the use case of developers controlling or defining the FQDN, effectively managing DNS, or does it only work downstream? So Contour doesn't reach backwards. So right now it's assuming, like I had in my example, I was using projectcontour.io. I had done DNS and set up a load balancer and done all that work behind the scenes. So Contour doesn't go out and programmatically set up DNS for you. That's something that's on you to get set up today. Right, got it, cool. Okay, talk about secret management with regards to Contour. How are they managed differently than, like, normal Kubernetes secrets? What's the process there? Sure, so secrets. So you noticed I had that root proxies namespace, and I just mentioned this, but one thing you can do is put all your secrets in a set of namespaces and then tell Contour that those root namespaces are the only places where root proxies can live. And the key is that if someone else creates a root proxy outside of one of those, Contour will throw it out, right? So that limits the surface area of where those can live. Within those root proxy namespaces, that's where you can store all of your certificates. So based on that, if you delegate off paths or headers to other namespaces, now all the secrets live in one namespace, but your users can consume them from other namespaces. That's the one generic scenario. The other way you can do it is through a thing called TLS certificate delegation. So what I can do is, we have another CRD called a TLSCertificateDelegation. And what it does is you can say, I have a certificate here in this namespace, and I wanna go delegate permission to another namespace.
So Contour, so it's just a software-based delegation, and from the other namespace, the team namespace, you're able to say, hey, I'm gonna reference a secret which doesn't physically exist in that namespace, but you can still use it because of the linkage you've set up with that delegation chain. So Contour will make it all happen behind the scenes for you. So basically now your users can still reference secrets where they want to, but the actual secret will live in a different namespace from the user. That's awesome. Yeah, it makes sense to me. There might be follow-up questions that are coming in after that explanation, but we'll try to get to them. What authentication schemes does Contour support: basic, digest, OAuth? I'm assuming that's kind of not inside Contour's purview, or maybe it is. So there's an open issue to talk about OAuth. It's come up a lot, where folks wanna have Contour help out with that. Today there isn't a story around OAuth or Contour managing that for you, but there is an open issue, which I think Jonas could maybe help us find to go comment on. So again, we're very feature-driven and feedback-driven, so whatever users are wanting, let us know, open issues. That's the best way to get feedback to us, but today there isn't any OAuth story in Contour. Cool, good to know. Regarding weight distribution in canary, what happens if the total sum of weights is not 100? Yeah, so I always use 100 because those are easy for me to grok in a demo, but those weights are all arbitrary. You can have it be 111 or 333. We'll just basically take the sum of all the weights and then make that the total. So you can use whatever numbers you'd like. You could use 10,000 and 10,000 and that will work as well. They're all just arbitrary, however you want to manage those percentages or those values. I'm reading one question, it's kind of a long one. Okay, here we go. You'll like this one.
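Going back to TLS certificate delegation for a moment, a sketch of the pair of resources involved might look like this (the namespaces, secret, and resource names are all illustrative):

```yaml
# In the namespace that physically holds the certificate:
apiVersion: projectcontour.io/v1
kind: TLSCertificateDelegation
metadata:
  name: wildcard-delegation
  namespace: root-proxies
spec:
  delegations:
    - secretName: wildcard-cert
      targetNamespaces:
        - team-a                 # namespaces allowed to reference the secret
---
# In the team namespace, reference the secret as <namespace>/<name>:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: app
  namespace: team-a
spec:
  virtualhost:
    fqdn: app.example.com
    tls:
      secretName: root-proxies/wildcard-cert
  routes:
    - services:
        - name: app
          port: 80
```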
If only one classic Ingress controller is deployed in the cluster, we usually get a performance hit if some team is deploying a lot of changes, breaking traffic for other teams, or during reconfiguring, for example. Does Contour address this issue? And I think the answer is yes, but please explain how. Sure, yeah. So with Contour, you shouldn't see an issue with busy clusters because of how we talk to Envoy. Envoy doesn't have to restart or do any kind of reload logic. Envoy will take those changes as fast as we can push them down that gRPC connection. So there's no issue there. There usually is an initial startup time: when you first turn on Contour, it's got to go and scrape the entire cluster and get all of the services and endpoints and all those things. Depending on the size of that, there could be a small hit, but it's never really noticeable. So yeah, you shouldn't really see an issue of lag in terms of configuration changes and seeing them happen in real time. Okay, cool. Oh, how can I change the admin port number for Envoy with Contour? So you can't do it with Contour. Right now we don't expose that port publicly. It's hidden; it's only on localhost from the pod. So no one can get to it unless you can port-forward to that pod. Does Contour support web application firewalls? It does not today. That requires a custom build of Envoy. So you'd have to have a custom build of that. So no, we don't support that today. All right, so the next series of questions, and I will leave this to you, is comparison. How does Contour compare to NGINX or Traefik, or how do you look at Contour in the scope of Istio? You can handle those how you see fit. Sure, so in terms of Istio, Contour is not a service mesh, right? So you can do some service-mesh-type things, and I think depending on what your needs are, sometimes you don't need to deploy a full mesh, but that's a different conversation.
In terms of how Contour compares to NGINX and Traefik and those folks, NGINX has been around like forever, and Traefik's been around a long time as well. So I think there are more knobs under the hood there. I think what sets Contour apart is this proxy CRD, right? I think your users will be happy, you'll be happy. You'll get a lot of gains from having that CRD and not dealing with Ingress today. That said, Contour still supports Ingress as is, the upstream Kubernetes Ingress. So you can still use that. I know all I demoed today was our CRD, but we still support normal Ingress objects as well. So it comes down to whatever you feel like using. I feel like having Envoy in the back end is a big win, again, with those dynamic updates. Envoy itself is a big community, so it's growing and it has lots of patches and security releases, which is good. So whatever fits you, I think, is the best, but yeah. Cool. Does Contour support H2, or HTTP/2 gRPC traffic, and load balance across all the pods of a service? Yeah, so if you annotate the service with those parameters, we can do that HTTP/2 gRPC connection. Then you can also have multiple upstreams in your definition, so it'll send traffic across all of them. What was the last part? To all pods in a service. You covered it. Yeah, yeah, so all pods. So Contour, via Envoy, will route to endpoints, right? So we use the service in Kubernetes just to go discover the endpoints, but Contour, I'm sorry, Envoy itself is doing the routing. So we can actually, we didn't demo it, but you can change the routing algorithm if you like. So right now the default is round robin. You could do least request and different types like that, yeah. So it is routing to endpoints, yes. Awesome. So have you seen many people migrate from NGINX to Contour, and what kind of problems have they had? Yeah, I mean, we have users all the time coming in and saying, hey, I'm using this, this is cool.
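As a sketch of those two knobs together (the annotation key and field names below are what recent Contour versions document, so double-check them against your version; the service names are illustrative), the upstream protocol is set by annotating the Service, and the balancing strategy is set on the route:

```yaml
# Tell Contour/Envoy to speak h2c (cleartext HTTP/2, e.g. gRPC) to port 8080:
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    projectcontour.io/upstream-protocol.h2c: "8080"
spec:
  selector:
    app: grpc-backend
  ports:
    - port: 8080
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: grpc
spec:
  virtualhost:
    fqdn: grpc.example.com
  routes:
    - services:
        - name: grpc-backend
          port: 8080
      loadBalancerPolicy:
        strategy: WeightedLeastRequest   # instead of the default round robin
```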
Problems come up, I'm trying to think. We had a few bugs recently, just because of the road to 1.0. We switched to a whole new CRD, so we've got a couple of things there, but all in all, I think those are getting ironed out. We have some users that have different scaling issues. Again, it's always the unique environments that you don't plan for. The 80-20 rule, right? Yeah, so those things come up, you know, or people just want new features, like, hey, it'd be cool if I could do this thing or have this feature. And then let's chat about that and see how we can best implement it. So yeah, I think it's been growing. It's a growing community, and it's been exciting to see users using it more and more. That's awesome. Oh, someone just came in with a question. Sorry, let me answer. Long time, okay, let me preface this with the end of this question. Long-time Contour user from pre-0.9 days: why the rename of the IngressRoute CRD to HTTPProxy? Sure. So, Jonas, if you can help me out with this one too: Dave Cheney, he's our tech lead here on Contour, he wrote a blog post on this, which should help explain it. But the short answer is, one, IngressRoute is confusing because there's the Kubernetes Ingress object, there's the generic term Ingress, and then there's this thing called IngressRoute that we created. So when you say Ingress, it's like, what are you talking about? The object, our object, the generic term? And then HTTPProxy aligns more with what we're doing. You know, we're an HTTP L7 proxy. It is confusing with the L4 component of that, but at its heart and soul, I think Contour is an Ingress controller. So check out that blog post, and that should hopefully help answer some questions. If not, come find us on the Slack and we can chat about it some more. Nice. Last question: about Jaeger tracing enabled via Contour.
I would think that you can answer that, but I think those are two separate issues. Yeah, I'm not sure if we've ever set up tracing with anything. We'd have to dig into that. So that'd be a good issue; if we want to open that up, we can investigate it. Yeah, definitely do that, right? Like, definitely post that issue asking that question on the GitHub page, or in the Slack channel potentially, and you could spark a discussion for sure. Absolutely. So you're gonna be at KubeCon, right? VMware, we've got a big booth, big presence. Steve, you're gonna be there. Jonas is also, I'm assuming, gonna be there, or maybe, I think he's in the... Yes, I will. Awesome, thank you. He's in the chat dropping answers for everybody. So thank you so much, Jonas. So we look forward to seeing everybody there at KubeCon; I'll be there as well. Stop by the Slack channel for sure, and the GitHub page is github.com/projectcontour, right? Yes. Okay, cool. And yeah, Steve, anything else before we go? No, that's it. Thanks, everyone, for your time, and again, reach out, open issues, and chat with us. Happy to chat about anything. So thanks again for all your time. Yeah, thank you, everyone, for joining us today. We look forward to seeing you at a future CNCF webinar. Have a good one.