All right, thanks everyone for coming. Quick introductions: my name is Alice, and I am an Emissary-ingress maintainer. I'm Flynn. I'm also an Emissary-ingress maintainer, but I don't work for Ambassador Labs anymore; I'm now at Buoyant, just in case anybody was wondering why I'm wearing the Linkerd t-shirt.

Cool, so just to get started, I'm going to set the agenda. We're basically going to go over an intro to Emissary-ingress, a recap of what we've worked on, get into a bit about how you configure Emissary-ingress and the benefits of the way we chose to configure it, talk a little bit about the 3.x major version we released since the last KubeCon, talk a bit about the path to the next version of the CRDs, and then lastly mention Envoy Gateway: what's been going on with it and what our plans for the future are.

So to get started: what is Emissary-ingress? Well, it is an API gateway. You've got your back-end services sitting in your Kubernetes cluster, you've got your users sitting outside the cluster, and you've got Emissary, which is the API gateway, the front door into the cluster. All the traffic coming from outside into the back-end services goes through Emissary, so it is the primary thing your end users are going to be communicating with. Emissary basically consolidates all of the things you want an API gateway to do into a single control point.

A bit more detail on what it looks like in the cluster. As background, Emissary focuses primarily on providing a self-service, developer-centric experience for controlling the gateway. Emissary is powered by Envoy proxy: Emissary sits in your cluster, listening and watching for resource updates, and then spins up and configures an Envoy proxy. Outside traffic is actually talking to that Envoy proxy, which is managed and configured by Emissary, and that's what gets the traffic to your back-end services.

Like I mentioned, Emissary is an API gateway, but it is not just a proxy. Emissary-ingress also does authentication, as well as lots of other useful traffic-control patterns. So let's say we've got two users here, Jane and Mark, and they both want to access a service. You can inspect things that are more than just the path, things like the HTTP headers, all sorts of different things, and say: Jane is going to go to this service, and Mark is going to go to this other service.

We also support extensions for authentication. Let's say Jane wants to update something on the application, and Mark does too. We're going to talk to that authentication extension first, and we're going to say: okay, Jane is allowed to perform that update, but Mark gets banished to the shadow realm.

Beyond authentication, we've also got extensions for observability, things like tracing, plus rate limiting and resilience patterns. A lot of this overlaps with service meshes, but that is okay: each one performs a very different job, depending on those north-south and east-west traffic patterns.

A little more info about those specific features of Emissary. On the resilience side, we've got things like timeouts, circuit breaking, and retries, and there are also extension points for authentication, with Envoy's ext_authz protocol, and for rate limiting, with the RLS protocol. On observability, like I mentioned, we've got support for distributed tracing, and we've got tons of great metrics provided by Envoy, so you can do things like setting up a Grafana dashboard and monitoring those metrics to see what's going on with your services.
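The slides aren't captured in this transcript, so here's a minimal sketch of what header-based routing plus the resilience settings mentioned above look like in a Mapping. The header name, its value, and the service names are hypothetical, not from the talk:

apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-for-beta-users
spec:
  hostname: "*"
  prefix: /quote/
  # Only requests carrying this (hypothetical) header match this Mapping;
  # everyone else falls through to a different Mapping for the same prefix.
  headers:
    x-user-group: beta
  service: quote-v2
  # Resilience settings from the talk: timeouts and retries.
  timeout_ms: 4000
  retry_policy:
    retry_on: "5xx"
    num_retries: 3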
Like I said, Emissary-ingress is powered by Envoy; that is the core of what it does and how it gets traffic to your back-end services. You may have heard about this thing called Edge Stack. That's not the focus of this talk, but just so you're aware of the relationship: Edge Stack is basically Emissary-ingress with some additional features built on top of it. Anything you hear that Edge Stack might be capable of, Emissary is more than capable of on its own; you just might have to do a little bit of legwork to build those things out yourself.

So, going into the recap: over the last two years, what has been going on with the project? In 2021 we were previously called Ambassador API Gateway; then we donated the source code to the CNCF and renamed the project to Emissary-ingress. We also had the major version 2.0 in 2021 and major version 3.0 in 2022. We've introduced new CRD versions; if you've been a long-time user of the product you'll be familiar with these, we've got a bunch of different ones, and the latest is v3alpha1. We've made improvements to our integrations with service meshes like Linkerd, Istio, and Consul. Most recently, with that 3.0 major version upgrade, we've got support for HTTP/3 for downstream clients. In the future we'll probably support it for upstream connections as well, but right now it is just downstream. We've had a ton of releases, a ton of commits, and we really appreciate everyone who's gotten involved, tried out the product, given us feedback, and contributed.

For 3.0, the major point was bringing Envoy up to date. We were running a much outdated version of Envoy, 1.17; we brought that all the way up to 1.23, and mostly that is bug fixes and security and stability improvements. We also had to drop support for Envoy's v2 xDS API; Envoy made that choice, so we were kind of forced into it, but it should be mostly transparent to users. Beyond that, we upgraded a bunch of our dependencies: we're now on the latest version of Go, we upgraded our Python, and a bunch of other parts of the container. Like I said, we've got that new support in the 3.0.x series for HTTP/3 for downstream clients, and one major contributor I wanted to call out was Paula Celebria, who added support for custom tags in tracing services. I'm going to hand it over to Flynn, and he's going to talk a bit about configuring Emissary-ingress.

Thank you. So, Alice mentioned early on developer-centric, self-service configuration. That's been a focus of Emissary-ingress since its start back in 2017. Let's see... there we go. Sorry. All right.

This is an example of some of the things you can do when configuring Emissary. This is actually a complete Emissary configuration: it has Emissary listening for TLS traffic on port 8443, doing TLS assuming you are using the hostname foo.example.com, and then routing traffic with the /quote/ path prefix over to a quote service. If you were to look under the hood, you would find that this generates an Envoy config that is at least hundreds, if not a thousand, lines long. You probably do not want to be messing with the Envoy config by hand.
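Again, the slide itself isn't in the transcript; this is a minimal sketch of the three CRDs Flynn describes, assuming a TLS secret named foo-example-tls:

apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: https-listener
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: foo-host
spec:
  hostname: foo.example.com
  tlsSecret:
    name: foo-example-tls   # assumed secret name
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-mapping
spec:
  hostname: foo.example.com
  prefix: /quote/
  service: quote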
The Emissary configuration here is much more palatable. You can also do this with the Ingress resource, and that Ingress resource is roughly equivalent. There are a couple of things that are minorly different, but those two, the three CRDs on the left and the Ingress resource on the right, are roughly equivalent.

It's a little bit challenging in operations, though, to do this all with a single resource. You end up where, if you have multiple people, for example, if you decide you want to change something about the TLS certificate while somebody else is trying to change the path of a Mapping, you're both trying to edit the same resource at the same time, and things can get complex. That makes it a little bit tricky to do the whole self-service thing. And now this clicker has decided that it's just not going to work. All right, I'll use the keyboard.

One of the things that's nice about this particular configuration language is that if you separate out the Mappings from the other stuff on the infrastructure side, the way we've done, you also get to separate those roles as people work on this in operations. You can do all of this with just one person, where a single person deals with all the Mappings, the Listeners, the Hosts, the authentication service, the rate limit service, everything else. But it's very, very easy to separate this into two roles or more, so that you can have the developers of your application worrying about Mappings, and then have separate people filling the more ops-centric role, worrying about the infrastructure side of things. This has been baked into Emissary literally since the start in 2017. It has been very important for a bunch of the adoption we've gotten; it ended up resonating a lot with users, enough so that it's kind of fallen into being a best practice, at least in our heads.

If you look at computing as a whole, computer engineering as a whole: we know very well how to deal with four or five developers on a team. We kind of know how to deal with two or three dozen. By the time you're into the hundreds of people who are all trying to work on the same sort of thing, pretty much the only way we know how to do this well is with microservices, where you arrange things so that you have different teams working on different chunks of the thing, all going independently, all having an independent release train. This is really interesting because it requires that you separate those concerns out very cleanly, and it also requires, if you really want the most benefit from it, that all those teams are able to work at full speed without getting bottlenecked, either on each other or on operations.

This ends up requiring a lot of trust. The developers have to trust that operations is going to do the right thing and provide all of the platform stuff they need to get their things done. Operations will in turn need to trust that the developers are going to operate in good faith, working on their things and trusting the ops folks to do the platform things. Everybody ends up benefiting from trust going in both directions, which is interesting, because historically these two roles, in a lot of ways and at a lot of times, don't find that trust to be natural. You don't have to extend that trust blindly, of course.
Kubernetes RBAC is a great tool for putting guardrails around things, so that the developers can only mess with the developer sorts of things and the operators can only deal with the operator sorts of things; there's a sketch of that below. I'm going to call out the kubectl-sudo plugin as well. We've been using that one, or I guess y'all have been using that at Ambassador Labs, for a while, to great effect. It's a way you can set things up so that normally you're just doing your normal stuff with your normal account, but if you need to, you can switch over and impersonate a higher-privilege account long enough to get something done. There's a full audit trail, so everybody can go back and make sure the right things have happened. Kubernetes in general makes the whole audit-trail thing really easy; you can always go through and pull the configuration out. The logical extension of that idea is the whole GitOps, infrastructure-as-code thing; likewise Argo CD, all that sort of stuff. But all of these really come back to constructing a world where your different teams get to trust each other and support each other, where you can have guardrails but still have everybody working at full speed without being blocked on each other. It is a lot of effort, but that's okay; it's worth it.
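A hedged sketch of what those RBAC guardrails might look like, following the Mapping-versus-infrastructure split above; the namespace and group names are hypothetical:

# Developers may manage only Mappings, and only in their own namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mapping-editor
  namespace: quote-team          # hypothetical team namespace
rules:
  - apiGroups: ["getambassador.io"]
    resources: ["mappings"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: quote-devs-edit-mappings
  namespace: quote-team
subjects:
  - kind: Group
    name: quote-devs             # hypothetical group from your auth provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: mapping-editor
  apiGroup: rbac.authorization.k8s.io

Hosts, Listeners, and TLS secrets would live under a separate, ops-owned Role, so a developer editing routes can't touch certificates.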
All right, any questions about any of that so far, before we talk a little bit about the version 3 CRDs and where those are going? All right. Just in general, we're going to be hanging out afterwards, so you can always come up and talk to us, and I expect we'll end up with time for questions. So let's talk a little bit about the v3 CRDs. We'll probably get to slow down quite a bit at this point, too.

Okay. If you take a look at these CRDs, you will notice that they all say getambassador.io/v3alpha1. There's been a lot of discussion about the transition from getambassador.io/v1 to getambassador.io/v2 to getambassador.io/v3alpha1, and now, oh great, v3 final is coming: what should we do with that? We're already pretty sure about a couple of things we want to do with this transition. One of them is a lot of cleanup. If you look at the YAML over on the right, you'll see a bunch of fields that are named with underscores, in snake_case. We know we want to shift those over to camelCase, because it would be kind of nice to do that the way the entire rest of the ecosystem does. Sorry about that. We know, for example, that we would like to be using durations instead of having fields named with _ms or _s suffixes. And we know there are some fields that are deprecated but still present. A good example: there's a use_websocket field that we really should just throw away entirely. You can already use allow_upgrade with websocket instead, and we recommend that, but at some point the use_websocket field is going to go away. Likewise, host and host_regex get combined into a single hostname field. There are quite a few things like this that live in the CRDs as they stand right now. And I am going to let Alice talk about other things on the way to v3. Over to you.

Thank you. So, yeah, a couple of other consistency improvements we want to make to the CRDs before v3 final. We've got a bunch of different services, like the AuthService, the LogService, and Mappings; these are all things that result in Envoy cluster creation. But what's inconsistent about them is that you don't necessarily get the same config options through our CRDs. So one thing we'd like to standardize: across any resource that configures, say, an Envoy cluster (that's just one example), you should be able to configure the exact same things, timeouts and so on. There shouldn't be a disconnect where a Mapping lets you configure a field but another CRD that creates a similar Envoy resource doesn't, or cases where the field exists in both resources but the naming and the way you configure it are inconsistent. That's something we'd like to fix; a sketch of the cleanup follows.
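Pulling those cleanup items together, a hedged before-and-after sketch. The deprecated spellings are real v3alpha1 fields; the camelCase and duration forms for v3 final hadn't been settled at the time of the talk, so only today's recommended spellings are shown:

# Deprecated spellings, still accepted in v3alpha1
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-legacy
spec:
  host: foo.example.com       # host/host_regex are folded into hostname
  prefix: /quote/
  service: quote
  use_websocket: true         # deprecated; slated for removal
  timeout_ms: 4000            # _ms suffixes to become durations in v3
---
# Recommended spellings today
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-current
spec:
  hostname: foo.example.com
  prefix: /quote/
  service: quote
  allow_upgrade: ["websocket"]
  timeout_ms: 4000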
What is the path going to look like for v3? Well, we're probably going to do a v3beta1 sometime soon. We're not firmly committed to any sort of timeline on this; we are mostly just watching what the community tells us about the v3alpha1 CRDs and the changes they would like. Some of the things we want to change, like the inconsistent snake_casing Flynn mentioned on the previous slide, were brought up at the last KubeCon, in Valencia. We haven't heard a lot of pushback since then, so we're going to take that as a sign the community is okay with it, but we're always having conversations with people, trying to figure out what people like and don't like about our CRDs, what changes they want to see, and what they want us to not change.

We may or may not end up doing a v3beta2; that will depend on priorities, on what goes into v3beta1, and on what we want the v3 final CRDs to look like. The main thing to draw attention to is that we are currently still supporting the v2 CRDs. We actually dropped support for the v1 CRDs earlier this year, but we will probably end up bringing support for v1 back, just because we've heard there have been some friction points from dropping it. So the main thing going forward: all the resources you've created, all the CRDs you might already have if you're a current user of Emissary, are not going to become invalid. You're not going to have to go and recreate all these things. We're going to keep supporting these versions, but we're going to try to make new versions of the CRDs cleaner. And the storage version is not going to change until we release v3 final: interim versions like v3beta1 and v3beta2 are never going to be storage versions.
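For anyone less familiar with how Kubernetes expresses this, a trimmed, illustrative CRD manifest showing served versus storage versions; the per-version schemas are omitted for brevity, and the v3beta1 entry is hypothetical:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mappings.getambassador.io
spec:
  group: getambassador.io
  names:
    kind: Mapping
    plural: mappings
  scope: Namespaced
  versions:
    - name: v2
      served: true      # still accepted from clients
      storage: false
    - name: v3alpha1
      served: true
      storage: true     # what etcd actually persists today
    # A future v3beta1 would be served but never storage;
    # the storage flag flips only when v3 final ships.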
The main thing to focus on here: if you are a user of Emissary, or if you're interested in it, we'd love everyone's feedback on any changes you want, or don't want, to see in the CRDs. If you feel strongly about that, please create a GitHub issue or reach out to us on Slack. We'd love to hear from you.

Another thing I want to touch on quickly: a lot of people who have been using the product for a long time will remember the friction of the 1.x to 2.x major version migration, particularly around the CRDs. Thankfully, we have learned a lot since then, and we are trying to make sure that every major version bump going forward is not as much pain as that 1.x to 2.x jump was.

Also, there's this thing called Envoy Gateway. It was announced at KubeCon in Valencia this year, and there has been a ton going on with the project, so: a quick recap of Envoy Gateway and what it means for Emissary-ingress. As I mentioned, we are built on top of Envoy proxy. Eventually, once we feel Envoy Gateway is in a good spot, pretty feature-complete and stable, we will start to transition Emissary-ingress to being built on top of Envoy Gateway instead of on top of Envoy proxy directly. So what is the driving factor for doing this?

We've got two major CNCF projects that are API gateways: there is Emissary-ingress, and there is Contour. There are also a bunch of other people with interest in the API gateway space, and opinions about how things should go, or who are working on similar projects. All of these projects duplicate a lot of the same API gateway effort: watching resources, updating the Envoy config, translating from CRDs to xDS config. That's a lot of effort duplicated across these various projects. So the goal of Envoy Gateway is to bring in the people who have developed expertise over the lifecycle of these projects and get everyone to focus on building one common core that we can all share, and that's going to be Envoy Gateway. It's better to work together to build something solid from the ground up than to duplicate that effort and end up with a bunch of competing APIs and standards.

This also doesn't mean Emissary-ingress is going away. There are a bunch of things Envoy Gateway is not going to do that Emissary-ingress will continue to do. We have already been working on both in parallel; I am a maintainer of both Emissary and Envoy Gateway, so we're going to make sure the project shapes up to where we can definitely use it and make it an improvement for all the users of Emissary. You're not going to see Emissary replaced by Envoy Gateway, and we're not going to stop working on Emissary.

Like I said, long term we're eventually going to adopt Envoy Gateway as the core of Emissary-ingress. Initially, we'll start by using bits and pieces of it as an API, so that when we feel Envoy Gateway is doing something particularly well, we can integrate a little bit of it: one piece at a time, instead of one big jump from Envoy proxy to Envoy Gateway. We've been working with them to ensure that everything we need to accomplish as Emissary-ingress is possible with Envoy Gateway, and that Envoy Gateway doesn't become a limiting factor.

Like I said, there are certain things Envoy Gateway won't do; that's where Emissary-ingress comes in. Envoy Gateway is only going to support the Gateway API CRDs, and only Kubernetes resources. Emissary-ingress supports, for example, the Consul service resolver; that's something Envoy Gateway has no interest in doing, and Emissary will continue to do it. Emissary will also continue to support its existing CRDs, so all that stuff we just talked about isn't going to become invalid once we adopt Envoy Gateway as our core, and you're not going to be forced to switch to the Gateway API CRDs if you don't want to. And I'm going to hand it off to Flynn for some recap.
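For contrast with Emissary's own CRDs, a hedged sketch of the Gateway API resources Envoy Gateway consumes. The controllerName matches Envoy Gateway's documentation around the time of this talk, and the gateway and service names are illustrative:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: quote-route
spec:
  parentRefs:
    - name: example-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /quote
      backendRefs:
        - name: quote
          port: 80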
Thank you. All right, so: Emissary has always focused on developer-centric, self-service ingress configuration, simply because it's a great way to let everybody get things done more quickly. This takes a lot of trust. It is worth it; it does work really well. And, you know, trust is a good thing in the first place, right?

If you are a new adopter, or if you're adding things to your Emissary-ingress configuration, use v3alpha1. It will be happier and nicer, and it'll be a smoother transition to v3 final when that comes out, even though v2 is still supported and, as Alice mentioned, we're actually bringing back support for v1 just to remove some migration friction people have been experiencing. I think that's the next .y release that that'll land in, right?

We are involved in Envoy Gateway. Alice is a maintainer, Luke's also a maintainer; I show up and heckle a lot, but I'm not a maintainer. We're involved in Envoy Gateway because we believe it's a win for everybody. Arranging things so that we and the Contour folks are not constantly duplicating the same stuff over and over and over again frees everybody up to do more things that are new and different and interesting. This is a good thing. Emissary-ingress will still stick around, because there are ways Emissary-ingress gets to add value on top of that. Likewise, I expect Contour will still stick around... did I just say Consul a minute ago? Contour. Ever so sorry to all the fine people who work on Consul and Contour. Yeah, I expect Contour will also still stick around, because it too will be adding value for its users in ways that Envoy Gateway is not going to want to do.

Community stuff: I said this in Valencia, I said this before in Barcelona: there's no possible way Emissary-ingress would have made it this far without the community. Many, many thanks; much appreciated. You can find the two of us if you want to get involved with development, ask questions, provide feedback, or just say hi. You'll find both of us on the community Slack, at a8r.io/slack (that is very hard to say). By email, Alice is at datawire.io and Flynn is at buoyant.io. And I think we have, what, five or ten minutes for questions? So: any questions? We're also going to be sticking around afterwards, so if you'd rather come up and talk instead of shouting your question out, that's fine too.

Hi, thanks for the talk. If somebody's running Istio for east-west traffic, is there a way for Ambassador to route directly to pods, or would it need to have its own Istio sidecar?

There are a lot of answers to that question. I know I'm going to get the Istio versions wrong, but for a little while now it has been possible to have Emissary watch the mTLS configuration for Istio and then simply participate in the Istio mesh, without having to be fully meshed in with its own Istio sidecar. I'm pretty sure you can also do it by injecting Emissary fully into the Istio mesh, but if you do, you have to make sure you change the Envoy base ID for one of them, either Istio's Envoy or Emissary's. So in most situations it's going to be simpler just to let Emissary watch the TLS configuration and do it that way. Does that answer the question? Probably. Try it; let us know what happens. Anything else?
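A hedged sketch of the watch-the-certs approach described above, assuming Istio's mTLS client cert and key have been made available to Emissary as a secret (called istio-certs here); the exact mechanics vary by Istio version, so treat this as an outline rather than a recipe:

apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
  name: istio-upstream
spec:
  secret: istio-certs   # assumed secret holding Istio's client cert/key
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-in-mesh
spec:
  hostname: "*"
  prefix: /quote/
  service: quote
  tls: istio-upstream   # originate mTLS to the meshed upstream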
I've got a question. The getambassador in the URL there: any chance that's going to change? Oh, that is a lovely, lovely question. Well, actually, that's a good question: are you talking about in the CRD definition, or are you talking about the URL? Yeah, in the CRD definition. Okay, so: there was a fair amount of discussion about that, and the conclusion was that it was probably more of a headache than it was worth for existing users to change all of their API groups. I see some nods in the audience. It would be technically possible to support both, sort of, but we've never heard from anybody that that was a real thing the community wanted. So if you are in the community and you do want this, let us know; otherwise, everybody seems to be looking at it as: oh my god, this would be awful for migration. Any other questions, anything else?

I don't want to hog the mic, but I've got another one: is Envoy Gateway a wrapper around Envoy proxy, or a rewrite?

It's meant to be a wrapper around Envoy proxy. Actually, we just had our first functional release of Envoy Gateway a couple of days before KubeCon. If you weren't able to attend the Envoy Gateway talk: one of the main features is that Envoy Gateway will spin up a whole fleet of Envoy proxies. It uses the Gateway API, and while Emissary configures Envoy by living in the same pod, one pod containing both the Emissary-ingress and Envoy processes, Envoy Gateway actually uses separate deployments. You've got your main Envoy Gateway deployment, which listens for Gateway resources, and those instruct it to create the Envoy proxy deployments that it then manages.

Got it, thank you. Any other questions? All right, well, thank you all very much. Yep, thanks to everyone for coming.