All right, I guess we'll go ahead and get started then. I'm Flynn; with me are Luke Shoemaker and Alice Vasco. We're all from Ambassador Labs, and we all work on Emissary. Ooh, we can take our masks off. So yeah, this is what we actually look like. All of us are working on Emissary-ingress. I've actually been involved with the project since the beginning, back in 2017. Luke has been elbows deep in a lot of the low-level stuff for, I don't know, a few years now, I guess. Not three years. Alice actually came over and worked on the support side of the world at Ambassador Labs for a while, got a good sense for what the customers need, what the users need, and then decided to join us in engineering. We have successfully dragged her over to the dark side. I'm gonna do a quick intro and a recap of some of the places we've been. Alice is going to talk about self-service configuration: why it's important, how to use it well, and how to avoid getting burned. And Luke will then take over to talk about some changes that have been made, and are being made, to the CRD input language for Emissary-ingress, and about, as you may have heard, Envoy Gateway. So, without further ado: 2021. 2021 was kind of an interesting year for us. We donated Emissary to the CNCF and got a new name in the process. We did the Emissary 2.0 release, where lots and lots of things changed for those of you who are used to it from before. We made a lot of changes to the input language: we retired getambassador.io/v0 and v1, we switched fully to v2, and we introduced v3alpha1. We also enabled it so that the system could automatically convert between v2 and v3 for you, to make the upgrade path smoother. That happens with a new piece of the system called apiext. And we also found that we were having a fair amount of friction with having Helm manage CRDs for us.
So we ended up yanking the CRDs out of Helm, and that made things a lot simpler. Emissary can be used on top of other service meshes. We improved our integrations with Linkerd, Consul, and Istio. A lot of that work was around mTLS. Not all of it, but a lot of it; that's probably the thing that was most visible to folks. We ended up doing 30 releases that year across 2,300 commits, and we ended up with something north of 7,500 users on Slack. Huge thanks to the community; there's no way we could have been here without you. So for those of you who are less familiar with Emissary, let me do a really quick intro. It's an API gateway. So if you have a Kubernetes cluster with services in it, and you have some users who are not in that cluster, then Emissary's job is to sit in between those two, at the edge of the cluster, and mediate access from outside the cluster to inside the cluster. This is colloquially the ingress problem. Emissary focuses on the ingress problem because the ingress problem is always the first one that cloud native developers have to wrestle with. In a lot of smaller organizations, or less complex situations, it can be the only problem that a developer needs to deal with here. Emissary is an open source, cloud native, developer-centric, self-service API gateway powered by Envoy. It was designed from the beginning to make it easy to get started, and to make it easy to take advantage of the power of Envoy without actually having to become an Envoy expert. At this point, it is a CNCF incubating project. We started in 2017, and we've seen pretty widespread adoption since then; we're running in thousands of places right now. And I want to take your attention back specifically to the part saying developer-centric and self-service there. That's the bit that Alice will be talking about in more detail shortly. That particular bit of focus turns out to be something that's been really important in terms of adoption.
Okay, it is an API gateway. One of the core roles of an API gateway is routing traffic. So if you have a user Jane who wants to request a quote from some service, then she can do that. A user Mark can request a quote. They might not talk to the same service; that might just be load balancing, or it might be something more deliberate. Maybe they're users in different tenants. But this is a core function of an API gateway. It is not the only function. API gateways, in addition to being proxies, are a really good place to bring together more centralized functions that you really don't want your developers to have to worry about individually. So for example, application security: maybe Jane is allowed to update quotes, but Mark is not. You can centralize that in the API gateway, let it worry about auth for you, and then your developers can just rely on that and not have to worry about it. Other things that are really useful to bring into this area: there's observability, there's rate limiting, there's a bunch of resilience and development stuff. There are a lot more, and I'm not gonna go over all of these, but a lot of these end up overlapping with service meshes. And that's okay. The API gateway and the service mesh are different roles. If you set things up correctly, you can mix and match these functions to take care of them at the level that makes sense for your organization and for your use case. I mentioned earlier that Emissary is powered by Envoy, and some of you may also be familiar with Ambassador Edge Stack, which is in turn built on top of Emissary. Everything that Edge Stack can do, Emissary can also do; you just end up having to write more code on your end. And with that, I will hand it over to Alice to talk about self-service configuration and why that is an important and useful thing. Thanks.
So Emissary-ingress supports self-service configuration. What does that really mean for the end developers? If you take a look here, this is an example of some of the getambassador.io resources that you use to configure Emissary-ingress. In particular, note how we separated them out based on distinct function. Each resource is generally focused on one thing, so that you can concentrate on exactly what you need to do at any given point. Note that these three resources are only about maybe ten or so lines each; they can result in an Envoy config that's easily hundreds of lines long, if not more. You can also configure Emissary-ingress using the Ingress resource, should you need to, but we think that the best way to do this is using the getambassador.io resources. Note particularly that with the Ingress resource, you have to specify the hostname you wanna use, you're providing your secret name if you're doing TLS termination, and you're specifying the rules for what traffic you wanna send to what upstream service, all in one place. Whereas with the getambassador.io resources, you can configure things at an individual level: things that you might not need to configure more than once or twice. If you wanna configure where you're listening for traffic, you can set that up, and once that's configured, then you can focus directly on things like Mappings, to get config straight to your services. We've put a lot of time and effort into the design of the getambassador.io resources. That's why we really think this is the best way to configure Emissary. We've taken a look at some of the challenges that result from configuring things at scale with existing resources such as Ingress, and we're constantly revisiting it: looking at what is working for people at scale, what the points of conflict and friction are, and ways we can upgrade things and make them better in the future.
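As a rough illustration of that separation of function (the names, hostnames, and secret here are hypothetical, not from the talk), the three resources might look something like this: a Listener saying where traffic comes in, a Host saying which hostnames and TLS config to serve, and a Mapping routing a prefix to an upstream service:

```yaml
# Listener: where Emissary accepts traffic (typically owned by ops)
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: http-listener
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
# Host: which hostname we serve, and the TLS termination secret
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: example.com
  tlsSecret:
    name: example-tls
---
# Mapping: route a URL prefix to an upstream service (owned by the dev team)
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-mapping
spec:
  hostname: example.com
  prefix: /quote/
  service: quote
```

Each of these is about ten lines, and each can live in a different team's repo, which is what makes the self-service model work.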
Obviously, we didn't design perfect resources on the first try, but that's why we continually look to improve them and see how we can make them better, as Luke is gonna talk about in a little bit. Now, the whole self-service point means that any one developer can do this as they need it and focus on pieces bit by bit. When auth is particularly relevant to you, you can start creating AuthService resources to focus on that. You can do Hosts and Listeners to configure how the traffic is getting into the cluster, and then, when you're ready to get it sent to your upstream services, you can just focus on the Mappings. You get the most benefit out of this, though, when you have that distinct role separation: where you've got your ops and your admin folks who can focus on the cluster-level tasks and more maintenance-type things, getting things set up and prepared, so that it's ready for the developers, who can just focus on deploying the services that they're developing and figuring out what kind of traffic they wanna send to those services. As for best practices with configuration: obviously you get the most out of it when you're using that separate, distinct-role approach, so that you can trust that your operators are just gonna focus on making sure things are set up for you. Your developers aren't overburdened worrying about how to configure all these different things when they just wanted to deploy a service, and each person can focus on their individual tasks, trusting that the other person is gonna take care of that side of the world for them. This really empowers teams to do independent releases without necessarily having to worry about what operations is doing, or what the other teams are doing. They can just think about the services they're developing and how they wanna get traffic to those. It also means that you don't have to wait for ops to expose the services you're developing.
You can figure out exactly how you wanna expose them and do it really quickly, with a small amount of config. Each team benefits the most when they're trusting each other to take care of their separate roles, so that each person can focus on the stuff that's relevant to them. Now, the trust between teams doesn't have to be blind, and you don't have to give everyone total access to your cluster. You can use Kubernetes RBAC to set up permissions for who is allowed to use what resources: apply resources, configure and create things. In particular, we're big fans of a tool called kubectl-sudo that lets you set up who is allowed to use elevated permissions when working in the cluster, while everyone else gets read-only access. That makes it really easy for everyone to take a look at the config that's in the cluster: you can limit who's allowed to make changes without limiting who can inspect things, get logs, get information, and see what's in the cluster and the status of all these resources. You can also use GitOps tooling, storing your infrastructure as code, to add additional control points: you're not necessarily limiting who can make changes to the cluster, but you're putting changes behind the barrier of a pull request, so that if someone wants to make new config changes, they can open a pull request, and then you can have specific reviewers who approve the changes before they actually impact your cluster. Other CD tools here can also help make sure that you're automatically validating that config before you push it out to your cluster. And now Luke's gonna talk a bit about the v3 CRDs and some upcoming changes. So one of the things that we mentioned we changed in the last year is that we dropped support for the getambassador.io v0 and v1 input languages. Now, we had deprecated those and introduced v2 way back in Emissary 1.0, in January 2020.
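The role separation described above can be expressed directly in RBAC. As a sketch (the role name and the exact verb split are illustrative, not a recommendation from the talk), you might let developers manage their own Mappings while keeping Hosts and Listeners read-only for them:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: emissary-developer
rules:
  # Developers can fully manage their own routes...
  - apiGroups: ["getambassador.io"]
    resources: ["mappings"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # ...but can only inspect the cluster-level configuration
  # that ops owns (Hosts, Listeners).
  - apiGroups: ["getambassador.io"]
    resources: ["hosts", "listeners"]
    verbs: ["get", "list", "watch"]
```

A tool like kubectl-sudo then layers on top of this: day-to-day work happens with the read-mostly role, and the elevated role is only assumed explicitly when someone actually needs to change cluster-level config.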
And it's only just recently, in Emissary 2.0, that we finally dropped support for those old versions. Now, even though we had two years of lead time, that was a pretty non-disruptive change; partly because we had that much lead time, and partly because the languages had been evolving additively only. Because we didn't have any real conversion mechanism to convert between the versions, we could only make additive changes. Well, now, with v3alpha1, which we also introduced in 2.0, we have a real conversion mechanism, and that's allowing us to make some changes that we've wanted to make for a long time but couldn't, in order to clean up the interface. One of the most obvious is a pretty simple one: there are a lot of things that were snake_case before that we're transitioning to camelCase, to be more consistent internally and more consistent with everything else in Kubernetes. We've got a bunch of fields that were like timeout_ms, taking an integer number of milliseconds; we're transitioning those to be durations, again for both internal consistency and consistency with everything else in Kubernetes. And when we had a field that we wanted to deprecate or replace with something else, we'd keep both options around, and so now we're getting rid of some of those older things. We used to have use_websocket, which we replaced with allow_upgrade, which is more capable and more flexible, but we've been supporting both; in v3 we're removing use_websocket. Similar deal with configuring which hostnames a Mapping matches: there used to be this kind of clunky mechanism with host and host_regex. We're consolidating these things to make them easier to work with, and also adding support for DNS globs. Being able to make a lot of these cleanup changes is what's been going on in v3. Those are all pretty small changes.
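As a rough before-and-after sketch of those cleanups (the service and hostnames are made up, and field spellings in the final v3 may still change; the duration-typed timeout fields mentioned above are a planned change and aren't shown), here's the old v2 style next to the v3alpha1 style:

```yaml
# getambassador.io/v2 style: host matching via host, websockets via use_websocket
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: demo-v2
spec:
  prefix: /demo/
  service: demo
  host: demo.example.com
  use_websocket: true
---
# getambassador.io/v3alpha1 style: hostname with DNS glob support,
# and the more general allow_upgrade replacing use_websocket
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: demo-v3
spec:
  prefix: /demo/
  service: demo
  hostname: "*.example.com"
  allow_upgrade:
    - websocket
```

The conversion layer means you can apply either form today; apiext converts between them for you.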
A bigger one is that AuthService, LogService, RateLimitService, TracingService, and Mappings all create what we call an Envoy cluster. That's essentially a thing that Envoy can dial out to: a cluster of IPs, or whatever. The basics of it are: give it a set of IPs, or give it a hostname that it can resolve. But there are a lot of tunables on those, like timeouts and stats settings, and how you set those for each of these services and Mappings is all different; whether you can even set a specific thing for one of them is inconsistent across all of them. So we're going to be factoring that out, making it consistent, and having parity across all of those resource types. Now, as I said, we're on v3alpha1 right now. We're going to be working toward v3 final as we make more of those changes, and so v3beta1 should be happening soon with most of them. I'll point out that Emissary 3.0 is also coming soon, and someone's gonna talk about that in a bit. The progression of the input language and the Emissary version number aren't tied together, but Emissary 3.0 is probably happening before input-language v3beta1. They should both be soon, but 3.0 is probably actually gonna be first. Sometime after that we'll probably have a v3beta2 that has that cluster-configuration consistency, and then sometime after that we'll say, okay, we're satisfied with all the changes we've made, let's promote to v3 final. Now, a note on compatibility. We've got this cool conversion layer, and the way it works is that there's one version that you're allowed to store in etcd in the API server. So when you create a v3alpha1 resource today, it actually gets converted down to v2 to be stored, and then converted back up to v3alpha1 for Emissary to consume. And so, because of that, v2 is very central to how things are actually running.
And so you can be sure that v2 support is not going to break, and that you don't have to worry about that ever breaking. v2 is going to keep being the storage version until at least v3 final, and there are several reasons for that. A big one is to have a smooth upgrade path from Emissary 1.0, and the conversion layer is gonna keep doing conversions. So you don't have to worry about any of these changes breaking you: you can keep using v2, or v3alpha1, and then opt in to the newer, cleaner stuff as we come out with it. So we're not gonna break anyone. And we are trying to make this cleaner and easier to use, so feedback would be very welcome about what friction points we could improve. Now, related to going forward with new input languages: on Monday you might have caught the announcement of the new Envoy Gateway project. We are one of the two CNCF gateway projects; there's another one called Contour. In a lot of ways, Emissary and Contour are very similar, both functionally and architecturally, and so we've been duplicating a ton of effort between these two projects. Envoy Gateway is us coming together and saying, let's collaborate instead of duplicating the effort. It's not just Emissary and Contour folks coming together; there are several other organizations coming in to Envoy Gateway, but those are the two CNCF ones that are very visible. So does that mean that you would be a sucker to keep investing in Emissary? No. One, we're not gonna stop working on Emissary tomorrow and start working on Envoy Gateway; we're gonna be working on both in parallel for a while. But what about after that? What's the long-term outlook on Emissary? Well, the long-term plan is to transition Emissary to being built on top of Envoy Gateway.
You know, I'm involved with both projects, and one of my roles in Envoy Gateway is going to be making sure that it's capable of hosting Emissary. And from Emissary's long-term perspective, there are a lot of things that it does that Envoy Gateway is just not interested in doing. Envoy Gateway is supporting Gateway API 0.5 and later; 0.5 is not out yet, but you should tune in to tomorrow's Gateway API talk to hear more about that, hopefully. But Emissary can take the getambassador.io language that Alice was talking about earlier, it can take the Knative language, it can take the old Ingress API, and it can take the older versions of the Gateway API that Envoy Gateway is just not interested in. This gives you compatibility between different systems; those are your migration paths. So Emissary is still delivering value there, and will continue to deliver that value. Also, Envoy Gateway itself is not interested in watching resources other than from Kubernetes, and Emissary lets you do this: config from other places, namely Consul. That will never be part of Envoy Gateway; it's not on its roadmap. So you can think of Emissary as going to keep doing these things, hopefully with an even better base, as we join forces on engineering that base instead of duplicating all of this effort. And I'll hand back to Flynn for the recap. So Alice talked a lot about self-service, about ways you can use that without getting burned, and ways that it's been important. We talked a little bit about the nature of the ingress problem and why Emissary has been focusing there. The self-service thing really does turn out to work very well, especially that separation of concerns where you can have developers worrying about applications while other people are worrying about infrastructure: they can support each other, but they don't have to get tied up with each other.
It does take a certain amount of trust, and it can take some time to establish that trust, but in our experience it's been worth it to do so and to continue forward. Luke talked some about the input language changes. I'd like to reiterate that we really do believe that the version-three stuff is the way to go, and we're actively looking for feedback on it. v2 will still be supported; we think that's important. There have been a lot of burned fingers getting us to v3, and the lessons that we learned doing that, and getting burned doing that, are things that we'd like y'all to benefit from. Luke also talked about Envoy Gateway. The short takeaway from that is really that we are involved in Envoy Gateway because we think it's better for everybody: for us, for y'all, for the rest of the community, we think it's a good thing. The three of us will definitely be involved with Envoy Gateway going forward. You heard a little bit about Emissary 3.0. This is coming soon. Emissary follows semantic versioning, so the biggest reason that Emissary 3.0 is Emissary 3.0, instead of 2.something, is that there's a massive breaking change underneath: the Envoy v2 configuration will no longer be supported in Emissary 3.0. This includes the transport protocols that Emissary uses to talk to external auth and rate limit services. Emissary 2.0 already supports the Envoy v3 configuration. Yes, I know it's annoying that there are so many 2s and 3s here; I'm really sorry about that. Emissary 2.0 already supports the newer transport protocols, so there is nothing preventing anybody from going ahead and switching to Envoy v3 right now, before Emissary 3.0 comes out. Once Emissary 3.0 comes out, the old v2 transport protocols will not be supported. That's important. There are going to be new features in 3.0; I can't really talk about those yet, sorry. But the biggest driver for calling it 3.0 is that breaking change under the hood.
So I'd like everybody to be prepared for that one. To kind of restate the obvious here: there's no possible way we would have gotten this far without the community, so thank you very much for that, again. If you're interested in getting involved, please get involved; a great way to do that is to just join our Slack channel. The three of us are easily reachable there. I'm also not going to lie: I have to acknowledge that 2021 was kind of rough in terms of contributor friction, and we are actively working on making that better right now. This is how to reach us by email, if you want to. We'll be here; we have a booth, it's booth S30 in Pavilion 2. We'll be there at various points; we'd love to see you. We have some time for questions, I think. Yeah, we have a few questions. First of all, congratulations on Envoy Gateway, and I'm saying it from the heart because I'm part of the Contour team; I'm the community manager for Contour. So good luck with that. Likewise. So, any questions from the audience over here? If not, there's a question from the virtual attendees: is there a point where you can integrate with an external authorization service? There is, yes. When I was talking about all of the resource types that create clusters, one of them that I mentioned is the AuthService. That's exactly what that is: it configures a service that you want Envoy to go talk to over the ext_authz protocol. All right, the AuthService. And the next one is... actually, yeah, I was going to extend that a little bit. The way ext_authz works is that for every request that comes into Envoy, Envoy turns around and asks the external authentication service: hey, is this okay? And the auth service gets to use really whatever resources it wants to decide whether it's okay, and then either tell Envoy this is good to go, or come back and say no, it's not okay, you should respond to this request with a given response.
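For reference, a minimal AuthService looks roughly like this (the service name and port here are hypothetical): it points Envoy's ext_authz at an external service, and every request is checked against it before being routed.

```yaml
apiVersion: getambassador.io/v3alpha1
kind: AuthService
metadata:
  name: authentication
spec:
  # The external service Envoy dials out to for every request.
  auth_service: example-auth:3000
  proto: http
  # Request headers forwarded to the auth service; it replies 200 to
  # allow the request through, or any other status to reject it with
  # a response of its choosing.
  allowed_request_headers:
    - "x-example-token"
```

The auth service itself can be anything that speaks the protocol: a few lines of code checking a token, or a full identity-provider integration.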
So the auth service actually has an enormous amount of power to do things in that world. It's a very, very flexible setup. Thanks. And another one from the virtual attendees: is there a way to do smart rate limiting, like based on a payload field? Would you like to talk about rate limiting? So there's the RateLimitService, which lets you use Envoy's rate limit filter to talk to an external service. You get several pieces of information about the incoming request, but you don't get the full payload. You could do something like that using ext_authz and an AuthService instead of the rate limit proto and a RateLimitService. But you kind of wouldn't want to, because you want the communications with the rate limit service to be pretty lightweight; that's what you're counting on to stop you from overloading all of the other stuff. And so if you want to say, hey Envoy, I want you to buffer the whole body to pass it to my rate limit service, that's gonna add a bunch of overhead that you probably don't want in your rate limit system. So you can get several pieces of metadata, and hopefully that's enough. Even if you did want to say, okay, the rate limit service does kind of tier-one filtering, and then more advanced stuff uses the body with an AuthService that's acting like a rate limit service, you could do that, and that probably wouldn't be bad; but you probably don't want that level of everything in the first-level rate limiting. Thanks. The heavyweight nature of that request is kind of why rate limit services don't typically include the body: it can be really expensive. Yeah. First of all, thank you for your presentation. Could you please specify in more detail what's the relationship between Ambassador Edge Stack and Emissary-ingress? And, I mean, sorry for the straightforward question: why do I need to pay for Ambassador Edge Stack?
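The metadata-based approach described above looks roughly like this (a sketch using the v2-era label syntax; service names are made up): the Mapping attaches lightweight labels to matching requests, and the external rate limit service makes its decision from those labels rather than from the request body.

```yaml
# The external service Envoy consults via the rate limit protocol.
apiVersion: getambassador.io/v2
kind: RateLimitService
metadata:
  name: ratelimit
spec:
  service: example-ratelimit:5000
---
# A Mapping that tags its requests with metadata for rate limiting.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: backend-mapping
spec:
  prefix: /backend/
  service: backend
  labels:
    ambassador:
      - request_label_group:
        # Only the client address is sent to the rate limit service,
        # keeping the per-request check cheap.
        - remote_address
```

This is the design point from the answer: the rate limit check stays cheap because only small descriptors travel to the rate limit service, never the payload.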
So, without diving too much into a pitch for Ambassador Edge Stack: at a technical level, Ambassador Edge Stack is basically taking Emissary and bundling implementations of an authentication service and a rate limit service for you, so that you don't have to do that engineering yourself. The underpinnings are all the same; the way it handles traffic is the same between the two of them. And if you want to implement your own auth service and such, you're more than free to do that with Emissary. Sorry, and a very quick second question: if we are just looking to switch to Ambassador Edge Stack, would you prefer us to switch to Envoy Gateway instead, after the latest news? So, like we said, long-term: it's going to be a while before Envoy Gateway is at parity, before it's able to do a lot of these things. It's just starting; it's going to be a while. We're not reinventing the wheel; there's a bunch of code from Emissary and from Contour that we're going to be borrowing in Envoy Gateway, but it's going to be a while before it's useful. And even once it is, Emissary is going to still exist, still be built on it, and still have some added value that you wouldn't get from Envoy Gateway alone. Okay, thank you. And if you stop by the booth, we're happy to talk more about that too. Any more questions? Thanks for the talk. I wonder whether you support equivalent functionality such as web application firewall protection for injection attacks: cross-site scripting, SQL injection, that kind of thing. And if it's not supported out of the box, is there some extension point that could introduce that functionality into the system? So, Emissary focuses more on the ingress problem than on that particular kind of security issue. We've seen people have really good success using Emissary in combination with other products. And we have also seen people approaching that using Wasm stuff in Envoy, which I could see supporting in Emissary as well.
I gotta be honest, I'm not sure how well that approach is working for them yet, but we've seen people go after it that way. The most effective way right this second is probably partnering. What were you gonna say? Yeah, so there's none of that built in to Emissary. I do wanna say that Edge Stack does its own cross-site request forgery protection for the things that it does, but it's not doing it for your application. But we do have the extension points for a WAF to integrate. There are WAFs I know of that use the ext_authz integration point to see requests coming in, and that also use the access log service to get a view of requests over time, to notice suspicious patterns. That's actually specifically why we added the LogService: to facilitate that WAF extension-point use case. Thanks. Anyone else? Okay, thank you folks. Thank you. Thank you.