So, welcome. I am Flynn. You can reach me at flynn@buoyant.io. I will be talking to you about Emissary, which is not a Buoyant thing. I would like to apologize in advance, both for any strange winces you might see on my face, and for the fact that this is probably the most boring slide deck I've ever put together in terms of transitions, animations, and graphics. I broke my collarbone a week and a half ago, so I'm a little behind on some of this. We are here to talk about Emissary: the past, the present, and the future. We will of course start with the past. Quick show of hands: how many of you are new to Emissary? Okay, so this is going to be a little bit interesting. Let me go through this deck, and then if you like, I'll pull up another one where we can go into a little more detail about how Emissary works. The quick summary is that Emissary is an API gateway. It is an open source gateway and a CNCF incubating project. It is developer-centric, meaning its entire focus for its life has been on enabling application developers to get things done. It is self-service, in that it tries to arrange things so that application developers don't have to go off and talk to ops, open tickets, and all that sort of thing. And it is a very opinionated API gateway: there are a number of things we don't do that we could have done. The developer-centric and self-service parts are very important. Emissary was designed from the start around the idea that the application developer is the important person here. The other critical idea is that application developers generally don't care very much about Kubernetes; they just want things to work. That's the way Emissary has always been designed.
It's also interesting that this idea matters for being able to develop at scale, when you have lots and lots of developers doing lots and lots of things all at the same time. That idea also ended up being an influence on Gateway API later, which is one of the things I've found really fascinating about this. The bit I said earlier about opinionation is also very important. There are a number of things we could have done in Emissary that we decided not to do, because they don't fit this model of just letting the developer get stuff done without having to be a Kubernetes expert. I feel compelled to point out that the opinions we held when we made those decisions, the opinions expressed in Emissary, could be wrong, in which case we can talk about that later. But yes, opinionation is a thing. Emissary has also been around since 2017, and for those of you who've been paying attention to Kubernetes, that means Emissary predates the concept of custom resources. Emissary came along right at the point that third-party resources were deprecated, but before CRDs were standardized, and some of the ways the input language was designed reflect that, sometimes in good ways and sometimes in less good ways. Now, for most of Emissary's life, it was funded by Ambassador Labs. Ambassador Labs is the current name of the company where I was working when I first wrote Emissary in 2017. Ambassador Labs also has the Ambassador Edge Stack commercial product, which has historically been based on Emissary. At present, Ambassador Labs is actually pulling back somewhat from Emissary, because they are shifting gears to basing Edge Stack on Envoy Gateway. And this makes things a little bit interesting. Some of you may have noticed that it's harder to find the Emissary documentation right now, for example.
And this raises the fun question of: what happens to this project if the company that was primarily funding it is stepping back from it? That's a lovely question that gives rise to other lovely questions, like: do people still want to use this thing? This is particularly interesting to me, on the one hand because so many of you say you're new to Emissary, and on the other hand because what API gateways actually do at this point tends to be fairly consistent from gateway to gateway. What seems to matter a lot is the input language. Now, I am biased about the input language for Emissary. I think Emissary's input language is useful, especially because Gateway API, which is the other obvious place to look, has not really caught up yet with the needs of an application developer who doesn't care a lot about Kubernetes. Gateway API is still, with apologies to Shane, a bit more focused on the platform engineer. So, like I said, I'm very biased, but I definitely believe that if you're coming into the cloud native world and you're trying to just get stuff done, there's a place for the Emissary input language in helping you do that. There have been three different versions of the Emissary input language, though. There are the v1 CRDs, from probably 2018 or 2019; there's the v2 stuff; and then v3, in the shape of v3alpha1, is the current one. Especially if you're coming into this new, I do not believe you should be messing with v1 or v2.
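To give a concrete flavor of that input language, here is roughly what a minimal v3alpha1 route looks like. The hostname, prefix, and service name are placeholders for this example; check the current Emissary docs for the authoritative field list:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"          # accept any Host header
  prefix: /quote/        # route requests under /quote/ ...
  service: quote         # ... to the "quote" Kubernetes Service
```

The point of the design is that an application developer can ship a resource like this alongside their own Deployment and Service, without touching any cluster-wide configuration or opening a ticket.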
Part of this is because of one of the things we learned along the way. If you have an existing Emissary installation using the v2 CRDs, we have a conversion webhook that can magically convert them to v3, and one of the things we learned is that conversion webhooks are awful. Really, really awful. Don't do them; you want to avoid them. So especially if you're coming into any cloud native thing, but particularly Emissary, as a new installation, you really don't want that. Again, I'm very biased, but I think you should be avoiding the v1 and v2 situation, and I think you should avoid conversion webhooks. This gets us to the question of: short term, what sorts of things should happen for the health of the project? There are a few. One of them is bringing up the docs on emissary-ingress.dev. We already have the website, and we already have the docs; there's some work on the build chain that needs to happen for them to get published, but that's a straightforward, easy-ish, short-term thing. We need to ship multi-architecture images; there's a long-standing open issue where people ask, hey, why are you not shipping arm64 stuff? For reasons, but we'll come to that later. We need to bring on some new maintainers, and we need to make development simpler. Longer term, well, making development simpler is not just a short-term thing; that's going to be an interesting piece of work. Before the future, though, there are some things happening right now for the short-term goals that I want to talk a little bit about. In particular, I have pushed a branch on the Emissary source called flynn-dev-4.x. The current release of Emissary is 3.9 or 3.10, so the 4 is important there. This is a prototype that uses GoReleaser to actually build the thing, and it uses distroless rather than Alpine.
It does multi-arch builds, it uses stock Envoy rather than a custom Envoy, and it has a CRD Helm chart rather than requiring a kubectl apply to get the CRDs in there. Is there anybody here who's not familiar with GoReleaser? A few. Okay. It's a tool whose job in life is to make it easier to release things written in Go. Using it for Emissary is perhaps slightly counterintuitive, because a lot of Emissary is still written in Python, but it turns out GoReleaser is fine with that. We can use GoReleaser to directly build the Go bits and then bring in a Dockerfile that pulls in the Python. This works really nicely, and has the advantage that GoReleaser then gets to worry about all the horrible multi-arch stuff, which is great, because it means I don't have to write it. In the very long term, I think it would be great to rewrite the Python, but then again, in the very long term I think it would be great to rewrite the whole damn thing in Rust. I say distroless-ish: there is a distroless Python image, but I'm not using that, because its Python is too old. Historically, we built Emissary on Alpine. The reasons we did that kind of don't make sense anymore. Distroless is a better fit, especially because if you're building on top of Alpine, getting the Python side of the world to work effectively turns out to be very, very hard. If you do it with distroless, it is very, very easy, which is lovely. Rather than using the distroless Python image straight from Google, I'm actually using the distroless cc image, then installing Python and copying a bunch of stuff around. There's still more work to do here to make the images smaller, but it hasn't been worth doing that before I got some feedback on whether the images were going to work at all. Part of that feedback is this multi-arch thing, where the 4.x branch now builds amd64 and arm64.
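As a sketch of the approach (this is not the actual build config from the branch; the IDs, entrypoint path, and image names are illustrative), a GoReleaser setup that builds Go binaries for both architectures and stitches the per-arch images into one multi-arch manifest looks something like this:

```yaml
# .goreleaser.yaml (illustrative sketch, not Emissary's real config)
builds:
  - id: emissary
    main: ./cmd/emissary            # hypothetical Go entrypoint
    env:
      - CGO_ENABLED=0
    goos: [linux]
    goarch: [amd64, arm64]

dockers:
  - image_templates: ["example.io/emissary:{{ .Version }}-amd64"]
    use: buildx
    build_flag_templates: ["--platform=linux/amd64"]
  - image_templates: ["example.io/emissary:{{ .Version }}-arm64"]
    goarch: arm64
    use: buildx
    build_flag_templates: ["--platform=linux/arm64"]

docker_manifests:
  - name_template: "example.io/emissary:{{ .Version }}"
    image_templates:
      - "example.io/emissary:{{ .Version }}-amd64"
      - "example.io/emissary:{{ .Version }}-arm64"
```

The Dockerfile that GoReleaser invokes is where the Python side would get layered in; GoReleaser itself only needs to know about the Go binaries and the target platforms.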
This is not an official release yet, but I have handed it off to people to let them play with it, and so far feedback is positive. If you would also like to play with it, let me know and we can arrange something. Actually, we don't even have to arrange anything: it's published on Docker Hub, and I can just tell you where it is. One of the most controversial bits here is that the 4.x branch uses stock Envoy, rather than the custom Envoy that Emissary 3 has used. The reason is that maintaining builds for Envoy is difficult, expensive, and painful. The Envoy project is pretty good at maintaining their build pipeline; it would be much nicer to just use their Envoy builds, and so I tried that in 4.x. The cost is that there are two features in Emissary that don't work if you do that. One is custom error responses: if a particular thing goes wrong, instead of returning a 404 or whatever, Emissary can return some other response. That does not work on the 4.x branch today, but we could do it with stock Envoy; there is an Envoy feature, added after the Emissary one, that does that sort of mapping of error responses. So that one is possible to support. There is also a feature where, if you get an HTTP header that is not all lowercase the way HTTP/2 specifies, Emissary 3 can maintain that case all the way through. You can't do that with stock Envoy; that feature is not something stock Envoy can support. So those two Emissary features don't work on the 4.x branch. I don't think anybody cares except for people using Ambassador Edge Stack; I don't think any open source Emissary users care about this. If you are an open source Emissary user and you do care about this, please come up and tell me that I am wrong. I wouldn't be happy to hear it, but I would be grateful for the feedback.
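For reference, the custom-error-response feature being discussed is configured in Emissary 3 roughly like this, on the Ambassador Module. I'm reconstructing the field names from memory, so verify them against the docs before relying on them:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Module
metadata:
  name: ambassador
spec:
  config:
    error_response_overrides:
      # When an upstream (or Emissary itself) produces a 404,
      # replace the response body with a custom one.
      - on_status_code: 404
        body:
          text_format: "Sorry, there's nothing at that URL.\n"
```

This is the feature that currently depends on the custom Envoy build, and hence doesn't work on the 4.x branch yet.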
And finally, I mentioned this earlier: it's actually in a separate repo right now, but it will come into the 4.x branch. There's a Helm chart for installing the Emissary CRDs, which currently can only be done with a kubectl apply, and which currently always brings along the conversion webhook implementation. The Helm chart for the CRDs defaults to not including the v1 or v2 CRDs, and defaults to not installing the webhook. The Helm chart for Emissary itself on the 4.x branch likewise defaults to assuming you don't want the webhook, but there is a way to turn this stuff back on. So, next up: I intend to use that flynn-dev-4.x branch as the basis for an Emissary 4.0 release, after getting some more feedback on it. It's 4.0 not because there are extra features, but because there are breaking changes. In particular, those two features going away is a breaking change, and the opt-in for the older CRDs is very much a breaking change. If you're interested in helping out with this, let me know; that would be wonderful. As always, thanks to the community. If you have questions, you can come up and talk to me here, or you can reach me on the CNCF Slack as @Flynn. We have a fair amount of time left, because I deliberately went quickly through that, so we have a couple of options. One is that I can go find the presentation that talks in more detail about what Emissary is and what it does, or we can just do questions. So I guess raise your hand if you'd like the deeper dive into what Emissary is and does. All right, we'll do that. Oh, this is handy: you can't even see my screen while I'm rooting around hunting for that slide deck. Excellent. That is going to be impossible, so let me do this quickly. I think this is the only time my Macintosh did not automatically default to mirroring the display, which is great, except for the fact that I now want it to mirror the display.
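In other words, the defaults on the 4.x charts would look something like this. These flag names are hypothetical, made up here to illustrate the shape of the change rather than taken from the actual chart:

```yaml
# Hypothetical values.yaml excerpt for the CRD chart
crds:
  v1: false                # legacy v1 CRDs: opt-in only
  v2: false                # legacy v2 CRDs: opt-in only
conversionWebhook:
  enabled: false           # no conversion webhook unless you ask for it
```

An existing v2 installation would have to flip these back on explicitly, which is exactly why this counts as a breaking change.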
All right, so everybody can see that, right? These are the slides. Yeah, you can. Good. Excellent. So, Emissary-ingress is an API gateway, like I mentioned. It deals with a basic problem you run across when you're doing cloud native stuff: all of your services, all your workloads, are running in a cluster, and part of what the cluster does is prevent things outside the cluster from messing with things inside it. But of course you would like your users, who are outside the cluster, to be able to reach the workloads inside in a controlled way. That's what Emissary does. It is an open source, cloud native, developer-centric, self-service API gateway, like I mentioned; a CNCF incubating project powered by Envoy. Envoy is used as the data plane, meaning it wrangles all of your data; Emissary wrangles the Envoy. API gateways can do a bunch of different things. The most basic is traffic management: if we have a user named Jane who wants to get a quote, she ends up actually talking to Emissary, and Emissary routes the request over to some workload in the cluster to handle it. If we have a user named Mark who makes the same request, maybe Mark gets routed to a different pod, maybe not; it doesn't really matter. Another basic thing API gateways can do, which differentiates them from simple proxies, is make more intelligent routing decisions: if Jane wants to update a quote, that's okay, but if Mark wants to, that is not okay. So you have a fair amount of power at the API gateway around what routing can happen, what routing cannot happen, who can do what, things like that. There's also observability and rate limiting and resilience and all of that.
Some of the things I'll call out here: advanced load balancing, where Emissary knows how to load balance per request for HTTP and gRPC, rather than per connection, which can make a big difference; circuit breakers, retries, and timeouts, which are the resilience stuff; authentication and authorization, which are kind of the same thing from Emissary's perspective; and rate limiting. I think the most interesting thing to point out is that a lot of these overlap with what service meshes do as well. That's kind of intentional, because depending on the topology of your application, you will sometimes want to do certain things right at the edge, and sometimes deeper in the call stack. So whether a given feature happens in the API gateway or in the service mesh is very dependent on what exactly you're trying to do and how your application works. I think that's the really important stuff to cover for what it does. So: questions, comments, random heckling. Now's your chance. Yes. So I have a question about the future of Emissary versus Ambassador. Now it's splitting, but Ambassador is using the gateway version. My understanding, and I am not with Ambassador Labs anymore, so we're playing a little bit of telephone, but my understanding is that Ambassador Edge Stack 4 is based on Envoy Gateway rather than being based on Emissary. Ambassador Edge Stack 3 is still based on Emissary 3. So at version 4, there's a pretty clear split. Okay, which kind of leads me into my next question, about support for the Gateway API. From what I've looked at, I think the Gateway API is structured in a way that's unsuitable for the independent developer publishing mappings and such.
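Several of those features live directly on the route in Emissary's input language. A Mapping with resilience settings attached looks roughly like this; I'm writing the field names from memory, so treat the exact spellings as approximate and check the docs:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-resilient
spec:
  hostname: "*"
  prefix: /quote/
  service: quote
  timeout_ms: 3000             # overall request timeout
  retry_policy:
    retry_on: "5xx"            # retry failed upstream requests
    num_retries: 3
  load_balancer:
    policy: least_request      # balanced per request, not per connection
  circuit_breakers:
    - max_connections: 512
      max_pending_requests: 128
```

The same Mapping that defines the route carries its resilience policy, which is part of what keeps the model self-service for the application developer.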
But anyway. The way I summarized this at KubeCon in Chicago, in the Emissary update, was that Emissary is going to stick with the Emissary CRDs for the time being. Ambassador Edge Stack, I think, is going to Gateway API; I am not sure about that. But let me back up for a moment. I am involved with Gateway API; I'm actually one of the co-leads of the service mesh chunk of Gateway API, GAMMA, so it feels really weird for me to say things like this. But there are ways in which, at present, I think Gateway API is better suited to the mesh case, where the people configuring a service mesh are more likely to be a bit more expert with Kubernetes than Jane, the very busy application developer. So there are ways that I feel Gateway API is a better fit at the mesh layer, or for cluster management people, or serious infrastructure stuff, than for Jane, the very busy app developer. For that reason, at the moment, Emissary is going to stick with the Emissary CRDs. I would really like for that to change, and this is the subject of much ongoing discussion within Gateway API. For us, one of the key points was the auth filter design, where Emissary just works with a very simple, very straightforward protocol; you can do whatever you want. That allowed us to make some very cool decisions at the beginning of our journey, and I don't see that happening right now with the Gateway API. Yeah, it's interesting. Funny story, by the way: the ext_authz protocol you're talking about actually originated in Emissary, and then we got it wedged into Envoy. Yeah. That's all I have. Anybody else? While we're waiting for the microphone: I'll be up here afterward if you have other questions. Just do me a favor and don't slap me on the back, because I recently broke a collarbone.
Hey, I'm trying to determine what the use case would be for using this over a simpler Ingress controller. What's the point where you would look for something like this over something like just NGINX or Traefik? Well, NGINX and Traefik are not actually simpler to use. Let me phrase that slightly differently. Fundamentally, it comes back down to the input language again: running NGINX or Traefik or Emissary are all about the same in terms of operational complexity. The Ingress resource that Traefik uses is being phased out in favor of the Gateway API, and there are a lot, I mean a lot, of things about Ingress that end up being very problematic when you have more than one or two people trying to maintain the way routing works. NGINX is a little different in that regard, but I'm not going to dive too much into that, because I know less about the non-Ingress configuration of NGINX than about its Ingress configuration. Ultimately it comes down to the fact that, and again remember I'm biased about this, the Emissary input language is designed to be simpler to work with when you have a bunch of developers all trying to get stuff done at the same time, and I think it succeeds pretty well at that. Okay, thanks. Anybody else? Another one? Uh-oh. For us, it was all about the freedom that development teams had in managing their deployments, their mappings, and the structure of their clusters. It was completely self-service, built into the deployment: they shipped their Kubernetes manifests, they shipped their mappings, and it just worked, without somebody centrally having to manage all of that. That was the point. As an industry, we know really well how to develop with groups of ten developers.
We know a couple of ways of handling a hundred developers, and we know basically one way to handle a thousand, which is that you carve everything up into microservices and let everybody run in parallel. But the more sync points you put into that sort of parallel development, the worse it gets and the slower things get. So that was a lot of where we were starting from when we put this together. All right, going once, going twice. Okay, I think we're done. Come on up and find me if you have any other questions that you don't want to shout out. Much appreciated.