Hello, everyone. Thank you for coming. This is the maintainers track for Emissary-ingress. I am Hamza. I'm a senior engineer at Ambassador. I'm Flynn. I'm a technical evangelist for Buoyant. I was the original author of Emissary-ingress, lo these many years ago, before I went over to the dark side to work in marketing. I'm still a maintainer, though. And yeah, we're just going to do a quick intro and give a quick update on the current state of the world of where we are with the project. We'll talk a little bit about self-service configuration, and then we'll switch gears a bit and give our thoughts about Gateway API.

Quick show of hands: who here is new to Emissary-ingress? Anybody? All right, got a few hands. So I guess we're not just going to skip the intro. Oh, man. Emissary-ingress is an API gateway. So if you have a cluster full of microservices and you're trying to talk to them, your users are going to be outside your cluster in a great many cases. And you need something to allow them to reach into your cluster to see your services from outside the cluster, because one of the points of a cluster is to protect you from this exact thing happening. So you throw in an API gateway so that you can both allow this and also control it. Emissary-ingress is an open source, cloud native, developer-centric, self-service, opinionated API gateway. It's a CNCF incubating project. It is powered by Envoy. Specifically, we have Envoy wrangle all of the user's data, and then we let Emissary wrangle Envoy.

One of the core functions of an API gateway is traffic management. So if you have a user named Jane out in the world and she wants to make a request to the /quote/ endpoint, then Emissary can allow that and route it through to some microservice in the cluster. If another user named Mark requests exactly the same thing, the same thing will happen.
It might go to a different instance of that workload; it doesn't really matter. It might go to the same one. On the other hand, traffic management is not the only thing that goes on with API gateways. This is one of the differences between an API gateway and a simple proxy. You can also do things like: maybe Jane is allowed to update quotes, but Mark is not. So you can just look at that, figure out who's doing it, and block it rather than allowing it through. There are a lot of things in here: observability, rate limiting, resilience, helping out development. The astute observer will also notice that many of these things overlap with things that service meshes can do, which is okay, because you get to mix and match and decide what makes the most sense at what level for your specific application. Good example of that: if you are trying to do progressive delivery, then your API gateway can do canary rollouts or A/B testing or whatever, but it will only be able to do it right at the edge of the call stack, where a user outside the cluster comes in the very first time. Anything deeper in the cluster, you would need a service mesh for that, not an API gateway. So there's a lot of overlap and a lot of synergy going on there. Having both is nice. Of course, I would say that, because I now work for a service mesh company.

We kind of talked about this. I'm not really sure there's anything... circuit breakers are kind of cool. Automatic retries are very cool. Zero-downtime configuration. Honestly, I kind of feel like all this stuff is basically table stakes for a proper API gateway these days, and we will come back to that a little bit later in the presentation. The Gateway API guys in the front row are smirking because they kind of know where I'm going with that. Sorry. I think you were going to do where we are today. Over to Hamza. Thank you.

In terms of where we are with the project, what's the current state of the world?
In a nutshell, Emissary is a stable product. Most of the last slide really showed that: we've got a lot of the features, a lot of the table stakes, so there isn't really that much missing. But the project has expanded greatly. We have over 9,000 members on Slack. Our project has 4,200 GitHub stars and climbing. And as always, we appreciate the community, and we give many thanks for the support over the years.

Emissary as a project started back in like 2015; the first commits were in late 2016, and the first release was in early 2017. If anybody was using Emissary in 2017, I would like to apologize. It was pretty rough back then. Many of those rough edges have been smoothed out over the years. It's been a rough journey. We were also one of the first API gateways and ingress controllers for Kubernetes back then. We've gone through a lot of those growing pains as we were figuring things out, going through the journey alongside Kubernetes itself, in terms of all the current practices we take for granted today.

We're currently on major version 3, with 3.9 just around the corner. Most of the changes are just updates. We've upgraded Envoy to the latest, 1.27.2. Also coming up in this release, we're adding support for setting specific Envoy runtime flags in the Module resource, which is sort of the global configuration for Emissary. This is really to help deal with the recent HTTP/2 vulnerability that's been going around: you'll be able to set those Envoy flags so that you don't have to keep resetting them on restarts and things like that. We've also added support for returning RESOURCE_EXHAUSTED gRPC responses to clients when they get rate limited. And anyone who uses Emissary also knows about the apiext service and everything around that; as good table stakes, we're updating its minimum TLS version to 1.3, a very long overdue security upgrade.
But we do take security and bug fixes seriously. Over the past year, we've gotten a lot better in terms of keeping up to date with the dependencies. We usually try to keep Envoy within the latest two releases; we try to stay pretty close, and oftentimes pretty good, in terms of being close to the latest and greatest of Envoy. And so we're going to continue putting in the bug fixes, keeping things up to date, security patches, keeping Envoy up to date. And as always, if you have ideas for new features you want, or things you want to contribute, let us know, reach out to us, and we'll help figure out how to make that happen.

So now, as we said before, Envoy is a self-service, developer-centric... Emissary. Yeah, Emissary-ingress: opinionated configuration. Envoy is also technically self-service, but not really opinionated. Opinionated is debatable, but part of the reason we built Emissary-ingress was because we wanted to be opinionated on top of Envoy. Most of the reason, actually.

Yeah, so I want to talk about this every time. If you've ever seen some of our previous presentations, we hammer this thing home. So why do we talk about it? Well, the main part is that Kubernetes is a means, not an end. What that means is that when people are running Kubernetes, they're not running it just because. They're running it in service of other goals that they have. They might be supporting application developers, getting their applications out the door. They might be in service of business goals. Even if people are running Kubernetes for funsies, they're usually using it as a learning experience, to add a tool to the toolkit that they can use in other places. And so developers have goals beyond just Kubernetes. What we mean by self-service is: how do we achieve those goals faster? At the end of the day, a lot of developers just want to get their applications.
They just want to get them out the door. They want to be able to iterate quickly. Developer-centric is about: how do we make those goals easy to achieve? Especially for those who aren't really in that Kubernetes mindset. A lot of the things that those of us deep in Kubernetes think are straightforward, we take for granted; for a lot of people who are newer to Kubernetes and the entire ecosystem, it's just not in their wheelhouse. So how do we make things work in a language that they can really understand, so they can work through them and get things done quickly? And opinionated is about how we do it in a straightforward way, which is basically what I've been talking about: how do we really focus on getting things done quickly, in a language that developers will understand?

So how do we try to do that with Emissary? Well, if you look at the control plane, in terms of how you configure it, we split it all into separate resources. First, you have the Listener, which specifies what protocols and ports to listen on. You have a Host, which specifies your hostnames and any TLS configuration related to that. And then you have your Mapping resources, which basically set up your route table: you have a path that you want to expose your service on, and that will allow your service to be available to the outside world. You can do all of this with Ingress in one single resource, but when you combine everything into a single resource, then you have everyone basically running on the same resource and modifying that same resource, and over time that just ends up not being a really good user experience. And obviously, when you split things up into multiple resources, the natural question becomes: couldn't one person do all of this? And often they will, in smaller organizations. Yeah, maybe in the beginning, people will do all of this. You have a four-person start-up.
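As a rough sketch of that split — the resource names, hostname, and `quote` service here are hypothetical stand-ins, not from the talk's slides — the three Emissary resources might look something like this:

```yaml
# Listener: which ports and protocols Emissary accepts traffic on (ops-owned).
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: http-listener
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
# Host: which hostnames are served, and the TLS configuration (ops-owned).
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
spec:
  hostname: "example.com"
  tlsSecret:
    name: example-com-tls
---
# Mapping: one route-table entry — the developer-owned, self-service piece.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-mapping
spec:
  hostname: "example.com"
  prefix: /quote/
  service: quote
```

Because the Mapping is its own resource, a developer can apply or change just that last block without ever touching the Listener or Host.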
It's very common to see your application developer just own the whole thing. But one of the big advantages of being able to separate things into multiple resources is that our developer, Jane, can hand off all of this stuff to our ops person, Julian. So Jane can just handle the Mappings and getting her services exposed, and then Julian can deal with the TLS, the auth service, the rate limiting, and all of that, to keep the infrastructure in the cluster running.

Being able to split off those separations of concerns requires trust. Developers are empowered in the sense that they can just expose their applications and do things independently, without having the operations team or the infrastructure team as a bottleneck. You know, microservices 101. But it does go both ways, right? Developers have to trust the operations team to make sure the cluster is actually up and running. But the operations team needs to be able to trust the developers that whatever change they make doesn't bring down the entire cluster. For people who are fairly new to Kubernetes and this entire ecosystem, this might be a little bit uncomfortable. But it's a growing pain that's worth going through, because it turns out this is a really nice way of being able to separate work, and it's ultimately good in the long run. And just because it requires trust doesn't mean the trust has to be blind. You don't have to blindly trust people and give access to everything. You can build guardrails in place so that you have good, sane controls on how people interact with the cluster.
So, for example, you can use the tools of Kubernetes RBAC. There are tools like kubectl-sudo, so you can have reasonable defaults in terms of access, but then, in isolated cases, use elevated privileges to do more dangerous things. kubectl get makes it really easy to audit configurations, and there's really no reason why you can't let everyone do that. There's also a lot you can do with GitOps and infrastructure as code: being able to set up that model, audit things, and have nice control points in place for how applications get deployed. And there's a lot you can do in the CI/CD space in terms of setting up those kinds of workflows.

And now I'm going to hand over to talk about... He's going to make me do the hard part. ...our thoughts about Gateway API.

So, Gateway API. If you have not... okay, raise your hand if you've not run across Gateway API. There are no hands. You don't count — you've got two Gateway API maintainers here saying, "I've never heard of it, man." Gateway API is a standardized set of resources to deal with networking within your cluster, really. It started as a successor to the Ingress resource. If you have not been around Kubernetes for a while, you might not know that the Ingress resource showed up a very, very long time ago, promptly landed in Wild West annotation hell, and never got out of beta. And did I mention the Wild West annotation hell? It was a very weird sort of thing to work with. It was tricky. It didn't so much give you a lot of transferable ways to talk about ingress as give you a lot of ways to talk about ingress that might be able to do some things, or might not be able to do what you wanted. Very messy. Not able to validate anything. Gateway API.
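As one hypothetical guardrail along those RBAC lines — the namespace and role name are invented for illustration — a plain Kubernetes Role can let developers manage their own Mappings while keeping them read-only on the cluster-facing resources:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mapping-editor
  namespace: team-apps        # the developers' own namespace
rules:
# Developers may create and edit route Mappings in their namespace...
- apiGroups: ["getambassador.io"]
  resources: ["mappings"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# ...and read (but not change) Hosts and Listeners, so auditing stays easy.
- apiGroups: ["getambassador.io"]
  resources: ["hosts", "listeners"]
  verbs: ["get", "list", "watch"]
```

A RoleBinding would then attach this Role to the developer group, leaving TLS secrets and the global Module to the ops team's broader permissions.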
Actually, a little more background that you might find funny: with Emissary, in the very beginning, I actually looked at the Ingress resource and went, "oh my God, nobody can use this." So my initial idea was not to support it at all. Eventually, I think two or three years after that, somebody came along and basically beat us about the head and shoulders until we added support for it, which was kind of funny.

Gateway API is community-driven. It is role-oriented, in the sense that you heard Hamza talking about Jane and Julian. Jane is an application developer. She's very, very busy. To her, Kubernetes is nothing but friction, and she wants to focus on her business goals. Julian's her counterpart in ops. Jane is never, ever going to be interested in learning how to configure, I don't know, the different types of Amazon load balancers, or certificate rotation. And she's probably never going to want to go and figure out how certificates work in the first place. Julian's probably happy doing all those things. I've had actual ML engineers tell me to my face that the less they have to work with Kubernetes, the better. Yeah — we've heard that in many, many different situations.

In Gateway API, there are three roles right now: Ana, the application developer; Ian, the infrastructure provider; and Chihiro, the cluster operator. Ana and Jane line up very, very nicely with each other. It's a little messier once you talk about the other ones. But that role-oriented concept is very important both for Emissary and for the Gateway API. Gateway API can do HTTP and gRPC and stuff like that, which are honestly kind of table stakes. And it's pulling in a lot of learnings from Emissary and Contour and everyone else in the wider Kubernetes ecosystem. And it's all lovely, and we all get to come together and sing happy songs around the campfire, and it's a wonderful thing, and there's never any arguing in Gateway API. I also have a lovely bridge to sell you.
Gateway API in many cases looks kind of parallel to the Emissary-ingress resources. There's not a one-to-one correspondence with a Listener and a Host, but both APIs have this idea that there's one set of resources dealing with the infrastructure — telling your cluster what sort of connections should be coming in — and a different set of resources that talk about which chunk of the URL space goes to which services. If that made any sense, hopefully. And in both cases, that split we just talked about — the separation of concerns between the application developers and everybody else — is a big deal. It matters a lot, and both have been designed to support that separation of concerns.

Now, this is the part where the Gateway API maintainers go, "dude, really, come on." There are things, with respect to Emissary, that are very concerning about Gateway API. Rob, Shane — none of what I'm about to say is anything new to you two. The Gateway API is not as focused on the developer as the Emissary CRDs have been. Who's the Jane counterpart in Gateway API? Ana. Ana. So I should probably have changed this slide to say they're not as Ana-centric. In a lot of ways, Gateway API started off focusing on the lower-level roles, because for the people who were coming up with the Gateway API in the beginning, those were their roles. Emissary deliberately picked Jane — or Ana — as the place to focus, because that was the market we were going after. Because every single developer working in cloud native must solve the ingress problem. So we decided, okay, this is where we're going to go. At this point, I actually believe that Ana's role, Jane's role, is the most critical role in the Kubernetes ecosystem. Because if the application developers are not creating applications to be run on clusters, we have no reason to run the clusters at all. So this is a big concern of mine, even though it sounds like kind of a minor sort of thing.
I think this is getting better — I wanted to point that out in particular. I think there's a lot more work to be done, and I'm very grateful that Gateway API as a project is shifting and trying to do a better job with this. The other thing that bugs me about Gateway API is that it is not as expressive as the Emissary CRDs yet. And the word "yet" is very important there. But right now, there are a lot of things on that table-stakes slide that Gateway API cannot do — or maybe I should say cannot do in any standard way. There are extension mechanisms in Gateway API that would allow you to do these, but at that point, you're working ahead of the standardized part of the API, and that is both very challenging and very risky. This is another example of something that primarily gets at the Jane-centric part, or lack thereof.

These two are equivalent. I'm looking at them again because I want to make certain that I've gotten the equivalence in there. Ah — I did not get the equivalence in there. There are two more lines that need to be in the rules section, because that rule doesn't actually only match the quote prefix. So I missed two lines of text in the HTTPRoute. Maybe three. I think two. All right.

Let's highlight the lines in there that deal with things Jane actively cares about, versus things Jane has to type that she will view as friction. And there are more of the latter on the Gateway API side of the world. There should be one more highlighted line of each on the HTTPRoute. Again, this on the one hand seems kind of minor and petty. On the other hand, I can vouch for this: when we go through and talk to real application developers in the real world, they do not interpret this as minor and petty. They go, "what are you trying to do to us?" And the interesting thing is that even when we take this further on — wow, I made some other mistakes too. I didn't change the name of the backend on the fancy-quote thing.
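For a sense of what that comparison looks like — the slide's actual YAML isn't in the transcript, so the service and resource names here are invented — an Emissary Mapping and a roughly equivalent Gateway API HTTPRoute might be:

```yaml
# Emissary: one small resource, nearly all of it routing intent.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-mapping
spec:
  prefix: /quote/
  service: quote
---
# Gateway API: the same route, plus the parentRef and match plumbing.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: quote-route
spec:
  parentRefs:
  - name: example-gateway     # must point at a Gateway someone else owns
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /quote/        # without this match, the rule is not limited to /quote/
    backendRefs:
    - name: quote
      port: 80
```

The `matches` block is exactly the kind of thing that's easy to forget: leave it out and the rule matches all traffic on the Gateway, not just the quote prefix.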
This is why you should always have somebody who's a maintainer of the thing you're writing the code for check your slides before you show them off. But even in this case, where we decided to do a canary deployment — where 10% of the traffic that's directed to the quote path is now going to go to the fancy-quote service instead of the quote service; that's what's supposed to happen — we need four more lines in the HTTPRoute, and we need to change the second "quote" to "fancy-quote." If you count up these lines, you will still find that Jane is typing more stuff that she doesn't really care about with the HTTPRoute example. Again, it sounds kind of minor and petty, and when we talk to real application developers, we find out it's not.

So we can take some of this stuff and synthesize it as a bunch of lessons that apply in both cases, and there's been a certain amount of cross-pollination — although this slide says "Emissary to the Gateway API," we probably should have said "between Emissary and the Gateway API." That would have been much more fair; I apologize. The whole role-based design thing is really the only way to properly support the separation of concerns that permits the whole developer-centric thing to work. If you don't have that separation, you will end up getting the developers and the ops people bottlenecked on each other in the best case. In the worst case, you end up with them stepping on each other's toes and breaking things. So this is very, very important. This bit about trying to balance composability against ease of use is also a really big deal. The Gateway API is very composable. This is wonderful. It is not as easy to use if you're not a Kubernetes expert. That's less wonderful. Finally, as I said before, Jane is the most important role in this ecosystem, because she is the one who's giving us a reason to be doing all this in the first place.
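Sketching that canary step with the same invented names as before, both APIs can express a 90/10 split; the Emissary side adds one `weight` line on a second Mapping, while the HTTPRoute rule grows by a weighted backendRef block:

```yaml
# Emissary: a second Mapping with the same prefix and a traffic weight.
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: fancy-quote-canary
spec:
  prefix: /quote/
  service: fancy-quote
  weight: 10                  # 10% of /quote/ traffic goes to the canary
---
# Gateway API: weighted backendRefs inside the existing HTTPRoute rule.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: quote-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /quote/
    backendRefs:
    - name: quote
      port: 80
      weight: 90
    - name: fancy-quote
      port: 80
      weight: 10
```

In Emissary, the unweighted `quote` Mapping implicitly gets the remaining 90%; in the HTTPRoute, both weights are spelled out, which is part of the extra typing the talk is counting.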
Also because Julian is going to be able to adapt to things that are trickier than the things to which Jane will easily adapt. That also turns out to be a big deal. The end result of all this is that Emissary itself, in the near term anyway, is likely to continue focusing on its own input language rather than the Gateway API. Most of that is directly because we cannot use Gateway API to do all the things that Emissary can do right now. We can't think of a safe way to carry the existing user base of Emissary — which is pretty large — over into a Gateway API world without going way, way, way ahead of where Gateway API is as a standard right now. This is likely to change in the future. I'm kind of moving ahead because we are a little bit short on time here; I've already said all that stuff.

If you personally are sitting out there going, "I'm an Emissary user, and oh my God, I want to use Gateway API 1.0," come talk to us and we will figure out a way to make that happen. I'm going to be candid and warn you that that will be the Emissary maintainers figuring out how to support you in doing a lot of development. But yeah, that would make us happy. We would be fine doing that. And there's also stuff that Emissary, as a project, is doing to figure out how to move some of this forward with Gateway API. A lot of the Emissary maintainers are in fact off working with Envoy Gateway, which is all about Gateway API. So yeah, this could easily change in the future; Gateway API is continuing to evolve. We have a very vested interest here, and a lot of us are actively working on this.

If you want to try it out now: Envoy Gateway, as I just mentioned, is a thing you can go play with that is natively Gateway API. It is also powered by Envoy and is being worked on by lots of the Emissary-ingress maintainers.
I would probably recommend that you try it in staging, not in production, right now, because there's a bunch of stuff it can't do yet. Let's be candid. I have been told that there is somebody running it in production. I think there are some. That's very spooky to me. There are a lot of issues in Envoy Gateway that come from experiences in production, essentially. And to be clear, I'm not trying to diss Envoy Gateway as a project — I'm involved with it too. It's cool. But yeah, there's stuff where the project is still trying to figure out: okay, how do we do things like CORS if we're natively Gateway API? You can either do that with policy attachment, or you can do it as an HTTPRoute extension filter. Both of them are interesting. It's a fun challenge. We have a vested interest in Envoy Gateway — and, by extension, Gateway API — actually succeeding. There's still a lot more work to be done.

But yeah, just to quickly wrap up, since we are running short on time here. Emissary-ingress as a whole focuses on self-service and — it's missing here on the slide — an opinionated way of doing ingress, because it's a great way to let everyone get things done faster. This kind of model does take a bit of trust both ways, but if you can get over that hump, it works really well. And Emissary as a project will likely stay focused on Emissary's own input language, its own CRDs. But we are also very, very interested in Gateway API, and we want to see it succeed. Yeah. Thank you again for coming out. If you want to get involved or just chat with us, we're on our community Slack, and there's also the QR code if you want to grab that as well. That's the Slack QR code, I think, right? Yeah. And I think we have a few minutes for questions. We've got about five minutes. We should have taken five minutes more. Yeah. Any questions?

Yeah. So the question was: if I were to start off greenfield today, what would I use?
I assume you're talking about if I were going to start deploying an API gateway on a greenfield today, right? Okay. Yeah. Wow, way to put me on the spot. I regularly still tell people to use Emissary. And I am biased. Right. You need to recognize that a lot of the early decisions that went into Emissary-ingress fundamentally boiled down to decisions made that way because I personally thought they were a good idea, and the things that I think are a good idea haven't changed all that much since 2016. I still think it's a good idea to focus on the user. I still think it's a good idea to make things easy where they can be, and make them possible where they can't be easy. So yeah, a lot of that stuff has not changed, and so I still direct people towards Emissary. I think it depends on what the use case is. If you're looking for something stable and production-ready, I would say Emissary — but I'm quite biased in this regard too. But if you are willing to experiment a bit and grow through those rough edges, then I think Envoy Gateway is worth checking out. I'm also kind of biased. I tend not to think about what I would do if I weren't interested in production readiness.

Did you have a question as well? The Ingress resource was also about North-South, and the Gateway API started as North-South. So once again, I am biased, because I work for a service mesh company, and what we do is East-West. So of course you need both. You actually do — sorry, the question was: do you really need something managing North-South and East-West, or can you make do with one or the other? You must solve the North-South access problem. You don't have a choice if you're doing cloud native things, because your workloads will be running in a cluster whose purpose in life is to isolate traffic from reaching into the cluster, so you have to do North-South.
There was a fairly long time where I kind of found myself thinking that the East-West thing was overrated, and since then I have moved over and seen the light, by learning a lot more about what you can do with a service mesh. Given that we have service meshes simple enough that you can get good utility out of them even as a four-person start-up, I would say yeah, you should use a service mesh. The observability wins all by themselves are incredible, and having automatic mTLS is also really, really nice, even when you're just getting started. You should think of an API gateway and a service mesh as complementary things. As Flynn said, you have to solve the North-South problem — that's how you get traffic from the world into your cluster. So an API gateway or an ingress controller is a must, and there are a lot of benefits to having a service mesh that handles the East-West case. In real-world situations, when you're trying to debug things, you will often find that the observability from the North-South aspect alone is not actually enough to figure out what's going on. The observability from the service mesh alone makes it worth it. It's pretty amazing once you get used to that. Yeah. Anybody else? Follow-on? Go ahead.

I do not think Gateway API is production-ready. Sorry — a better way to phrase that is: the part of the Gateway API that is defined for Gateway API 1.0 is absolutely production-ready. But there are a lot of things for managing North-South traffic that are not part of that specification. That's why I say that. CORS and retries come immediately to mind as things where everybody wants them for an API gateway, and you can't yet talk about them in a standard way in Gateway API. Now, again, this is nothing you all haven't heard, and we're working on it in the Gateway API world, because everybody recognizes that it's very important. It takes time. Yes. I'm sorry, Rose — how would your thoughts change?
In an ideal world? It's going to be nice. I'm a little uncertain about the extent to which Jane is going to care greatly about some aspects of East-West traffic. Right? The routing aspect of East-West, I think she cares about a lot. Some of the other things, I think she just might fundamentally never, ever care about. I think the same thing about North-South traffic, too. I don't think Jane's ever going to care about certificates. I don't think she should have to. But it's critical — somebody has to. I'm also one of the GAMMA co-leads, which is working on the problem of how you adapt the Gateway API to do East-West as well. And I would agree that the idea that people have to do completely different things for North-South than for East-West is a problem. We should do something about that, which is why we did the whole GAMMA thing and are trying to do something about it. Okay, so we're getting the signal that we're out of time. But if you have more questions... Thank you very much. We'll be here. You can find us in lots of places.