All right. Hello, everyone. Welcome. Thanks for joining this session on Kubernetes-based API gateways. More specifically, I'll be talking a lot about Ambassador, but I'm going to take a journey to get there, because in the Kubernetes world there are a lot of different constructs you can leverage. I want to talk about the pros and cons of those different approaches and how something like Ambassador can help in your environment.

First, a quick introduction. My name is Steve Flanders. I'm head of product and experience at a stealth startup called Omnition. We're in the observability space, and we're one of the core maintainers of OpenCensus, now OpenTelemetry, along with Google and Microsoft, trying to provide an open standard as well as open source data collection for observability data. If you're interested in learning more about me, I have some social media links you can take a look at, or you can talk to me after this session.

In terms of agenda, I want to start with some quick background information. What is an API gateway? Why should you care? How does that play into the Kubernetes world? Then I'd like to introduce Ambassador, and finally I'd like to talk about different scenarios in which an API gateway can be beneficial. Some of those include edge routing, but this also expands into things like service mesh, which you might be interested in as well.

So let's start with some background. Why is this important? Well, a lot of people living in the cloud native world, or moving to it, come from the previous generation where they had a monolith. A monolith is this box of services or processes or threads or JVMs or whatever I'm running, all self-contained. There are actually traffic patterns inside it, but from a monitoring and observability standpoint, it's a single thing I need to worry about, and problems occur within it, so I can monitor things like metrics and logs pretty easily. Now we're moving to more of a microservices-based world, which basically means removing the outer shell. Instead of being threads or processes, they're dedicated services that talk to each other over the network. There's another change as well, though: that front-end service, the UI, how you access your application, is usually replaced by an API gateway. That's because you might have multiple microservices you need to communicate with. You want to route to different paths from the internet. You want to offload things like TLS and authentication. An API gateway is an extensible way of doing that in a microservices-based architecture.

Next, we have Kubernetes. We're at the Open Source Summit here, so you probably know what Kubernetes is, and I'm not going to drill into the specifics. But basically, a lot of lessons have been learned about how to deploy and orchestrate the automated deployment of microservices-based architectures and how to handle stateful and stateless services. Kubernetes uses YAML for configuration, which is very flexible and powerful and allows you to do a lot of things as code. And the constructs that Kubernetes provides can actually be leveraged by non-Kubernetes services as well. Something like Ambassador can do that, and I'll show it in a second.
Now, I do want to talk a little bit about how you handle deployment within your Kubernetes environment, just at a high level, because that introduces the concept of what an API gateway can provide. So let's assume you have Deployments or StatefulSets or DaemonSets. It doesn't matter which, but you want to expose them, usually in a highly available way. You want to be able to route between multiple replicas of, let's say, a Deployment. The Kubernetes world uses a Service for that. The thing is, by default, a Service does not publicly expose anything in that Deployment. It only exposes it within the Kubernetes cluster itself.

So the first question you might ask yourself is, how do I expose this to the internet so I can actually access it? Because if I'm deploying something in Kubernetes, I probably want it to be able to talk to things outside of Kubernetes, or at least I want people outside of Kubernetes to be able to talk to it. In the Kubernetes world, if you're using Services, there are multiple types of Services you can deploy. One of them is known as a LoadBalancer Service. A LoadBalancer Service integrates with some sort of external load balancer. Let's say I'm deploying in the AWS environment: this could deploy an AWS Elastic Load Balancer for me and make that Service publicly available. So for every Service where I specify the LoadBalancer type, I'd basically get, in this case, an AWS Elastic Load Balancer, and as a user I could route through one of these two ELBs. It's a very powerful, quick way to expose what I've deployed in Kubernetes to the internet so people can access it, and it's usually where a lot of people start. (There's a minimal YAML sketch of this below.)

This is powerful and flexible, but it has some shortcomings and limitations to be aware of. It's a one-to-one mapping of an AWS ELB to a Service, and that can get expensive very quickly. You have public IPs to deal with. Also, any logic I have, like TLS offloading, I have to handle for every single Service. If I want to introduce things like authentication, I need to do it for every single Service. If you only have a couple of services you're exposing, maybe that's okay, but as you get bigger in the microservices world, that typically doesn't scale very well.

So a better approach would be: what if I could expose a single load balancer and route traffic between the multiple Services I have in Kubernetes and the backend Deployments? This is where an API gateway like Ambassador can help. There are actually multiple ways to handle this in the Kubernetes world, and that's what I want to focus on in this talk. The first thing is this dotted line here, this edge, if you will. This is how we get access from outside of Kubernetes into Kubernetes. The next thing is whatever sits on that edge to actually make this happen for me. It goes by a bunch of different names. You might hear edge router. You might hear reverse proxy. You might hear Ingress controller, which is the Kubernetes terminology. But these are all different ways of routing traffic between multiple services and exposing them to the internet, and of doing it in a centralized way so that I can offload responsibilities like TLS, circuit breaking, retry logic, and the like.

So let's talk about the differences between a Kubernetes Ingress plus Ingress controller and a Kubernetes Service. First we'll look at an Ingress. An Ingress is really responsible for making HTTP or HTTPS services publicly available. That's its primary goal. But it also standardizes three basic constructs generically, no matter which backend you choose to handle the Ingress routing: load balancing policies, SSL termination, and name-based virtual hosting. The basic premise was that it shouldn't matter what is actually doing this for me. I want to define it in a standard way that Kubernetes understands, and that can be translated to the backend of my choice. How this plays out is that you define the Ingress in YAML, and the Ingress then tells an Ingress controller to configure itself based on what's been defined. That's the basic premise.

If you think about that for a little bit, and you know a little about Kubernetes in general, it sounds very much like what a Service does in front of another Service. And that's actually another way of thinking about it. Instead of using an Ingress and an Ingress controller, I could front multiple services with a single Service and, let's say, a Deployment behind it. Here the Service plays the role of the Ingress, and the Deployment plays the role of the Ingress controller. Very similar.

In Kubernetes there are four different types of Services. ExternalName may be less familiar because it's a more recent addition to Kubernetes. ClusterIP isn't really relevant here because it's for internal routing within the Kubernetes cluster. So it comes down to NodePort and LoadBalancer, the two ways to expose something outside of my Kubernetes environment. In the case of an API gateway like Ambassador, Ambassador actually follows this approach: you deploy it as a Deployment and front it with a Service. But there are other API gateways that use Ingress and an Ingress controller as the way of standardizing things. I'd like to talk more about those differences, and I'll do that in a minute.

But first, let's introduce Ambassador. So, with a basic understanding of API gateways, the move from monoliths to microservices-based architectures, and the constructs available in Kubernetes, we have Ambassador. What is Ambassador? Ambassador is an API gateway. It's open source, Kubernetes native, and built on the Envoy proxy. Envoy is a CNCF project, one of the graduated ones, that came out of Lyft. Envoy is an open source edge and service proxy designed for cloud native applications. We actually see a lot of people deploying Envoy in production environments today, but Envoy is a data plane, and something needs to control that data plane. In the case of Ambassador, Ambassador controls Envoy, and it's specifically trying to solve API gateway use cases.

So how does it work? Basically, I deploy Ambassador, which bundles the data plane and the control plane together in a Deployment. I can, of course, have multiple replicas and front them with a Service. And then, as a service owner, when I deploy a service into Kubernetes, I can define right in the manifest how I want it exposed to the internet.
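To make that first option concrete, here's a minimal sketch of a LoadBalancer Service. The names and ports are illustrative, not from the talk. On AWS, each Service like this gets its own ELB, which is exactly the one-to-one mapping just described.

```yaml
# One Service of type LoadBalancer: on AWS this provisions one ELB
# and exposes the Deployment's pods publicly.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web          # matches the pod labels of the Deployment
  ports:
  - port: 80          # port exposed by the load balancer
    targetPort: 8080  # port the container listens on
```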
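As a rough sketch of what that service-owner manifest can look like in Ambassador's annotation style (the service name, port, and route here are illustrative, not from the slides):

```yaml
# A plain Kubernetes Service whose annotation carries the Ambassador
# routing config; Ambassador watches the Kubernetes API for these.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: web_mapping
      prefix: /
      service: web.default
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```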
Using YAML definitions like that, Ambassador is able to pick the configuration up from the Kubernetes API and then tell Envoy how to configure itself through ADS, Envoy's aggregated discovery service. Envoy can then seamlessly take dynamic updates without any downtime. So I can reconfigure this environment without dropping any packets; I can do hot reloading. It's a very powerful model. And Envoy supports most of the features you need today: authentication, encryption, circuit breaking, retries, different load balancing policies. It has service discovery. It's very flexible, and it's actually used for service mesh as well; we'll talk more about service mesh in a second.

So what does Ambassador provide? Well, it focuses a lot on using Kubernetes constructs. That's where it started, so it's very much Kubernetes native. You can do things through Ambassador that you would normally do in Kubernetes. It has integration with Istio for service mesh; again, I'll talk about that in a minute. And there's actually a pretty good focus on development-type activities, like canary deployments. One of the features that Ambassador offers that a lot of other API gateways do not offer today is traffic shadowing, the ability to route traffic to multiple destinations when you want to do some amount of testing. From a feature perspective, it supports all the major features you'd expect from an API gateway: gRPC, HTTP/2, WebSockets. It has rate limiting and authentication. It has diagnostics. It provides observability through tracing and metrics. So it's very flexible from a deployment perspective.

The next big question is, how is it different from other API gateways? To answer that, you need to understand what categories of API gateways exist today. There are three primary types. You have hosted API gateways; for example, AWS has its own API gateway that you can use. You have more traditional API gateways like Kong, which is another CNCF project. And then you have Layer 7 proxies; NGINX and HAProxy, for example, are good examples of this.

Now, how is Ambassador different from them? Well, its focus is more about making sure you don't have vendor lock-in. Unlike, say, the Amazon API gateway, where I can't move outside of Amazon, Ambassador deploys in Kubernetes. Kubernetes is portable, so I can basically use Ambassador anywhere I use Kubernetes. That's kind of nice. It also doesn't have any dependency on external storage. One of the potential downsides of Kong is that you need to deploy something like Cassandra or Postgres to handle state when you want to scale it out. Ambassador, on the other hand, leverages etcd via Kubernetes itself, so the state lives in Kubernetes. Again, that makes it very portable and easy to use. It's self-service, so I can use normal Kubernetes constructs. CRDs are supported, annotations are supported, and I'll show that in a second. It leverages Envoy, which I actually think is pretty critical. Envoy has been battle tested. It's rock solid. It performs very well. It's written in C++. It's a graduated CNCF project. Instead of writing your own proxy or relying on something that isn't Envoy, today I would put my bets on Envoy; it's used in many production environments. And Ambassador does offer a fair number of development features. Traffic shadowing, for example, is not common in API gateways today, and Ambassador offers it.
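To show what shadowing looks like in practice, here's a hedged sketch of it as two mappings, written in the CRD format covered next. The service names are made up, and the exact fields may vary by Ambassador version.

```yaml
# Production mapping: live traffic is routed to app.default.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: app
spec:
  prefix: /
  service: app.default
---
# Shadow mapping: a copy of the same traffic is mirrored to a test
# build; responses from the shadow copy are discarded.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: app-shadow
spec:
  prefix: /
  service: app-canary.default
  shadow: true
```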
Now, from a configuration standpoint, it depends on the version of Ambassador you're running. If you're running a version older than 0.50, you should just upgrade. But if you haven't upgraded, then you have the option of using ConfigMaps or annotations. Basically, when Ambassador started, the most common way to handle configuration of a service in Kubernetes was through a ConfigMap. ConfigMaps are very powerful, but they have a few limitations. One is the size of the config: a ConfigMap is capped at one megabyte today. The second, more important one is that it's very hard to do dynamic updates. Typically you would update a ConfigMap and then go kick the pod. Well, that can result in downtime, which is not ideal. You can of course watch the file system, but that has some shortcomings as well. So ConfigMaps are not an ideal way to handle dynamic updates, when Kubernetes leverages etcd and you can actually watch for changes in etcd. That's a much more powerful way of handling it.

Ambassador realized this very early on and moved to the notion of annotations. In a YAML file, you can have as many annotations as you want. They're basically key-value pairs, and they allow you to define anything you care about. Ambassador used them to define its configuration. That was great, but annotations weren't meant to be used that way. Annotations are unstructured; they're meant to be key-value pairs or metadata that you care about. They weren't meant to handle configuration for a service or a deployment. So in version 0.50 and greater of Ambassador, support for ConfigMaps was deprecated, and Ambassador went all in on annotations. More recently, if you're running 0.70 or newer, it also supports Kubernetes CRDs. CRDs are actually the proper way to handle this: a known structure that you can follow, and a way to configure something and get proper updates. And what you see is that instead of new APIs being introduced to Kubernetes, the recommendation is to use CRDs, because you can extend the existing core functionality that Kubernetes provides.

Now, in terms of configuring Ambassador, there are three big things you're going to do: you're going to configure encryption, you're going to configure authentication, and you're going to configure mappings. I know there's a lot of code up here. These slides are posted on the site, so you can download them; you don't have to take screenshots if you don't want to. I've provided options for both annotations, which was the previous way to configure Ambassador, and CRDs, which is the new way. They're very, very similar; it's basically just moving out of the annotation and into the normal CRD format. One thing to note here: Ambassador supports configuration on a per-cluster basis. You'll see this ambassador_id. You can run multiple different clusters of Ambassador and configure them differently. There are actually valid use cases for this. I'm not going to cover them today, but I did a presentation on Ambassador at KubeCon China just a couple of months ago, and those slides, as well as a video recording, are online. If you're interested in learning more about why you'd run multiple different API gateways at the same time, I definitely recommend taking a look at that session.

So, from an encryption standpoint: if you have an API gateway, that means you're probably exposing it to the internet. And if you're exposing it to the internet, you probably want encryption.
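Here's a rough sketch of what turning on TLS can look like in the CRD style. It assumes a certificate already stored in a Kubernetes secret; the secret name ambassador-certs is the common convention, but treat the details as illustrative.

```yaml
# Assumes a TLS certificate stored as a standard Kubernetes secret, e.g.:
#   kubectl create secret tls ambassador-certs --cert=cert.pem --key=key.pem
apiVersion: getambassador.io/v1
kind: TLSContext
metadata:
  name: ambassador-tls
spec:
  hosts: ["*"]              # apply to all hosts
  secret: ambassador-certs  # the secret holding the cert and key
```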
So that's one of the first things you're going to do, turn on encryption, and it leverages Kubernetes secrets. That's very powerful. Again, it's all within Kubernetes itself: it's a normal construct, it's stored in etcd, and it's very easy to maintain.

The next thing is authentication. Now that I have an encrypted API gateway, I probably want to authenticate requests before I allow them to reach downstream services. How do I handle that? Out of the box, Ambassador does not provide authentication; it provides a way to specify something that handles authentication. So you can write your own authentication service, basically. There are open source projects you can use, or if you already have your own authentication, you can just point Ambassador at it. If you're interested in an API gateway with authentication built in, Ambassador does offer a pro version called Ambassador Pro. That does cost money, and one of the features of that version is native authentication. Ambassador comes from a company called Datawire, out of Cambridge, Massachusetts. Ambassador itself is open source, but if you want authentication out of the box, that's the pro version. In our case, we already had our own authentication service, so we just hooked it up and pointed Ambassador at it. Ambassador is smart enough to authenticate every single request before it calls downstream. You can, of course, tell it to bypass authentication if you don't want that approach.

The first two things I just showed, encryption and authentication, are kind of set-it-once-and-forget-it. They're the basic constructs to get an API gateway online, the initial configuration to secure it, and you probably won't touch them again unless you change your authentication service or need to rotate your cert. If you're rotating certs, use Let's Encrypt, which can automate the entire process for you; Ambassador supports that natively.

Where you spend most of your time configuring an API gateway like Ambassador is mappings. This is the logic of how I route once I receive traffic. Where am I going to send that data? What are the requirements to send it there? And if those requirements are not met, what am I going to do with it? Here I have a very basic example. Let's say I'm trying to hit omnition.io on the path slash. If I receive a request with that host and that prefix, then I'll send it down to web.default.svc. So in the default namespace, I have a service called web. Ambassador will say: okay, I received something for omnition.io; you want it to go to that web service deployed in the default namespace; I'll handle that. This is, of course, a very basic example. You can do port-based routing, path-based routing, header mappings, gRPC, WebSockets. There are a bunch of different options for mappings, but in general, this is where you spend most of your time when you configure an API gateway.
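In CRD form, the auth hookup and that mapping example might look roughly like this. The auth service address is illustrative; the mapping mirrors the omnition.io example above.

```yaml
# External authentication: Ambassador consults this service on every
# request before routing downstream (address illustrative).
apiVersion: getambassador.io/v1
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: auth.default:3000
  path_prefix: /extauth
---
# The mapping example: requests for omnition.io with prefix / are
# routed to the web service in the default namespace.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: web-mapping
spec:
  host: omnition.io
  prefix: /
  service: web.default
```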
So with that, let's talk about different scenarios. Now we know what an API gateway is and why it might be important. We talked a little bit about Kubernetes. I introduced Ambassador and gave some configuration information. Why, when, and how would I go ahead and deploy this? For an API gateway like Ambassador, the number one use case is edge routing, or what I like to call north-south traffic handling. It's: how am I getting data into my environment, and how am I sending it down to a downstream call? I want to handle things like TLS and encryption. I want to handle things like authentication. But at the end of the day, my goal is to expose one of the services in my Kubernetes cluster to the internet, and I want that traffic handling to be seamless, with retries and circuit breaking, and I need everything to work.

Ambassador's number one focus is on this use case, but it's not its only use case. You can actually deploy Ambassador, or any API gateway, as a DaemonSet. That's another way I've seen it done, where services always talk to the API gateway, even internally, to make mapping decisions before making another call. That's more of a hub-and-spoke model. But I would say a lot of the effort in Ambassador to date has gone into edge routing, which has some very unique challenges. gRPC and TLS are non-trivial; TLS in general is not a lot of fun. Authentication is a hard problem: how do I handle it, make sure it's working well, and make sure it's secure? In fact, there were some significant gRPC vulnerabilities, CVEs, released recently. Ambassador and Envoy have already patched them, so if you're running Envoy and Ambassador, I highly recommend you upgrade, because this would be a publicly exposed service.

An API gateway also tries to offload responsibilities beyond what, let's say, an Ingress controller can do today. An Ingress controller cares, if you remember, about load balancing policies. It cares about name-based virtual hosting. And it cares about TLS. Authentication wasn't mentioned. Circuit breaking wasn't mentioned. Retries weren't mentioned. There are other things you're going to want to do in an API gateway. Yes, you could do them with an Ingress controller if it's been extended to support those use cases, but you can't express them through the Ingress itself today, because that's not supported. The Kubernetes networking SIG is working on this, so it may change over time, but today Ingress has a very fixed scope, so it covers limited use cases.

So, edge routing. How would I deploy this in my environment? I just want to walk you through a quick example. Let's assume you have a monolith and you're moving to microservices. My recommendation would be to throw an API gateway in front of it immediately; whether it's Ambassador or something else doesn't matter. Put an API gateway there, because now I have flexibility in handling the traffic routing. As my developers start spinning microservices off the monolith, I can handle that automatically (there's a sketch of this below). It's not uncommon for these services to talk directly only to the monolith at first; that's a common approach when moving to a microservices-based architecture. But eventually you'll either start making calls directly to those microservices, or you'll build another app off to the side that you'll need to route to as well. This is where an API gateway becomes very powerful, because it can handle all these different routes and paths. Over time, your end goal is probably to get rid of the monolith entirely, or at least reduce its size, footprint, and scope. And if you're lucky enough not to have a monolith, or if you're starting from more of a green-field environment, you might be looking at a microservices-based architecture from day one.
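A minimal sketch of that migration pattern, with made-up names and prefixes: everything defaults to the monolith, and individual prefixes get peeled off to new microservices as they're extracted.

```yaml
# Catch-all: traffic still goes to the monolith by default.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: monolith
spec:
  prefix: /
  service: monolith.default
---
# A newly extracted microservice takes over one path; the more
# specific prefix takes precedence over the catch-all.
apiVersion: getambassador.io/v1
kind: Mapping
metadata:
  name: orders
spec:
  prefix: /orders/
  service: orders.default
```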
Again, an API gateway will give you a lot of flexibility to handle the routing demands of your environment. So I've talked about edge routing, and I mentioned there are other scenarios. I won't dig into all of those, but I do want to dig into service mesh, because this is where things get even more confusing. I'll pick on Istio, because usually when you hear about Kubernetes, Istio comes up a lot, but the same basic concept applies to Linkerd or any other service mesh as well.

Service mesh is a term that's been thrown around for a while. I've listed some of its key functionality here on the side: load balancing, gRPC, WebSockets, HTTP traffic, fine-grained control, mutual TLS, service-to-service communication, telemetry. Does that sound a lot like an API gateway to anyone? There's a lot of overlap here. So one of the common questions I get asked is: why would I deploy an API gateway if I have a service mesh, or why would I have a service mesh if I have an API gateway? This picture actually does a pretty good job of explaining why. In the case of Istio, I have Envoy again, that same Envoy proxy. This time, though, it's embedded with my services, usually as a sidecar deployed with my application. That Envoy needs to be controlled. In the case of Ambassador, the controller runs alongside Envoy; Ambassador controls Envoy directly within the same process. Here, the controller is outside: you see Mixer, Galley, Pilot, Citadel. Those services configure and tell the sidecar Envoys next to Service A and Service B what to do. The control plane has been extracted out.

But that isn't the primary difference. The primary difference is that a service mesh is typically trying to tackle what we like to call east-west traffic. This is traffic between my microservices. It doesn't care as much about north-south. That's not to say it can't do it; of course it can. But the initial focus is: I have two microservices within my Kubernetes cluster, I want them to talk, and I need to get observability data out of that. Maybe they're written in different languages. I need circuit breaking, retries, mutual TLS, and I don't want to write that for every one of my services. Especially if you're polyglot, with multiple languages: if you add circuit breaking to, let's say, Java, and you also have Go, you're going to have to go add it to Go as well. If I use a service mesh, I can do it all in Envoy. It's handled there, not within my application code. My application becomes simpler, and I don't have to solve it for every single language. Envoy handles it for me automatically, because it doesn't care what the backend language is; it communicates with it over the network.

To make this slightly more complicated, let's actually deploy this in Kubernetes. Here I have Istio deployed. Let's say I have three different services. They have Envoy as the data plane, and Istio is controlling it. Istio actually has another thing, recently released, called Istio Gateway; we'll talk about that in a second. If I want to route into this environment, one option is to deploy an API gateway on the edge, handle my north-south traffic there, and talk to the services. You don't typically talk to Istio directly, because Istio is the control plane. So I could say: route traffic to the query service, and if the query service needs to talk to, let's say, the billing service, it can do so through Envoy, via the Istio configuration.
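As a sketch of what that east-west policy can look like: say, circuit breaking on calls to the billing service, enforced by every sidecar regardless of the caller's language. This uses the Istio v1alpha3-era API, and the names and thresholds are illustrative.

```yaml
# A circuit breaker for the billing service, applied in the mesh
# rather than in application code.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: billing
spec:
  host: billing.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100  # cap queued requests per sidecar
    outlierDetection:
      consecutiveErrors: 5   # eject a backend after 5 straight errors
      interval: 30s          # how often hosts are scanned
      baseEjectionTime: 30s  # minimum ejection duration
```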
Now, Istio came out with something called Istio Gateway, which is basically an API gateway. It's essentially an Ingress controller where the routing policy is defined in Istio instead of in the Ingress, if that makes sense. The basic concept was: hey, Ingress isn't flexible enough for us, and it's statically defined for Kubernetes. I already have a notion of defining routes and paths in Istio, so why can't I use that instead? So they took the notion of an Ingress and said: the routing part of it, I'll handle in Istio Gateway itself, and now I can do north-south traffic routing into my environment.

So, could I use this to replace an API gateway? The answer is yes. Should I? The answer is, it depends. What does it depend on? Well, in order to deploy Istio Gateway, I have to have Istio. Basically, I have to have Istio everywhere. So unless I'm all in on service mesh already, and in production, this model doesn't work. I wouldn't deploy Istio Gateway without having the rest of my environment on Istio. What happens if I have to route to non-Istio services? What happens if there are multiple environments, one on Kubernetes and one not? It's a similar problem to what I described with the AWS API gateway: this locks me into the service mesh approach. That might be the approach you're looking to get to long-term, which is fine, but service mesh today is still a maturing market. I don't see a lot of people running it in production yet. I see a lot of people running API gateways today, Ingress controllers today, reverse proxies today. Service mesh is trying to tackle a whole bunch of problems, and it's still maturing as a product. So even if you use Istio and Istio Gateway, I would probably still recommend you have an API gateway in front of it, because that gives me flexibility again. Now I can choose where I want to route my traffic and how I want to handle it, and I can handle other scenarios beyond just the primary three that an Ingress controller supports today.
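For comparison, here's a rough sketch of the Istio Gateway approach (v1alpha3-era API; names are illustrative): the Gateway opens the edge, and a VirtualService attaches routing rules to it.

```yaml
# North-south entry point defined in Istio rather than in an Ingress.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway   # binds to Istio's ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts: ["*"]
---
# Routing rules are attached to the gateway via a VirtualService.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: query
spec:
  hosts: ["*"]
  gateways: ["public-gateway"]
  http:
  - match:
    - uri:
        prefix: /query
    route:
    - destination:
        host: query.default.svc.cluster.local
```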
All right, so now some general tips for you. First, whether you're looking at an API gateway, an Ingress controller, or a service mesh, you need to understand your business requirements. For example, does it need to be Kubernetes native? If you're all in on Kubernetes, that's probably a nice-to-have. If you're taking more of a hybrid approach, or you're outside of Kubernetes as well, that might not be as compelling for you. Do you need portability? Yes, no, maybe? Are you okay running your own persistent store? Is that a deal breaker for you? Are you comfortable running it? Could you run it in multiple different environments? Do you need configuration updates without downtime? That's usually a big one: can I tolerate downtime when restarting or applying a configuration change?

The next one is about doing your own research. My favorite example: just because it's written in Go doesn't mean you should use it. Go is fantastic, but being written in Go doesn't make something the right solution. You have to do your own research and your own testing, and you have to make sure it meets your use cases, because they could be different, and the benchmarks you see are not equivalent to your environment. Also, just because it's mentioned in the Kubernetes documentation doesn't mean you should use it. NGINX is listed in there a lot, and NGINX is great, but that doesn't mean it's the only solution out there. You need to do your own research to make sure you understand what's available and what you're trying to solve for.

The next one is: follow first principles. I like to keep it simple; that's probably the big takeaway from this. Mixing service mesh with an API gateway today, I think, is a mistake. Long-term, they may converge; I hope they do. I'm actually a big proponent of service mesh, and I really hope it takes off, but today it's still a maturing market. When you're trying to solve north-south and east-west traffic at the same time, and you're trying to handle all the different languages, tracing libraries, everything that's out there, it's a lot to bite off. An API gateway, a reverse proxy, whatever you want to call it, has a very defined scope and is trying to solve a very particular problem. So I like separating those out.

Follow the open source community. Things are changing very rapidly. There have to be at least a dozen open source API gateways I can think of off the top of my head right now. Ambassador is one of them. It's the one we decided to use, and we know a lot of other people who use it as well, but it's not the only one out there, and they're all great. The open source community is very active. And again, it comes down to your requirements.

And the final one: please don't write your own. There are so many API gateways out there today, and I've worked at big companies where they wrote their own API gateway because they thought they had their own unique requirements. I'm sorry, you don't. This is kind of a solved problem. Sure, there might be some nuances, but most of the API gateways today are extensible. Kong, for example, is extremely flexible: it has plugins, and you can write your own plugin, so even if the base functionality isn't enough for you, you can add to it and enhance it. The same applies to Ambassador. Most of these API gateways can be extended, and they provide a great foundation. There really should be no need for you to write your own, unless it's going to become your core IP. If you think that's what's going to distinguish you, or that's what you're trying to sell, fine. If not, focus on your core IP. Start with one of these open source projects. Extend them if you need to. Please contribute to them; make them better. But if this isn't your core IP, you shouldn't be investing all your effort in writing it from scratch.

I have a ton of references. Again, I've posted the deck online, so you don't have to take a picture if you don't want to. I've tried to break the references into three categories for you. I have Kubernetes documentation about Ingress and Ingress controllers that I thought would be relevant. I have some relevant Ambassador links, including some blog posts the Datawire folks did that I thought were relevant to this talk. I mentioned that I did a KubeCon China talk; that was really about cloud native architectures more broadly, not specific to Kubernetes, but it did drill into other use cases for an API gateway like Ambassador, so it might be worth taking a look at. And the Google doc here, the second one under Other, is really cool.
Someone put together a Google spreadsheet that compares all the different API gateways and their features. Why is that cool? Well, now I can actually check whether my business requirements are going to be met. But when I actually looked at it, pretty much every API gateway supports the exact same features. Maybe they're written in a different language. Maybe they have more GitHub stars. Maybe they're based on Envoy or not. But in terms of base functionality, they're almost all the same. gRPC, HTTP/2, WebSockets? Almost all of them support those. Authentication? Almost all of them support it. TLS? Almost all of them support it. A pro version that offers support? Almost all of them have one. So you'll end up looking at that sheet and wondering, well, how do I decide? It comes down to: what have you tested? What research have you done? What are you most comfortable with? So definitely take a look at that.

And with that, I think we have a couple of minutes where I can open it up for questions. Thank you so much.

So the question was around multi-cluster API routing and whether Ambassador supports it, based on the ambassador_id. The answer is yes, it supports it fully. The ambassador_id is for when you want to run distinct Ambassador clusters, not distinct Kubernetes clusters. One use case for that would be: let's say I have an ingest path and a query path, and maybe I want to use different gateway clusters for those so that ingest doesn't impact query. But as for Ambassador being able to route to multiple Kubernetes clusters, namespaces, or even non-Kubernetes backends, it fully supports all those configuration options. Other questions?

I have a question for you. How many people use an API gateway, reverse proxy, or Ingress controller in production today? About half. How many people use a service mesh in production today? In production? Nice. Usually I see very few hands. With large audiences, that's actually a great polling question. What you find is that everyone has a use case for something like a reverse proxy, an API gateway, or an Ingress controller. They may have use cases for a service mesh, but they haven't gotten there yet. I think that will change. I hope it changes, because service mesh is so powerful, especially in polyglot, microservices-based architectures. But it's going to take a little more time, I think. Other questions?

All right, that's all I have. I'm here afterwards, so if you have other questions, come up and ask me. Otherwise, thanks so much for joining.