Hello and welcome, my name is Leif. I'm going to share my screen to make it a little bit easier to see what we're going through today. We're going to go through a webinar consisting of mainly two parts. There's a PowerPoint part, which I think is legally mandated when you do webinars, and there's going to be a demo component as well where we're going to look at some things in real time. Popping over to the PowerPoint part to kick off quickly: we're going to be talking about API gateway use cases for Kubernetes today. This is not going to be a product presentation specifically; it's more of a technology presentation about general concepts. Looking at our agenda for today, we're going to start with an introduction to what an API gateway is, because still to this day there's a lot of confusion about that terminology. We're going to give a quick nod to API management and how that fits in with API gateway strategies. We're going to look at some core use cases for API gateways, both inside and outside of a Kubernetes environment. The demo component, of course, is going to look at API gateway use cases within Kubernetes specifically. And at the end, as already mentioned, we should have time for some Q&A. So yeah, my name is Leif Beaton, I'm the Global Channel Technical Lead for F5; my email address is on screen for those of you who want to jot that down. And now let's break into the gist of things. Let's start off with a quick poll, because we like to throw people into the deep end of the pool right from the get-go. The two questions are: where are you with Kubernetes? You'll find the alternatives on screen. And secondly, who are you? The poll should be on screen for you right now. I suppose I should answer as well, shouldn't I? I'll leave you a little bit of time to complete the poll. Okay, about 70... oh, about 80% have responded. It's very rare to see 100%, so I'll leave it to you to see where it tails off. Well, we're right at about a minute here, so I'm going to go ahead and end the poll for now. It looks like 40% are just getting started with Kubernetes, followed by 37% working in an organization using a hybrid of traditional and Kubernetes apps. As far as who we are, it looks like about 50% are admins or operators, followed by 38% developers. Excellent, so there's a nice spread, and we love to see that, of course. Now, it's not surprising to see a relatively high percentage of attendees being reasonably new to Kubernetes. Even though it's been around for years, it's been considered a bit of a beast to tame. In some regards I understand that; in others, it's not as complicated or mystical as it may appear on the face of it. That being said, it's always good to be able to explain these things to people while they're getting their feet wet, so I'm happy about that. With that in mind, let's pop over to this slide. So, what is an API gateway? "API gateway" as a term is being thrown around left, right, and center in the industry today, and for good reason: it solves a lot of problems. An API gateway is an essential tool for a modern architecture. But it's not a product; that's something we need to make clear from the get-go. An API gateway in itself isn't a specific product. It's a set of use cases that a number of different products can fulfill.
So an API gateway is very typically a reverse proxy and load balancer with a number of policies at play, as well as some additional functionality around the reverse proxy and load balancer: things like authorization control, and potentially authentication and access control. Single sign-on solutions could be a part of it, but certainly authorization through things like JWT tokens or strategies to that effect. So the API gateway isn't a specific product; it's a set of use cases. The API gateway will typically sit in a position similar to where you would see an ADC, an application delivery controller. One of the things that the API gateway should be able to do is facade routing. So what is facade routing? Well, if we're looking at the overview here, we have, for instance, our warehouse API. When we look at the URIs, we see that there are different paths at play: /api/warehouse/inventory, and likewise /api/warehouse/pricing, and so on and so forth. The client, the consumer of our API or APIs, shouldn't be concerned about where these APIs are, as in whether they sit on Kubernetes cluster one or Kubernetes cluster two, or outside of a Kubernetes cluster altogether. Nor should the consumer be concerned about how many replicas of a specific API we have. Do we have three inventory services, as indicated on the display here? Do we have three pricing services? Do we have 30? Do we have one? The client doesn't care, it shouldn't care, and it shouldn't know. All of this is the purpose of the API gateway. Also, the API gateway should be able to unify access to potentially several sources. So we have our warehousing API consisting of several sub-APIs here; we may have other APIs that reside entirely elsewhere, perhaps governed by an entirely different set of policies. All of these things should be taken care of by the API gateway. Returning to facade routing: a modern API gateway should also be able to expose several back ends as a single API. Let's say, for the sake of argument, that our /api/warehouse/inventory API is in fact gathering information from multiple different APIs on the back end. Multiple different APIs might be required to produce the response to the inventory request: it could be APIs dealing with specific product information, it could be APIs that are specifically just looking at how many items of something are in stock, and all of that gets tied together into one, presumably JSON, object that's returned to the API client. Essentially, when we're talking about an API or APIs in the plural, we're talking about a contractual relationship between a client and an endpoint. The API contract stipulates how a request should look, what format the request should have, whether there is a set of headers I need to see in the request in order to be able to fulfill it, et cetera. And inversely, the contract also stipulates how my response is going to look. Generally speaking, we're most often talking about RESTful APIs, which very typically use JSON as a data transport format. Is the JSON object going to follow a specific schema? That is very often the case, and if a request falls outside of that schema, then the API gateway would likely discard it and yield an error response of some format.
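To make that facade routing idea a little more concrete, here is a minimal sketch of what it could look like expressed as a plain Kubernetes Ingress resource. The host name and the inventory-svc and pricing-svc service names are assumptions made up for illustration; the point is simply that the client only ever sees one host, while the gateway decides which backend, and how many replicas of it, actually serves each path.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: warehouse-facade          # illustrative name
    spec:
      rules:
      - host: api.example.com         # the single host the client sees
        http:
          paths:
          - path: /api/warehouse/inventory
            pathType: Prefix
            backend:
              service:
                name: inventory-svc   # assumed service; replica count is invisible to the client
                port:
                  number: 80
          - path: /api/warehouse/pricing
            pathType: Prefix
            backend:
              service:
                name: pricing-svc     # assumed service
                port:
                  number: 80

The same shape carries over to richer gateway resources; what changes is how much policy you can attach to each route.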
So the API gateway fulfills a number of tasks that you also see in environments that don't classify themselves as API delivery systems. The API gateway shares a lot of DNA with traditional load balancers, with traditional application delivery controllers, with traditional Ingress controllers, and also with elements of a service mesh, which we'll talk about briefly in a later slide. So when would we use an API gateway? Well, typically the use cases sort into one of three verticals: resilience use cases, traffic management use cases, and (I have apparently forgotten to put myself on do not disturb, apologies for that) security use cases. Under resiliency, we see things like A/B testing, canary deployments, blue-green deployments, and the like. We also see protocol transformation; a very typical example here is a legacy API that speaks SOAP or XML, which you might want to transform to JSON bidirectionally, between the clients and the endpoints. Policies like rate limiting, et cetera, and also, of course, service discovery. Under the traffic management use cases, we're looking at method routing and matching, and request and response header and body manipulation, body manipulation of course being an edge case that we typically would like to architect ourselves away from, due to its relatively heavy-lifting approach to things. But nonetheless, pretty much anything that deals with layer seven of the OSI model would fit into that side of things. And then, of course, security use cases are in play as well. As I briefly hinted at: API schema enforcement, where the request body perhaps needs to conform to this schema and the response body needs to conform to that schema, and what do we do if anything falls outside of those cases? Authentication and authorization take place in this use case as well. Custom responses: a RESTful API call wouldn't necessarily adhere to the typical HTTP response codes, so you wouldn't necessarily see a 200 response in the API payload itself (you typically see that in the HTTP header), but you might want to see a different kind of response from an API endpoint. And, of course, these responses are fully customizable, and you'll find that different API endpoints may adhere to different standards there. TLS termination is also a very typical use case for the API gateway. As you can see, all of these elements individually fit into traditional application delivery controller mechanisms as well. So essentially, what I'm trying to say here is that an API gateway isn't a mystical, magical device that just happens to speak API natively. In its simplest definition, it's a reverse proxy with some additional functionality and policies at play. Let me see here real quick. That leads us into our second poll. Two questions again: how many APIs does your organization have, if any, and are your APIs internal or external? Let's give you some moments to have a look at that. We'll give people another second right here, but we're looking pretty good, right at about 70%. Yeah, it's rare that it goes much above 80, I'd say, and we've gotten some great participation today. Okay. For question number one, how many APIs does your organization have? It looks like we're on a nice little bell curve here: 25% between 11 and 50, the next highest being one to 10 at 22%, and then, a bit of a change here, more than 100 at 22%. Nice. And are your APIs external or internal? It looks like 65% said both.
Also nice. I mean, this shows that APIs have matured into the mainstream, right? If we had asked the same question not more than two, three, maybe four years ago, we would have seen a very, very different reality. And this is excellent, because APIs allow us to construct software in a very different way than we would have done traditionally. Traditionally, it was a monolithic approach to everything, which meant a lot of things really, but principally it meant that my ingenious piece of software, my ingenious function that does something specific, only fits within my monolith and only does things for me. With an API approach, all of a sudden it opens the possibility for functionality within our applications that was previously thought impossible. If I'm developing an application for, say, keeping track of my motorcycle rides or something of that nature, then without having that kind of expertise personally, I can very easily incorporate mapping capabilities using something like Google Maps or Bing Maps, simply consuming their APIs to put that functionality into my application and make it appear as part of a native overall application externally, which is brilliant. With regards to the numbers, the bulk of you would seem to have at least double-digit APIs, which is really interesting, because that's where things start getting, how to put it, I'm hesitant to use the word complex, but the complexity does increase with the number of APIs, in terms of how we route traffic to these things. In an unorchestrated environment, API management becomes a critical component at that stage, because our individual gateways (you'll typically end up with more than one) each have their specific configuration, and keeping track of what configuration goes where can become very, very tricky. The higher the number of APIs, the higher the complexity, of course, and more automated systems would be required to keep these things in check. Cool, well, thanks for the responses there. Moving along: when should we use an API gateway in Kubernetes? The eagle-eyed observer will note that this slide bears a striking resemblance to one we just looked at, the difference being, of course, that two of the elements here have been grayed out. The reason for that is that when using an API gateway outside of Kubernetes, you have the entire list available to you, but when you put an API gateway inside of the Kubernetes environment, these two elements, protocol transformation under resiliency and request/response manipulation under traffic management, should typically not be part of the equation. The reason is that both of these are quite computationally heavy; they're quite expensive from a computational point of view. Generally speaking, they're only required or necessary when you're dealing with legacy-type APIs, things like SOAP, for instance, which isn't particularly Kubernetes-friendly. Kubernetes was born when SOAP was already on a steady decline, so it's quite natural that these things are reasonably mutually exclusive. Is it impossible to run a SOAP-based API inside of Kubernetes? Of course not. Is it against typical patterns? Absolutely. You would generally not want to use these types of approaches when you put an API gateway inside of Kubernetes. That's not to say that they're unavailable to you; they are absolutely present so long as your tool, be that, for instance, NGINX Plus, has them available.
Then, yeah, you can use them with Kubernetes as well. The point here is that you generally shouldn't; you should think long and hard and try to architect yourself away from that kind of a solution. That brings us to where API gateway use cases can be achieved. We can very simplistically split it into three scenarios. Number one is at the front door, meaning outside of your Kubernetes cluster; this is where the rest of the world lives, so to speak. This makes sense if your policies need to be applied on a global scale, if you're dealing with multiple clusters, that kind of approach. Number two is at the edge, which is where the north-south traffic takes place between your Kubernetes environment and everything else. If you have policies that need to be applied to the entire cluster, this is where you would take care of that. Finally, number three, at the services level, where we would generally be talking about east-west or intra-service communication; that's policy enforcement on intra-service communication. So should service A, for instance, be allowed to talk directly to service B or service C, or should it have to go through somewhere else? Should it not be allowed to talk to it at all, et cetera? In terms of tool sets at these three locations, if you will: in scenario one, outside of the Kubernetes environment, that's typically taken care of by a load balancer, or more accurately, an application delivery controller. At the perimeter, where we're dealing with the north-south traffic, that would typically be taken care of by an Ingress controller, in Kubernetes lingo. Internally, for the services, for the intra-service communication, once we start talking about a certain level of complexity or a certain number of services, it would be beneficial to look at something like a service mesh to take care of that. That's because for the intra-service communication, if we're putting an API gateway, or anything really, in place to facilitate policy enforcement there, it would generally be deployed as a sidecar to the individual pods. Those sidecars, each and every one of them, will require their own unique configuration. There isn't a single configuration that you can push out to all of them. Well, I say that; I suppose you could technically do it, but it would be an exercise in futility, to put it mildly. So that is, from a bird's-eye perspective, what tool set or approach would fit best in each of those scenarios. So then, what type of tool is best for Kubernetes? The simple answer would be a Kubernetes-native tool. So what is a Kubernetes-native tool, what are we talking about there? Well, we're talking about tool sets that are typically specifically designed for Kubernetes. Generally speaking, they need to be able to communicate using YAML: receiving YAML instruction sets from Kubernetes, and ultimately converting that YAML instruction set into whatever internal structure they use, but they need to be able to speak it. Equally, they should be able to output YAML to Kubernetes if need be. Very typically, we would see that they're deployable using Kubernetes-friendly tools like Helm and the like (there's a small example of what that can look like right below), and they need to be able to communicate with standard Kubernetes tools like, for instance, kubectl. Cool. That leads us into the demo portion of today, which is perhaps the fun part. So I'm going to swap over to my environment here. I'm on a Kubernetes node right now; it's Kubernetes node number one.
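As a point of reference for that "deployable with Helm" remark, installing a Kubernetes-native tool such as the NGINX Ingress Controller typically looks something like the following sketch. The repository URL and chart name follow NGINX's public Helm repository; the release name and namespace are just examples, and your environment may differ.

    # Add NGINX's public Helm repository and install the Ingress Controller
    helm repo add nginx-stable https://helm.nginx.com/stable
    helm repo update
    helm install my-ingress nginx-stable/nginx-ingress \
        --namespace nginx-ingress --create-namespace

    # Verify with standard Kubernetes tooling
    kubectl get pods -n nginx-ingress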
This particular environment has three separate Kubernetes instances, or nodes: k8s1, 2, and 3. I'm simply working on this one because one is the first number; you could obviously do this from wherever. So what are we going to do here? Well, the environment is already set up for us in the interest of time. First, let's have a look at what we have in terms of an Ingress controller. If I run kubectl get pods in the nginx-ingress namespace, I see that I have, indeed, an NGINX Ingress Controller here. I can step into that Ingress controller to see what it actually is; I could, for that matter, execute commands from outside the Ingress controller as well. So now I'm inside of my Ingress controller, you see the prompt changed, and we can have a look at... okay, so that didn't take. We see that this Ingress controller is running on Debian 10, and we can have a look at what nginx version we're running here. And we see that this Ingress controller is based off an image running NGINX Plus. So this is an NGINX Plus-based Ingress controller, which is great news for us. Cool. So we have the Ingress controller up and running as is. I'm going to just do a couple of things here, two seconds, and there. If I go back here, the Ingress controller would be this fella here, right? That's the one we were looking at just now. So what else do we have to play around with? Let me take a quick look at what we have here. Yeah, I'm going to go into the folder, and we should have some YAML files available to us here. Yeah. The Ingress controller is set up with this cafe virtual server YAML, so let's have a look at what that one has in terms of instructions. We're using a VirtualServer resource. Traditionally, Ingress controllers were specified and configured using annotations, and you can still do that by all means. However, it's quite limited, because it means you're limited to the features that the Kubernetes project has specified as features of an Ingress controller, meaning you would leave a lot of features of, for instance, NGINX Plus on the table; you wouldn't be able to utilize them because the specifications weren't there for them in terms of annotations. To rectify that situation, custom annotations were introduced, which allowed us to add features from the data plane, from, say, NGINX Plus, into our Ingress controller without those being explicitly specified by the Kubernetes project. That's better. However, that was problematic in itself as well, for a number of reasons, one of them being that the custom annotations were typically specified globally, which made it very, very complicated for complex organizations to get exactly the traffic flow that they wanted. The VirtualServer kind allows for a much, much more flexible approach to specifying and configuring your Ingress controller, so we're using that by default. Yeah, so all of this is pretty self-explanatory. We're looking for a host name of cafe.example.com. We have some secrets defined, certificates and keys for TLS. We have some upstream services specified, we're exposing some ports, and then we're setting up some layer seven routing rules: path matching on /coffee over here and /tea, and what we will do with those. What I want you to pay attention to here is that we're doing some request method matching.
What we're doing is: if the request comes in as a POST, we are going to yield a 403 and reject that request. If it's not coming in as a POST, it will not fit this match; it'll bypass this entire conditional match, and for /coffee, if it isn't a POST, it'll be proxy-passed to the coffee deployment, or rather service, in this case. For the tea, there's nothing special there: the request comes in for tea, and it'll be proxy-passed to tea, simple as that. But this illustrates the element we talked about earlier, about method-based matching and routing. So let's go ahead and play around with that. Let's see, I'm just going to get the pods here again. So we have two pods here. If we look up here, let me have a quick look at that one. Yeah, there we go. So the cafe.yaml is specifying the deployments and the services, right? Here for the coffee deployment, we see we have two replicas; we set up a service for the coffee as well, and all of those things. And then we're setting up the deployment for tea with one replica and the service for tea as well. So when I was looking at the pods here, you'll note that I do indeed have the two expected pods, or replicas, of the coffee and the one of tea. The walkthrough one isn't relevant for this particular lab, so we can disregard that. Let's have a quick look at the Ingress controller again. There it is, indeed running, everything is fine. And it's of the LoadBalancer type; all of that's good. And let's have a quick look at the hosts file as well, to see that I'm actually resolving stuff here. There we go, it was hidden at the top: I have an entry in /etc/hosts for cafe.example.com. Cool. So that means I'm going to use a curl command here. Most of you will know exactly what this is doing, given the number of architects we have on the call and so on, but essentially I'm running a GET here. curl would do a GET by default, but I put the method in explicitly just so we can see it. I'm also using the insecure flag, and the reason for that is that it's a self-signed certificate for this fella, so curl would just yield errors if I didn't use the insecure bit. So what am I going to request? Well, I'm going to request cafe.example.com/coffee. We should expect to see a 200 response on this because it's not a POST, right? And indeed, that's exactly what we're seeing: HTTP 200, everything's fine on the /coffee URI. But what happens, I wonder, if I run a POST with some data? So it's the same request, but I'm now using the HTTP verb POST and I'm also including some data, an upload body. Other than that, it's the same request. So if everything's correct, I now expect to see a 403 Forbidden. And indeed, that's what we see. And the message "You are rejected", as we saw in the cafe virtual server YAML, that's this bit at play, right? The moment we saw a POST request, it fit into this conditional match and we yielded an HTTP 403 with a body of "You are rejected". So that is this element at play, working exactly the way it's supposed to. This is an illustration of how we can manipulate things at the Kubernetes Ingress controller side of the game, and you can do so much more than this, obviously: you can have policies at play, you can have web application firewalling, you can do rate limiting, connection limiting, all sorts of funky stuff. So again, most of the use cases of an API gateway can be facilitated simply through the NGINX Ingress Controller for Kubernetes, because of the strength of the VirtualServer type.
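For anyone following along without the screen share, the conditional match described above looks roughly like this inside a VirtualServer route, with the two curl calls shown after it. This is a sketch based on the NGINX Ingress Controller's documented VirtualServer fields rather than the exact file from the demo, so treat names like the coffee upstream and the request payload as assumptions.

    routes:
    - path: /coffee
      matches:
      - conditions:
        - variable: $request_method    # evaluate the HTTP method
          value: POST
        action:
          return:
            code: 403                  # reject POSTs outright
            body: "You are rejected"
      action:
        pass: coffee                   # anything that isn't a POST goes to the coffee upstream

    # The two test requests from the demo, roughly:
    curl --insecure -X GET  https://cafe.example.com/coffee                      # expect HTTP 200
    curl --insecure -X POST https://cafe.example.com/coffee -d '{"sugar": true}' # expect HTTP 403 "You are rejected"

Rate limiting, WAF, and similar controls mentioned above are declared in the same declarative spirit, typically through separate policy resources attached to the VirtualServer.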
But what about a service mesh, then? Hopefully at least some of you are aware that NGINX has positioned itself on the service mesh side of the game for a while now as well. So what can we do there? Well, we saw earlier, and I'm just going to run get pods again so we can have a look, that we have these fellas here, and these are just standard pods, nothing special about them. But I have NGINX Service Mesh installed on this system as well. And there's a command for the NGINX Service Mesh, similar to kubectl, that looks like this: nginx-meshctl. I'm just going to give it the parameter config to see how it's currently configured. So I get the configuration for the NGINX Service Mesh, and what I want to highlight is this bit here: the injection configuration, with disabled namespaces as an empty collection, and enabled namespaces as a collection consisting of a namespace called tea-cream. What does this mean? Well, it means that in the Kubernetes namespace tea-cream, any pod that spins up will automatically have the NGINX Service Mesh sidecar proxy injected into it. We could obviously take the opposite approach: if we wanted to auto-inject in all namespaces except one or two or three or four, we would leave enabled namespaces empty and just add the exceptions to the disabled namespaces instead. This is for auto-injection; you can also manually inject sidecars outside of this scope. I just wanted to illustrate that the tea-cream namespace will have auto-injection of sidecars. So what does that mean, then? Let's go and have a look at the pods in that namespace. I'm going to get pods, but I'm going to specify the namespace tea-cream. How does that look? Well, I have a few things here: I have tea version 1, tea version 2, and I have the tea-cream service. I also have an additional pod which is there for us to be able to do service-to-service communication a little bit more easily; I'm going to step into that in a moment. And you see two of two here. The reason you see two of two is because there are in fact two containers in the pod: the service itself and the auto-injected sidecar. That's why you have the two of two. So I'm going to go ahead and, where am I? Okay, yeah. I'm going to go there, and we should have some YAMLs here as well, and we do. Excellent. The one we're going to play around with here is the nsm.yaml. So what's in that? It looks like this; very simple. We're setting up a traffic split YAML file. What does that mean? Well, the traffic split is useful for all sorts of cool things. Mind you, we're now inside of the service mesh, right? So if we pop back over here and have a look, that means we're dealing with these fellas now. The traffic split allows you to do things like blue-green deployments, canary deployments, et cetera. You can specify: you have your version one of the tea service here, but you've recently developed version two and you're deploying that, and you can start to do a canary deployment, which is what we're going to do now, where you gradually increase the amount of traffic that you're sending over to version two of the service. The idea here being that you're obviously doing testing of your version two of the tea service at several tiers before it comes close to the production environment.
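For reference, a weighted traffic split for the NGINX Service Mesh is declared with the SMI TrafficSplit resource, and a bare-bones sketch of one looks roughly like this. The namespace, service names, and apiVersion are assumptions for illustration and may not match the demo's nsm.yaml exactly; the cookie condition discussed in a moment is layered on top of a split like this by referencing an HTTPRouteGroup from it.

    apiVersion: split.smi-spec.io/v1alpha3
    kind: TrafficSplit
    metadata:
      name: tea-split                 # illustrative name
      namespace: tea-cream            # assumed namespace from the demo
    spec:
      service: tea-svc                # the root service that clients address
      backends:
      - service: tea-v1               # current version keeps half the traffic
        weight: 50
      - service: tea-v2               # new version receives the other half
        weight: 50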
But once you introduce it into the production environment, it's typically a wise decision to gradually move traffic over to it, to see if that increases the pressure on the help desk or whatever happens. Obviously, when you're doing this, you'll log the output of the service and so on and so forth, to see if any errors occur as this happens. But anyway, we have these two services, and we're currently weighting them 50-50, just to illustrate that we're distributing traffic across them. But we're only doing that if we have a conditional match here: we want specifically to find a cookie with a key-value pair of version=beta. If that happens, we're going to do the split; if we don't see this cookie in the request, we're simply going to route all the traffic to tea version one, and it's business as usual. I can't remember if I applied this YAML file before we started this morning; I'm just going to do it again, it doesn't matter. Yeah, I had, so nothing changed; everything is exactly as we were expecting it to be. So again, just to remind ourselves, how does our distribution of pods look here? We have tea version one, tea version two, and the front-end service. Cool. So I'm going to go ahead and step into the sleep container to do service-to-service communication. That's this fella here, the top one. Not container, pod would be the more appropriate naming convention here. I'm simply going to step into the interactive terminal of that pod in the namespace, and there we go. You see again that our prompt changed; we're now inside of the pod. First thing I'm going to do is simply curl the tea-cream service directly, so that's the front-end service there. And I do that by just curling; it should be the tea-cream service on port 80. And we see we get a response, fine. I'm getting that specifically from the tea-cream-fd7, et cetera, server. And if I run this request again and again, nothing changes; I'm getting the response from exactly the same server. However, if I, and I'll just copy this command, run this command and into the curl request I add a cookie of version=beta, we should see a different behavior. And indeed we do. Sorry, there we go, I have another command I made this morning before the session that will show this a little more easily. There. So you see, we're now seeing load balancing, if you will, between the two, because we have the 50-50 weighting. Now, I didn't show you this as we were going through the configurations earlier, but the load balancing method implemented here is least time. That means it won't be exactly 50-50, as it would be if we were utilizing round robin, but it will be somewhere in the vicinity of 50-50, assuming that the services are performing in or around the same way. "Leif, did we lose your audio?" My apologies, thank you for keeping me honest. Yeah, of course, I was just yammering away here. Yeah, okay. So what I have been talking to myself about for the last 20 seconds is: when can a non-Kubernetes-native tool be appropriate? Well, first let's recap. A Kubernetes-native tool can speak YAML natively, can typically be deployed with Kubernetes-friendly tools like Helm, and so on and so forth, and all of these good things. A non-native Kubernetes tool can also be appropriate for use within Kubernetes, and we'll get to that right after the quick recap of those two curl calls below.
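For completeness, the two service-to-service requests from the demo, run from inside the sleep pod, look roughly like this. The tea-cream host name is an assumption based on the service name mentioned above; the cookie syntax is standard curl.

    # Plain request: without the cookie, the response comes from the same backend every time
    curl http://tea-cream

    # Request carrying the beta cookie: responses now alternate between the two tea versions,
    # roughly 50-50 given the weighting and the least-time load balancing method
    curl --cookie "version=beta" http://tea-cream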
For instance, when we're talking about Kubernetes Ingress controllers from an NGINX perspective, we're talking about our two: one based on NGINX Open Source, and of course the counterpart based on NGINX Plus. These are Kubernetes-native, but only by virtue of the Kubernetes Ingress Controller component, which is a module that sits between the Kubernetes API and our configuration and APIs on the NGINX side of things; holistically, together, they present themselves as a Kubernetes-native tool. Take away the Kubernetes Ingress Controller component and it's no longer a Kubernetes-native tool, but it can still be appropriate for use in a Kubernetes environment. And a lot of other tools behave similarly. Our entire data plane portfolio is super, super useful in Kubernetes and absolutely fits neatly within a Kubernetes strategy, not least because they're so compact, so small in terms of storage size, which makes them very, very quick to spin up and convenient from that perspective. So you could absolutely use, for instance, NGINX Plus functionality as an API gateway inside of Kubernetes without deploying it as an Ingress controller. It might perhaps be micro-gateways sitting in front of a subset of your organization that needs certain policies at play. Perhaps you want to expose that API gateway to a specific team, so they can self-govern how traffic should be routed to applications or services they are responsible for. That pretty much concludes what I had prepared for you today. So if we have any pressing questions, then we can move along to those. I don't think there are any open questions in the Q&A at the moment; let me just check if there are any in the chat. No, it looks fine. Well, if you're anything like me, you may find yourself an hour from now, or a day from now, or a week from now thinking, why didn't I ask that question? If that happens, please feel free to reach out to us; we're easy to find. Again, my name is Leif Beaton, you'll find me on the NGINX web pages, and feel free to drop me an email or whatever may be the case. Okay, wonderful. Thank you so much for your time today, Leif. It was a pleasure having you here, and we hope to see everyone back at a future LF Live webinar. For now, we will say goodbye, and this recording will be available up on YouTube later today. Thanks, everyone.