Hi, everybody. Thanks for being here for this talk. I'm Nicolas Frankel. I've been a developer for about 17 years, and a couple of years ago I decided to become a developer advocate. The reason I proposed this talk is that there are so many ways to access Kubernetes pods that I wanted to understand them myself. And when I try to understand something, I dig a little deeper, and then I try to pass on what I learned. This is a back-to-basics talk. I won't talk about deep networking stuff: no iptables, nothing like that. I want to talk about things you can actually use in your day-to-day life. If you were expecting all the nitty-gritty details, you can leave now, because you will be super disappointed. I assume that everybody here is using Kubernetes and that you probably know what a pod is. Just one thing: most people think of a pod as being like a container, but a pod can contain multiple containers. For this talk it's not essential, but the one-to-one mapping is not really right. I will try to demo everything I tell you, so I've created a Kubernetes cluster. I always avoid using cloud stuff, because it can be, well... Oh, you don't see anything. That's interesting. And I need to switch very, very fast. I switched too fast. Display settings, and then I want mirroring, and I cannot use it. That's really fine. Help. And how did you solve it? Yeah, OK. Let's try it. Perfect. OK, mirror. Amazing, thanks a lot. You earned chocolate. I'm not kidding: Swiss chocolate. OK, perfect, thanks. Anyway, now I have it. What I already did is create a kind cluster, and then I loaded the image I will use throughout this presentation. Then, if I get the nodes (I also aliased kubectl to k, because I'm super lazy), we can see the control plane and the two worker nodes. That will be my starting point. And the next step is... no, I already went through this. Deploy a pod. I guess everybody knows how to do that.
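The setup described above (a kind cluster with the demo image preloaded and kubectl aliased to k) might look roughly like this; the node layout matches the control plane and two workers shown, but the config details and image name are assumptions (the talk later uses NGINX):

```shell
# Minimal kind config matching the nodes shown: one control plane,
# two workers. Everything beyond the node roles is an assumption.
cat <<EOF >kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config kind-config.yaml

# Preload the demo image into the kind nodes so pods start fast.
kind load docker-image nginx:latest

# Lazy alias, then list the nodes.
alias k=kubectl
k get nodes
```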
I will do it through a deployment. I have my script; I'm sorry, this is supposed to be live coding, but at some point I don't want to waste your time. Then I get the pods, and I can watch the pod being deployed. And it's already deployed, that's cool. I've deployed one pod. Woo-hoo, amazing. At this point, the question is: what's the IP of this pod? We can ask Kubernetes for its IP. Can everybody see it, or should I make it a bit bigger? A bit bigger. Let's make it a bit bigger. Yeah, you're welcome. So here I'm getting its IP, and amazing, I've got the IP. But if I try to ping this IP, as you can imagine, it won't work, because it's an internal IP. So here we went through Kubernetes to get the IP, but we can also ask the pod itself for its own IP through the hostname command. We can go onto the pod and ask for its hostname, and of course we get the same IP. That's normal. Still, we cannot access it. The thing is, it's an internal IP, and it's inaccessible from outside the cluster. The next step is to realize that internal IPs are not stable. So even if we found a way to reach those IPs, we couldn't rely on them. Let's try a little experiment: let's kill the pod and let a new one be created. Not by us, but by Kubernetes, since it's a deployment. So I get the deployment: there is a deployment, and the pod is attached to it. We can kill this pod, and Kubernetes will create a new one. And now, presto, we've got a new pod running. The fun part at this point is that it has a different IP: before, the IP ended in .1.2, and now it ends in .2.2. So even if we got hold of a pod's IP, it's not really useful, because pods come and go. As you know, pods are ephemeral: if there is resource contention, if they stop responding, whatever, Kubernetes kills them.
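The sequence above (deploy, read the pod IP two ways, delete the pod, see the replacement come up with a new IP) can be sketched as follows; resource names and labels are illustrative:

```shell
# One-replica deployment of the demo image.
k create deployment nginx --image=nginx
k get pods --watch

# Ask Kubernetes for the pod IP...
k get pods -o wide
# ...or ask the pod itself.
k exec deploy/nginx -- hostname -i

# Kill the pod; the deployment replaces it, and the new pod
# comes up with a different (still internal, still unstable) IP.
k delete pod -l app=nginx
k get pods -o wide
```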
And if they are attached to a deployment or a ReplicaSet, it will spawn new ones, but with different IPs. So the next step is to create a service. A service is the way to get a stable interface over pods. The idea is that you get a stable IP, and then you can do whatever you want with it. If a pod goes away and is replaced, because it's part of a deployment, whatever, the service still points at the replacement. And the mapping logic between the stable interface and the pods is not something you need to write yourself: Kubernetes handles it. Let's do that. I will expose the deployment through a service, and I will use a ClusterIP. Now, if I look at my script (I should fold those scripts, otherwise I will get lost, and I need to make it big)... OK, now I have my ClusterIP. This address is stable: if I kill the pod, I can still access it through this address. But guess what? This address is still internal. If I try to ping this IP, it still doesn't work. Also, I'm using a Mac here, which means that even when everything runs as expected, I have an additional abstraction layer between the cluster and myself; on Linux you would probably have fewer problems. OK, so now I have my service. What I should do next is use a NodePort. Finally, at this point, we can access the pod through a NodePort. This is the idea: we make a request to any node, that's the point, and then we get through the service, and the service points to the pod. How do we do it? I will delete everything and move on to NodePorts. And I have created a manifest for that, because now it's easier not to go through the command line but to use a dedicated manifest. So I have a deployment, which is exactly what I did before, with one replica. Oh yeah, thanks, thanks a lot. Don't apologize, that's good feedback. Big enough now? OK, perfect. So that's exactly what I did before: a deployment with one replica, the same NGINX image.
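A ClusterIP service as described can be created in one line; the stable virtual IP it gets is still reachable only from inside the cluster, which a throwaway curl pod can demonstrate (image and names are illustrative):

```shell
# Stable, cluster-internal virtual IP in front of the pods.
k expose deployment nginx --port=80 --type=ClusterIP
k get service nginx   # CLUSTER-IP stays the same across pod restarts

# Reachable from inside the cluster only, e.g. via a temporary pod:
k run tmp --rm -it --image=curlimages/curl --restart=Never -- \
  curl http://nginx
```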
The only thing I'll be adding is a configuration, so that NGINX reports its hostname and server address. And then I have this service, and now it's not a ClusterIP, it's a NodePort. The Service of type NodePort is the most basic building block for accessing a pod from outside Kubernetes. So let's apply it: apply -f deployment. OK, get pods. It's super fast. Magnificent. Now we can get the services, and here I have the node port. As I mentioned, I have a layer between the cluster and my OS, so since I created the cluster with kind in the first place, I had to be a bit smart and explicitly export the port from the container to the host OS. If you are using a standard Linux distribution, that shouldn't be necessary. So now I can finally curl localhost 3800, and I reach my pod. Amazing! I feel you are not amazed. We finally got to the pod. Well, that's all well and good, but let's deploy a new pod. We will just increase the size of our deployment to see what happens when we've got two pods, because the Kubernetes semantics are not that great in some regards. I will do this directly. Thanks; I think after 50 minutes I will get it right. OK, now I have my two pods running, and we can make a couple of requests. As you can see, if we curl repeatedly, I get load balancing between my two pods. Which is funny, because the next object I will talk about is called LoadBalancer, and you don't need the LoadBalancer to do load balancing across pods. For me, that's a very strange object, because, as I mentioned, you don't need it for load balancing; I think the name is really badly chosen. But you use the LoadBalancer object when, at least at the time, you needed to do additional stuff, and in general you use it on cloud providers. It's basically an empty object: you declare your LoadBalancer object on the cloud provider, it provisions whatever load balancer it offers, and perfect. The fun part: you obviously cannot use it like that on bare metal.
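A NodePort service like the one applied above might look like this; the selector and port numbers are assumptions, and the commented kind mapping is what the speaker means by exporting the port from the container to the OS (the talk curls localhost 3800, so the host-side port differs):

```shell
# Service of type NodePort: Kubernetes allocates a port in
# 30000-32767 unless nodePort pins it. All values are illustrative.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
EOF

# On kind, the node port must additionally be mapped to the host
# in the cluster config at creation time, e.g.:
#   nodes:
#     - role: worker
#       extraPortMappings:
#         - containerPort: 30080
#           hostPort: 3800
```

On a plain Linux node you would simply curl the node's IP on the node port; the extra mapping is only needed because the kind "nodes" are themselves containers.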
But there is an implementation called MetalLB that's supposed to work on bare metal. Because of my Mac, or my poor networking skills, I couldn't make it work, so I won't demo anything. Just know that this LoadBalancer object exists and that you can use it on cloud providers, but you don't need it to do load balancing, obviously. Historically, we are talking first about load balancing and then about routing, and that's exactly the idea behind API gateways. At the beginning of the internet, first we had load balancing, and then we had routing. Load balancing is between nodes of the same type, and routing is between nodes of different types. So here is another step in the evolution: now we say, hey, we want some routing. We want to route some requests to some pods and other requests to other pods. And here, again, we have an object that is an abstraction, with many, many different implementations. For load balancing you don't need the LoadBalancer object, but for routing you need the Ingress object, or something I will show you afterwards. As I mentioned, it's an abstraction, and you've got different implementations; I've mentioned a few you can use. Just a few words: I work on the Apache APISIX project. Basically, it's an API gateway. It's an Apache project, managed by the Apache Foundation, and it's built on a very simple architecture; everything is open source. It's built on NGINX and OpenResty, and then you've got a couple of out-of-the-box Lua plugins, and you can write your own. So routing is a bit different: you need an ingress, and at this point it becomes much, much more complex. I will have two pods, one called left, one called right. Here I only show you the left path, because otherwise it would be too complicated, but the right path would be the same. You enter through a node, and the node directs you to the API gateway.
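The LoadBalancer object itself is declared like any other service; what actually provisions the balancer is the environment, which is why on bare metal it stays pending without something like MetalLB. A minimal sketch, with illustrative names:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
EOF

# Without a cloud controller or MetalLB, the external address
# never resolves:
kubectl get service nginx   # EXTERNAL-IP: <pending>
```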
Here, in this case, the APISIX gateway. It will direct you to the right service, and from there to the right pod; and likewise for the left path. So you will have the service and the pod of your ingress, plus the service and the pod of your own software. Let's see how it works; if you have any question, feel free to ask. I delete everything, I don't need it anymore, and I will move a bit further. Now I have additional objects. Sorry, first I will install APISIX. Here are the Helm values that I will need. At this point, this is proprietary stuff: every ingress has its own configuration. Here I will be using APISIX; if you use another ingress, you will need another one. NGINX has one, and so on. OK, I will helm-install it, because I'm lazy. And this is the fun part: we need to wait a bit, because it takes some time. Perfect. Again, I will follow the script, fewer issues that way. Let's see the services in all namespaces. It's too big, but you can see that I've created a new namespace, an ingress-apisix namespace. Then you've got the gateway itself, which is basically Apache APISIX. APISIX relies on storage, so there must be a couple of nodes for that; etcd is the one used. Etcd is the same key-value store that Kubernetes itself uses. And then you've got the ingress controller. The ingress controller is meant to reconcile what you want with reality, or the other way around: you declare what you want, the APISIX ingress controller checks the current state, says "oh, that's not what you want," and then issues the commands to make it so. At this point, normally... yes, of course, the namespace is ingress-apisix. So I need to wait a bit; I hope it's working. But as you can see here, etcd is not up yet, so we need to wait until etcd is initialized. It has three pods, because three is the magic number.
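The Helm install sketched here uses the official APISIX chart repository; the exact values the speaker used are not shown on screen, so the flags below are assumptions:

```shell
# Official Apache APISIX Helm chart repository.
helm repo add apisix https://charts.apiseven.com
helm repo update

# Gateway + etcd + ingress controller in a dedicated namespace.
helm install apisix apisix/apisix \
  --namespace ingress-apisix --create-namespace \
  --set ingress-controller.enabled=true

# Wait for the gateway, the three etcd pods, and the controller.
kubectl get pods --namespace ingress-apisix --watch
```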
There should always be an odd number of them (I always mix up "even" and "odd" in English): one, three, five, whatever. And one is not enough, because if it dies, well, it's not very safe. So three is the magic number. I hope it works now. No. That's the fun part of demos. So let's see: k describe pod... oops, k logs... nope. Yeah, of course. Uh-oh: "container apisix in pod is waiting to start: pod initializing". So here we can see that etcd is not starting, which is not great for the rest of my demos. You want to play with fire, you get burned, of course. k logs in namespace ingress-apisix... OK, it just takes a bit of time: all the etcd pods have to start, then they see each other, then they form the cluster. Meanwhile, perhaps I can take some questions; otherwise it would be stupid to just wait. What do I want to tell you next? Oh yes, I can tell you this: once I've created the ingress, I need to create the routes. The problem is that routes are not standalone abstractions. The routes are either part of the Ingress object itself, or they are dedicated objects of a different, proprietary type. So there are two ways to create them. The first is inside the Ingress object, which I find... nope, where do I find it? Yeah, sorry, I didn't create it that way. I created it the other way, which is to use, in this case, the proprietary type, ApisixRoute, under its dedicated API group. And if you need to migrate to another provider, you will need to change everything. That's the idea behind the next subject I want to show you. But before that, I just want to make sure it works. Yes: the ingress controller has started, APISIX has started, so now I think everything is working. Yes. So now I can apply my deployment: apply -f deployment, OK. And normally, get pods: I have one left pod, one right pod. Let's curl each in turn; again, I should follow the script. If I curl at the root, no route has been defined for it.
So of course APISIX itself tells me: hey, no route defined. But if I use /left, it directs me to the left pod, and if I use /right, it directs me to the right pod, which is exactly what I wanted. There is a slight bug, by the way, that I found while preparing. ApisixRoute is an object itself, so we can inspect it. The bug is here: the URI shown is only /left. I want to see both /right and /left, but I see only /left, so it can be slightly misleading. We can check the configuration itself, and the configuration is correct: we have both. I asked the developer, and my colleague told me: at the moment the Kubernetes API is lacking something; we need to wait, so it's not on our side. Just be careful when you ask for information: sometimes it doesn't exactly reflect reality, so make sure you have the right query at hand. So, as I mentioned, the routes are either under the Ingress object or they are proprietary objects. We can do better, and that's the point of the new kid on the block: the Gateway API. With the Gateway API, the CNCF, or rather the Kubernetes people, try to make a clean separation between the object abstractions and their implementations. So we get proper route objects that are abstractions shared by everybody, then we have implementations, and migration should be easier. Also, routing with the Ingress, as I've shown you, is basically by path prefix. But you might want to route based on something else, for example a header, and that's not possible with the Ingress abstraction. It might be possible with a given implementation, but again, that's proprietary. So we want this kind of thing. Now let's remove everything: the routes and the ApisixRoutes, and let's uninstall the chart. Now I want to use the Gateway API. So now I've got the new thing. I will first deploy... sorry, I will first install the CRDs.
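The two ways of defining routes described above look roughly like this; the service names match the left/right demo, but field details depend on the controller version, so treat both as sketches:

```shell
# Style 1: a standard Ingress object, bound to APISIX by class name.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: left-right
spec:
  ingressClassName: apisix
  rules:
    - http:
        paths:
          - path: /left
            pathType: Prefix
            backend:
              service:
                name: left
                port:
                  number: 80
EOF

# Style 2: the proprietary ApisixRoute CRD. Migrating to another
# ingress means rewriting objects like this one.
kubectl apply -f - <<EOF
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: left-right
spec:
  http:
    - name: left
      match:
        paths:
          - /left*
      backends:
        - serviceName: left
          servicePort: 80
EOF
```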
The standard objects for ingress are part of the Kubernetes distribution, but at the moment the Gateway API is experimental, so I need to install its CRDs explicitly. And as you can see, these are experimental Kubernetes objects; they are not proprietary. Then I will install APISIX to serve as the Gateway API implementation, and I can explain everything while it starts, because it will take some time, like before. So what I'm doing now is setting the Helm value that enables the Gateway API. That's pretty nice. The gateway itself, as you can see, uses an API group that is for the moment experimental, but part of Kubernetes; soon it won't be experimental anymore. First we define which gateway implementation we shall be using: that's the role of the GatewayClass. It refers to the controller name, which is what we are deploying right now. And this is defined once per cluster. I didn't show it, but for the Ingress you can do the exact same thing. Then I create a Gateway object instance: here I create the APISIX gateway, and it references the GatewayClass. So basically you say: hey, I will create this Gateway object, and it will use this implementation. At that point, you can create routes, and again, those routes live under a standard API group that has nothing to do with Apache APISIX. And then I have the exact same things I defined before: one HTTPRoute for left, one HTTPRoute for right. Perhaps now I can check whether it worked. k get pods, all namespaces... namespace ingress-apisix would be better. Yep, one is still missing, but it should be good. So now I can apply my pods: apply -f deployment, and my routes and everything. k get httproutes... yeah, of course, I will follow the script; it's always better to follow the script. OK, and I will kill that. I'm a bit too fast, perhaps: pod initializing. Again, sorry for that. OK, now it should be good. I will apply again, just to be sure. Too fast again.
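The chain described above (GatewayClass once per cluster, then a Gateway instance, then HTTPRoutes under the standard API group) might look like this; the controller name is an assumption, and the API group was still experimental at the time of the talk, so your version string may differ:

```shell
kubectl apply -f - <<EOF
# Which implementation to use: defined once per cluster.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: apisix
spec:
  controllerName: apisix.apache.org/gateway-controller
---
# An instance of that gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: apisix-gateway
spec:
  gatewayClassName: apisix
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# A route under the standard API group, nothing APISIX-specific.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: left
spec:
  parentRefs:
    - name: apisix-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /left
      backendRefs:
        - name: left
          port: 80
EOF
```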
What is not working? k get pods... k get pods in namespace ingress-apisix: this is working, this is running. k logs... it might be anything, everything is possible. So I deployed the deployment... hey, I forgot the gateway! Who said that? Thanks a lot. I forgot to apply the gateway. Much better, of course it's much, much better this way. Well, let's see if it's much, much better. Yes, it is. Come see me afterwards, you also get chocolate. So again, it's the exact same result: at the root we defined nothing, so it's not found; /left goes to the left pod; /right goes to the right pod. But that's not all. As I mentioned, we can do much better. Now, instead of matching on a path prefix, we will try something that's not possible with the Ingress, which is dispatching on a header. So let's apply it: k apply -f gateway. And now we have changed the route so that if we curl the root... here I'm saying: by default, go to the right; but if there is a match on a header with the value left, go to the left. Let's try it: curl localhost... OK. But then, if we pass the header with, as I mentioned, the value left, it goes to the left. That's not possible with the Ingress at the moment; well, with the Ingress abstraction, with a given implementation it depends. And I think that's the end. So thanks for your attention. You can follow me on Twitter. If you're interested in the blog posts behind this talk, there are some, and I forgot to add the GitHub URL, so I will do it right now. Config... it's this one: k8s-access. Done. If you're interested in redoing the same steps as me, everything will be in this GitHub repo: you've got the first article about accessing Kubernetes pods, another article about the Gateway API, and everything in the same repo. Is there any question?
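For reference, the header-based route demonstrated above might be declared like this; the header name and the fallback rule are assumptions, since only the left/right behavior is shown in the talk:

```shell
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-routing
spec:
  parentRefs:
    - name: apisix-gateway
  rules:
    # Requests carrying the header go to the left pod...
    - matches:
        - headers:
            - name: X-Side   # header name is an assumption
              value: left
      backendRefs:
        - name: left
          port: 80
    # ...everything else falls through to the right pod.
    - backendRefs:
        - name: right
          port: 80
EOF

# curl localhost                     -> right pod (default rule)
# curl -H 'X-Side: left' localhost   -> left pod (header match)
```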
If there are questions, you need to step up to the mic, which can be very intimidating, I know, but don't worry, everything will be fine. I'm always afraid when I have no questions, because either it was a really, really bad talk and you don't want to shame me in public, or it was a very good talk and nobody has any questions; I never know. Of course I would prefer it to be the second, but no question, really? So I hope it's the second. I have some stickers, so come to me and I will give you stickers; and for the gentleman in the back, and my friend here, you can come for the Swiss chocolate. Thanks a lot.