Alright everyone, welcome to the talk. Today we're going to talk about migrating VMs to Kubernetes. My name is Luke Kysow. I'm an engineer at HashiCorp, and I work on Consul and our Kubernetes integration. Hi everyone, my name is Iryna Shustava. I also work as an engineer at HashiCorp, on our Consul Kubernetes integration.

So let me share a screen here, get into the slides, and give an intro to the talk. This talk is Migration 101: From VMs to Kubernetes. In today's talk we have two applications, a web application and an API application, and they're both running on VMs. What we're going to do is deploy the API application onto Kubernetes. So we're going to go through everything you need to know to take an application from being deployed on a VM to being deployed onto Kubernetes. Then we're going to deal with the routing so that web can actually route to API, and we're going to swap the routing over in a zero-downtime migration so that the API service is now running on Kubernetes. That's the overview of the whole talk; that's our goal.

To get there, we're going to start with an introduction and a little bit of setup. The interesting thing about our workshop is that we can run everything on our laptops. We have a full set of instructions so folks can follow along and run the commands themselves, or, if you don't have a laptop you can use, or it doesn't have enough RAM, or you're hitting some issue, you can just follow along with us, because we'll be running the same steps at the same time.

So, to get to the point where we migrate from VMs over to Kubernetes: first we're going to build a Docker image, because in order to deploy anything on Kubernetes, you need a Docker image. Then we're going to actually deploy into Kubernetes. That's where we'll talk about Kubernetes resources and the differences between VMs and Kubernetes, and of course, it wouldn't be a Kubernetes talk without YAML, so we'll talk about what your YAML needs to look like. Once your app is deployed on Kubernetes, we obviously need to route to it. I'm going to do the first part, and then we'll switch over to Iryna for the next parts. The first thing she's going to cover is routing: how do we actually route from our VMs all the way over into Kubernetes? We're also going to talk about logging and metrics. Often you'll keep using whatever logging and metrics systems you had before you moved to Kubernetes, but we wanted to show you some of the cool cloud native tooling that's available out there and how to run it, and you'll actually be able to run it on your cluster and play around with it yourself. Once we have all our metrics in place and can tell whether the app is working as expected, we're going to perform the zero-downtime migration. That means we'll have the app running on both VMs and Kubernetes, and we'll swap over in a safe way, and be able to swap back without dropping any traffic.
And then finally, at the end, we're going to do a quick demo of Consul service mesh. Now, depending on your use case, it doesn't necessarily make sense to adopt a service mesh, and one of the things we actually talk about at the end is how you've got to keep it simple and not try to boil the ocean with all the cool new technology. However, there are some use cases and problems that a service mesh does solve, so we'll give you a quick demo of what it would look like if you did go down that road.

Before we do anything, we want you folks to start downloading the prerequisites if you haven't already. The URL is going to be in the chat right now, wherever that chat is. So I'm going to switch over and show you what you need to do to get started, and then you can come back to us and we can start the workshop. We go to this URL and click into the prerequisites. What you need is Docker Desktop (you can see I have Docker running here), kind, which is "Kubernetes in Docker" and is what lets us run Kubernetes locally, and then kubectl and Helm. So get started downloading those; maybe pause the video until you have them all, because some of them take a little while to download. One thing to note: you need 4 GB of memory, 6 CPUs, and 16 GB of disk allocated to Docker. When you get Docker running, go into Preferences, then Resources, and set 4 GB of memory and 6 CPUs, which is what I have here. You don't need as much disk as I've got, but you do need 16 GB. Because we're going to be running so many things on your laptop, that'll make everything a lot easier. (For reference, a sketch of the CLI installs follows below.)

So, when we talk about migrating to Kubernetes, adopting Kubernetes, there are a couple of different ways to do it. One common way is, instead of migrating an existing application, you take a brand new application and deploy it onto Kubernetes for the first time. In this diagram, we have our old web and API applications, and we have a new application coming, the foo application, which we deploy onto Kubernetes. One of the benefits of this pattern is that foo isn't getting any production traffic: it's not being used by any users, and the web service doesn't depend on it. So we can deploy it to Kubernetes, and if it breaks or there are some issues with it, it's not a big deal, right? There's no downtime. What we're doing today, though, is moving an existing application that's taking full production traffic over to Kubernetes, and there you've got to be a little more careful. Once you deploy this into production, there are real users going through web and talking to API, and if this breaks, there's going to be trouble. That's why we need to be careful here and do a zero-downtime migration, and you might even want to build in more sophisticated things, like automatically failing over if something breaks. So that's what we're going to cover today; just note that there are different ways to adopt Kubernetes.
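Circling back to those prerequisites: on a Mac with Homebrew, grabbing the CLI tools might look like this. This is a sketch; Docker Desktop itself is a separate download, and the workshop's prerequisites page is the source of truth for versions.

```sh
# Docker Desktop comes from docker.com; the rest are Homebrew formulas
brew install kind      # "Kubernetes in Docker" - runs a local cluster
brew install kubectl   # the Kubernetes CLI
brew install helm      # the Kubernetes package manager

# sanity-check that everything is on the PATH
kind version && kubectl version --client && helm version
```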
Often, if you don't have a new application coming down the pipeline, migrating an existing app like this is how you'll adopt Kubernetes. And even if you start with the new-app approach, eventually you're probably going to want to move some of your existing applications over into Kubernetes, so you'll need to tackle this kind of thing anyway.

So let's get into the initial setup. What we're going to do is deploy our services, web and API, on our local machine. The web service is going to be running on localhost:8080, and the API service is going to be running on localhost:80, and we're going to access it using the hostname http://api. The way we're going to do that is /etc/hosts.

Now, I know what you're thinking: this is supposed to be a talk about VMs, and you're running everything on your MacBook. What the heck am I getting out of this? Okay, hold on a second. A lot of what we're learning today we can teach in this environment, which lets us play around much more easily. We don't have to get you to spin up VMs and a brand new Kubernetes cluster and start paying a cloud provider for it. And if we look at what the VM setup would look like, it's actually quite similar; we can mimic almost everything on our laptops.

A typical VM setup looks something like this: we have a set of VMs, the web service is deployed to maybe three of them, and the API service is deployed to another three. Then there's a load balancer somewhere out on the internet that our users come in through. When a request comes in, the load balancer has to find a way to split it between the three web service VMs, right? So maybe you have a DNS entry, and each VM's IP address gets added to that entry, so you're splitting traffic based on DNS. And the same thing for the web service talking to its own API service: you might have a DNS entry like api with those VMs' IP addresses registered against it, and web calls over to that.

That's very similar to what we've set up here, where our browser, standing in for the load balancer, comes directly into web via localhost:8080; there's not a big difference there. And for API, we're using the hostname api, but there's no real DNS behind it; behind the scenes we're just using /etc/hosts, which is basically mimicking DNS. And because we're running everything on the same laptop, we do need to use different ports, but that's not a big difference between the two setups either. So take our word for it: we're still going to learn a lot today without having to stand up VMs, and you'll be able to do it all locally on your laptop, which I think is really, really nice. And speaking of your laptop, like I said, you can do all this on your own, or you can follow along with us and do the same thing at the same time.
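To picture that DNS-based splitting concretely, the records for the API service might look something like this hypothetical zone snippet (names and addresses are made up for illustration):

```txt
; three A records on one name => clients round-robin across the API VMs
api.example.internal.  300  IN  A  10.0.1.11
api.example.internal.  300  IN  A  10.0.1.12
api.example.internal.  300  IN  A  10.0.1.13
```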
As of right now, this workshop only works on Mac, but depending on what we're saying in the chat, by the time you're watching this at KubeCon we may have gotten it working on Windows too, so check the chat and see.

Hopefully you've got all the prerequisites set up. We're going to go to the first step here; from the root of the repo, just click into 1-setup. Okay, cool. In this setup step, we're going to start the web and API services running on our laptops, and then we're going to let the web service talk to the API service. So hopefully you've cloned the repo; I have it over here. It says: in one terminal, start web. I'm just going to copy this. Okay, and we see now we have web running. If you remember our diagram, web is running on localhost:8080. So if I go to 8080, we see the web service running; it returns "web" here. But for its upstream calls, its dependencies, it's saying: hey, I can't talk to the API service. Well, we haven't even started it yet, so we kind of expect this not to work; like the instructions say, that's what you should be getting.

So then we go over here and start the API service. I'll open a new terminal tab and start it. Here I'm using sudo, because API is binding to port 80. So I'll type my password in there, which is 102. And now this is up and running on localhost:80. Let's see if the web service can talk to its API service now. There's still something wrong. The issue is that this hostname, api, doesn't exist, right? There's no website out there called api. What I need to do is hack up a little DNS by editing my local /etc/hosts file and creating an entry that says: when you see this api hostname, actually route it to localhost. We have the instructions here for that: we need to edit /etc/hosts. I'm going to open a whole new set of terminals; oh, and make it a little bigger for everybody. Okay. So: edit /etc/hosts, and again enter my password, 102. Oh my gosh. Okay. And at the end here, going back into the docs, it says add this line to the bottom. So let's copy that; make sure you don't accidentally paste it in as a comment. What this line says is: when someone goes to http://api, route them to 127.0.0.1, which is just localhost. So we save that, and we go back to our web service, and we should now be hitting our API service. I'll switch over here and hit enter a couple of times on these logs, and we should see a request come in. Okay, awesome.

So the response changed a little: we can see web is correctly calling its upstream, its dependency, and the name of it is api-vm. Obviously that's going to change when we deploy onto Kubernetes, but everything's working as we expect here. And over in our logs, you can see where I pressed enter way up here, and now we're getting requests coming in. So what have we just done? Going back to our diagram: our browser on localhost:8080 is talking to the web service, which is then calling the API service, which is exactly what we wanted for this stage.
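To recap, the whole local wiring is two processes and one hosts entry. A sketch (the start-api-darwin script name is as narrated; the web script name and exact paths are assumptions, so check the repo):

```sh
# terminal 1: the web service, listening on localhost:8080
./bin/start-web-darwin

# terminal 2: the API service, listening on localhost:80 (port 80 needs sudo)
sudo ./bin/start-api-darwin

# point the hostname "api" at localhost so web's calls to http://api resolve
echo "127.0.0.1 api" | sudo tee -a /etc/hosts

# verify: web should now report its upstream as api-vm
curl localhost:8080
```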
Okay. So let's go back to our slides here. We've got everything set up for our migration, and now we want to start deploying onto Kubernetes. But before we can deploy to Kubernetes, we actually need to build a Docker image. This is "dockerizing": not a real word, but that's what I like to call it when you stick your app into a Docker image.

Let me motivate why Docker. If we look at what we had with the VMs, you're usually running one app per VM. When you run Docker containers, you can actually run multiple apps on a single VM. I'm only showing two here, but you can literally run 30 containers (they're called pods in Kubernetes) on a single VM, and if you have a really big VM, even more than that. What this obviously gives you is a lot fewer VMs. It also lets you treat your VMs differently; there's this cattle-versus-pets analogy, which I really don't like, but the idea is that no VM is special, any VM can run anything. If you lose one, all you need to do is bring up another VM of the same type, and you can run any Docker containers on it, right? That's the whole mechanism of Kubernetes: it's a container orchestrator. But before we orchestrate containers, what do we need? We need containers.

So what exactly is a container? Let's compare VMs and Docker. Over here we have VMs, and over here we have Docker. A VM consists of your app, the binary, and its dependencies, its libraries. If you're running Node.js, that's a bunch of JavaScript files, your node_modules. And then there's the operating system, Ubuntu or Red Hat or whatever you're running. That's your VM, and under the hood it's actually sitting on a hypervisor, and eventually there's a real server with a real CPU underneath. But this is pretty big, right? And one of the problems is: what if you wanted to run app A on the same VM as app B, but they use different versions of Node.js, for example? You're going to run into issues, because the apps aren't contained within themselves; they have to kind of spread out over the whole VM. With a Docker container, we contain the app and its binaries and libraries within the container, in its own namespace. It doesn't even know what's going on over in app A: not its files, not its processes, not even its network. And because everything is contained, this lets us stick a ton of them on the same VMs. They don't even know about each other, at least until they get CPU throttled because someone's using a lot of CPU, but for the most part you don't really notice it. So that's what we're going to do in step two: package our application, which is currently running on our "VM," our laptop, into a Docker image. I'm over here in step two; if you're following along, just click into 2-dockerize. Okay.
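As a quick aside, to make that "different Node.js versions" point concrete: two apps with conflicting runtimes can share one host, because each container carries its own userland. A sketch using the official Node images:

```sh
# app A pinned to Node 14, app B pinned to Node 18 - same host, no conflict,
# because each container bundles its own node binary and libraries
docker run --rm node:14-alpine node --version   # prints v14.x
docker run --rm node:18-alpine node --version   # prints v18.x
```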
So in this step, we're going to build a Docker image for the API service, because, going back to our premise, we're only migrating API onto Kubernetes; we're leaving web over on the VMs. So we only need to build a Docker image for API, at least right now. Let's do that.

In order to build a Docker image, you write what's called a Dockerfile, which is literally just a build script for your Docker image. The first thing to think about is your base for this image. Just like your app might need an operating system like Ubuntu to run on, a Docker image still needs a base. Now, technically, you could start with nothing: you could stick your binary into the image and your binary would be all that's there. But say you wanted to run a command in your container, because remember, it's isolated. Maybe you want ls, to look at the files there. Well, you don't have ls in that container, right? You might not have cd. You might want to curl something, but there's no curl in that container; it's not going to work. There are a lot of benefits to starting "from scratch," which is what it's called when the image contains nothing but your binary. But for this example, where we're messing around with a demo, and I think also when you're first starting out and might actually want to run commands inside a container, I recommend starting from a base image that already has some utilities and things in it. In this case we're going to use Alpine, a really, really lightweight Linux distribution. It contains a bunch of the small utilities you'll need, but it's only about five megabytes, so it doesn't add much to your Docker image size. So I'm going to copy that, and then in my editor I'll create a new file called Dockerfile (let me make this a little bigger) and copy that first line in there.

Back to the instructions. Now that you have your base image, you have to think about what's specific to your application, right? A good place to start, to figure out what needs to go in your Docker image, is the startup script of the thing you're actually dockerizing. In our case, the startup script is start-api-darwin. If we look at it, it has the #!/usr/bin/env bash shebang, it sets some bash safety options so that a failing command actually fails the script (we don't really need that), then it exports two environment variables, and then it runs our binary, the API service. Starting with those export lines: in a Dockerfile you can do basically the same thing using the ENV instruction. So we copy them into our Dockerfile and change export to ENV. Now let's look at them closely. For the first one, we're actually going to change the value from vm to k8s, because we're deploying this Docker image onto Kubernetes, and we want it to have a different name so we know which one we're hitting. In your case you're obviously not going to do that; you'll keep your app the same. And the second thing here is this listen address.
So if we kept this as 127.0.0.1, then when we deploy into Kubernetes, the container wouldn't actually be listening on any network interface you could hit from the outside; you'd have to be inside the container to curl it. That's because Docker containers have their own networking. So we want this to be 0.0.0.0, so that we're listening on all interfaces, because remember, we're trying to route into this container. Most of your apps are going to be listening on all interfaces anyway, because on a VM you'd need to listen on all interfaces, so you probably wouldn't have to make this change; but because we're doing this demo, we do.

Okay, the final thing here is this binary. First, you actually need to get this binary into the image. You'd probably do this in your build script, wherever you're actually building the binaries. If it's Node or Python or something like that, you'll have literal files you need to copy from wherever you build them into the Docker image. We can do it like this: ./bin is relative to where I'm running my command from. I'll be running from the root, and you can see over here, it's a little small, but we have bin; that's where the app lives. So we want to copy the binary over to /app/api. Notice we're using the Linux binary, not the Darwin or Windows one; that's because our Kubernetes cluster runs on Linux, so we need Linux Docker images. But you might notice that /app doesn't exist yet; it's not exactly a well-known directory. That's fine, because we can run commands in a Dockerfile, so we add RUN mkdir /app. That looks good. The final piece is that we actually need to execute a command. You can think of the container as basically a single process, so what does that process actually execute? Obviously, we want it to run our app. That's where the ENTRYPOINT comes in: /app/api. And hopefully this should all work.

Back to the instructions: this is what your Dockerfile should look like, and all you do is run docker build with the current directory. I'll switch over to this tab; we have the Dockerfile now, so I run docker build . and it sends the current directory as context over to the Docker daemon and runs all our instructions. That was really fast. It might take a little longer for you, because you might actually have to download Alpine. You can see the steps here: the FROM alpine, setting our environment variables; everything looks good. We've built the Docker image.

The very first thing we want to do is run this Docker image. Before we load it into Kubernetes and go all the way down that road, we can run it locally to see if it actually works. So: docker run with this image. And because on a Mac, Docker is actually running inside a VM (it's a hypervisor, a little mini VM; I'm not even sure what technology they're using nowadays), we want to publish a port from our Mac over to wherever that container is running, whatever magic they have going on.
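Assembled from that walkthrough, the Dockerfile and the build-and-run flow should look roughly like this. The environment variable names are assumptions (the talk never reads them out), so double-check them against the repo:

```dockerfile
# Small Linux base with basic utilities (~5 MB); FROM scratch would be
# smaller still, but then there's no shell or curl for debugging
FROM alpine

# The same settings the VM startup script exported, baked into the image:
# a distinct name so we can tell the Kubernetes copy from the VM copy,
# and 0.0.0.0 so traffic from outside the container can reach the app
ENV NAME=api-k8s
ENV LISTEN_ADDR=0.0.0.0:80

# Create a home for the app and copy in the *Linux* binary
# (the cluster nodes run Linux, not Darwin)
RUN mkdir /app
COPY ./bin/api-linux /app/api

# The single process this container runs
ENTRYPOINT ["/app/api"]
```

Build it with the current directory as context, then smoke-test it:

```sh
docker build .
docker run -p 8888:80 <image-id>   # publish the Mac's 8888 to the container's 80
```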
So we're just going to use 8888 on the host side and publish it to the container's 80, because remember, that's where the app inside the image is listening. Okay, there's some good news here. First of all, we can see the logs, so that means it's working. And if we go to the browser, on 8888 I believe, we can see: cool, we're hitting our Docker image, and it's api-k8s. It actually has a /ui path too, I believe, so we have our API UI there as well. But the important thing is that our Docker image is working as expected. So that's the docker build section here, and we've done docker run.

All right, we're really close. We have our Docker image built, but it only exists on our laptops. In order to have this running on Kubernetes, Kubernetes needs to be able to pull this image from somewhere, right? That's where the idea of a Docker registry comes in. What we can do is tag our image ID with a proper name; in this case, I have a Docker Hub registry under lkysow, and I'll be able to push the image up there. Now, if you don't have a Docker Hub registry, you're actually in luck: because we're running kind, Kubernetes in Docker, on our MacBooks right now, there's a way to load the image directly into the cluster. That's kind of a cheat for when you don't have a Docker Hub account; for those who do, pushing the image up is definitely a bit easier. So first I'm going to tag and push to my Docker Hub, and then I'll also show what you can do with kind if you don't have one.

I'm going to Ctrl-C here, because this was the Docker container we were running. All right, that's all done. Now, remember this image ID; you need it to be able to tag the image. So: docker tag, the image ID, then docker.io/lkysow (this will be whatever yours is), then api, and then v0.1.0, the 1.0 of the app. This hasn't pushed anything anywhere; the tag just exists locally. If I do docker images, you can see a couple more lines: we've just tagged this image. So now I run docker push to push the image up.

All right, that's now pushed. And if you folks don't have a Docker Hub account, we can load this directly into kind, our Kubernetes-in-Docker system. We have a script here that starts the cluster, so just run that. Oh, so what happened here? We've got this error: it can't bind to port 80, the address is already in use. Because of the way we're doing this migration, both our Kubernetes cluster and our VM API want to bind to port 80, so you have to do this kind of funny dance: stop both the web and API services (Ctrl-C them), start kind again (it'll work this time), and then go back over and restart those VM processes. So kind is starting now; I can start web and start API again, type the password again, and back over in kind, we see the control plane coming up.
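While that's coming up, here's the tag-and-push flow from a moment ago, spelled out (substitute your own Docker Hub username; lkysow is Luke's):

```sh
# give the local image ID a real name in your registry
docker tag <image-id> docker.io/<your-username>/api:v0.1.0

# confirm the tag exists locally
docker images

# push it somewhere a cluster can pull it from
docker push docker.io/<your-username>/api:v0.1.0
```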
And then when this comes up, we're going to load our Docker image directly into kind. You can use any name you want here, because, well, you folks won't be able to push to lkysow, since that's my registry, but you can tag your Docker image with anything; the tag command will work with any name. It's only when you push that authorization kicks in and decides whether you can actually push there. But we can use this command, kind load, to load our Docker image directly into kind, which bypasses the Docker registry entirely. All right, that's been added now. Awesome. Success: we've built a Docker image, and we've got that Docker image up into the cluster. Now we're ready to deploy to Kubernetes.

Let's quickly jump back to the slides. What we've done so far is build the Docker image; now we're ready to deploy to Kubernetes. And just like it was "dockerizing" before, now it's "kubernetizing." Let's do a really quick overview of what things are going to look like in Kubernetes. We don't have a lot of time in this workshop, so we can't go into everything about Kubernetes, but the primitive building block of Kubernetes is the pod. A pod is actually a collection of containers, so you can have multiple Docker containers running in one pod; but for a lot of use cases, especially when you're first starting out, you'll have just one Docker container inside the pod, and it's going to be your app, like the one we just built. These pods can run across all of your nodes, and you can run multiple pods on multiple nodes, just like we were talking about before; that's the benefit of Docker containers.

But you need more than just pods, because the problem with pods is: if I delete the node a pod is on, that pod is gone forever. Kubernetes isn't going to automatically restart it. So we need a higher-level construct that manages things, something that says: okay, I need three replicas, and if one node dies and my replica was on it, I'll spin that replica up and schedule it somewhere else. That higher-level construct is called a deployment. In this diagram we have a deployment managing two web pods. Now, if this node here dies, if it crashes, the deployment, which runs in a loop, notices: I'm supposed to have two replicas running, I only have one, I'll try to schedule it somewhere else, and it schedules it over here. So those are the two core resources we're going to look at next: the deployment and the pod.

So let's look over at step three. At the end of this step, API will be deployed on Kubernetes. Going way back to our diagram, we'll have API running over in Kubernetes while it's also still running on the VMs. If you haven't already started kind, make sure you do that, and make sure you do that little dance with the ports, so you have everything running. You can verify it's working by running kubectl get nodes. Okay, cool, it's up and running as expected. So let's first start with a pod.
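For reference, the registry-free path with kind is a single command; it copies the image straight onto the kind node, no registry involved (use whatever tag you gave your image):

```sh
# load a locally built image into the kind cluster's node(s)
kind load docker-image docker.io/<your-username>/api:v0.1.0

# the cluster itself should be up; verify with
kubectl get nodes
```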
I know we're going to get to replicas next, but I want to talk a little bit about pods first. A pod is a single replica of the application. It has an apiVersion (almost every Kubernetes resource starts with an apiVersion and a kind, i.e. what kind of resource this is), it has some metadata, like a name, and then it has a spec. What goes inside the spec differs depending on which resource you're creating, but everything starts with this same first part. The spec for a pod is the list of containers we talked about; in this case we're only going to have one. There's a name for the container, there's the image (whatever image you pushed previously is what you want in here), and then we specify the ports, in this case port 80.

So if I create this file, I can then use kubectl apply. It takes in the YAML file and applies it to the cluster; it posts it to Kubernetes and says: hey, this is what I want to exist. It says pod/api created. So now, kubectl get pods: all right, we see it listed here, we have api running. That's pretty awesome. We can get its logs by running kubectl logs api; we see it's running as expected. That's pretty great. We can also port-forward to it; we'll do that in a bit when we get to the deployment.

But remember how I was saying that pods on their own are not enough? What happens if I do kubectl delete pod api? I run get pods again: it's gone. There's nothing managing it, nothing restarting it; it's literally just gone. If I had deleted that kind node instead, it would never have come back up either. So we need more than pods. What we need is that deployment. If we look at a deployment file, it's kind of the same shape, right? We have our apiVersion and our kind, we've got our metadata (now we're adding some labels, which is a little different), and then we have our spec, and the deployment-specific spec is what comes next. Here we're only going to have one replica; obviously you'd set this higher in your production clusters, but in this case we're running locally, so we only want one. Then there's a bunch of metadata, and inside here you'll notice a section that looks pretty familiar: it's the same as the pod we had before, right? All this is saying is: what do the pods managed by this deployment look like? You can see how these concepts map onto each other; this section is the exact same thing I had for my pod.

So let's create this deployment.yaml file, paste that in there, and run kubectl apply -f deployment.yaml. Boom, our deployment is created. So now, kubectl get, and the argument after get is the resource type, so whatever resource type you've learned about, you can type it in here. In this case: kubectl get deployment. We see we have our api deployment with one pod available. So then, kubectl get pods: we have our api pod running, but with this kind of crazy suffix on the name. That's because the deployment is actually managing it and gives it a random ID. And we should be able to do kubectl logs with that name and get the logs there. That looks good.
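Reconstructed from that walkthrough, the deployment manifest should be roughly this; the label key (app: api) and image name are my assumptions, so adapt them to the workshop's files:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 1               # set this higher in a real cluster
  selector:
    matchLabels:
      app: api
  template:                 # everything below mirrors the standalone pod file
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: docker.io/<your-username>/api:v0.1.0
        ports:
        - containerPort: 80
```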
And then, the thing we were talking about before: what if this gets deleted? So I kubectl delete pod; it's hanging, so I'll Ctrl-C it, but it's actually still deleting in the background. Now if we get pods here, it looks like it never got deleted; but look at the ID. We deleted the one ending in tc774; this is a brand new pod, spun up 12 seconds ago. The deployment was watching: it saw the pod get deleted, said "I've got to have another replica," and brought one back up. Okay! So we're running on Kubernetes. This is awesome.

The final thing we want to check is: can we curl this app, right? We haven't set up the routing yet; that's what Iryna is going to do with us next. But there's one way to route directly to a pod without having all the routing machinery from the outside, and that's called port-forward. So we can do kubectl port-forward, the pod name, and then, just like we did with Docker, publishing ports, we forward our 8888 up into that pod's 80. All right, it says it's forwarding over there. Let's see. Awesome, it's working as expected. We're talking to api-k8s here, and remember, that's the Docker image we just built; it's got the hello-world body and everything. Everything's working just as expected.

Going back to our slides, this is what we've got so far. We have web running, talking to API on our "VM" MacBook, and now we have API deployed over into Kubernetes. We put it into the Docker image, we've "kubernetized" it, we have our YAML and everything. And if you were doing this with real VMs, you'd have the same picture: nothing has changed, right, except API is now also deployed in your Kubernetes cluster. So obviously you're thinking: okay, great, now what? How do I actually route requests over to it? That's what we're going to talk about next, and then I'm going to swap over to Iryna, who's going to take it from here.

All right. Hey, everyone. So yeah, I'm taking over from Luke; let me go ahead and share my screen. Thank you, Luke, for running us through all these steps. We're going to start with routing, and just as a reminder, as Luke mentioned, you should still have all your VM services running on your laptop at this point in time, and you should have your Kubernetes deployment running. I'm just going to make sure, for my own sake, that I have all of that and I'm in the same state as all of you.

All right, so let's talk about routing. When we talk about exposing your service externally with VMs, what I imagine is a load balancer. Just as we showed before (Luke went over this), you can have your web VMs in some sort of auto-scaling group, as you would in AWS, all fronted by a load balancer, and the same thing with API. So if your web service needs to reach your API service, it will typically do it through a load balancer. Now, let's talk about how you'd expose a service externally in Kubernetes. Kubernetes has this concept of a service, and there are three types. The first is ClusterIP. A ClusterIP service is meant to be used only for things that need to talk to each other within the same cluster. Each service gets assigned an IP, but that IP is a virtual IP; it doesn't work outside the cluster.
Only Kubernetes knows about that IP; it's not resolvable from outside in any way. Then there is the NodePort service. That would work great if, for example, your nodes are reachable externally. The one limitation is that NodePort is restricted to a specific port range, which doesn't really work for our case, because we have a service that runs on port 80, and our web service calls the API service on port 80. If we want to do that zero-downtime migration without restarting our web service, the NodePort service won't work for us, because we don't want to change anything; we just want web to automatically switch to the new API service. And the third type of service is the LoadBalancer service. That one really depends on your cloud provider; most cloud providers support it, and it would definitely work for our use case. However, on kind, LoadBalancer services are not supported, and for that reason we're going to use something different, called an ingress.

An ingress is another abstraction in Kubernetes, and it really builds on top of the service abstraction. It's mainly for HTTP services, and it lets you expose your HTTP routes externally, outside the cluster. The one benefit of an ingress over a LoadBalancer service is that you can have a single load balancer fronting just the ingress controller, instead of a load balancer per service. That's really great if you're trying to save on infrastructure costs for load balancers; using an ingress could be the better option there. So we're going to show you how to use an ingress for two reasons: it may be beneficial for you in the future, and also our infrastructure happens not to support load balancers.

So at this point, again, to recap: we have our web running on the "VMs," calling our API on the "VMs," on our local machine. We have an API deployment, which is just a single pod, a single replica. And we're going to end up with a service and an NGINX ingress that we'll be able to reach locally. All right, and with that, let's dive into workshop step four. Right now you're probably on step three; I'm going to go ahead and click next, to step four.

The first thing we're going to do, as I said, is create the service, because an ingress builds on top of a Kubernetes service. Since we don't need any special kind of service, we're just going to use ClusterIP, which is the default type. As you can see, the way it's structured is: you have a selector, which in our case just uses labels; that's how our API deployment's pods are labeled, and that's how Kubernetes will pick the pods for this service to front. Then we choose our ports. The first port is our, quote-unquote, front-end port: this is the service's port. And then the targetPort is the back-end port: the port the container is listening on. So let's go ahead and grab that, create our service.yaml file in our workshop, and apply it. Our service is created.

Now let's make sure we can actually talk to our API through that service. First, a couple of basic checks: let's check that we do in fact have the service there, and let's also check that we have endpoints for the service. The endpoints show us that there's a pod the service was able to select with the selector we specified.
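For reference, the service manifest just described: ClusterIP is the default, so no explicit type is needed; the selector and port numbers follow the narration (the label key is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api          # matches the labels on the deployment's pods
  ports:
  - port: 9090        # the "front end" port clients inside the cluster use
    targetPort: 80    # the port the container actually listens on
```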
And as our last step, since we've created a ClusterIP service, we want to make sure we can actually talk to that service from within the cluster. For that, we're going to create a pod running inside the cluster that will just curl our service at api:9090. Remember, the front-end port for this Kubernetes service is 9090, not 80. The DNS we're relying on here is just kube-dns, which by default resolves the service name to that cluster IP, and that works from within the cluster. So let's go ahead and run that. It takes a few seconds for the pod to be created, but once it's there, hopefully we can actually reach our service and talk to it. All right, here's our response: api-k8s. That's our deployment within Kubernetes, the API deployment. So it works. Great. Now we're in good shape to add an ingress on top of that. Let's delete that test pod we just created, called test-api; we won't need it anymore.

For the ingress, we're going to use an NGINX ingress controller. We've provided the file for you that installs all the NGINX CRDs, the custom resource definitions, as well as the ingress controller itself. So just cd into that folder and run kubectl apply -f on that nginx-ingress.yaml, hopefully with the correct command. All right. We want to make sure everything is running: there's a controller here, the NGINX controller, so let's check that all the pods are there. And once the ingress controller is running, we should be able to create an ingress object. Let's take a look at what it's going to look like. Here we just say kind: Ingress, and in the spec (like I said, this is really used mostly for exposing HTTP routes), we only have one main route, which is just /. For the backend, we specify the service we just created and tested, and then the service's port: what was the front-end port for the service now becomes the back-end port for the ingress. So let's go ahead and grab that, check that our ingress controller is running, and create our ingress object here. Let me cd out of this, because there's already an ingress object in that folder; I'm just going to do it in the main one. Here we go, let's apply that.

So now it should be good to go. We can double-check that the ingress controller is there, and we should now be able to just curl localhost and reach the service from our local machine. So at this point, we are ready, from the routing perspective, to make sure our web service can talk to our API service on Kubernetes. That's all good. But as our next step, we want to make sure our migration will actually go smoothly, and for that we're going to add logging and metrics. So I'm going to switch over to configuring logging and metrics. And really, we want to do this because, you know, Kubernetes does have logs, as Luke has shown, but those logs are gone as soon as those pods restart; you're not really accumulating logs. So you want to set up your platform so that you're in a good place, equipped with all the knowledge you could possibly have, so that your migration goes smoothly. All right, so for our next step, let's dive right in and continue working on our logging. I'm going to go ahead and hit next here. Cool.
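Before the logging section, here's that ingress object reconstructed for reference. The workshop predates some API changes; on a current cluster, the networking.k8s.io/v1 shape below is the one to use, so treat the details as a sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  rules:
  - http:
      paths:
      - path: /               # the one main route
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 9090    # the service's front-end port becomes the backend port
```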
And for the logging infrastructure, we're using Elasticsearch, Kibana, and Fluentd. I'm going to go over these tools and explain exactly what each of them does; some of them take a bit of time to install, and I'll use that time to talk a little more about each one. To get started, let's first create a separate namespace, because we want all of our logging infrastructure running separately from our applications. So let's create our logging namespace. The next step is to install the Kubernetes operator for Elasticsearch. This is something that Elastic provides; basically, it gives you special custom resources for deploying and managing Elasticsearch and Kibana clusters. So let's go ahead and install the operator. Cool.

And now we're ready to create our Elasticsearch cluster: an Elasticsearch server running in Kubernetes. For this, we've modified the example they have in their docs slightly, because we're running on kind and don't have that much memory, so we're adjusting the resources to run with only one gigabyte. Let's copy this, create our elastic.yaml here, and apply it. You can actually check the health of the Elasticsearch cluster quite nicely (again, that's thanks to how Elastic manages their custom resource definitions), so we can just watch it and wait for the changes to be applied.

In the meantime, let me talk a little about logging infrastructure and how it works in Kubernetes. When you compare logging on VMs and logging in Kubernetes, it's really not that different. On VMs, you probably have agents running alongside your application, forwarding your logs directly to some sort of logging platform. Here it's really not that different, except that everything runs in Kubernetes pods. You have your three Kubernetes hosts with some pods running on them; what you add is a log forwarder on each host, and that forwarder looks at all the pods on that particular host, and you configure it to forward logs to your chosen log storage. And that log storage in our case also happens to run on Kubernetes, but it doesn't have to be; it could be external or in the cluster, it really doesn't matter. So those are the basics; it's very similar to VMs, and I'll explain a little more about what each piece does as we continue installing our logging stack.

All right, it looks like our Elasticsearch cluster has come up and it's green. The next thing we're going to deploy is Kibana, and Kibana is really just the UI fronting the Elasticsearch cluster. So let's create our kibana.yaml and then apply it. We can watch the health of the Kibana instance in a very similar way. And these are probably, I want to say, the most memory-intensive things we're installing, because they package a lot of things and their images can be quite large. So watch out for the Docker memory you've allocated, and if you're running into any issues, pending containers, or image pull errors, just make sure you have enough memory; go back to the prerequisites that Luke covered at the beginning. So yeah.
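For reference, the trimmed-down Elasticsearch custom resource Iryna describes looks something like this (the ECK operator defines this CRD; the version number here is illustrative, and the 1Gi limit is the kind-friendly tweak she mentions):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: logging
spec:
  version: 7.9.3              # use whatever version the workshop pins
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            limits:
              memory: 1Gi     # shrunk so it fits alongside everything else in kind
```

You can watch it come healthy with `kubectl get elasticsearch --namespace logging --watch`, which reports a HEALTH column thanks to the operator.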
So what we have so far is the Elasticsearch cluster, which is our log storage, right here. Additionally, we're installing a UI in front of it, because Elasticsearch on its own isn't great for visualizing and searching logs; Kibana will help us with that. All right, looks like we have Kibana running. We can double-check the health: it's all green, it's all good. So as our next step, we're going to log in to our Kibana instance and make sure we can access the things we're allowed to access so far. Let's grab our password first; I'm going to copy that. And let's port-forward the Kibana service; it runs on port 5601. I'm going to go ahead and start that. You have to accept the certificate risk, because it's using a self-signed CA; we really know what we're doing here. And once the UI loads, just use the elastic username and the password we copied.

When we go to Kibana's Discover page... well, before we can start using Kibana, we have to create an index pattern, so it can index the data stored in the Elasticsearch server. And when we try to create one, we see there's no data yet. That's expected, because we haven't configured the forwarding part; so far we just have the storage and the UI. So let's go ahead and do that. For the forwarder, we're going to use a tool called Fluentd. This tool is pretty much the standard for Kubernetes, and for the cloud native space in general, because first of all it can parse and forward logs of any format, and then on the back end, when it forwards logs to some system, it's pluggable in many ways as well: you can configure it to send logs to any common log storage solution. In our case we're using Elasticsearch, and it supports that too.

So let's take a look at this Fluentd DaemonSet. The DaemonSet itself is pretty standard; you can find it in the Fluentd docs. But I just wanted to show you all these environment variables we're setting here. Those are really the only things you have to modify: your Elasticsearch host, the password, and potentially various SSL configuration. That's pretty much it. I also have to note that this is a DaemonSet, and a DaemonSet in Kubernetes is something that runs on every host. As I mentioned before, that's the architecture we're looking for: something that can run on every host and then look at the logs. So let's go ahead and apply that DaemonSet. I'm going to keep Kibana running (I don't want to interrupt that), so I'll go to another terminal window, cd into logging, and kubectl apply the Fluentd file. Once applied, we want to make sure it's all good and running. Not available yet, but we can check the pods; that should be pretty fast. All right, it's already running, so we're good to go.

Now that Fluentd is running, it should start forwarding logs immediately. So we can just go back to our Kibana UI and check for new data, and we can see it now detects that we have some. So we're going to define our index pattern as logstash-*; this is just the default name, and you can change it if you wish, but in our case that's what we're using. Go to the next step; for the time field, we're just going to use the defaults, and then create our index pattern.
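For reference, the environment variables Iryna points at sit in the DaemonSet's container spec. In the standard fluentd-kubernetes-daemonset images they follow this naming convention; the service and secret names below assume the ECK defaults for a cluster named elasticsearch, so verify against the workshop's file:

```yaml
# excerpt from the Fluentd DaemonSet's container spec
env:
- name: FLUENT_ELASTICSEARCH_HOST
  value: "elasticsearch-es-http.logging.svc"   # service ECK creates for the cluster
- name: FLUENT_ELASTICSEARCH_PORT
  value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
  value: "https"
- name: FLUENT_ELASTICSEARCH_SSL_VERIFY
  value: "false"                               # self-signed CA in this demo
- name: FLUENT_ELASTICSEARCH_USER
  value: "elastic"
- name: FLUENT_ELASTICSEARCH_PASSWORD
  valueFrom:
    secretKeyRef:
      name: elasticsearch-es-elastic-user      # secret ECK generates
      key: elastic
```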
Next, when we go to the main UI, we're seeing logs coming in from all kinds of Kubernetes system components. But we want to make sure we're seeing our logs, so let's check for logs that have our API label. Let's run our query. And great, we're seeing our app's logs in Kibana. We're good to go.

Let's move over to the next step, where we're going to configure metrics. With metrics it's a similar sentiment: we want to make sure everything is running and working while we migrate our service to Kubernetes, especially since we want it to be zero-downtime. That's why we need to instrument metrics in Kubernetes. So without further ado, I'm going to dive right in, and then I'll cover some of the metrics infrastructure a little afterwards. Let's create our metrics namespace first; it'll be very similar to how we did logging. I'm going to interrupt my Kibana UI; we don't need that for now.

For metrics, we're going to be using Prometheus and Grafana. Prometheus is the metrics server and Grafana is the UI, similar to how Elasticsearch is the server and Kibana is the UI. To install these, we're just going to use the Prometheus Helm chart and the Grafana Helm chart. So let's copy these lines to install the Prometheus Helm chart; we're not changing anything there, just using all the default configuration. That should have kicked off the install. Let's go ahead and do Grafana right away as well; Grafana doesn't depend on Prometheus, so it can install in parallel even if Prometheus isn't ready. So let's do that, and let's make sure things are running.

While these things are creating and getting ready, I'm going to switch over to our slides and talk a little bit about the metrics infrastructure. First of all, the model Prometheus uses for metrics is called the pull model: the server pulls the metrics data from the app every so many seconds. For that, the application itself needs to expose a /metrics endpoint, and then Prometheus queries that endpoint every X seconds, which is a configurable amount. In our case we're using whatever the Helm chart's default is, but you can look at the Helm values to see how to change and tweak that for your use case. And then on top of that, we have a Grafana instance fronting the Prometheus server, which will help us display the metrics and graph things in a beautiful way, hopefully.

All right, let's check if things are running. Looks like everything is ready. As our next step, note that the output of your Helm install for Grafana (I'm just going to scroll back a little) tells you how to log in. We're going to follow that first step, grab the admin password, and copy it. Then, in the next step, we'll just port-forward the Grafana UI so we can access it locally. We go to localhost port 3000; the username is admin, and the password is the thing we copied. So this is our Grafana UI, but it doesn't know about Prometheus yet. So our next step is to configure the Prometheus data source. Go to Configuration and Add data source; Prometheus is conveniently the first suggestion. And really the only thing we need here is the HTTP URL, which is http://prometheus-server by default.
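For reference, with default values those two installs amount to a few Helm commands. The repo URLs and chart names here are the current community locations, which is an assumption, since the workshop may pin different ones:

```sh
# Prometheus from the prometheus-community repo, into the metrics namespace
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/prometheus --namespace metrics

# Grafana from its own repo; it doesn't need Prometheus to be ready first
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana --namespace metrics

# watch everything come up
kubectl get pods --namespace metrics --watch
```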
And if you're using the Helm chart, that URL won't change for you either. Cool, so it's now working. We should see just the default Kubernetes metrics, and I'm going to show you something to make sure it's working: for example, we can look at the response times of the Kubernetes REST client, just to get a sense. Looks like it works, and the response times look great as well.

So our next step for metrics is to change our application so it starts exposing metrics. Right now our application isn't configured to expose that /metrics endpoint I talked about, and we need to make a few changes to allow that. So I'm going to open our deployment.yaml. One second. There we go. In our deployment.yaml we have to do a few things. First of all, we have to enable our application to expose the metrics endpoint; in our case, all that takes is an environment variable, so we add it under the containers stanza here. Okay, there we go; hopefully this will work. And then next, we need to add annotations to our pod. Remember how Luke was talking about the deployment and pod being kind of nested within each other: we have to make sure those annotations go on the pod metadata, not on the deployment metadata. So let's go ahead and do that as well; this is our pod metadata. That should enable the Prometheus server to find this app's metrics endpoint and scrape it. So let's go ahead and apply that. I had a feeling this would happen. If this is happening to you as well, you can just grab the ready-made deployment.yaml and use that instead. And that is all because YAML is hard.

As you can see, we have our old pod terminating and the new pod starting up with this new configuration. We should also go to our app and try to generate some metrics by just hitting it a few times: oh, maybe not this localhost, but the one that's running in kind. Let's do that a few times, so we generate some new metrics. And now when we go to Grafana, we should be able to see our app-specific metrics. I'm going to refresh the page. One of the metrics we have counts how many times the service has started, so let's take a look at that. And there it is, the metric has come in; it's called service_started_total. We can see it has started a total of one time, which is true. And the last thing, which I'm not quite sure will work quite yet, is to look at the response times the app currently has. It doesn't look like we have that quite yet; let me try another query. It's possible we just need to generate a little more traffic, but that's okay, because we know the metrics are coming in; we've seen that other counter.

All right. So at this point, you have logs and you have metrics configured, and we're ready to migrate. Remember how Luke mentioned that we're just going to switch our DNS? Right now, just to recap, we still have our web running, and we still have our API running locally. So if I go to localhost port 8080, we see we're still pointing at the VMs, and if I go to the /ui part, you can see it's still running on the VMs. So now, to migrate, we basically just want to point that DNS at the new IP.
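For reference, the two changes land in the deployment's pod template: an env var that turns on the demo app's metrics endpoint (the name METRICS_ENABLED is hypothetical; the talk never reads it out), plus the standard prometheus.io annotations that the Prometheus Helm chart's default scrape config looks for:

```yaml
  template:
    metadata:
      labels:
        app: api
      annotations:                    # on the *pod* metadata, not the deployment's
        prometheus.io/scrape: "true"  # tell Prometheus to scrape this pod
        prometheus.io/port: "80"      # where the /metrics endpoint is served
    spec:
      containers:
      - name: api
        image: docker.io/<your-username>/api:v0.1.0
        env:
        - name: METRICS_ENABLED       # hypothetical name for the demo app's flag
          value: "true"
        ports:
        - containerPort: 80
```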
As you can see, we have our old pod terminating and the new pod starting up with this new configuration. We should also go to our app and generate some metrics by hitting localhost. Oh, maybe not this localhost, but the one that's running in kind. Let's do that a few times so we generate some new metrics. Now when we go to Grafana, we should be able to see our app-specific metrics, so I'm going to refresh the page. One of the metrics we have is how many times the service has started, so let's take a look at that. And that metric has come in. It's called service_started_total, and we can see that it has started a total of one time, which is true. The last thing, which I'm not quite sure will work yet, is looking at the response times the app currently has. It doesn't look like we have that quite yet. Let me try another query. It's possible we just need to generate a bit more traffic, but that's okay, because we know the metrics are coming in: we've seen that other counter. All right, at this point you have logs and you have metrics configured, and we're ready to migrate. Remember how Luke mentioned that we're just going to switch our DNS? Right now, to recap, we still have our web running and we still have our API running locally. So if I go to localhost port 8080, we should see that we're still pointing at the VMs, and if I go to the UI, you can see that it's still running on VMs. So to migrate, we basically just want to point that DNS entry at the new IP. In our case, we're using the IPv6 localhost address, ::1, because any IP that isn't 127.0.0.1 will route to the listener that's bound on all interfaces. Remember, we have one process listening on 127.0.0.1, and then we have another one, the kind ingress, listening on all interfaces. So we just want to choose an IP that our machine understands but that won't route to the VM one. This isn't, of course, something you'd do with real VMs, but these are the constraints we're operating in. So let's change our /etc/hosts. You should have this line pointing at api; I'm going to change it to use the IPv6 localhost address and save that. And now if I go to my web app, I can see it's pointing at the API running in Kubernetes, which is great. Awesome, we've switched over. We haven't restarted our web at all; it's just pointing at our new Kubernetes service. And the last part: let's double check that our metrics are coming in from web going to API. Again, I'm going to refresh this a few times to mimic a real frontend service that's calling the API continuously, and as the last step I'll check that these metrics are in Grafana. I don't know if they will appear; hopefully they will. Cool. What this query is doing is checking the rate at which requests are coming in over the last five minutes, and we can see it has picked up to a new value of 0.06. So yay, this is how we know it's running. Our metrics are showing up, we're all healthy, and we have migrated to K8s. At this point, let's go over what we've just accomplished. We had our web with its DNS entry for API pointing at 127.0.0.1 and a port, and we switched it over to the address that routes to nginx. In the VM world you'd do something similar: you'd take your DNS entry for api.company.com and switch it over to, say, your nginx load balancer, as shown here. And I think we've already covered step seven. So let's recap where we are and what we did. We built a Docker image. We deployed it to Kubernetes using a Kubernetes deployment. We added a Kubernetes service and a Kubernetes ingress so we can reach it locally. We added logging and metrics to our app, and then finally performed that zero downtime migration, switching one of our services over to the one running in Kubernetes.
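To make the cutover concrete, here's roughly what changed. The hostnames are from our demo setup, and the metric name in the query is illustrative, not necessarily what our app exports.

```shell
# /etc/hosts before: "api" resolves to the VM process bound to loopback.
#   127.0.0.1   api
#
# /etc/hosts after: "api" resolves to ::1, which routes to the kind
# ingress listening on all interfaces. Web needs no restart.
#   ::1         api

# In Grafana, a PromQL query along these lines shows the per-second
# request rate over the last five minutes:
#   rate(http_requests_total{service="api"}[5m])
```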
And as our last step, we're going to cover the Consul service mesh. Luke, do you want to talk about that? Yeah. Before we get into that, let's go to the next slide and chat a little bit about what we did. There's a couple of things to think about here. I know this was a pretty constrained example, so let's talk about it at a higher level: general lessons about adopting Kubernetes that might be useful to folks new to the ecosystem. I've personally done a Kubernetes migration at two different companies, and I want to give some advice to folks doing it for the first time. Keep it simple. The second you start using new tooling, you get really excited and you want to use all the new tools that exist in the ecosystem, like the stuff we just showed you: your Elasticsearch and your Fluentd, Prometheus and Grafana, a service mesh. You want to use all those new things, right? And so what I caution folks is: keep it simple. Don't try to set up a brand new beautiful platform in the sky. Focus on what you really want to do, which is get your app onto Kubernetes. And I'd encourage you to use your existing tooling. If you already have something you're using for metrics, don't switch to a new metrics provider; use whatever tooling exists in the Kubernetes ecosystem to get your metrics into your existing provider. If you already have a logging system, by all means use something like Fluentd, but send the logs into your existing system; don't switch to a new one. The reason I recommend that is that you really want to focus on what provides the most value to your organization, and that's getting your app running on Kubernetes. Once that process is smooth and well oiled, once you have your build pipeline set up and you can deploy and you know a migration is good because you're watching the metrics, it's a lot easier to start moving other things over. Eventually you can switch to that new tool you really wanted to use. But if you try all these things at once, then when something goes wrong there are about a hundred things that could have gone wrong, and you're not sure which part failed. So I really want to caution folks: keep it simple to start, focus on providing value, on getting your app into Kubernetes, on getting that first migration done. That in and of itself is going to be a ton of work, don't get me wrong, because if you're moving from VMs to Kubernetes there's a whole new set of things to deal with, especially around builds and security and access. We made it look pretty easy here, but there's a lot we didn't show that you'd need to figure out. So keep it simple; don't reach for all the shiny new tools unless they actually make sense for your use case. Irina, do you want to cover the next two? Yeah, and I definitely want to plus-one what you just said: the tooling we've shown isn't required, it's just an example. Definitely keep your existing tools. One thing I wanted to highlight is that the Kubernetes ecosystem is really rich, with a lot of tools, and if you're using something different from what we've shown, the likelihood is that that tool, or an integration with it, already exists in the ecosystem. So I'd definitely encourage you to look for things that are already implemented. The last piece I wanted to highlight is security, and I've alluded to this with our last step, where we talk about a service mesh. Kubernetes networking resembles a service mesh in some ways, but it doesn't have the security features of one. Right now everything within our cluster, like our nginx ingress talking to our API service, is in plain text. If you want to step up the security, especially the security of the networking within the cluster, I'd highly encourage you to look into something like a service mesh, because it can add mutual TLS encryption by default.
And this is definitely not something to do as the first step, but as the next step in your migration you might want to consider things like security and zero trust networking, to harden your cluster and the services running in it. Yeah, absolutely. Okay, and that's a great segue; I'm going to go through the Consul service mesh next. I'm actually going to pause here so I can get caught up with all the work Irina has done, and when I resume the recording we'll be all caught up and can try out the Consul service mesh. All right, and we're back. I've got everything caught up now with all the work Irina did, so we're going to do a really quick demo of the Consul service mesh and how it might be useful if you have a larger deployment with a bit more complexity involved. Let me share my screen here. So I got caught up to here. What we have is our browser talking to web, web talking to nginx, and nginx talking to the API. And like Irina said, right now it's all done in plain text: web talks to nginx in plain text, and nginx talks to API in plain text. One of the benefits a service mesh can bring is mutual TLS, so the idea is that we have TLS across the whole connection. The other thing a service mesh brings is that it sets up proxies in front of everything, and those proxies are programmable: you can change the requests and do whatever you want with them. So what I want to do is a really quick demo. We're going to swap out nginx for a Consul ingress gateway, and we're going to put a proxy in front of the API service so it's part of the mesh. Then we'll show you some of the cool routing capabilities we get once the API service is in the mesh. We're going to make API a really buggy service with lots of errors, then use our routing rules to say: okay, if there's an error, just retry it. This might be useful in your migration; obviously you should look into why there are lots of errors, but it can be a useful stopgap measure. We have all of this documented in workshop step eight, so let's go into step eight. In this section we're going to install Consul on our Kubernetes cluster and then configure it. The first thing we need to do is uninstall nginx, because nginx is sitting there listening on port 80. We could have our ingress gateway listen on a different port, but then our nice routing, where the web service doesn't know anything changed because it always talks to port 80, wouldn't work. So in this case we're taking the quick road and uninstalling nginx. Okay, nginx is uninstalled, and now we're free to install Consul. The first thing we need to do is add the Helm repo, since Consul is installed via Helm. Then we're going to create a values.yaml file for our installation; this is what's used to configure the Helm installation. A couple of things to note here. One, we're using one replica for the Consul servers.
Now, usually you want three, spread across three different nodes, so that if one node goes down the other two are still up and keep quorum. In kind there's only one node, so we're setting this to one for this example. Next, this is connectInject. What this does is say: I want you to inject a sidecar. A sidecar is just another container. Remember way back when we talked about how pods can have multiple containers? This is where that's really useful, because we can inject a proxy as a sidecar container, just a container that runs in the pod. This setting makes the injection automatic, so you don't have to set it up in your deployment.yaml: when Consul sees the pod come up, it automatically injects the proxy. This controller setting allows us to use the custom resource definitions, which are newly out in the Consul beta. And then here we're setting up our ingress gateways; this is going to replace our nginx ingress controller. We're setting up our gateway here, and it's going to run on port 80. So let's save that file, and then we're going to run a helm install. Don't try to kubectl apply the values file, because that won't work; it's Helm configuration, not a Kubernetes resource. So we run helm install and point it at that file. I'm going to open another tab so we can watch things come up, with watch kubectl get pods. It's a bit ugly, so I'll make it a little smaller, but basically we have our Consul services coming up here. This will take about two to three minutes; it's sometimes a little slow because we're running on kind. While it's coming up, let's look at the components. Let me make this a little wider. Okay, there we go. Here's API: it's running, doing its thing; we haven't made any changes to it. Then we have our Consul server; this is where all the data is stored, and it's what we set to one replica. Then we have our ingress gateway. It's currently in an error state because the rest of the cluster is still coming up. You can see this consul pod here: that's what's called the Consul client agent, and the ingress gateway relies on it, so it needs to be running for the gateway to work. And you can see that now that the Consul agent is ready, the ingress gateway is coming up. This is what's actually going to listen on port 80; it's what we'll hit to get everything working. I'm going to make this smaller. The gateway should mark itself ready soon, but I'm immediately going to make a change to it. Right now there isn't support for the ingress gateway listening on a host port, because it's built more for running in a cloud, so we're going to do a little kubectl patch on that deployment so it listens on the right port, port 80. I'll run this kubectl patch command, and it patches the deployment so it's bound to the same port nginx was on, which means traffic can actually come through it. We can run this rollout status command to watch for the rollout to complete, but I'm going to fast forward here; I don't want to take too long.
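For reference, the values.yaml for this step looks roughly like this. The key names follow the Consul Helm chart's conventions from around the time of that beta, but treat it as a sketch and use the workshop repo's file as the source of truth.

```yaml
global:
  name: consul
server:
  # One replica only because kind has a single node; use 3 (spread across
  # nodes) in production so you keep quorum if a node dies.
  replicas: 1
  bootstrapExpect: 1
connectInject:
  # Automatically inject the sidecar proxy into annotated pods.
  enabled: true
controller:
  # Enables the (then-beta) custom resource definitions like ServiceRouter.
  enabled: true
ingressGateways:
  # Replaces the nginx ingress controller.
  enabled: true
  gateways:
    - name: ingress-gateway
      # Listener port; we separately patch the deployment so it binds
      # host port 80 on the kind node (the kubectl patch mentioned above).
      service:
        ports:
          - port: 80
```

The install itself was `helm repo add hashicorp https://helm.releases.hashicorp.com` followed by something like `helm install consul hashicorp/consul -f values.yaml`; the release name here is an assumption.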
So while that's all coming up, we can look at the Consul UI. We're going to do a port forward to localhost 8500, and if we go over to localhost:8500/ui we should see the Consul UI. This is a list of all the services in our service mesh right now. There isn't much yet: we can see our Consul instance, which is running as expected, and our ingress gateway is starting to come up; we have one instance up and the new one on port 80 that we're still waiting on. It should come up soon and then we'll be good to go. Now, when we look at our list of services, there's no API. So let's go back to our notes. What we want is for API to be part of the service mesh, and it's not going to get the sidecar automatically; we wouldn't want the Consul service mesh injecting everything willy-nilly. You actually have to annotate the pod to say: hey, please inject me with a sidecar. To do that, we're going to use that patch command again and add an annotation that requests the sidecar. Let's run the patch command, and we can see API starting to come up, showing 0/3. That's the number of ready containers: previously we had one container, now we have three. So what's going on there? That's the injection. We're getting our sidecar running the proxy, plus another helper sidecar that deals with the lifecycle of these pods. If we go back to the Consul UI, we should now see API, and here we do: API, connected with its proxy. It's all up and running as we expect. It's part of the service mesh with its proxy running next to it, but we're not done yet. Just like we had to configure nginx to route to a service via the ingress object, we need to configure our ingress gateway in Consul to route to the API service. We don't want to just expose every service; we have to explicitly say we want it. The first thing we'll do is set the protocol for our API service to HTTP. This is so Consul knows what protocol the service speaks, and it can provide better metrics that way, because it knows it's HTTP and not raw TCP. We apply that, then do kubectl get servicedefaults to make sure it's working, and it says synced true, so that worked. Then we configure our ingress gateway; let me show you what that config looks like. Here we're saying: hey, ingress gateway, listen on port 80 with the HTTP protocol, and when you get a request for API, route it to the API service. So this configures the ingress gateway to route to our running API service. And if we look over here at the ingress gateway in the UI, we can see it's configured to route to API. That's all we needed. We've set it all up: our ingress gateway routes to API through its proxy. So we should be able to hit this and see it working, and we do: we can see we're now going through the ingress gateway.
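For folks following along, the two config entries from this step look roughly like this. These use the consul.hashicorp.com/v1alpha1 CRDs that the beta controller enables, and the sidecar injection we did just before was a kubectl patch adding the `consul.hashicorp.com/connect-inject: "true"` pod annotation; the exact manifests in the workshop repo may differ slightly.

```yaml
# Tell Consul the api service speaks HTTP (not raw TCP), which unlocks
# better metrics and L7 routing.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: api
spec:
  protocol: http
---
# Configure the ingress gateway: listen on port 80 with HTTP, and only
# expose the services we explicitly list.
apiVersion: consul.hashicorp.com/v1alpha1
kind: IngressGateway
metadata:
  name: ingress-gateway
spec:
  listeners:
    - port: 80
      protocol: http
      services:
        - name: api
```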
We've replaced the nginx ingress controller. Okay, so you're thinking: hey, so what? What's the point of replacing the ingress? That was already working for me. Well, look here: we're hitting our web service, and you can see we're getting this new header added. That's the proxy adding the header, so you can see the request going fully through the mesh. One big benefit we already have is that there's now TLS between the ingress gateway and our application, and the certificate for it is provisioned entirely by Consul. We don't have to deal with provisioning certificates or anything like that; it all happens automatically. And if anyone is listening to traffic inside your cluster, that traffic is now encrypted, so they can't actually read it. But I also want to do a really quick demo of the power you have now that there are proxies in front of everything. What we're going to do is cause our service to have failures. Luckily, the API service supports setting an error rate: set it to 0.5 and 50% of requests to the service will just fail. So let's do that. We should see the API pod roll out with the new configuration; there's the new one coming up. And now when we make requests, we'll see half of them fail, which is probably not ideal, right? If we do this curl again, we see it's not healthy. And if we look over here, yeah, I think I know what's happening. This is what I want to show you: if we curl, we can see this request succeeded with a 200, but then this request failed with a 500. Curl again: 200. Again: 500. So this is really not ideal. And this is where we have the power of the service mesh. Why don't we just tell the proxy (the ingress gateway is also a proxy, right?) to retry requests that get a 500? It's a bit of a band-aid fix, and you should probably look into why your service is returning 500s, but it could be really valuable as a short-term fix. All we need to do is create a new custom resource called a ServiceRouter. If we look at its spec, it's saying: I want you to retry up to three times whenever you see a 500. Let's apply that, and we'll do kubectl get servicerouter to make sure it applied properly. It says synced true, so now let's go back to this curl. We're not seeing any errors anymore, which is pretty cool. You can see the duration changing, though: that one took a while, probably because it was retried three times.
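The ServiceRouter we applied looks roughly like this; the field names come from the same consul.hashicorp.com/v1alpha1 CRDs, with the retry policy described above, though the match rule here is an illustrative default.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceRouter
metadata:
  name: api
spec:
  routes:
    - match:
        http:
          pathPrefix: /
      destination:
        service: api
        # Retry a request up to three times when the upstream returns a 500.
        numRetries: 3
        retryOnStatusCodes: [500]
```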
So that's just a really quick demo of the power of the service mesh. There's a lot more you can get out of it, and it's one of those things that depends on the complexity of what you're dealing with; we did talk about keeping it simple. But if you find yourself manually provisioning a bunch of TLS certificates because your organization has a rule that all requests must use mutual TLS, then a service mesh, although it's complex to add, can actually save you from a lot of complexity. So depending on the trade-offs, it might make a lot of sense for your organization. Cool. So that's the end of it. You can find us on Twitter at @ishustava and @lkysow. Did you have any final words, Irina? Yeah, thank you so much, that was awesome. Thank you all for coming to this workshop, and feel free to talk to us on Twitter afterwards; we'll probably be in the chat right now talking to you. Yeah, absolutely. All right, everyone, thanks for listening, and good luck with your Kubernetes migrations.