Appreciate you joining our live Q&A today about debugging Kubernetes, or what to do when it goes wrong, because it will. Joining me here today, I'm Tim Stahl, head of presales US here at Apia, and I'm joined by Graham and Salman. I'll let you two introduce yourselves. Graham, if you want to go first, you're first on my video over here.

Yeah, hey everyone. So yeah, Graham Coleman. I'm a presales engineer for Apia. Long background in IT, Kubernetes, distributed computing. I spent many years at Red Hat working with OpenShift, and prior to that working in the integration and Java EE space. I'll hand over to Salman.

Excellent, thank you very much. Welcome everybody, it's good to see some familiar faces. So my name is Salman Iqbal. I'm a solutions engineer at Apia. I think my title has changed recently, I'm not sure, but I am a solutions engineer at Apia, and I've been working with Kubernetes for the last three and a half or four years. I come from a developer background, but I've been doing Kubernetes work for the last three and a half, four years. I mainly focus on machine learning inside Kubernetes, trying to run Kubernetes workloads at scale. So that's me. And teaching people Kubernetes. And teaching people Kubernetes, yes, that's what we do too. And talking at meetups. Just doing a few things. You can go and search for Salman, you'll find him everywhere. Not really.

Cool. So I'll just let you know this is informal, a live Q&A. Feel free to ask any questions; come off mute. You'll notice that everybody is allowed to talk, so if something comes up while we're doing this, you can throw it in the chat, throw it in the Q&A, or unmute. Feel free; we'll make this a discussion. With that, I'm just going to go ahead and hand it off and we'll get started.

Yeah, so there are a few things we thought we'd discuss, but the main thing we're going to focus on is what actually happens when you deploy an application in Kubernetes: what does it look like, and more importantly, what are the things you need to watch out for when you're deploying? We're going to deploy a simple app initially, see how that goes, and talk about some debugging techniques while we deploy it. And then Graham and Tim are also going to talk about some other issues you might come across, errors you might have seen, like CrashLoopBackOff and whatever else. So, everything I run into every time I deploy something. Basically, it's what he's saying. Yeah, every time you're trying to deploy something, because there are so many moving parts, it might seem like there's a lot, but hopefully we'll do a demo and talk through it. So I'm going to share my screen for a few minutes and we'll go from there. If you're expecting something else, please let us know; whatever you need to discuss, we can talk about. Let me just bring that up and make sure I can see everything in here. Okay, move that. Right. What can you see? Can you see the right screen? Perfect.

So real quick, we have a Kubernetes cluster: we have a control plane, and we have two worker nodes.
When we submit any of our applications that need to be deployed, we submit them to the Kubernetes control plane; the control plane receives the request and tries to deploy the workloads onto your worker nodes. You can have as many workloads as you like inside the cluster. In this case, though, I have a local cluster running — you can use anything you like — and that cluster just has a single node. So that's what that is.

And what actually happens when we deploy? Here's the thing: what does the structure of a Kubernetes application actually look like? You create what's known as a pod, as you might be aware, and inside a pod you define what kind of image you'd like to run — we'll show an example in a few minutes — and you can deploy that pod. Usually, though, we don't deploy a pod directly; we deploy what's known as a deployment, because a deployment has some information about what we're about to deploy, like the name of the image and how many replicas we'd like to have. The deployment then looks after that pod: if it crashes, it will bring it back up to the desired state. That's fine, it will run itself.

But what if you have multiple replicas? How do we decide where to send the traffic? How does one application running inside a pod communicate with another application? This is where we bring in what's called a service. You can think of a service as an internal load balancer: it decides where to route the traffic. But that's all internal. What if you want to access the website outside of the cluster? That's where we create an ingress. And this is what we're going to do: create a deployment, with one replica initially (we can change the replicas if we need), and then we'll talk about, once you've deployed, how you know that what you've deployed is correct, and how you debug the issues. Does that sound good to everybody? Let us know in the chat. Anything from you, Tim and Graham? I was just moving the screens across. Excellent. Perfect.

So I'm going to do this. Clear that, make it a little bit bigger. I have a cluster: if I do kubectl get nodes — you can do that yourself as well — this is the cluster that's running. Perhaps we can stick the information we talk about in an email as well and send it over to you later on. So I have this cluster, it's just one node, and I haven't really deployed anything inside it. I can do kubectl get nodes, kubectl get pods, or deployments, or anything — nothing is deployed in the default namespace. So it's not there.

We're going to go ahead and create a deployment, so I'm going to bring up the code. We're going to run a container — but which container? Well, let's just try to run the container itself first. Stefan has created this container, podinfo — shout out to Stefan. Without Kubernetes, imagine this is just a static page; it serves a page, which I'll show in a second. I can do docker run and try to access the container first, to make sure it's all correct. If there's something wrong, we'll figure it out; we can have a look at the container itself. But here's what we can do.
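[For reference, a sketch of the standalone container test being described here. The image name stefanprodan/podinfo is an assumption based on the public podinfo project, not necessarily the exact demo command.]

```bash
# Run the container on its own, before Kubernetes is involved.
# -p maps a port on the host (8080) to the port the app listens
# on inside the container (podinfo serves its page on 9898).
docker run -p 8080:9898 stefanprodan/podinfo

# Then browse to http://localhost:8080 to confirm the page is served.
```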
I can use this -p flag to connect my machine to what's running inside the container, because the container is isolated. So I do docker run with the -p flag, and the container itself, when it runs, the website inside it listens on port 9898. So now this is running, and what it's done is attach port 9898 inside the container to port 8080 on my machine. So if I bring over a browser — just bring it here, localhost:8080 — I can see the page that's being served. It's just a static page, nothing special, but I can check the container runs correctly. So we know this is the container, and this is what we're going to deploy inside the Kubernetes cluster.

So here you go, let's open this deployment file. This is what a deployment file looks like, and there's some information in here — ignore this bit for a second, we'll come back to it. At the top (it's big enough, I think, right? you can all see) we've got some information about what kind of resource we're creating: we're going to create a Deployment. If I come down to the bottom, this is perhaps the most useful bit: what image we'd like to run. This could be any image you're trying to run; this one is going to be pulled from the container registry. Then a couple of things in here: we've got the name, we have the image pull policy, and, more importantly, what port the website runs on — because it's a website. Then we've got a bunch of labels — we'll explain labels in a few seconds, why we need them, and this is where things usually go wrong, around labels — and some metadata information. So what we're going to do is create this deployment and then check what happens. If I wanted to deploy multiple replicas, I could just come in here and change the replicas to four or whatever, but we'll just deploy one replica.

So that's the deployment file. If I do kubectl apply -f deployment.yaml, it goes through and says the deployment has been created. So I can check that, just to make sure it's all correct. It says, yeah, there's one pod, it's up and running. That's a good sign that stuff is running. And I can check the pod itself.

So Salman, quick question: it says READY 1/1 — what does that mean? Excellent question. In a deployment, we can define replicas — in the deployment here, we can say how many replicas we're running. If you don't define the number of replicas, it defaults to one. So all this is saying is that you asked for one, and you have one pod which is ready. That's the desired-state model of Kubernetes: it keeps driving you toward the desired state. If I changed this to two and only one was up, it would say one out of two ready. Does that answer your question? Yeah, that was an excellent question.

A follow-on: how does Kubernetes know it's ready? Oh, excellent, very good, this is really good. How does Kubernetes know your container or your pod is ready? There are probes in Kubernetes that you can use: there's a readiness probe and there's a liveness probe. (There's also a startup probe.) The readiness probe is something you configure. For example, when your application starts up, you can check whether the process is up and running — that's where you use a liveness probe.
But your application could be up and running and still not be ready to serve traffic — perhaps it needs to load some data from a database into a cache. So you can configure this readiness probe to check against the application and say it's ready. Now, you didn't see me configure that at all. If I don't configure a readiness probe, it will assume the pod is up and ready. We're lucky in this case because it's just a static page, so it's ready to serve traffic. But these probes are something you can look at, to check whether your application is ready or not. Maybe that can be a topic of discussion for next time.

Yeah, because it's one of those things: if you're debugging and your container is never ready, never in a ready state, take a look at the readiness probes — does it have any configured? It might be that something's configured that's pointing at something that hasn't loaded, or the probe is trying to reach a static web page that isn't loading. So that's the type of thing you can look at first. Why isn't it ready?

Yeah, that's excellent, that's a really good example. Liveness and readiness are similar checks, but I'll use this liveness example, as Graham was saying. You define the probe — this is a liveness probe, and you can define the readiness probe the same way — and exactly as Graham was saying, you have to configure a path in your application; you have to give it an endpoint, and this is the endpoint it's going to use. In that endpoint you can write any logic you like. Usually, and in this case, it returns HTTP 200. If you get a 200, that means it's all good: the process is up and running and returning 200, so you know it's live. But you can write any logic you like. And as Graham says, check that it's correct: is the path correct, is the port correct. Also, sometimes your application takes a little while to start up or get ready, so you can add some delays at the beginning, so it doesn't start checking until the app is actually ready to be checked. So that's the probes.

Yeah, very good. Could that be used for something like dependencies — a database or something like that? If you're deploying all of this at once and waiting on the database to come up, give it 20 or 30 seconds to finish doing what it needs to do, and then spin up and go from there? Yeah, I guess you can use it for that case. One of the things people say not to do, though, is make these microservices reliant on other things, because if they fail you end up with a cyclic dependency: this isn't ready, so that's not going to be ready, so something else isn't going to be ready. So that's something to watch out for. But yeah, that's a good example as well.

Okay, excellent. Keep them coming — if there are any Q&As or anything like that, please let us know. And I'll say real quick, I apologize: I realized the chat was disabled for some reason. So thank you for that; I went ahead and updated it, so now everybody should be able to... I was wondering why it was so quiet. Also, come off mute if you want to just ask — feel free, as long as Tim's enabled people to come off mute. Go ahead and check, Tim. I'm doing that, I'm doing that.
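[A sketch of the probes just discussed — not the exact manifest from the demo. The /healthz and /readyz paths are assumptions based on podinfo's usual endpoints; swap in whatever your app actually exposes.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-probe-demo
spec:
  containers:
    - name: podinfo
      image: stefanprodan/podinfo
      ports:
        - containerPort: 9898
      livenessProbe:              # "is the process alive?"
        httpGet:
          path: /healthz          # assumed endpoint; should return 200 when live
          port: 9898
        initialDelaySeconds: 5    # the start-up delay mentioned above
      readinessProbe:             # "is it ready to receive traffic?"
        httpGet:
          path: /readyz           # assumed endpoint
          port: 9898
        periodSeconds: 10         # re-check every 10 seconds
```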
So, let's go back to where we were — that was a very good discussion. If I do kubectl get pods, I have a pod which is running. I can also check the logs of the pod to see if your application is logging anything to standard out or standard error; you'll see something that looks like this. That's another way of checking that everything's good. And this is what Graham was also talking about: the status is Running — that's the liveness probe telling us the process is up — and then we've got this Ready state, meaning the pod is actually ready.

Now, this tells me it's ready, but I need to be able to check that what I've deployed is actually correct. As you remember from the diagram, I can't really see this thing from outside the cluster unless I deploy a service or an ingress, and that's what we're going to do next. But just like with the container, where I could do that port mapping and check it out — can we do something like that with a pod? And we can. With a pod, we can port-forward using kubectl, so we can at least test that the pod is running correctly. All the things we've looked at so far look okay — the pod is running, I can see some logs, there are no errors — but we can check it directly.

So I can do kubectl port-forward, then the kind of resource I'm trying to port-forward — I'm going to say pod — and then the name of that resource, which is podinfo. Then I pick a random port on my machine, let's pick 8083, and then the port this container is listening on. I know that because when the container was built, and from its logs as well, it's actually listening on port 9898. So if I run this command and open a browser and go to localhost:8083, I should see that page like you saw before. This is 8082, this is 8083. Let's do this: localhost... missing an L. So the first thing you do in troubleshooting Kubernetes is check the spelling of your URL. 100%. Yeah, 100%.

So we've got the website running inside the pod — that's all we've confirmed so far. We haven't gone all the way yet; we're just building up. That confirms the pod is running. Okay, now we're going to deploy our service, because we need to map this up. And how do we map it? Well, let's look at the service YAML file here. It's quite straightforward; there are a few things in here. The service needs to target a port — let me show you another slide. Basically, we have a pod and we have a service: the pod is listening on a container port, and the service has what's known as a target port. We need to match these — to make sure these two match in our YAML configuration files. So let's go back to our YAML file. This says targetPort: 9898, which is correct, because that's what the container port was, 9898; we're matching that. That's another thing to watch out for: when you do a deployment and things don't quite work, or you get an error, perhaps the ports aren't mapped correctly. So have a look; just make sure the ports are right.
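[A sketch of the Service wiring being described; the matching rule is the point, and the names are illustrative.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  selector:
    app: podinfo        # picks the pods to route to (labels, covered shortly)
  ports:
    - port: 3000        # the port the service itself listens on
      targetPort: 9898  # must equal the containerPort the app listens on
```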
One thing real quick while you're doing that: we had a question come in, and I'm a little late to it, but Michael asked, what role does describe pod play in your health checks? Very good, let's do that. So, kubectl get pods — actually, one of the things you can do with this command-line tool is describe all kinds of resources: kubectl describe, then the resource type, pod, and then the name of the resource (you can also write it with a slash, pod/name). So let's describe this pod and see what kind of information we get. If I scroll up a little, it provides more information, more detail than you get with just kubectl get pods. You get information about the name, which namespace it's running in, the container ID, and so on. It also tells you whether it's ready or not. But if you go to the bottom, you sometimes get information about the events that have happened: it was started, the image was pulled — there's a component behind that, the kubelet, and I think the different components inside Kubernetes is a topic I'll probably cover next time around. I think we might have a video on our YouTube channel; I'll find it and send it in a few minutes. But this gives you more information, and if it errors for some reason, you might see some of that information here. I can't remember exactly what kind of information comes up here — Graham, you might remember some.

Yeah. So Kubernetes has an event collection system: events get posted by all of the components inside Kubernetes and all the resources. You can grab all the events from every namespace across the whole cluster, or narrow it all the way down to the events that this one pod you're looking at has sent out. There are some standard events that come out of just the pod spec, which you'll see there — the kubelet's started container, created container, and things like that. And you'll also see errors: if it can't pull the image, or it's pulled the image but can't start it successfully because it's errored, you'll see things like that, just as events being pushed out by the pod. Yeah.

So ImagePullBackOff is another error: it basically tries to pull the image, which maybe doesn't exist. Maybe you misspelled the thing in here — instead of podinfo you wrote something else. Maybe you don't have the right version, maybe you don't have access to the repository it's trying to pull from. This one is coming from Docker Hub, which is open, so that's fine. The way it works in Kubernetes, it'll try to pull the image, and then you get this error, ImagePullBackOff: it backs off on pulling the image, and then stops trying, I think after 10 minutes or something like that. So that's an error that can happen. If you've not seen that before: an image is just the Docker container image that sits in a repository somewhere, and that's just the address to it. It defaults to Docker Hub and tries to find that address for that Docker image. So if you misspell it, you'll see an ImagePullBackOff, because it just can't find the image — because you've misspelled it.
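[The describe and events commands just discussed, for reference; the pod name is illustrative:]

```bash
kubectl describe pod podinfo-6d9f9c7b9-abcde   # per-pod detail plus recent events
kubectl get events --sort-by=.metadata.creationTimestamp   # events in the current namespace
kubectl get events -A                          # events across every namespace
```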
So you can take a look at that. Perfect. You kind of wish they would have just said "can't find the image". Yep. Yeah. Well, there's a history to that name that we can talk to, because there is a back off, right? It'll try to pull the image, but it'll back off for a default time period and then go and try to pull it again, just in case there was a network issue or a communication issue or something. So it'll keep on trying, and you can configure that once you get into the depths of how the container orchestration system works. Which is why it's an ImagePullBackOff. It's not a "can't". It is a "can't", but "I can't yet — I'm going to keep trying." Yeah, just try again. Give it a little time. Try again.

Okay, excellent. So far, we've deployed a pod, and we have a service that we're going to create in a few seconds, and we just need to make sure those two things match. There's another thing we need to make sure matches, which is — if I hop back in here — if you look at the service: a service could point to multiple deployments, or to a bunch of pods sitting behind a deployment. How does it know which pods to send requests to, which containers specifically? In here, as you see, we don't specify the name of the deployment anywhere; all we specify is the name of the service, which I called podinfo. The way it picks the pods it needs to send traffic to is with what's known as a selector, and with a selector, you select on labels. The labels are defined in here, under this bit: selector, matchLabels, app: podinfo. And whatever you define there, you have to define here as well. Now, usually I just copy and paste a YAML file and change the things I need to change — that's what we follow. But that's how it picks the pods. This label happens to be the same as the deployment name, but it doesn't have to be; it's these two that have to be the same. I just kept them identical for simplicity. So those are basically two things we need to match: we need to make sure the ports match, and we need to make sure the labels match. If the things you've deployed aren't working, just check: maybe the ports aren't right, maybe a label isn't correct.

Just to step back from that, Salman — for the guys and girls on the call: a pod will just run somewhere across the worker nodes within your cluster. It doesn't matter where; it'll run somewhere, and you don't know where. The Kubernetes scheduler will run your pod on any one of the nodes. So the service is a way of decoupling from wherever that pod happens to be running on the worker nodes. Kubernetes understands where it's put the pod, and the service is a way of addressing it: you address the service, and the service knows where the pods are running in the cluster. You need that service, because if a pod disappears and moves onto another node, Kubernetes knows about it, and the service changes its load balancing to point to where it knows the pod has been moved. So you always address the service, which understands where things have moved, because the scheduler updates the information in its internal database.
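[A side-by-side sketch of the label matching Salman and Graham describe; names are illustrative, and as noted, only the selector and the pod labels have to agree.]

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo          # the name does NOT have to match the labels
spec:
  selector:
    matchLabels:
      app: podinfo       # how the deployment finds its pods...
  template:
    metadata:
      labels:
        app: podinfo     # ...so the pod template must carry the same label
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  selector:
    app: podinfo         # the service routes to pods with this label too
  ports:
    - port: 3000
      targetPort: 9898
```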
And for that addressing, we're not using real addresses to link the service to the pod; we're just using a label. Kubernetes does most of its clever things through labels. So when a pod moves, the service links back up to a pod carrying that label, wherever it's now running. Hence why the label bit is really important. Perfect, exactly.

So we've got these labels here, which is what we're selecting on, and now we'll create the service. So let's do that. And then, how do I know that what I've created is correct? Well, I can do the same thing: kubectl get service. There's podinfo — that's the one I created; Kubernetes has its own service for doing its own stuff, which is already running. So that's now created, and we told it to run on a specific port — it could be any port, you can pick anything you like — and that's what it's running on now. How do we check it's all working? Well, remember I did port-forward before? I can do that again, but this time for a service, podinfo. I'll pick another port, 8085 this time, and this time the port of the service itself is 3000. So if I run this, it does something similar, and if we go to localhost:8085 and still see the page, that means we've wired everything correctly. And just to prove that not every port serves everything: you can see 8086 doesn't have anything. That's a real thing — I've deployed stuff on my local machine that ran like that. So — sorry, go on, Graham, you were going to say something. No, I was just laughing. But nothing else answers there. That's important. Yeah, it's important. So basically we're just making sure everything is correct.

So far, what we've talked about is what can go wrong. The bits to watch out for: labels, as Graham was saying — very important — and the port itself, the target port; make sure that's correct. And there are different types of services — we're not going to go into that today — you can expose them externally too, and how they work internally is a matter for another time. But that's what the service is: just an internal load balancer, as Graham was saying. If a pod goes missing, or we have multiple replicas and need to pick one, let the service decide. Abstraction.

Now, the thing is, I need to be able to access this outside the cluster. It's all well and good doing port-forward, but imagine it's a website people need to access — you can't give everybody kubectl and say, hey, just port-forward it. That's not right, because you'd have to set everybody up for that. This is where we use an ingress. You can think of an ingress as an external load balancer: we send a request to it, and in here we write some rules, using this ingress YAML file. You can give the file itself any name you like, but this is the important bit: kind. You also have annotations, which provide additional features for ingress, but the more important thing is the rules in here. I wrote this HTTP rule that says: for any request that goes to the root, send it to the service called podinfo, which is running on port 3000. So what we're doing is matching this, basically: the service is running on a port, and the ingress has a service.port, and we need to match that.
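[A sketch of the Ingress rule being walked through, in the networking.k8s.io/v1 shape; the /login rule and its backend are illustrative extras.]

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
spec:
  rules:
    - http:
        paths:
          - path: /                  # any request to the root...
            pathType: Prefix
            backend:
              service:
                name: podinfo        # ...goes to this Service (name must match)
                port:
                  number: 3000       # and this must match the Service's port
          - path: /login             # illustrative second rule: each "-"
            pathType: Prefix         # starts another list entry
            backend:
              service:
                name: login-service  # hypothetical second service
                port:
                  number: 3000
```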
Once that's matched — and the name of the service too, of course; you saw it in the YAML file — again, things to watch out for: if it's not working, check the port, check the ingress, check the port number, check the service name, see if it's all correct. And if you need multiple rules, you can write multiple rules in here. You can say: if somebody goes to /login, send them to a different service. Every time you see a dash in YAML, that means it's a list, so you can keep adding entries multiple times.

So, kubectl apply -f ingress.yaml. Now, the way ingress works is that it spins up another component in the cluster itself that looks after all the requests coming in. So you can't really port-forward an ingress, but you can port-forward that ingress controller. We're not going to do that, but if we needed to, we could check it. In Minikube, the way you access stuff is to get the Minikube IP — an IP exposed locally — and access the website on that. So if I type this URL and we see the page, that means we've configured all of this correctly, and everybody's going to clap, and we'll say we've done a deployment from top to bottom: ingress, service, and pod, everything correct. And if it doesn't work, we can try to figure out why. Oh, it worked. So podinfo itself is clapping, which is good. So what we've done is gone all the way through: we create a deployment, the deployment creates a pod, and then we have the service, and we have this ingress. That's what I wanted to share in terms of that.

Right, there are other things that can also go wrong after this deployment. Graham, what else would you like to mention? Or any questions, by the way? Yeah, I have no questions — does it all make sense? Come off mute if you've got a question about anything. Well, maybe not anything — nothing personal, just mainly Kubernetes and what we've just seen. My question is: what is that thing that's clapping supposed to be? It's open to interpretation — whatever you think it is, whatever you want it to be. Nice. I'm afraid of it. Don't be afraid of it, it looks so cute. That's why I'm afraid of it.

Hello. Hey. Yeah, so thanks for the opportunity. My name is Raheem. This is the very first time I'm really seeing somebody interact with Kubernetes on-premise. Like I put in the chat there, I've only interacted with Kubernetes on Google Cloud, so this is actually a good one. However, I came across Minikube and kind — I was playing around with VS Code and I saw Minikube and kind on VS Code. So if it's possible to expand on it — I'm trying to respond to Graham's comment here — which is most preferred, Minikube or kind, and what is actually the difference?

Yeah, sure. So they're both locally running Kubernetes instances. If you're using GCP or AKS or EKS, that's Kubernetes running in the cloud for you. So your options, if you want to run something locally on your laptop to do some really quick dev testing, are a few. You could actually just install a Kubernetes cluster from source.
If you're running Linux on your laptop, fine, you can run it there, or you can create a VM with Linux on it and run it in there. But don't do that, because that's horrible and you're getting into a world of pain. So the other options are: Minikube is exactly that, but packaged up into a VM, with a control layer around it, so you can just start up Minikube and it will create a Kubernetes cluster inside a VM — I think it uses Vagrant — and give you access to a Kubernetes cluster. Minikube is quite good in that it gives you additional extras. For the ingress, for example, that Salman was looking at: you can deploy an ingress controller inside Minikube using the Minikube commands. There's a ton of stuff that Minikube wraps up and gives you. And that will just run on your laptop, so you can use it and deploy things locally.

kind is similar. kind will run Kubernetes — it stands for Kubernetes in Docker. Kubernetes in... what's the N? I think it's the "in": K-i-n-D. There you go, Kubernetes in Docker. So all you need is a Docker runtime — whether you use Docker or Podman, whatever — and you can run kind inside that runtime, and it will spin up a Kubernetes instance as a Docker container on your machine. It's just a great way of getting Kubernetes locally; it's pretty quick to spin up and spin down. The drawback is that it will consume a lot of the resources on your laptop, so if you try to deploy any reasonably sized application inside Kubernetes in one of these environments, it'll run like a dog. You've just got to be careful what you run inside it. It's great for exploring, great for poking around, because you can't damage anything. But it's got its limitations. Does that make sense?

Yeah, it does. And that takes me to my second question, if you don't mind. In a number of environments I've seen them use hybrid setups: you have this running locally, and sometimes you just want to make it work with the cloud infrastructure. Does this come in handy when you're trying to do that, and how easy is it to actually make it work in a hybrid setting?

So when you work hybrid, there's no direct link between what you do on your local machine and moving that over to another Kubernetes environment. The way people do this is: first of all, you've got your Docker images — the actual things you're running — and they'll be in a registry somewhere. Wherever you've pushed that Docker image, if you're moving between your local environment and your cloud environment, they both need to be able to access that container image. So wherever you're building application code and pushing it as a Docker image, that has to be shared between both. Now, getting things deployed into a Kubernetes environment is all about the manifests Salman was just going through: the deployment, the pod spec, etc. So if I created a deployment manifest on my local machine and tested it, and it pulls from a container image that I know is shared with my cloud environment, then that's portable, right? You can move directly from what you've deployed on your local machine to running the same file in your cloud environment.
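[A sketch of that "same manifest, different cluster" portability. The context names are hypothetical; yours come from kubectl config get-contexts.]

```bash
kubectl config get-contexts                     # list the clusters kubectl knows about
kubectl --context kind-kind apply -f deployment.yaml         # local kind cluster
kubectl --context my-cloud-cluster apply -f deployment.yaml  # hypothetical cloud context
```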
So that's the shareable artifact that might go into your CI/CD process, or into tools such as Helm or other things that build and manage your Kubernetes deployment. That's where you move between the environments: you can use the same manifest, it'll pull the same container image and run the same things. That's the shareable part of it.

So is Helm like a template that you use for Kubernetes or something? Yeah, yeah. But if you're just starting out, probably avoid Helm for your own thing. Use Helm to go and grab things — I want to deploy a MySQL database, go and use a Helm chart to deploy your MySQL database. But for your own app, if you're starting out and building it from scratch, just create the manifests — the deployment and the service and the ingress — create all that separately. You can get into templating once you have to do this at scale and want an easy way of templating deployments and services. That's my advice, anyway. Thanks. Thanks so much.

Yeah, I'll say, when I started in Kubernetes, I tried to start directly with Helm and just immediately got lost. Yeah — once you've learned what a deployment looks like and what types of information you need to put in there, the metadata you need, and once you learn the ingress and the service and get used to those artifacts, then it's easier to go to Helm, because you understand that a bunch of it is just boilerplate. That's what Helm's there for: you can boilerplate most of it, and only a few things change. But you need to understand that to start with.

Raheem, thanks for the questions. I posted a link to the webinar we did last week; it was around Helm — why we need it, how to get started, how to use it. It's posted on YouTube, so you can check it out whenever you have time. Actually, it only went to us. Oh, did I not put it to everyone? Oh, right. Okay. Thank you very much.

Also, something to note: if you have ideas for topics you'd like to see — we already had a couple submitted earlier — feel free to put them in the chat. We are absolutely open to doing what you want and giving you what you're asking for; otherwise, we're just going to come up with something each week. So if there are things you're generally interested in, would like to deep-dive on, or have questions about, please throw them in the chat and we'll add them to the list. And Michael, I see you have your hand up; please feel free to come off mute and ask.

All right, thank you very much. Since we're on services: for some countries, you basically wouldn't be able to deploy in the cloud because of the GDPR-style data processes of some countries in Africa. So the question is, if we're not in the cloud and we're on-premises, do we have any third-party ingress solution we could use for deploying? I know NGINX has something that does basically the same, and I remember others have done something about it, but is there something else we could deploy? Deploying the NGINX ingress solution seems pretty complex. Please throw more light on that. Thank you.

Yeah. So I think there's probably more than ingress — I think it's the whole platform that you'd need to take care of.
That mostly depends on where your data sits. Wherever you're processing data, if you're in a region where you're not allowed to go to any of the public clouds, you've got to be looking at what you can do on-prem, what you can do self-managed, or finding a hosting provider in your region that keeps the data somewhere within the regulations of that country. So it's not just ingress; I think you'll need a Kubernetes on-prem solution, and there are a few out there. You can look at things like OpenShift and Tanzu, or do-it-yourself Kubernetes — you can download, install, run, and manage Kubernetes yourself. Why would anyone want to do that? I don't know. Actually, there's no real reason you'd want to build your own Kubernetes, because people like Red Hat with OpenShift and VMware with Tanzu, and a bunch of others, add a whole heap of value and stop you from creating a mess of your own DIY Kubernetes. But it's complex. You've got to understand what your data is and where your data processing sits. If it's in a data center or somewhere you control, then obviously you'll need access to that, and it'll flow through your DIY or on-prem Kubernetes. If you can't use public cloud — I feel sorry for you, but you've got to look at some of the other products on the market.

Just to add to that, specifically around ingress — everything Graham says is absolutely correct — you can still deploy your ingress controller there. The way ingress actually works is that you have an ingress pod, let's say it's NGINX, and you usually have a service that you expose externally. That'll be something like NodePort, which just opens a port on the node, or a service of type LoadBalancer. If you create a service of type LoadBalancer in the cloud, it actually provisions a real load balancer in front of the machines and wires that load balancer to the ingress pod. If you're on-premises, how do you do that? Well, you might have come across this project called MetalLB. What you need is a service of type LoadBalancer (or NodePort), and MetalLB lets you have that service type on-premises. That probably solves your specific problem — that's how you solve it, using that project. I'll share the link to MetalLB in a second. Thank you. Hopefully I'll share it in the right place. Let's try again. All right, everyone: that's MetalLB. Check it out — service type LoadBalancer; in this case you just pick that.

So those ingress controllers — the NGINX one, or Traefik, or any of them — are really just load balancers, right? They're just deployed into the Kubernetes cluster. Or you can have a physical hardware load balancer if you want: you can configure an F5 load balancer that just sits in front of your Kubernetes cluster and balances across what it knows from the ingress — there are plugins for that. It would hit the NodePort and, based on a health check of whether the service is running on a node or not, add it to or remove it from the pool. Yeah, you're taking me back about 10 years, but there are some industries in finance which have got...
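[A sketch of the service type in question. On a public cloud, type: LoadBalancer provisions a real load balancer; on-prem, MetalLB is what hands out the external IP. Names and labels here are illustrative.]

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller   # illustrative: the service in front of your ingress pods
spec:
  type: LoadBalancer         # or NodePort, which just opens a port on each node
  selector:
    app: ingress-nginx       # illustrative label on the ingress controller pods
  ports:
    - port: 80
      targetPort: 80
```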
From some of the trainings I've seen, you have load balancers that sit in the cloud and you pay extra costs for them. So how does ingress play in public clouds? Do they provide an ingress there? Do you pay extra costs for that, or can you deploy your own ingress in the public cloud too?

So basically, the ingress setup runs as two parts. There's the controller, which runs as a pod, and then you still have to expose it outside of the cluster — that bit needs to be exposed. You usually end up with just one load balancer, and you have to pay for it — you pay for that one load balancer, and then you write all your rules behind it. What you don't want to do is expose every one of your services with its own service of type LoadBalancer, because then you'd end up with 50 load balancers, which is just useless. That's why you expose them through the ingress itself. When I was using Minikube, I actually had to run a command to install ingress in my cluster, because ingress doesn't come by default. So you pick your type, you configure it, and when you configure it, it spins up something — a load balancer — that makes sure requests coming from outside the cluster go to the ingress pod, and the ingress pod then looks after where each request needs to go next. Does that answer your question? Silence — I'm going to assume yes.

Yes, it kind of answers it, but the actual question is: are you allowed to deploy your own ingress in the cloud, or does the cloud already have an ingress controller in place? Yeah, so it depends — you choose when you're deploying it. You choose which component you're going to deploy, and in all the cloud providers you can pick your own ingress controller, because it's just a pod, and you can deploy multiple ingress controllers in one cluster; it doesn't matter, it's like any other deployment. So yes, you can deploy anything you like. Some cloud providers have their own: for example, in Azure, you can use their Application Gateway as an ingress controller, and that works. But you can deploy whatever you like — it's just a normal deployment at the end of the day. You can deploy what you want in the cloud. Thank you. No problem. Good question, Alan. Yeah, absolutely.

Okay, I think I'm going to share one more thing around this debugging topic, because there's a lot more that can go wrong, and then maybe we can start wrapping up. Our friends at Learnk8s — they do Kubernetes training — have put this blog together, and I'll share it, so you can check the PDF. Actually, let's open the PNG. Am I sharing the right screen, by the way? Perfect, excellent. So this walks you through the steps you go through when you create a deployment and things go wrong: check that the pods are running, and why they're not running — maybe your cluster size is too small, that's a problem as well; you run out of resources. And then there's one of the tricks we were using: do a port-forward, check if the pod is running, check if the service is running. It takes you through a lot of stuff in here and explains a number of things: check if the service is running, check if the ingress controller is running.
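[The same flow as a command checklist, for reference; resource names are illustrative:]

```bash
kubectl get pods                             # Running? Ready? Pending? CrashLoopBackOff?
kubectl describe pod podinfo-6d9f9c7b9-abcde # events: scheduling, image pulls, probes
kubectl logs podinfo-6d9f9c7b9-abcde         # is the app itself erroring?
kubectl get endpoints podinfo                # does the Service actually select any pods?
kubectl port-forward svc/podinfo 8085:3000   # bypass the Ingress, test the Service directly
```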
But at some point, it comes around and says: if nothing works, just have a look at Stack Overflow. That's my favorite part of this diagram. So check Stack Overflow — copy and paste the error in there and find out what's wrong. But definitely check this blog out; it's quite useful. I know people have printed it and put it on their walls to debug with. When you're starting out, this is a very useful resource. I'm going to put it in the chat again, just to make sure I do it in the right place. I think Tim's already put it in there. Oh, you put it in there. Perfect. Thank you, Tim.

Yeah, there's definitely a flow to it — I've certainly got one. If things aren't working, I just get into a routine: go and check my pods, go and check the events, go and check the logs on the pods if it looks like that kind of problem. If I've got a replica controller running, go and check the replica controller, get the events from there, describe the things I'm looking at. Any of those things will give you a good clue. If a pod can't be scheduled onto a node because of the node sizes, it'll tell you in one of those places — you'll see in the events that the scheduler is having a problem scheduling the pod onto any of the nodes. So that workflow is a great, useful tool; I've seen it and used it, and you just get into a habit of understanding all the moving pieces. Once you start getting lots of moving pieces — applications deployed with different networking and different storage, persistent volumes — there are 101 things that could go wrong, or 1,000 things, and it starts getting really hairy: you get down to "I've checked all these things, and I still don't understand why my pod is not running." For me, that's the difficult thing in Kubernetes: with a really complex deployment, the set of things that can go wrong and need checking grows exponentially, because there are 50 things to check with just one pod running in a deployment. But use that workflow; it'll help you dive in and diagnose problems pretty well.

There's some other stuff, actually, while we've got time, that's probably worth looking at — maybe not for this Q&A. Quite often you'll deploy a container image that just doesn't start, which is nothing to do with your Kubernetes configuration: you've got something wrong in your container. It might run locally, and you go, hey, it runs locally, put it into Kubernetes — hey, why isn't it working? Yeah, it works on my computer. For that, there are other Kubernetes tools you can use that will attach to a running container, so you can take a look at it and debug it before it explodes and stops working on the machine. It's probably in the Kubernetes docs — it talks about init containers and debug containers — and I'll pull the link out, if I can find it. Chat amongst yourselves while I find this link. I'm just enjoying the sound — it sounds like you have a mechanical keyboard. It's just an old keyboard; I'm not trying to be trendy. They all used to be mechanical back in the day. Yeah, I didn't know that was cool — it's just my keyboard. Okay, so here it is.
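[The two techniques from the docs page about to be shown, sketched; pod and container names are illustrative:]

```bash
kubectl exec -it podinfo-6d9f9c7b9-abcde -- sh   # poke around inside a running container
kubectl debug -it podinfo-6d9f9c7b9-abcde \
  --image=busybox --target=podinfo               # attach an ephemeral debug container
                                                 # sharing the target's process namespace
```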
I'm not used to the Zoom screen share. Showing my iPhone via AirPlay — that's quite cool. So can you see the screen? I'm showing that. Yeah, yeah. Okay. So go and take a look at this: Debug Running Pods. It's got a lot of the things we've talked about, and this is the one that always gets me as difficult: debugging pods that are in a Pending state, right? They've done something, they're not quite ready, and the state is Pending. And if you scroll down — we didn't even look at this: debugging with container exec. If your container is running, you can get into the runtime of that pod and have a poke around to figure out what's wrong. And these things here are quite cool: using an ephemeral debug container. If I've got a container that doesn't have anything I can attach to, I can use an ephemeral container that attaches to the running container, which I can then use to poke in, because it shares the process namespace. So go and take a look at some of these pages, because there are some really interesting things in there as well. I'll post that. Just got it. Okay. Cool. Thanks, Tim. I figured it'd be useful.

So I guess we've got five minutes before we wrap this up. Any other questions from anyone? Or again, any ideas for content you'd like to see or information you'd like to have — please feel free to throw it in the chat, or reach out to us at any time. We're more than happy to take your suggestions and make some good video around them for you.

All right. Okay. Well, with that, we really appreciate everyone joining us today for our second live Q&A. Again, we'll be doing these weekly, and we'll post them on our usual outlets via email and socials. So feel free to follow us on LinkedIn and YouTube and Snapchat, probably, and all the other things that we do. I don't believe we're on Snapchat. You forgot TikTok. There we go. Have we got a TikTok channel? No. We don't need one.

Actually, we do have a question that came in: do we have a Discord channel? We do actually have a community Slack that is out there. If you pop over to — I will get the link to it right now; let's see here, there's that, and community... we just redesigned our website and now I need to remember where we put it. Well, there it is. Let me grab this link real quick. It is at the community.slack.com. So feel free to pop in there and join; all of us sit in there, so ask any questions in there — all of us will see them, and we'll be more than happy to respond.