Hello everyone, and welcome to Cloud Native TV and the Certs Magic Show. Just before starting: this is an official CNCF live stream and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. So basically, please be respectful of all your fellow participants and presenters. Cloud Native TV runs amazing shows, so make sure you follow the Cloud Native TV Twitch channel where you are seeing the live stream. This is the Certs Magic Show, where we talk all about Kubernetes and related certifications, and this is the sixth episode in the series. Until now we have covered a lot from the curriculum: why certifications are important and what they are, then the installation of a cluster using CRI-O, then deployments, pods, and the different sets of objects, then scheduling, how you can schedule using nodeName, nodeSelector, and taints and tolerations, and then services and ingress and how they work. Today we'll be talking about another very interesting section of the certification, a very big one that plays a very important role in the exam as well: Kubernetes troubleshooting. If you go by the curriculum, it is around 30% of the questions that come in the exam. 30% is a big chunk of the exam, so you should know troubleshooting inside and out. And who better to tell you about troubleshooting than the person you are seeing on the screen? I'm very glad that today I'm joined by my very, very, very good friend David, aka Rawkode. If you know the cloud native ecosystem, the cloud native world, and CNCF live streaming, then you have probably heard of David and his YouTube channel. There is a very interesting series that Rawkode does, which obviously he'll explain much better, but it has given not only him but all of us great troubleshooting skills for Kubernetes. I have been on the show a couple of times, so I know what it takes and how the flow goes. And it actually gives you the exam feeling when you are on the show, because you have to fix clusters that are broken. You get to troubleshoot live, and you'll have the same feeling in the exam when you are asked to troubleshoot something in your cluster. So David, welcome to the stream. Welcome to the CNCF Certs Magic Show on Cloud Native TV. I know you also run a show, so we already know everything, but please introduce yourself and your show, Klustered, to the community. All right. Well, thank you very much, Saiyam. It's an absolute pleasure to be here. I've already had to do some debugging and fix my name live; I keep forgetting that I changed my name recently. But yeah, as Saiyam said, I'm David, and you'll know me across the internet as Rawkode. I am CKA and CKAD, not yet CKS, certified. And as Saiyam said, I have a show called Klustered, which is super, super good fun. It's a show that will help you learn how to debug and troubleshoot the worst, worst problems there are in the Kubernetes space, and we're seeing some pretty wild breaks these days. I'm not going to spoil it, but feel free to go to rawkode.live and check out some of the episodes, and I hope you enjoy them. All right, awesome. So what do we have today for the community with respect to troubleshooting, so they can learn quite a few things?
So there are a few things here in the curriculum document that we have shared on the screen, just a few things that everyone really needs to be familiar with. I think it's important to highlight, and I'm sure Saiyam has covered this before on previous episodes, that the CKA exam is about the administration and operability of a Kubernetes cluster, rather than deploying and working with applications on Kubernetes. So you really do need to know how to debug and understand all of the controlling components, and we're going to be taking a look at that today. You can see from this list, which is not exhaustive but covers most things, that we need to be able to evaluate cluster and node logging. We want to understand how to monitor applications. You definitely need to understand container logging. And my favourite parts: troubleshooting application failures, troubleshooting cluster component failures, and troubleshooting networking. These are things you can read about all you want, but the best way to learn them is to get hands-on, kick the tires, play with all the components, and fix some real-world issues. Nothing like a Kubernetes cluster on fire to make you learn things a little bit quicker. And that's what we've got for today. I've gone ahead and prepared two Kubernetes clusters. One of them is healthy, and Saiyam and I will go through it, take a look at all the components, and have a bit of a conversation; then we'll pull up the broken cluster and see if we can work through it issue by issue. Feel free to throw your ideas into the chat if you want to guess along with us and help us fix it, and we'll see how that goes. Does that sound right, Saiyam? Yep, that sounds fun. Also, if you make the stream interactive and keep suggesting your cool ideas on how to solve the particular issue we are on, we also have two coupon giveaways, which is 50% off your certification exams, which is a good deal. So make sure you are chatting and making it interactive. That's pretty much it, and at the end we'll just pick two winners randomly. All right. I love that first comment: it's going to be DNS. It's not DNS today, I can assure you, I broke the thing. I know that doesn't mean we won't encounter a real issue, and DNS does cause problems, but let's hope not. All right. I'm using my actual Klustered automation for today, so we have access to our Teleport session, which will allow me to connect to all of these nodes, and I'm just going to jump onto the control plane here. The control plane is where our API server, scheduler, and a whole bunch of other things run, and we'll talk about them in a little bit more detail as we go. Now, one of the first things everyone should do when they are operating a Kubernetes cluster is type kubectl version to see if your client and your server match up, like so. And what we'll see here is that we have a client version. Is that font okay for you, Saiyam, or should I make it bigger? Yeah, I think you can increase the font a bit. Let me just reload the page and scroll this thing out of the way. Okay. So we've got our version here, and you can see our client is 1.22, but we did not get a version from the server. Now, I know the server is online; the thing that is missing is that we need a kubeconfig. We can do that through here.
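For reference, a minimal sketch of the version check being described (a kubeadm cluster is assumed; without a usable kubeconfig only the client half prints):

$ kubectl version --short
Client Version: v1.22.x
# Server Version is missing here because kubectl has no kubeconfig yet,
# so it cannot reach the API server.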
There are a few assumptions being made here, but I think it's safe to say that these days most people are working with kubeadm clusters, which means that kubeadm is going to provision you an admin.conf inside of the /etc/kubernetes directory. And we can run our version again, and you'll see now that we actually get a client version and a server version back. All right. Something else you should probably be familiar with: you want to be able to check your current context, like so, and you'll see that we don't have one. That's just because we're relying on the --kubeconfig flag right now, and we can use an environment variable instead so that we don't have to duplicate that every single time. Cool. Yep, and the kubeconfig context really plays a very important role when you are in the certification exam, because different questions are based in different contexts, so make sure you are always switching the context before attempting a question. Yeah, great advice. So because we've exported our KUBECONFIG environment variable, we can now run current-context, and in fact we could just run config view, and we can see all the details of the clusters we have access to. It's important to remember that a kubeconfig can have multiple clusters, users, et cetera, defined inside of it. And thank you; I think that's relatively new, the REDACTED here, I don't remember seeing that before. That's pretty cool. Okay. So what runs on our control plane nodes? Well, again, because this is a kubeadm cluster, we can pretty much consistently rely on a kubelet running as a systemd service. Now, the reason that you can rely on this is, one, the kubelet is responsible for asking a container runtime to run your containers, so it's unlikely that your kubelet itself will run inside of a container, although not impossible, of course. And because it's a kubeadm cluster, what we're going to see is that all of our other control plane components are started by the kubelet via something called a static pod, which we'll talk about again in just a second. But if you're ever running into any problems on a Kubernetes cluster, systemctl status kubelet is your friend. You want to make sure that it is active and running; you don't want to see any restarts or a failed Active line, that would typically be bad. And we can see we get some log information here. The status command is not the best way to work with the logs on your node, though; for that you want to use journalctl. And then, depending on who you ask in the technology world, you're going to get 15,000 different options for what to use as flags, but I'm a fan of using -xefu. Thank you, Frisbo. I'll pull that back a little bit. And we can see here that journalctl -xefu kubelet will allow us to pull out our logs from the kubelet. This looks pretty healthy; I'm not worried about any of the errors that we see here. This is our healthy cluster, so I'm pretty confident. Okay. So we have a kubelet through systemd, but we don't really have anything else yet. So what's next as part of our control plane? Well, we can go to our /etc/kubernetes directory, and you will see that we have a manifests directory here. This manifests directory is really important: this is where all of the static manifests live. By static manifest, what we mean is something that the kubelet is going to be responsible for starting when it starts. So you'll see all the other control plane components are here: we've got etcd, we've got the API server.
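A minimal sketch of the kubeconfig and kubelet checks walked through here (paths are the kubeadm defaults):

$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl config current-context      # e.g. kubernetes-admin@kubernetes
$ kubectl config view                 # clusters/users/contexts; credentials shown as REDACTED
$ systemctl status kubelet            # want: active (running), no recent restarts
$ journalctl -xefu kubelet            # follow the kubelet logs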
We have the controller manager and we have the scheduler. And I'm also running kube-vip here, which I need for bare-metal ingress. Let's run it. Now, the kube-system namespace is where all of these static manifests will show up, and we can see that everything here is running. Now, there are some syntactic or weird things you'll see across the documentation and error messages, particularly if you start to look inside the kubelet logs. When we refer to a static manifest, that is the YAML file that lives in the static manifest directory. You may also hear about something called a mirror pod. What the kubelet does, when it has a static manifest, is register it with the API server as a mirror pod. So it's not a real pod, but it is a pod, and you'll see those two terms used back and forth. Okay. Now, if we need to understand what is happening with our system, we need to be able to use the logs to debug problems. There are two really important directories that you should be familiar with. /var/log/containers is my favourite: it's where all the actively running container logs are. So if you see a file in here, the chances are the container is actively running, and you can tail any of these to see what is happening. Here is our API server log. Another important directory, especially for containers that are no longer running, is the pods one. The /var/log/pods directory will give you a pod name and then some identifiers. I don't know if we have any in here, but we will certainly see this in our broken cluster: you'll have multiple IDs for the same pod name, particularly when they are stuck in a CrashLoopBackOff kind of pattern. Okay, we're going to look at a couple more tools before we dive over to the broken cluster. Now, I said earlier, and I guess it doesn't matter which directory I'm in, I said earlier that the kubelet is responsible for asking the container runtime to start a pod or a container. The kubelet does not start any containers at any point in time, ever; it merely proxies a request to that runtime. The runtime that is primarily used these days, at least from what I've seen, is containerd, and containerd ships with a few commands that are going to be invaluable in debugging any sort of broken Kubernetes cluster. The first one is ctr. This is just a tool for trying to understand what containerd is doing and what assets it has available. A common command to run would be ctr images list. Now, there's a weird thing to understand with the ctr command: it's not Kubernetes-aware by default. You actually need to tell it that you want to read from the Kubernetes namespace, and this is not a namespace you should confuse with Kubernetes namespaces, which I know can get a little bit weird. But we can actually say ctr -n k8s.io images list, and these are all the images that have been pulled inside of this namespace for running inside of my cluster. This is something I see tripping people up often: they run ctr images and they're like, oh, my cluster is running more images than this, where are they? And it's just that namespace toggle. You can actually list the namespaces as well with ctr ns ls. Now, ctr is a bit more low-level; you may want to work with something that is slightly more aware of Kubernetes, and for that we have crictl. You may, like most people, run crictl ps and be worried that nothing is happening, and that's just because there is a little bit of configuration needed to get this command running.
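A minimal sketch of the directories and ctr commands mentioned here (paths assumed for kubeadm with containerd):

$ ls /etc/kubernetes/manifests    # etcd, kube-apiserver, kube-controller-manager, kube-scheduler
$ ls /var/log/containers          # one symlinked log per actively running container
$ ls /var/log/pods                # per-pod dirs; keeps logs of exited/crash-looping containers too
$ ctr ns ls                       # containerd namespaces; Kubernetes uses k8s.io
$ ctr -n k8s.io images list       # images pulled for the cluster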
It just needs to know where your containerd socket is. Depending on your operating system or Linux distribution of choice, the chances are it's going to live in /var/run/containerd/containerd.sock. And from here, we can run a crictl ps to see a list of all of the things that we have running on this node. crictl is pod-aware, so you can run crictl pods and also get another look at it from there too. So those are all of the control plane components that we have available on the machine, some tools for working with the containers and trying to work out what is actually happening, the access to the logs, and how to configure your kubectl command. I believe that is everything we're going to need to know to move into our broken cluster. Saiyam, is there anything you would like to talk about before we move on? Yep. So crictl and providing the runtime endpoint is very important, and I think you should just keep it handy somewhere so you can directly copy-paste it and run it. And there was a question about why it is important to go to the file system rather than using kubectl logs. So there might be times when kubectl get pods or kubectl get nodes itself won't work, when the control plane is down and things like that, and then you have nowhere else to debug. For that, the journalctl logs can give you the first level of information, and then you can move to the file system, which is /etc/kubernetes/manifests and /var/log and the kubelet directories; those are some of the places where you can see the containers and what all is happening. Obviously, there are a lot more nasty things that an attacker can do, but generally these would be some of the initial places you'll be looking at. And rightly said, the kubelet is not something that runs the container; that is very important, you should take note of that. The kubelet is sending the request to the container runtime, and the container runtime in turn is running your containers. And even there, if it's containerd, then containerd itself will not run the container; it's actually runc behind it that actually runs the container. So there are different levels, and that is also a good-to-know thing for you. Yeah, definitely. I'll add one more thing to that, although Saiyam smashed it and nailed everything there: you may not have any API at all, so knowing what lives on disk is critical. Also, through the kubectl logs command you can access the current logs or the previous logs, but you can't go any further back than that. So if you want to go back a couple of pods or containers, jump down to the file system to get them; it's handy to know where they live. All right, on to the broken one. Let's go. Just pretend you said yes. There we go, that's better; I was getting worried I broke Teleport. Okay. So I'll zoom in one more. Refresh the page just to get rid of that bug. Okay. So we have a control plane, we hope. We're going to run version, and we can see we've got our client version, but we don't have our server version. So we know how to fix this, right? We're going to export a KUBECONFIG. Do you think this is going to work, Saiyam? Let's see, hopefully. So it still failed, and we got an error message that the connection to the server was refused. Now, what's really important in these messages: the first one said localhost:8080. This is the default.
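A minimal sketch of pointing crictl at containerd, as described here (socket path assumed):

$ cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
$ crictl ps        # containers
$ crictl pods      # pod sandboxes
# or pass the endpoint inline instead of using the config file:
$ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps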
This means you don't have a kubeconfig configured. The second one is an IP address and a port; that's an indicator that we do have some Kubernetes context, but we're not able to speak to the server. So that means that something is definitely wrong here. Russ is asking if I stuck to my own rules. I guarantee it: I did not use any Unicode breaks, eBPF, or any of the naughty things I do not like. So we need to fix our first message here. We'll give the audience 30 seconds, it's really easy. I'm going to leave that message there for 10 seconds. Yep, so please post the next steps that you think should be taken in the chat. Yeah. What did you have for lunch today, Saiyam? Yeah, I don't even remember what I had. That's the toughest question you can ask someone in the day. All right, Russ and Nero with the answer before anyone else: the port number is indeed incorrect. So you'll see this is looking for port 6334, and that is not our standard Kubernetes port. So we know that our kubeconfig is the admin.conf. My cursor nicely landed exactly where I made the break. We fixed the port number, we run version, and now we have something working. So that's a good start. Good catch, Russ. All right, have you got a favourite command, Saiyam? What do you want to run next? Yeah, let's run kubectl get nodes. It worked. There we go, things look good. Now, this is standard. Yeah, I just read your reminder, Saiyam. I'm going straight for it with the pods, and I'm going to play standard Klustered rules today. So let me give you a bit of context before we move on. We're supposed to have a deployment called clustered, with a pod called clustered showing up here, and we should be able to browse to it, and that is currently unavailable. We can also see that our API server is actually broken, and the reason I did this one, other than it just being funny, is that it kind of highlights that static pod, mirror pod semantic: it's not really a pod. Right now this shows ContainerCreating, but we're clearly still talking to the API server, right? So just be careful of that. You can see that we don't have a scheduler. Schedulers are not important, I don't really care, so we'll see if we need it. But we want to get our application running. Okay. So what's our next step here, Saiyam? Put you on the spot. Yep, we can go to journalctl and see what is happening with the kubelet and so on. Yes. We have a lot of error messages here, and one particularly important one. I don't know if you've seen it. No, it's scrolling too fast. We have an error message that we have an admission controller denying all modifications to our cluster. And this is one of my favourite examples of something that we have to talk about when it comes to debugging and the Kubernetes API: there are two different types of admission controllers. Most people are really familiar with dynamic admission controllers, which are ValidatingWebhookConfigurations and MutatingWebhookConfigurations. But historically, prior to dynamic admission controllers, the API server did everything through built-in components that were compiled into the API server binary. So we've got that to fix. So we need to check out the static manifest for our API server. Where do those live again, Saiyam? Yep, in the manifests folder. Oh dear. We don't have a manifests folder, Saiyam. What are we going to do? Sorry, I'm just having some fun with this session, I hope you don't mind. It's important to understand how all these components are configured as well.
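A minimal sketch of the two failure signatures being contrasted (the server address below is a hypothetical placeholder):

$ kubectl get nodes
The connection to the server localhost:8080 was refused    # no kubeconfig at all
$ kubectl get nodes
The connection to the server 10.0.0.10:6334 was refused    # kubeconfig present, wrong port
$ grep server: $KUBECONFIG
    server: https://10.0.0.10:6334    # kubeadm's API server default port is 6443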
So the static manifest directory is consumed by the kubelet, so we need to understand how the kubelet is configured. In fact, Russ is straight in there with the chat as well. Yeah. So we need to understand how that's configured. One of my favourite commands is to run a systemctl cat on a service, and it will show us all of the different drop-ins within systemd that configure this service. The one that we are interested in is the kubelet config argument, which points at a YAML file in /var/lib/kubelet. If we pop this open, we'll see that we can change the authentication and authorization methods on our kubelet; we can change the cgroup driver, cluster DNS, cluster domain, and we've got a whole bunch of other stuff. And when we get down to the bottom, you'll see that we have the staticPodPath. I was just being cheeky and moved everything into /root/manifests. So I guess I could just fix it there. Here is our manifests directory. I can't remember if I updated it. No, no, I deleted them. Okay, cool. So we're going to work with this directory. Now, we've seen that we had an admission controller denying all modifications to our cluster; we definitely need to fix that. If we want to modify the admission controllers, we can come into the API server manifest. This static manifest is very much like the other one, well, it's kind of like the kubelet configuration: everything in Kubernetes is configured via YAML. We can see that this is a pod manifest, we've got our command, and the flag we are interested in is right here: enable-admission-plugins. And we can see that there is an AlwaysDeny admission controller, which has no real purpose for anything in the world, ever, ever, ever. The reason it's here is just to cause me a little bit of pain. What monster changed that? I know, Russell. So we're going to remove it, and we're going to save this. Now, when you make modifications to the static manifest directory, or any of the YAMLs in there, the kubelet will automatically detect that change, and over the course of around 30 seconds it will remove the old container and start a new container. But I'm hoping, if I time this right, we won't see... oh, now I bet it's going to be there. I was too slow because of my typo. We had to see the moment there where the API server just wasn't there, and now this one has just started. Okay, what's next? So now that we have fixed our API server, and it says Running instead of ContainerCreating, we still have no scheduler, which is something we may have to look at. Not important right now. So we want our clustered application. Oh, it's not here, and I'm not entirely sure why. Although, if we run get deployments, we can see that the clustered deployment exists; it's got one of one, up to date, available. I mean, that looks pretty healthy, right? And if we run get replicasets, we have a replica set and that looks pretty healthy. Let's describe the replica set. I don't see any error messages, Saiyam, I don't know what's going on. Anyone in the chat, you've got 30 seconds to drop in an idea of what could potentially be causing this situation. Russell, I mean, I may have to ban you from answering; you've been getting them all so far. All right, 10 more seconds. Okay, so we're going to plough on and fix this one. Thank you all. So we're going to pop open... where does this one live again? The controller manager.
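A minimal sketch of tracing the kubelet's static pod path and the admission-plugin fix (the surrounding plugin list is an assumption):

$ systemctl cat kubelet                            # shows the kubeadm systemd drop-in
# ... --config=/var/lib/kubelet/config.yaml ...
$ grep staticPodPath /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests           # the break had pointed this at /root/manifests

# In /etc/kubernetes/manifests/kube-apiserver.yaml:
#   - --enable-admission-plugins=NodeRestriction,AlwaysDeny   # broken
#   - --enable-admission-plugins=NodeRestriction              # fixed; the kubelet restarts the pod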
So let's talk about the responsibilities of the control plane components here. We have the kubelet, which is started as a systemd service and is responsible for sending messages to the container runtime interface to start all of the containers that we need. We have the API server, which is essentially a CRUD interface in front of our etcd backing store, which stores all of the objects, events, et cetera, that come into our cluster. We have the scheduler, which is broken, which has some responsibilities we may talk about if we fix it. And we have the controller manager, which is a super controller of controllers of controllers. And did you know you can disable any of the controllers within the controller manager through its configuration? If we pop this open, you'll see we have another pod manifest, which is running the controller manager, and it takes a parameter down here called controllers. The star means run all of the default controllers. We can add on a couple of extra ones, like the bootstrap signer and the token cleaner, but you can also remove them, and in fact you have this slightly weird syntax where you can do -replicaset and -namespace. This disables the controllers that monitor the ReplicaSet and Namespace resources and handle any of the reconciliation that has to happen behind the scenes. So we can remove that, which is going to bring back our namespace and replica set controllers. In fact, I might leave the namespace one in, because it's a nice visual way for me to create a namespace and you'll see that nothing actually happens. So we'll save that, we'll run ps, and maybe I'll get lucky and catch this one. See, we have no controller manager right now: the kubelet has detected that change and removed the old process. And there we go, third time lucky, the kubelet has started a new kube-controller-manager with the controllers that we now requested. In fact, that should be enough that, if we give this a few moments, I'm hoping we may see a pod created. Ta-da, there we go. One more problem solved. This is a very broken cluster, I've got to say. So we have a Pending pod here, which is probably not very good and something else that we're going to have to debug. Oh, I bet I know what it is. It's the scheduler, right? It is, good catch. So Russ is asking: was there a clue in the describe? There wasn't a clue in the describe. I said there was nothing in the describe, but I think if you had done a ps on the controller manager process, then you would probably have seen the options over there, the -namespace and the -replicaset, and you could have got to this particular point, which is editing the controller manager YAML file. Yeah. Disabling controllers through the controller manager configuration is really hard to pick up on. There are not really any error messages; the system just appears to function completely normally, because it is, it's just not reacting to that change. Really, you just have to get familiar with the static pod manifests and know what to expect in there. There are a few red herrings, and we can point out one of them: there's stuff in here that looks weird that rarely is weird. So remember, for the bind address, we want that to run on 0.0.0.0, or maybe the local IPv4 address, and it's worth understanding which authorization modes are defined by default.
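A minimal sketch of the --controllers syntax discussed here (the exact flag value is an assumption):

# /etc/kubernetes/manifests/kube-controller-manager.yaml
#   - --controllers=*,bootstrapsigner,tokencleaner,-replicaset,-namespace   # broken
#   - --controllers=*,bootstrapsigner,tokencleaner                          # fixed
# "*" enables the default controllers; a leading "-" silently disables one.
$ ps aux | grep [k]ube-controller-manager    # inspect what the live process was started with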
You're going to see some insecure port configurations, not in this one, but in the controller manager and maybe the scheduler; you can see a port zero here. So there are just some of these things that you pick up over time where you think, okay, that looks weird, but I know it's completely normal. And I hope my dog is not deafening you. All right. Okay. So we do have a scheduler bug, and I'm on purpose not going to fix the scheduler, but I'll show you what the problem is: it's trying to run scheduler 1.24, and there is no Kubernetes 1.24 yet, so it's just not going to work. However, to demonstrate what the responsibilities of the scheduler are: the scheduler really doesn't do much. It listens for pods being created and it adds one field to the spec, which is the node it should run on. Now, it has some abilities to understand what's running on the nodes and what constraints need to be applied, so it is important and you should never bypass the scheduler, but if you really need to, you can. So we're just going to modify our clustered deployment, and we're going to jump down to our spec here and add a nodeName. Yeah, you've done this before, haven't you? Yeah. I'm just going to copy this. So if we save this, you'll see that our clustered pod gets scheduled. So the scheduler is important: it does work out the best place to run a pod, especially with taints, tolerations, constraints, all of these things. But sometimes it's also important to know that you can break the glass when things go really, really wrong and schedule the workload that you need to as quickly as possible. One thing that I forgot to show is whether we can create a namespace via the API server. Oh, it does show up there. Dammit. Never mind, forget I showed you that. I did disable the namespace controller, but I'm not sure what happened there; I didn't expect that to happen. Okay. So we have our application, so let's see if it works. Thinking about it, it's not going to work. Okay. I think we have a networking problem. So someone was right with the DNS stuff after all, right? All right. I think it's important to understand what Teleport is doing right now so you can understand what the actual problem domain is on the real cluster. Clustered is exposed as a NodePort service, and Teleport is trying to proxy right to that NodePort service. The NodePort service not working tells us that we have an ingress problem into our cluster. So I will give the chat 30 seconds: what do we look at next to debug an ingress networking problem coming into our cluster? I'm not going to sing the TikTok song this time. All right, 10 more seconds. Yes, Russ, you're unbanned, go for it. Network policies, spot on again there, Russell. So Kubernetes has a concept of a NetworkPolicy, and we can see here there's something suspiciously called deny-whip. If we -o yaml the deny-whip policy, we'll see that this is actually blocking all egress traffic. So while it is naughty, I don't think it's the culprit this time, and I think we've discovered a bit of a red herring. So Russell, feel free to have one more pop, but you were close. So I've deleted that, and we can jump back over here, and I don't expect this to work, but maybe I'll be surprised. Okay, so it wasn't a NetworkPolicy.
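A minimal sketch of the break-glass scheduling described above (the node name worker-1 is hypothetical):

$ kubectl patch deployment clustered --type merge \
    -p '{"spec":{"template":{"spec":{"nodeName":"worker-1"}}}}'
# With spec.nodeName set, the pod bypasses kube-scheduler entirely and the
# kubelet on worker-1 starts it directly.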
Now, the other important thing here is that with the network policies and CNI implementations we have these days, the standard Kubernetes NetworkPolicies are not always the only network policies within a cluster; Cilium, Calico, they all bring their own adaptations. So in fact, if I run get cnp, which is CiliumNetworkPolicies, we have this other thing called untitled-policy, which we can -o yaml, and this blocks all ingress and all egress. Again, we don't want this. So, untitled... I wish I'd given that a shorter name. Sneaky Cilium. You were right, Russell. However, this may be enough for our application to work. So this is us with our version one image running on our Kubernetes cluster. So, I'm sure this is going to be really easy, Saiyam, but we're going to modify our application. I'm going to stop using my k alias, and we're going to pull version two, and then by magic it's just going to work. Maybe I know why it's not updating. And now we have to dance. We have now fixed our broken Kubernetes cluster through various debugging techniques: understanding the control plane, knowing where the logs live, and understanding the different implementations of the CNI and CRI. I had added one more cheeky break, but I didn't restart containerd. I'm going to show you it because I think it's really funny and something people do on Klustered all the time. So let's pop onto one of our worker nodes, and we're going to run a containerd config dump, and we'll find the registry mirrors in here. If I had restarted containerd, we would have seen that our cluster could not pull the v2 image, and that's because instead of hitting ghcr.io, it would have gone to docker.io, where it would have failed. This is really handy: the containerd mirror setup lets you have pull-through caches and keep things local, even though your manifests refer to a canonical image. It's a great feature, but it's very easy for it to trip you up, and I hate everyone that's used it on Klustered. Thanks. That's me done, Saiyam. We have fixed our cluster. Awesome. So I think that was really great. Some of the fixes, and some of the concepts discussed during the fixes, will definitely help you understand the control plane components: how they behave, where they are located, and how you can play around with the different configurations and options of the controller manager, scheduler, and API server. So I think that was really insightful, David. Thank you for bringing the broken cluster and explaining the concepts first. That will definitely have been educational for everybody who attended live and who will be watching the recording later. And yes, if you want to watch more debugging, just like David did today, this gets done all the time on Klustered. That's what actually happens: people watch and guess what works and what doesn't, and they try to fix it. In one hour we try to fix something, and sometimes it does get fixed; sometimes you have to take the hints, and that's okay, because there are some nasty things people do to the clusters, so that keeps happening. But in the end, it's all about learning. So we hope you learned something from the Kubernetes perspective, from the certification perspective, and also from the perspective of the day-to-day debugging you might be doing in your job when you are working with Kubernetes.
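A minimal sketch of the policy and registry checks from this segment (policy names as heard on stream; the mirror values are assumptions):

$ kubectl get networkpolicy -A       # standard Kubernetes NetworkPolicies
$ kubectl get cnp -A                 # CiliumNetworkPolicies (Cilium CRD)
$ kubectl get cnp untitled-policy -o yaml
$ kubectl delete cnp untitled-policy

# On a worker node: inspect containerd's effective config for registry mirrors.
$ containerd config dump | grep -A 2 'registry.mirrors'
#  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."ghcr.io"]
#    endpoint = ["https://registry-1.docker.io"]   # would silently redirect pulls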
With that, I think... so David, who should we give the vouchers to? I think Russ has been very active in the chat, so one voucher goes to Russ, and Frisbo was also active in the chat, so another one goes to Frisbo. Looks good. Yeah. Okay, before we finish: this stuff is really hard. It's only through sharing our knowledge, experimenting, and breaking things intentionally that we learn. Chaos engineering is a really important part of adopting cloud native and Kubernetes. It's best that you learn how to fix these situations, and all the fires that can happen, before they hit you in real-life production. So get creative, start breaking stuff, and best of luck. Okay, so Russ is saying he doesn't want to go for the certification. Then another person who commented was AJ50500. I really don't know who you are, so AJ0500, if you are in the stream, then please reach out to me on Twitter so that I can hand you the voucher. And Frisbo as well, please reach out to me and I'll give you the voucher, which is a 50% discount coupon on the certification exams. So with that, thank you all for tuning in to the Certs Magic Show on Cloud Native TV. Do not forget to click the follow button, because that is important, and there are a lot of shows that keep going on; even tomorrow there's a Spotlight Live on gRPC, so do not miss that. It keeps happening all week with interesting shows, so make sure you follow, and I hope you learned something new today. Thank you so much, everyone, and goodbye. Thanks, Saiyam.