Liveness and Readiness Probes

One of the most appealing features of Kubernetes is the self-healing of applications. This means that if Kubernetes detects that one particular instance is unhealthy, it will be terminated and Kubernetes will create another healthy instance. Kubernetes can also check if your application is ready to receive requests from a load balancer. If not, Kubernetes will remove your not-ready-yet instance from the load balancing pool. To allow Kubernetes to perform this task for you, you need to specify Liveness and Readiness Probes. Guess what? A Liveness Probe checks if your application is live, and a Readiness Probe checks if your application is ready to receive production requests.

Kubernetes already checks if your container is running, and in addition to that, you can use exec probes, TCP probes, and HTTP probes. With an exec probe, Kubernetes executes a command inside your container, and the probe is successful if the command exits with status code 0. With a TCP probe, Kubernetes connects to your container on the specified TCP port, and the probe is successful if the port is open. With an HTTP probe, Kubernetes sends an HTTP request to the specified port, and the probe is successful if the status code of the reply is between 200 and 399.

So when should I use a Liveness Probe or a Readiness Probe? There is a lot of discussion regarding these best practices, but as a general rule, you can think this way. If your application is having a problem, and restarting the instance will solve the problem, this particular check is a good candidate for a Liveness Probe. If restarting won't solve the problem, because your application is misbehaving because a dependency is offline, like a database, this particular check is a good candidate for a Readiness Probe. Let's take a look at how Liveness and Readiness Probes work in practice.
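The three probe mechanisms described above might be configured along these lines in a container spec. This is a minimal sketch: each container can define at most one livenessProbe and one readinessProbe, the three mechanisms are alternatives, and the command, port, and path shown here are illustrative assumptions, not values from the video.

```yaml
# Fragment of a container spec (the command, port, and path are assumptions).
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]  # exec probe: successful if exit code is 0
readinessProbe:
  httpGet:
    path: /ready                      # HTTP probe: successful if status is 200-399
    port: 8080
# TCP variant (an alternative mechanism): successful if the port accepts a connection
# tcpSocket:
#   port: 8080
```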
You can see in my terminal here that in the center of the screen, I'm just running curl against my service and the deployment behind it, and in the bottom of the screen, I'm watching the number of pods that I have in my current namespace. So what happens when I try to change the version of my application? This is a legacy Java application, and you know that Java applications usually take a lot of time to start and warm up before they're able to receive production requests. So what happens when I try to update the version of this Java application? I have version one, and I'm changing that to version two. You see that as soon as the new version is put into the load balancing pool, I have a lot of errors here: connection refused. And why is that? Because my Java application is live, but it's not yet ready to receive production requests. That's why I'm having a lot of these connection refused errors. And as soon as my Java application is ready to receive the production requests, I won't have the errors showing up here on my screen.

So how can we solve this issue? We can configure the liveness and readiness probes properly in our application. So let's take a look at this deployment YAML file. You can see that my YAML file contains the liveness and the readiness probes of my application. So I have the liveness probe here, which is an HTTP GET, and I also have a readiness probe, which is also an HTTP GET. So what happens when I configure my deployment with these parameters? You see that my deployment was changed, my pods are being created in the bottom of the screen, and my curl loop won't be interrupted, because the production requests will only go to the pods once they are live and ready, which means they're already warm, able to receive the production requests and reply successfully. So I'm waiting for the first one to be ready and running. Yes, you can see that the message changed, but I didn't have any connection refused errors this time.
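The deployment YAML shown on screen is not reproduced in the recording, but a fragment along these lines would produce the behavior described: HTTP GET probes for both liveness and readiness, with an initial delay so the JVM has time to warm up before the pod joins the load balancing pool. The name, image tag, port, paths, and timing values are all assumptions for illustration.

```yaml
# Sketch of the deployment fragment from the demo (all concrete values are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend:v2          # the "version two" being rolled out
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health          # restart the container if this fails
            port: 8080
          initialDelaySeconds: 10  # give the JVM time to start
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready           # keep the pod out of the pool until warm
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 5
```

With the readiness probe in place, the rolling update only shifts traffic to a new pod after its probe succeeds, which is why the connection refused errors disappear.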
And the process will continue until all of my pods are updated. Thanks for watching. Don't forget to like this video and subscribe to our channel.