What is the importance of observability in today's cloud-centric, Kubernetes-centric world, and how has observability evolved over the years?

Observability, as a broader term, for me begins when you write a piece of code and want to be sure it does what it should. Writing a test, for example, is already part of observability, because it checks whether a piece of code behaves the way you intend. But once the code is running in a server environment, you often have a dev-prod parity gap: your production environment differs from your local environment, and you cannot possibly replicate the exact situation of the production system anywhere else. We are getting better there by using declarative automation and blueprints, and by having a staging system that is structurally the same, but it will never be exactly the same.

So if you want to know what is happening, you need to be scientific about the way you write code and the way you debug running, operational systems. Just recently I spent some time with a friend who was looking for a bug. I watched him and saw him making guesses, and I said: you're making guesses because you don't actually know what's happening. You need to find the variables in the system that you want to observe in order to decide whether the hypothesis you generated is true or false. You need to be able to check that, and observability is a key ingredient for it.

This is human operations: you need good observability of your system. The system needs to emit information that helps you, the operator, make a differential diagnosis. For example: what is the current utilization of my file system? Oh, it's 90%.
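To make the file-system example concrete, here is a minimal sketch of turning raw utilization numbers into a diagnosis signal. The function name, thresholds, and labels are illustrative, not taken from any particular monitoring tool:

```python
import shutil

def classify_utilization(used_bytes, total_bytes, warn_pct=80.0, crit_pct=90.0):
    """Classify file system usage so an operator (or automation) can act on it.

    Returns a (status, used_pct) tuple; thresholds are illustrative defaults.
    """
    used_pct = used_bytes / total_bytes * 100
    if used_pct >= crit_pct:
        return "critical", used_pct   # e.g. the 90% case from the example
    if used_pct >= warn_pct:
        return "warning", used_pct
    return "healthy", used_pct

if __name__ == "__main__":
    # On a real system you would feed in live numbers, e.g. via the stdlib:
    usage = shutil.disk_usage("/")
    status, pct = classify_utilization(usage.used, usage.total)
    print(f"file system at {pct:.1f}% -> {status}")
```

The point is not the threshold values but that the system emits a variable (utilization) an observer can check a hypothesis against, instead of guessing.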
Well, in another 10% you won't be able to write temporary files anymore and you will run into undefined behavior. Without that observability (and this is a very basic example) you would see a system go from healthy to unhealthy within seconds. Now the system is unresponsive, because nothing in the operating system can proceed anymore, and you have to figure out from a non-responsive virtual machine or pod what is actually going on. Maybe you have declarative technology in place that kills that thing and bootstraps it again, but then, if you want to find out what happened, you still need logs or other telemetry. Avoiding the problem means observing resource consumption, in this case the file system utilization, and having continuous alerts, or even automated countermeasures.

The next level is to make those attributes of the system available for differential diagnosis not only to humans but also to automation. Think about declarative management of, say, a database. You describe: I want a Postgres database, version 14, three replicas, this virtual machine size. You put that into your Kubernetes cluster and say, do that for me. There is an operator running, an agent, a piece of software that perceives its environment, notices that a new database should exist or that an existing one has been modified, and acts to make the current state of the system equal to the desired state the user described. And once it has been described, you want to keep it that way. If a host goes down and takes a pod with it, say a Kubernetes node died, you want that pod to be recreated automatically. Now think beyond the basic self-healing and pod magic that you get from Kubernetes anyway. My favorite example is Postgres, but you can use other examples.
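The perceive-and-act loop described above can be sketched in a few lines. This is not the code of any real operator; the spec fields and action strings are hypothetical, and a real controller would call the Kubernetes API instead of returning strings:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DatabaseSpec:
    """Desired (or observed) state of a database, as a user might declare it."""
    engine: str
    version: str
    replicas: int

def reconcile(desired: DatabaseSpec, current: Optional[DatabaseSpec]) -> List[str]:
    """Compare desired vs. current state and return the actions needed to converge.

    An operator runs this perceive/compare/act cycle continuously, so a dead
    replica simply shows up as current.replicas < desired.replicas next time.
    """
    if current is None:
        return [f"create {desired.engine} {desired.version} with {desired.replicas} replicas"]
    actions = []
    if current.version != desired.version:
        actions.append(f"upgrade to {desired.version}")
    if current.replicas < desired.replicas:
        actions.append(f"add {desired.replicas - current.replicas} replica(s)")
    elif current.replicas > desired.replicas:
        actions.append(f"remove {current.replicas - desired.replicas} replica(s)")
    return actions  # empty list means current state already equals desired state
```

Note that reconciliation only works if the current state is observable: the loop needs accurate inputs before it can act.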
If you think about asynchronous streaming replication, for example, you are susceptible to replication lag: the secondary is not up to date with the primary because, for example, you have issues in the network. A healthy network is a prerequisite for a small replication lag. So monitoring network health can help predict replication lag, and watching the replication lag helps predict cluster failovers, the moment when the cluster manager decides the lag is no longer tolerable.

In that sense, if you think about full lifecycle automation, which we at anynines aim for, observability is a key ingredient, because you need to make the data available to the agents wherever they are. That could be the operator, a process co-located in a container running alongside your database, or something else. You need that information and you need to tie it into automation: making it available to humans is the first step; making it available to, and utilizing it in, automation is the second. That's why we invest a lot in services like LogMe, which has an OpenSearch-based analytics server underneath for logging purposes, with dashboards and ways to search logs, and in Prometheus, so you can observe things, send alerts, and so on. Those are just a few examples. Whenever you automate a data service, you also need to make its internal state accessible in terms of observability, so that you can hook into that automation and build further automation on top of it, like layers of an onion.
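As a sketch of making replication lag observable to automation: Postgres reports WAL positions as LSNs of the form `X/Y` (two hex numbers), and a common technique is to take the byte difference between the primary's current WAL position and the replica's replayed position. The threshold and function names below are illustrative; a real setup would pull the LSNs from the servers, e.g. via `pg_current_wal_lsn()` and `pg_last_wal_replay_lsn()`:

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a Postgres LSN like '0/3000060' into an absolute byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def lag_alert(primary_lsn: str, replica_lsn: str,
              max_lag_bytes: int = 16 * 1024 * 1024):
    """Compute replication lag in bytes and decide whether it breaches a threshold.

    Returns (lag_bytes, should_alert); the 16 MiB default is an arbitrary
    illustration, not a recommendation.
    """
    lag = lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)
    return lag, lag > max_lag_bytes
```

A cluster manager, or a Prometheus alert rule fed from an exporter, could consume exactly this kind of signal to decide when a failover is warranted.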