Hello everyone. Welcome to the Kubernetes maintainer track. We are the Structured Logging working group, and we'll be going through the things we have done in Kubernetes to make logging more contextual and structured, and how you can leverage that to better monitor your Kubernetes clusters. I'm Shivant Shivant, and I'm accompanied by Miang Jio. Let's get started. So who are we? We are part of SIG Instrumentation, which takes care of instrumenting all the Kubernetes components, be it the API server or any other Kubernetes component: providing traces, metrics, events and logs. WG Structured Logging takes care of developing and maintaining all the libraries that are needed for structured and contextual logging in Kubernetes.

The target audience for today's talk is public cloud providers who are managing Kubernetes, teams who are managing Kubernetes on-prem, developers who are building OSS or SaaS monitoring solutions and agents and want to build something to monitor Kubernetes, and contributors who are contributing to Kubernetes itself. The agenda is an introduction to what structured and contextual logging in Kubernetes are, some recent developments, and a demo to show how you can leverage the changes and set up monitoring of Kubernetes.

To start with structured logging: there are a couple of things we had to do to make logging in Kubernetes structured, starting with designing the log schema. It is basically a message with key-value pairs, and the klog library that is used in Kubernetes is modeled on logr. Here's an example of what klog adds on top of logr, so that anybody who is writing or contributing code to Kubernetes can use helper methods to attach details about a pod or any other Kubernetes object. We have also introduced configurable logging formats in Kubernetes: instead of just the text logging format, you can configure your Kubernetes cluster and all your Kubernetes components so that each component logs in JSON format. Here's a sample of the JSON format. For JSON we use zapr, and for the structured text format we still use klog.

Now let's take a look at what contextual logging in Kubernetes is. Effectively, contextual logging means adding additional context to your logs so that the context is retained across all the related log entries. Here is how we have done it in Kubernetes: the global logger is replaced by a logr logger instance passed into functions, and the context is carried into individual log calls by passing the actual context. It's built on top of structured logging and it enables the caller to provide the context. We can give the logger a name, with parts concatenated by dots, so that we know which component or plugin is logging; we'll understand this with an example of how the logging is done. We can also have key-value pairs added to the contextual logs so that context is maintained. We can also change the logging verbosity, and it helps for unit tests. For example, what happens in the Kubernetes CI? If multiple unit tests are running in parallel and things fail, we don't know which unit test is failing. With contextual logging, we can actually see which particular unit test, of the ones running in parallel, is failing.
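To make this concrete, here is a minimal sketch, in Go, of what structured log calls and contextual logging look like from a contributor's point of view. It assumes k8s.io/klog/v2; klog.InfoS, klog.KObj, klog.FromContext and klog.NewContext are the klog helpers used for this, while the Pod type, the function names and the "DefaultBinder" logger name are simplified for illustration and are not the actual scheduler code.

```go
// A minimal sketch of structured and contextual logging with klog.
// The Pod type and the functions below are illustrative, not Kubernetes code.
package main

import (
	"context"

	"k8s.io/klog/v2"
)

type Pod struct {
	Name      string
	Namespace string
}

// klog.KObj accepts any object exposing GetName/GetNamespace.
func (p *Pod) GetName() string      { return p.Name }
func (p *Pod) GetNamespace() string { return p.Namespace }

// Structured logging: a message plus key/value pairs instead of a format string.
func reportStatus(pod *Pod) {
	klog.InfoS("Pod status updated", "pod", klog.KObj(pod), "status", "Running")
}

// Contextual logging: the logger travels in the context, so callees do not
// rely on a global logger and keep whatever context the caller attached.
func schedulePod(ctx context.Context, pod *Pod) {
	logger := klog.FromContext(ctx)
	logger.Info("Attempting to schedule pod", "pod", klog.KObj(pod))

	// Hand a named logger with extra key/value pairs to a plugin; every log
	// entry the plugin writes will carry the logger name and the pod key.
	pluginLogger := logger.WithName("DefaultBinder").WithValues("pod", klog.KObj(pod))
	bindPod(klog.NewContext(ctx, pluginLogger))
}

func bindPod(ctx context.Context) {
	klog.FromContext(ctx).Info("Attempting to bind pod to node", "node", "node-1")
}

func main() {
	ctx := klog.NewContext(context.Background(), klog.Background().WithName("scheduler"))
	pod := &Pod{Name: "nginx-1", Namespace: "default"}
	reportStatus(pod)
	schedulePod(ctx, pod)
	klog.Flush()
}
```

With the text format these entries show up as the message followed by key=value pairs; with `--logging-format=json` the same entries are emitted as JSON objects.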
For example, let's say the kube-scheduler starts scheduling a pod. In the Kubernetes code it creates a new logger instance, and then the kube-scheduler invokes some plugins, where another logger is created. Without contextual logging we don't know which plugin is associated with which particular log entry, but with contextual logging in place, that information is there. Here's an example of a contextual log: the message is "Attempting to bind pod to node", and then there is the logger information, bind.DefaultBinder, and we know which particular pod this log entry is about. With this context, if someone is trying to establish monitoring on top of Kubernetes, there's a lot of information there, and they can bring in some automation so that they can better manage their Kubernetes clusters and components.

Let's talk about some recent developments. Contextual logging went beta in Kubernetes 1.30. The focus now is to carefully extend the APIs in the staging repos so that they support contextual and structured logging. This implies adding alternative APIs, because we cannot break existing Kubernetes code, and logcheck can now enforce that the newer APIs are used in Kubernetes. More help is needed from fellow contributors to make this possible.

Another recent development is slog support. slog was added in Go 1.21, and interoperability with slog is now provided by logr. slog is a new standard library package that is comparable to logr, as its design was partly derived from it, but it's not a full replacement, since there is no logger-in-context support and there are no log helpers in slog. So in Kubernetes we still continue to use logr. Still, full interoperability is supported: logr can turn an slog handler into a logr instance and vice versa. The missing piece of the puzzle is making slog the default inside the Kubernetes binaries, and there's a PR from Patrick pending for merge for that.

There are also some klog package updates; not major ones, but with SetSlogLogger we can now enable slog as the backend in Kubernetes. Kubernetes also bypasses legacy code now: Kubernetes deprecated most of the klog flags a while back, and as of version 1.30 it also bypasses the code implementing them, which improves the overall performance. And there are some updates around the ktesting package: work is happening on the Kubernetes test utilities in ktesting to turn that package into a test helper that works with both Go tests and Ginkgo tests. Help is needed here as well.
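Before moving on to the demo, here is a minimal sketch of the logr/slog interoperability and the klog update just described. It assumes Go 1.21+, a recent logr release that provides FromSlogHandler and ToSlogHandler, and a klog release that includes the SetSlogLogger helper mentioned above; the JSON handler configuration is only for illustration.

```go
// A minimal sketch of logr <-> slog interoperability and of enabling slog
// as the klog backend, as discussed above.
package main

import (
	"log/slog"
	"os"

	"github.com/go-logr/logr"
	"k8s.io/klog/v2"
)

func main() {
	// An slog JSON handler writing to stderr (illustrative configuration).
	handler := slog.NewJSONHandler(os.Stderr, nil)

	// slog handler -> logr.Logger: code written against logr can log
	// through an slog backend.
	logger := logr.FromSlogHandler(handler)
	logger.Info("Hello from logr", "backend", "slog")

	// logr.Logger -> slog handler: code written against slog can log
	// through an existing logr implementation (here klog's global logger).
	slogLogger := slog.New(logr.ToSlogHandler(klog.Background()))
	slogLogger.Info("Hello from slog", "backend", "logr/klog")

	// Route klog's own output through slog (SetSlogLogger as mentioned in
	// the talk; requires a sufficiently recent klog release).
	klog.SetSlogLogger(slog.New(handler))
	klog.InfoS("klog now logs via slog", "component", "demo")
}
```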
Let's see these things in action, because that's when it will make more sense how an end user can leverage all of this. We'll go through a multi-cluster setup, change the default logging configuration from text to JSON, and see some Kubernetes components logging in JSON format: the controller manager, the scheduler and the API server. We are using the OpenTelemetry Collector as the logging agent; you can use any logging agent to collect the logs from your Kubernetes clusters, but we are using the OpenTelemetry Collector with the filelog receiver so that we are also able to inject some of the resource attributes. We'll use Loki as the data source and Grafana for the UI. At a high level, this is the demo we are going to see: an OpenTelemetry Collector with the filelog receiver is installed in the Kubernetes cluster and is collecting logs from the Kubernetes components. Everything is sent to an OpenTelemetry Collector running as a backend to ingest all the telemetry data, that collector exports everything to Grafana Loki, and Grafana uses Loki as a data source to view the logs.

We use kind to simulate a multi-cluster setup. This is cluster one. We can collect the kube-controller-manager, kube-apiserver and kube-scheduler pod logs. By default the log format is the text format, and we can see the log output as key=value pairs. Now we can change the log format from text to JSON, and then we can collect JSON logs from the Kubernetes components. We use the kube-scheduler as the example for changing the logging format. We can see the kube-scheduler is not ready; wait a second for the pod to restart. Now it is running, and we can see that the kube-scheduler log output is in JSON format.

We have installed the OpenTelemetry Collector agent to collect the Kubernetes components' logs, and the agent sends the logs to Grafana Loki. This is the configuration: this is the Loki address, and we ingest some attributes into Loki. We can use these labels to select logs, such as the namespace name, the pod name, and so on; you can add attributes according to your needs. Now we can open the Grafana link to view the logs. We use Loki as the data source, select the label cluster name equal to cluster one, and select the label pod name equal to the kube-scheduler pod as an example. Then we run the query and we can see the log output. Sorry, this is another pod, one named like the kube-scheduler; wait a minute. Now it is the kube-scheduler pod.

We can also use filters to search the collected logs. Now we create a deployment named nginx, and then we can see the nginx entries in the scheduler logs; the results contain nginx. If you want to search with more conditions, you can use Grafana. Because we use contextual logging, the kube-scheduler attaches the pod object information to its logs, so we can search for nginx and get the information we need. That's about it for the demo; sorry for the hiccups.

If you are an end user and you want to figure out how to observe Kubernetes better, there are references in the slides, which I will upload to the schedule link. If you are a contributor and you want to get yourself familiarized with the Kubernetes code base, then structured and contextual logging is probably the easiest way to get involved; that way you will be able to contribute and also understand the Kubernetes code base. If you are someone who is building observability solutions and you want to build something to monitor Kubernetes effectively, then there are resources on Slack; there are things in progress and you are always welcome to join us. There are also some resources and blog posts that we have published, and some performance tests that we have done; take those as a reference to start monitoring your Kubernetes effectively. Questions are welcome. Thanks for coming.