Hi, I'm Michael Ducey, Senior Manager of OpenShift Black Belt here at Red Hat, and I'm here today with Nirav. Welcome, Nirav.

Hi, I'm Nirav, and I'm a Manager of OpenShift Black Belt at Red Hat. Today we are going to talk about Azure Red Hat OpenShift and how customers can capture and forward logs within ARO.

So, Nirav, one question we get from customers is: what kinds of logs exist in an ARO cluster? And I guess the second question is: how do we get them off of there? But let's take the first question. What kinds of logs do we have?

So, we have three types of logs. First, we have cluster logs, or what I would call infra logs. These are logs related to your control plane and your worker nodes, and all of those logs are captured within OpenShift.

So think of these as the operating system logs that might be on the nodes running your workloads.

The second type of logs are your audit logs. These capture all the activity going through the OpenShift API. For example, if anybody has requested a workload or somebody is doing an upgrade, all of those activities are captured in the audit logs.

So, essentially, anything going through the OpenShift API, whether it's an automated process or an end user interacting with the API, will go into the audit logs?

And then the third type of logs are your container logs, or your application logs. These are logs generated from your application.

So these are the logs the developers are going to care about, because they give insight into what's actually happening inside their application. And I'm assuming it's a lot of standard out from the containers that are running inside of our... Yes.

So, now that we have these types of logs, show us how, inside of an ARO cluster, we can set it up to forward these logs. And maybe in this case, let's use Azure Blob Storage.

Okay. So here we have our ARO cluster, and within our ARO cluster we have pods running. These pods generate your application logs, and you also have the logs related to your worker nodes; basically, everything that's running lives in your ARO cluster. The first thing we would do is install the Loki Operator. Loki is a log aggregation system. In order to capture these logs, we use Vector to collect them and Loki to index them.

So the logs are going to go to Vector. Vector is basically the stream aggregator: it creates a single stream and then forwards it over into Loki, which does some basic level of indexing for us before we ship the logs off to storage.

Right. Once you have that, the next thing is you need to create your Azure Blob Storage. We are using Azure Blob Storage here, but you can use any storage. Once you create it, you get an account name and an access key. You now need to have that secret configured: the keys can be configured within the OpenShift namespace as OpenShift secrets, or you can use an external secret store like Azure Key Vault. Either way, you take those keys and configure them within the LokiStack custom resource, which is the custom resource the Loki Operator provides.

Okay. So now Loki actually has the connection and the ability to talk to that storage, right?

That's right.
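For reference, here is a rough sketch of what those two pieces might look like: a secret carrying the Azure Blob Storage keys, and a LokiStack custom resource that references it. This is a minimal illustration rather than a definitive configuration; the names (logging-loki-azure, logging-loki, the aro-logs container, and the managed-csi storage class) are placeholders, and exact fields such as the schema version vary by Loki Operator release.

```yaml
# Secret holding the Azure Blob Storage credentials for Loki.
# All values are placeholders; use your own storage account name and access key.
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-azure
  namespace: openshift-logging
stringData:
  environment: AzureGlobal
  container: aro-logs                    # blob container that will hold the log chunks
  account_name: <storage-account-name>
  account_key: <storage-account-access-key>
---
# LokiStack custom resource pointing at that secret.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                         # sizing tier; choose one appropriate for your cluster
  storage:
    schemas:
    - version: v13                       # schema version and date depend on your operator release
      effectiveDate: "2024-04-01"
    secret:
      name: logging-loki-azure           # the secret defined above
      type: azure
  storageClassName: managed-csi          # an Azure disk storage class available in ARO
  tenants:
    mode: openshift-logging
```

Once the LokiStack resource is applied, the operator stands up the Loki components, and the log store is ready to receive what Vector collects.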
The next thing is you would install the Cluster Logging Operator. This operator basically provides the APIs to collect and forward these logs. You would create a custom resource, and everything, from collecting to forwarding, is defined in that custom resource, where you can take those logs and forward them to the storage.

So here, for example, there are three types of logs, and maybe I just want to store my application logs somewhere my developers can access and query them; in this case, it would be an Azure Log Analytics workspace, for example. But let's say, for instance, my security team has their own log aggregation tool. And, you know, these audit logs don't sound like something I necessarily want to store in the same location my developers have access to, because they're all about who's doing what to the cluster, and you could get information on other people's namespaces, user IDs, and those sorts of things. So what if I wanted to take just the audit logs and ship them off to my security team, who might be using another log aggregation system?

Yeah. So you would go through the same process; the only difference is that you would configure the secret for that storage and then forward just the audit logs there. There are multiple ways you can collect the logs and store them in different locations.

And then, I would assume for my infrastructure team, I could also have them get their logs in a different location, because they might have their own tooling that they want to use.

That's true. You don't have to send all the logs everywhere. There can be third-party tools, enterprise tools that you have standardized on, where you want to send those logs and query them. You can always send those logs using both of these operators.

Great. Thanks, Nirav, for coming by and explaining how we can forward logs to, in this case, Azure Blob Storage, but also how we can forward logs in general off of our ARO clusters.

Thanks a lot. And thank you for watching as well. If you have any questions or would like to learn more about Red Hat's products and services, you can always visit our website, of course, at redhat.com.
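For reference, a ClusterLogForwarder custom resource along the lines Nirav describes might look roughly like the sketch below. It splits the log types, sending application and infrastructure logs to the default internal LokiStack store and audit logs to a hypothetical security team endpoint; the output name, URL, and syslog settings are placeholder assumptions, and the exact schema depends on your logging operator version (this sketch uses the logging.openshift.io/v1 API).

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  # Hypothetical endpoint owned by the security team's log aggregation tool.
  - name: security-siem
    type: syslog
    url: tls://siem.security.example.com:6514
    syslog:
      rfc: RFC5424
      facility: local0
  pipelines:
  # Application logs go to the default internal store (the LokiStack
  # backed by Azure Blob Storage configured earlier).
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - default
  # Audit logs go only to the security team's tool, not the shared store.
  - name: audit-logs
    inputRefs:
    - audit
    outputRefs:
    - security-siem
  # Infrastructure logs could likewise point at an output owned by the
  # infrastructure team's own tooling; here they stay in the default store.
  - name: infra-logs
    inputRefs:
    - infrastructure
    outputRefs:
    - default
```

Each pipeline pairs one or more of the three input types (application, audit, infrastructure) with one or more outputs, which is what lets different teams receive only the logs they need.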