I hope everybody is already awake this morning, right? So let's kick off this KubeCon with the project updates for Fluent Bit. I heard that some of you are aware of the project or using it at a heavily intense scale, while others are still new to it. My name is Eduardo; I'm one of the creators and current maintainers of the project.

Fluent Bit is a logs, metrics, and traces agent, part of the Fluent ecosystem, with graduated status in the CNCF. What Fluent Bit does, pretty much, is move data from left to right while allowing you to process the data in the middle. It's an agent written in C, high-performing, and it tries to be very optimized for cloud-native environments. We support sources like Prometheus metrics, OpenTelemetry, log files, and systemd, and we support many open destinations where you can send your telemetry data. In the Fluent ecosystem, more than being a drop-in replacement for other solutions, it tries to be a solution with no vendor lock-in, where you can just integrate it into your environment and take advantage of it for moving telemetry data. The project is vendor-agnostic: we integrate with Prometheus and with OpenTelemetry on the other side, and it's fully open source, as I mentioned. And who's using Fluent Bit at high scale? Well, major cloud providers and companies around the globe.

Today, we are announcing the release of Fluent Bit 2.1. For us, this is a major release that comes with exciting new features, and it also implements most of the things the community has been asking for. One of them is hot reload support. So finally, Fluent Bit supports hot reload; people were complaining that it was never implemented properly, and now it's there. You can deploy it, change the configuration, trigger a reload over HTTP or through a SIGHUP signal, and it will work right away. We also have the capability to convert logs to metrics.
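As a sketch of the hot reload workflow just mentioned, the option names and endpoint path below are assumptions based on the Fluent Bit documentation, not a definitive reference:

```yaml
# fluent-bit.yaml (YAML configuration format; key names assumed)
service:
  hot_reload: on       # enable hot reload support
  http_server: on      # expose the built-in HTTP server
  http_listen: 0.0.0.0
  http_port: 2020

# After editing the configuration on disk, trigger the reload either over HTTP:
#   curl -X POST http://localhost:2020/api/v2/reload
# or through a signal:
#   kill -HUP $(pidof fluent-bit)
```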
There are a couple of applications that ship metrics as logs, but you would actually like to expose those metrics, for example, in Prometheus format. With this new capability, you can now accomplish that.

Now, from a metrics-collection standpoint: for Linux, our metrics collector now has extended filesystem support, and you can also read metrics from a text file. In addition, you can now configure the different Linux collectors at different intervals, which was not supported before. For Windows, we enhanced the whole WMI system for collecting metrics; we now also support scraping at different intervals, and we have support for Windows ARM64. This is quite new; I don't know if you're using that environment, but we support it. And have you seen that Podman is like a new way to run your containers? It's a project pushed by Red Hat. Fluent Bit is now able to scrape Podman container metrics, so if you're running Podman, you can collect all of its metrics using Fluent Bit.

Up to version 2.0, Fluent Bit didn't support the concept of metadata in logs; usually, people tend to put the metadata inside the log itself, for example, your Kubernetes labels and annotations. We have adapted the schema to also support OpenTelemetry, because the OpenTelemetry payload separates what is metadata from the content. As a first use case, we used to support a timestamp plus the event; now we have the timestamp, plus metadata, plus the event. This doesn't introduce any breaking changes, so it's pretty smooth for all your environments: if you're not using metadata in logs, that's fine, and if you are, you now have full support for it.

And one of the exciting new additions is the concept of processors. I mentioned that Fluent Bit has been able to process data for many years, but in a way that was pretty straightforward.
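As a sketch of the processors described next, attached directly to an input in the YAML configuration format; the plugin and key names here are assumptions based on the 2.1 schema, not a definitive reference:

```yaml
# fluent-bit.yaml (YAML configuration format; plugin/key names assumed)
pipeline:
  inputs:
    - name: tail
      path: /var/log/app.log
      processors:
        logs:               # a stack of processing steps that runs in the
          - name: grep      # input's own thread, before any routing;
            regex: level error   # hypothetical step: keep records whose
  outputs:                       # "level" key matches "error"
    - name: stdout
      match: '*'
```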
This is a global overview, where you have the input on the left, the processing in the middle, and the output that connects to different backends. In reality, this works in a multi-threaded environment, where the input plugins run in an input thread and the outputs in different threads. And now, processors allow this: instead of just filtering the data in the middle, you can do all the processing in a separate thread, without routing, and you can create a stack of data-processing steps for the data you are consuming. This can actually be reflected pretty easily in a YAML file, and we're going to share more about it in the maintainer track tomorrow at 4:30.

Fluent Bit has been deployed heavily in the last few years, and we are now hitting 6.3 billion deployments in total, which is insane. So thanks to the community; I hope the project is useful for you. And now I'm going to hand the presentation over to the next speaker. Thank you.