Hello. Welcome to, I think, the last session of the first day at KubeCon. My name is Eduardo Silva. Quick question: who is here at KubeCon for the first time? First-timers? Oh, that's great. Cool. And do we have any new Fluent Bit users, people not using the technology yet, or are you all users already? Happy or not, you're users. OK, lots of questions. Both, that works. Awesome. So today we're going to introduce the technology, talk about the state of the project, where we're going, and how we fit into this huge ecosystem, which for newcomers can feel a bit complex, right? You don't know where to start. I don't know how many projects we have in the CNCF now; when we joined, there were six, and I've since lost count. But today we're going to focus on Fluent Bit, what we call the telemetry agent. Again, my name is Eduardo Silva. I'm the creator of Fluent Bit, a CNCF maintainer who has been with the CNCF for a long time, and also the founder and CEO of a company called Calyptia. Let's talk about the problems we solve in the ecosystem, because I think that's the most important thing. When you start this journey, most of you might start with logging. But you don't wake up one morning and say, "I want to do logging," and feel really happy about it. Actually, you want to solve another problem, data analysis, and it turns out that requires solving logging first, right? So if we look backwards, our goal is to get insights, and to get insights you need to collect information from applications and from the running systems. When you hit an issue and you talk to an ops person, or if you are an ops person, the first question is: hey, do we have the logs? Or: give me more logs, right? But you might understand that in today's world, more logs or more data doesn't mean more value. It doesn't mean more answers, right?
And we need different approaches to deal with the scale of data in our systems. Applications store their data in different places, usually in the file system, each in different files: Nginx, for example, creates an access log for HTTP requests and an error log, and syslog has its own endpoint. But if you go to a Windows system, you will find there's an API, a different way to extract logs on that operating system. When you want to go deeper and understand how to extract information for troubleshooting or analysis, you end up with three major categories. First, data lives in the file system: you read a file, or maybe a pseudo-device or pseudo-filesystem exposed by the kernel at that moment; I'm talking about kernel log messages or a TTY interface. Second, there are services we can collect information from, such as systemd's journald. Those services are not something you just run cat on to consume the data; you need journalctl, or the C API to open the systemd journal and get the data. Same for Windows event logs. Third, in this world of distributed systems, data doesn't only live on your computer; it also lives at remote endpoints. In a Kubernetes cluster, if you want the events from your cluster, that data is not on your node. It might be in the API server, and you need to retrieve it over HTTP. Or somebody wants to send you information over the network to report the status of what's happening: if you're running a firewall appliance, that hardware is going to ship syslog messages, and somehow you need to capture them so you can do data analysis. I'm talking about network endpoints. And as I said, the goal is just to collect data.
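To make those three categories concrete, here is a minimal Fluent Bit configuration sketch, in the YAML format, that collects from a local file, from the systemd journal, and from a network syslog endpoint at the same time. The paths, port, and tags are illustrative, not from the talk:

```yaml
pipeline:
  inputs:
    - name: tail              # category 1: data in the file system
      path: /var/log/nginx/access.log
      tag: nginx.access
    - name: systemd           # category 2: journald, not a plain file
      tag: host.journal
    - name: syslog            # category 3: remote/network endpoints
      mode: udp
      listen: 0.0.0.0
      port: 5140
      tag: net.syslog
  outputs:
    - name: stdout
      match: '*'
```

Each input attaches a tag, which is what the routing described later in the session keys on.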
But in reality, you face this: logs everywhere, different operating systems, different ways to collect information. Most users just walk away, or say, for troubleshooting I'm only going to collect a fraction of the information that's available. That approach is not scalable, so we need other ways to solve this problem. Now imagine you got the information, and you have it as an array of bytes that you can understand. Take an unstructured log line, for example, where we have an IP address, a timestamp, and some information about the HTTP request. When you read this, you understand it; somehow, your brain was trained over the years to say, oh, this is an IP address, this looks like a timestamp, and this looks like everything else. But if you pass these bytes to a computer and try to do some analysis, hey, it's just an array of bytes, and that's it, unless you provide something more meaningful, with some structure that allows better analysis. Ideally, we want to convert all the data that arrives in an unstructured way into a structure. I'm not saying JSON is the best format ever. I'm saying we need some representation that says: here is a key, it has a value, and that value has a meaning, so we can extract it. In general, the whole world of logging is about taking unstructured data from some place, running it through parsing and processing, and emitting it back in a structured way, so our final goal of data analysis gets easier. And if we face reality, your data is not on one server. It's in virtual machines, containers, firewalls, or any kind of machine you can imagine that generates data. We now have machines in your kitchen that generate logs, right? That's insane. And you need to collect that information.
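As a sketch of the unstructured-to-structured conversion from a moment ago, here is a small Python example, not Fluent Bit code. It parses an Nginx-style access log line into a keyed record; the sample line and field names are hypothetical:

```python
import re

# One unstructured access log line (hypothetical sample data).
LINE = '192.168.2.20 - - [28/Jul/2006:10:27:10 -0300] "GET /cgi-bin/try/ HTTP/1.0" 200 3395'

# Named groups give every field a key, so the record can be queried by name.
PATTERN = re.compile(
    r'(?P<remote>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d+) (?P<size>\d+)'
)

def parse_access_log(line: str) -> dict:
    m = PATTERN.match(line)
    if m is None:
        return {"log": line}          # keep unparsed lines as a raw payload
    record = m.groupdict()
    record["status"] = int(record["status"])   # typed values, not bytes
    record["size"] = int(record["size"])
    return record

record = parse_access_log(LINE)
print(record["remote"], record["status"])   # prints: 192.168.2.20 200
```

This is exactly the kind of work a parser stage does inside a pipeline: same bytes in, but a structure with named, typed fields out.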
Maybe not you, but the companies that monitor those devices. I don't know why I would be interested in the logs from the microwave, but yeah, there are devices that ship that information. And the approach of you writing a script for every single endpoint you collect data from doesn't work for a company: developers jump from one company to another, nobody documents anything, and somehow you need a scalable, unified way to solve this problem of data collection and correlation at scale. And say you solve collection: you're going to send the data to a database because you want to run queries and extract value from that data. But if you don't have control, if you don't have a scalable way to accomplish this, you might end up paying a lot of bucks to your favorite vendor for data that may not even be relevant, because you're just sending everything. And I know we have Splunk users here. I know you don't want to raise your hands; I get it, that's fine. You're laughing because it's true, right? And I'm not blaming Splunk or any other vendor. If we abstract the problem, what we have is a lack of control. You start using a product and it's like: hey, use your credit card, don't look at the bill for months. What would happen? If you don't have control, you're going to get into trouble, and moving and processing data is the same story. You don't care until your manager asks: hey, why are we spending two million dollars on a database? Well, we don't have a strategy; now we need to implement one. And this is the goal of this session. To solve logging challenges, we need to think about how to deal with different sources, different formats of data, structured and unstructured, and also think about volume.
Because we developers write a lot of software with a lot of printf log messages that end up in a Splunk database that you end up paying for, and sometimes it's data you don't need. This is where Fluent Bit comes in. Fluent Bit is a unified solution for telemetry data that goes beyond the scope of logs: it also does metrics and traces. Fluent Bit was created around 2014. It's written in the C language; yeah, we can talk about that later if you want. It has a fully pluggable architecture that allows us to connect different sources with different destinations while doing different types of processing. We provide more than 100 connectors built into the same binary, and it's fully cross-platform: Linux, Windows, macOS, and BSD. Yes, there are banks and many institutions that run Windows at scale, and they need to solve the same problem. Also, when dealing with data pipelines, which is what we call it when you start solving this process, you need to provide ways to level up users and let them bring their business logic into the pipeline by putting in their own rules or policies for the data being collected. You no longer want to send all the data to your database; that's not working, it's not scaling. You need something in the middle that takes control of your data, and sometimes writing custom components is the solution. Fluent Bit is also a vendor-neutral solution: it's a graduated CNCF project, together with Fluentd. Fluent Bit is used everywhere, and it will be there for many, many years. I got questions many times this morning at the booth about vendor neutrality, about OpenTelemetry, and how the Fluent ecosystem fits into all this. One thing to remember: it's a graduated project, and it connects to multiple ecosystems.
Many vendors, and sometimes projects that are not part of the CNCF, pitch that to solve your problems in production you need to replace everything you have and just use their solution, like it's a magic box. It doesn't work like that. If you go into production in any environment, you will find several types of databases and different agents and collectors, you know? And if you want to solve this at scale, what you need is a solution that can integrate with what you already have in place, because migrating infrastructure from one tool to another is really expensive in time and investment. So having the right approach is important. Fluent Bit can interoperate with Prometheus, with OpenTelemetry, with Fluentd, and with many others. From a very high level, Fluent Bit is a telemetry agent that connects many sources with many destinations, and in the middle it can do a lot of processing, right? This is what allows you to take control of your data. And as I said, we support logs, metrics, and traces. Maybe you have heard about "agent fatigue": you have ten agents on one machine, and you have the same thing on, I don't know, 100,000 machines. So maybe you have this in your environment, where you're using Fluent Bit to collect logs, and you also have, for example, Prometheus node exporter to collect the operating-system metrics, with Prometheus scraping the metrics from the system. Now, what happened some time ago is that Prometheus users told us: hey, we love Fluent Bit, it's on every machine, so why don't we implement some of the Prometheus experience inside Fluent Bit and reduce the number of agents people have, right? Prometheus focuses more on storing metrics and providing PromQL, everything about storage, right?
And on the scraping side, the data collection side, we can reimplement some of those collection mechanisms inside Fluent Bit. So what we did was come out with a solution for the usual scenario, where you have multiple servers, each with multiple copies of different agents. Now Fluent Bit can replace node exporter: if you want to collect metrics from your host system, you can use Fluent Bit, the same agent you have today, and you get an experience that is cleaner, with fewer agents to manage. The way it works: we built metrics collection for Linux and for Windows, so with the same collector on Windows we can now scrape metrics from Windows systems and expose those metrics to your Prometheus endpoints or to your OpenTelemetry endpoints. No matter what you have, Fluent Bit is able to scrape the same metrics with the same labels, or dimensions, and give you a unified experience. When talking about the standards in the industry, it's good to understand what companies use, right? For logs, we all know it's Fluent Bit, which historically can process unstructured logs in a schema-less fashion with a lot of processing in the middle. But we also support metrics: we integrate with Prometheus, as I just mentioned, and we can send metrics to Splunk if you want, or to InfluxDB. All these features are built into Fluent Bit today. On the Prometheus experience, we can do Prometheus scraping, so if you have an application exposing Prometheus metrics, you can use Fluent Bit to scrape those metrics. We can do node exporter, as I just mentioned, and also collect metrics natively on different operating systems. When sending or exposing the data out, note that Fluent Bit doesn't index or store the data for a long period of time. It does only short-term buffering, and then it ships the data or exposes the metrics somehow.
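As a sketch, replacing node exporter with the same agent can look like this Fluent Bit YAML fragment, using the node_exporter_metrics input and exposing a Prometheus scrape endpoint. The port and interval are illustrative values:

```yaml
pipeline:
  inputs:
    - name: node_exporter_metrics   # host-level metrics, node-exporter style
      tag: node_metrics
      scrape_interval: 10
  outputs:
    - name: prometheus_exporter     # Prometheus can scrape this endpoint
      match: node_metrics
      host: 0.0.0.0
      port: 2021
```

The same tag-and-match routing applies to metrics as to logs, so the metrics pipeline lives alongside an existing log pipeline in one agent.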
Fluent Bit exposes metrics by using Prometheus remote write, which is the push protocol of Prometheus, or through the Prometheus exporter, which opens up an HTTP endpoint that can be discovered and scraped. Fluent Bit also fully integrates OpenTelemetry. OpenTelemetry is becoming the new standard for shipping telemetry data across infrastructure, right? But OpenTelemetry is a protocol, a spec, and on top of that specification there are SDKs to instrument applications. There's one project called the OpenTelemetry Collector, which Fluent Bit overlaps with, and there are also endpoints able to receive OpenTelemetry. Fluent Bit needs to be fluent: it needs to speak all types of protocols, all types of standards, because that's what people need in their environments. We support OpenTelemetry on the input and OpenTelemetry on the output. But you can do fancy things: for example, you can collect Prometheus metrics and send them out as OpenTelemetry metrics. We have all those layers of conversion internally. From a developer-experience perspective, you can extend a Fluent Bit pipeline using Lua scripting, Golang, and, just shipped, Wasm. Now, about the experience of logging in Kubernetes, because I know this is one of the biggest topics, it's always good to remember how this works behind the scenes. Kubernetes is just a group of computers with different roles: you have the worker nodes, and you have the master, which orchestrates where applications get deployed. So you can imagine that logging at this scale gets really complex, right? You're not going to SSH into a machine to try to extract information. If you have a pod that generates a lot of information, somehow you need to collect it and send it back to a database so you can analyze it. Usually a container running inside a pod writes its information to the standard output interface.
That information gets stored by the container runtime into a file in the file system, and it gets encapsulated. For example, the same Apache or Nginx log message with the HTTP request will get encapsulated in a JSON message with some metadata, like the stream and the timestamp when that message was generated. But that's not enough, right? The message is fine, but you also need context: what is the pod name? What are the labels assigned to that pod? And that information doesn't live on the node; it lives in the API server on the master, which is outside, right? So how do we solve this problem? Ideally, we take the original information and augment it with everything that gives it context, ending up with something like this. Yeah, that is a lot of information for a single line. But when you're doing data analysis, what is your final goal? You want to query your data, not just by service name but by labels: hey, show me all the pods where color equals blue, and get that information from all the nodes. The way this works is by deploying Fluent Bit in Kubernetes as a DaemonSet. Do you know what a DaemonSet is? OK, a DaemonSet is just a pod that runs on every single node of your cluster. You deploy Fluent Bit there, and Fluent Bit gets access to the logs of every single pod. Once it collects the logs, it goes to the API server, collects the metadata, and then reassembles everything for you. This happens behind the scenes; you won't notice how it works, but it works, and that's good. Once the data has been assembled, it can be sent to your favorite fancy database for analysis. So you can imagine there's a lot of complexity here, right? Getting the data, and sometimes dealing with applications that are really, really noisy.
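The DaemonSet setup just described boils down to a configuration along these lines: tail the pod log files, then let the kubernetes filter call the API server and attach pod name, namespace, and labels. This is a sketch; the log path follows the usual kubelet layout, but verify it for your cluster:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log   # pod logs written by the runtime
      tag: kube.*
  filters:
    - name: kubernetes                  # enrich records with pod metadata
      match: kube.*
      merge_log: on                     # lift JSON payloads into the record
  outputs:
    - name: stdout                      # swap for your real destination
      match: '*'
```

The filter derives the pod identity from the file name encoded in the tag, which is why the tag pattern and match pattern line up.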
And now we're going to dive quickly, because I know we have a lot of questions, into a bit of architecture and internals. Looking at Fluent Bit as a telemetry agent, you're looking at something that has inputs, an engine, and outputs. On the input side, which internally is exposed as plugins so you can extend it, we care about I/O: connectivity over TCP, UDP, Unix pipes, and sockets. The engine cares about buffering, data serialization, routing, and retries, which are really important. When you hit the output section of a pipeline and want to send data to a destination, it's not like everything always works and the network never fails; yeah, DNS fails all day, right? So if you're sending data to an endpoint and you cannot reach it, what is the agent going to do? Lose the data? No. You need retry logic in place. The output plugin tries to establish a network connection, formats the data, and tries to deliver it, and if that fails, it tells the engine: hey, I failed, maybe I should retry in a couple of seconds. So you try to minimize the ways things can go wrong. Fluent Bit is built for reliability: we want to make sure you don't lose data, and we have buffering mechanisms to accomplish that, together with retries and scheduling. Think of Fluent Bit as a data pipeline. Now we have more concepts on the screen: first, the input, the source the data is collected from. And it's not just collect-and-send; usually you also want to do filtering or data processing in the middle, right? We have the new concept of processors, which shipped in Fluent Bit 2.1, the previous version, and they allow you to process the data on the input and the output.
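Going back to the retry logic mentioned above, the idea can be sketched in a few lines of Python. This is an illustration of capped exponential backoff, not Fluent Bit's actual scheduler; the base and cap values are made up:

```python
import random

def backoff_schedule(base=2, cap=30, retries=6, seed=42):
    """Return a list of wait times (seconds) for successive retries."""
    rng = random.Random(seed)   # seeded only to make the demo reproducible
    delays = []
    for attempt in range(1, retries + 1):
        # The window grows exponentially but never past the cap, so a
        # long outage doesn't push retries arbitrarily far apart.
        window = min(cap, base * (2 ** (attempt - 1)))
        # Jitter spreads retries out so many agents don't retry in sync.
        delays.append(rng.uniform(base, max(base, window)))
    return delays

for i, delay in enumerate(backoff_schedule(), start=1):
    print(f"retry {i}: wait {delay:.1f}s")
```

The point is the shape, not the numbers: failed deliveries get progressively spaced out, bounded by a cap, with jitter to avoid thundering herds.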
Ideally, you collect the data, process it, enrich it, maybe discard some of it, and it gets converted into what we call events. An event can be anything: a log, a metric, a trace. And those events can go to different destinations. The most common use case is sending the data to two or more places. If you want an example: we have users who send data to Amazon S3 because it's really cheap; they store everything there. But for anything needing more real-time queries, they send to Elastic or Splunk, and they don't send the whole data; they reduce it by applying different types of processors. That means you keep the full data somewhere while also reducing the cost of your infrastructure and the software around it. Internally, we don't use JSON. We use a data serialization format called MessagePack. Here on the screen, you can see how this optimizes things: it reduces the number of bytes needed, but more importantly, when you're reading JSON you don't have explicit start and end points; for the computer, it's just bytes. With a binary format, the computer can understand where a key starts and ends, where the value ends, and how many sub-keys you have. You don't need to parse byte by byte; you can skip around the array of bytes, which is more efficient and, of course, faster. When we collect data, we group those events, serialized in MessagePack, into the concept of a chunk. A chunk is just a group of events, or records, that share the same tag. A tag is like a label; you can set your own, and tags are used for routing purposes. When data is collected into a chunk, that chunk needs to live somewhere. By default, it's in memory, and you keep creating chunks in memory as more data comes in, before it gets flushed.
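A quick aside on the binary-format point above: here is a toy Python sketch of why a length-prefixed encoding beats raw JSON bytes. It is a greatly simplified stand-in for MessagePack, which has a much richer type system, but it shows the key property: because every entry carries its lengths up front, the reader can hop over whole entries instead of scanning byte by byte:

```python
import struct

def encode(pairs):
    """Encode (key, value) string pairs with 2-byte length prefixes."""
    out = b""
    for key, value in pairs:
        k, v = key.encode(), value.encode()
        out += struct.pack(">HH", len(k), len(v)) + k + v  # lengths first
    return out

def find(buf, wanted):
    """Look up a key by jumping entry to entry, never scanning values."""
    target = wanted.encode()
    i = 0
    while i < len(buf):
        klen, vlen = struct.unpack_from(">HH", buf, i)
        i += 4
        if buf[i:i + klen] == target:
            return buf[i + klen:i + klen + vlen].decode()
        i += klen + vlen          # skip the whole entry in one hop
    return None

buf = encode([("remote", "192.168.2.20"), ("method", "GET"), ("status", "200")])
print(find(buf, "status"))   # prints: 200
```

With JSON you would have to inspect every byte to know where a value ends; here the length fields make each boundary known in constant time.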
So who can tell me what happens here if I keep ingesting chunks? The buffer fills up, and what happens after that? Yeah, it stops processing data, but before that, you keep using more memory, and your best friend, the kernel, says: oh, you're using so much memory, it's time to die. Kill. So can you fully rely on memory? No. Is it useful? Yes, you have to use it, but you need to set limits. Even if you give a container a limit of, I don't know, 500 megabytes and you try to exceed it, the kernel is going to kill your container or your pod. Memory is the fastest mechanism, but it's not persistent. What if you're running purely in memory and you get a power outage? That data is lost. And yeah, it's faster, but you can be killed by the kernel. Then we have another mechanism, which needs to be enabled manually: a hybrid mode with the file system. In file-system buffering, we have the same concept, where we start creating chunks in the file system. If you're familiar with the architecture of databases and how they work, you may be aware of the concept of memory-mapped files. How does this work? If you have a file in the file system and you want to read its content or write data, you need to invoke a couple of system calls: open this file, seek to a position, create a buffer in memory, now read a portion of that data into my buffer. Those operations are really expensive if you're doing them 1,000 times per second. Not once per second, 1,000 times per second; it's really expensive. But memory-mapped files are an operating-system primitive, at least in Linux and other operating systems, that lets you have a reflection of a file's content as a memory region. You incur fewer system calls, and your data is available right away in memory to be used.
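A minimal Python demonstration of that memory-mapped file primitive: once mapped, the file's content behaves like an in-memory buffer, and individual reads and writes need no explicit read() or write() calls. The file here is just a temp file for the demo:

```python
import mmap
import os
import tempfile

# Create a small file to stand in for a buffered chunk on disk.
path = os.path.join(tempfile.mkdtemp(), "chunk.bin")
with open(path, "wb") as f:
    f.write(b"hello chunk data")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # length 0 maps the whole file
        first = bytes(mm[0:5])             # slice the mapping like bytes
        mm[0:5] = b"HELLO"                 # writes go into the mapping

# The change is reflected in the file itself after the mapping closes.
with open(path, "rb") as f:
    content = f.read()
print(first, content)
```

The same region is both "the file" and "memory": the OS pages it in and out on demand, which is exactly what makes the up/down chunk scheme described next cheap.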
And we use this mechanism with a concept we call chunks up and down. A chunk that is "down" lives in the file system and has not been mapped up into memory; chunks that are in use go "up" into memory. So if you want to control how much memory Fluent Bit uses, to avoid the case of being killed, you have to enable file-system buffering, and this hybrid mechanism manages all the chunks: if it's reaching a limit, it puts chunks that are not being used at that moment down into the file system, controlling your memory. This is a very advanced technique that databases have used for many years, and you have it in Fluent Bit right out of the box. Today, Fluent Bit has been downloaded more than 11 billion times. And that's not wrong; it's not millions, it's billions. This is insane, and 80% of that has been in the last 18 months. Why? Well, everybody's moving their workloads to Kubernetes. Every time somebody deploys a new node in a Kubernetes cluster, hey, they need to solve logging, they need metrics collection, they need a telemetry agent. And Fluent Bit is the default one. So I would like to say thank you to our users, because this project has been around for almost eight to nine years, if I'm not wrong. But there's a lot still to go, and today we are announcing the next version, which is called 2.2. In 2.2 we are improving all the processors we have in Fluent Bit. We are extending this because, and I didn't mention it, since version two Fluent Bit supports multi-threading. In the past, Fluent Bit was single-threaded, but for today's workloads that's not enough; everybody has more than 16 CPU cores, and we need to take advantage of that. So we did a lot of threading work inside Fluent Bit, and processing can now happen in different threads. That means you get more power: you can process more data than in previous versions of Fluent Bit.
We have a new filter called sysinfo that auto-appends information to your records, like hostname, operating system, and kernel version. We had, I don't know, a hundred users doing this manually with weird scripts; now we ship a simple filter that does it right out of the box. The Docker metrics collector used to support only cgroups version one; now it supports version two, so no matter where you're running Docker, we can collect the metrics from your containers. If you're using Podman, yeah, we have a Podman plugin too that supports the same primitives. On the configuration engine: if you are using Fluent Bit, maybe you're using the classic configuration format, which is on the left. But this year we launched support for YAML, so if you have YAML tooling, you can now use it with Fluent Bit, write friendlier configurations, do your GitOps things, and make things more reliable. Beyond the node exporter, we just launched new process-level metrics collection for processes running on Linux: for every single process, we can collect metrics, in addition to the general metrics we had before. We also ship a node exporter in Fluent Bit for macOS. Why would you want that? Well, some companies deploy Fluent Bit on their employees' laptops and want to collect metrics from those environments. So, well, it works. And Windows, your preferred operating system: you can collect metrics from Windows too, using the same agent, and you will have the same experience. Note that while collecting all these metrics you can keep using your same Grafana dashboards and everything else you use for visualization; you don't need to change anything. We are deprecating the old Loki output plugin that was written in Go by Grafana.
We're working with them, and now we have a standardized, official, new Loki integration. We also have a new plugin to collect Kubernetes events. The previous Kubernetes plugin did enrichment of the records; now we can also collect Kubernetes events, for audit logs or anything around security. And we are shipping a little change that brings a six-fold performance improvement when you are chaining filters for data processing. So if you're using two or three filters, which is normal: just to give you an example, where you used to process 15,000 messages per second, now you can do 93,000, which is not a small improvement. So I hope you enjoy this. Now, I'll give you 30 seconds to scan this QR code, because we have a community member who is right now writing the new Fluent Bit book for Kubernetes with Manning, and they will give us early access for free. If you scan this QR code and fill in the form, you will get access to the book. And the good thing is that Manning works like this: it gives you one chapter per month until the book is released, so you can start consuming the content and provide feedback before the book is finally out. Just take a photo of that, and you can fill it in later. Good? OK, awesome, you took the photos. And in observability: try to be fluent, my friend. That's the way to go. Don't do a drop-in replacement of everything, because at the end of the day it will be a problem; it's a lot of data, it's really complex to manage, and you need a strategy. So try to use the right tooling for that. Thank you. We have a few minutes for questions, and since this is the last session, we have more time off the record to talk. So if you want to chat, or blame me because we broke something in your environment, or if you need something else, please take the time. We have a microphone if you want to ask a question in public. That works, thanks. Hi, thanks, great talk.
I want to ask: I think you already partially answered this question about the overlap between Fluent Bit and the OpenTelemetry Collector. You mentioned something about being a graduated, possibly more mature project. A related question: do you have any performance comparison between Fluent Bit and the OTel Collector? Yeah, so the OpenTelemetry Collector is a newer project to aggregate data, built on the OpenTelemetry specification and framework, right? By comparison, Fluent Bit was created in 2014, has been fine-tuned for performance over eight to nine years, and is written in the C language. Our vision has always been that you need to process the data without starving other applications, without harming the server. If we compare with the OpenTelemetry Collector, by the feedback we get from users, if you put the Collector on every single node of your machines, you're going to end up paying for extra CPU cycles and extra memory that are not needed. If you want to do tracing, or maybe aggregation outside of the boxes, I think that's pretty fine. But our users said: hey, you know, the OpenTelemetry Collector is really heavy compared to Fluent Bit. And yeah, you have to use what is best for you. Right, but you don't have any specific benchmarks? You haven't run any? Yeah, we're going to publish some benchmarks, because people have been pushing for that. We're going to compare against Vector too. We never wanted to get into that, because it was kind of a waste of time for maintainers, but I think, yeah, we need to do it, because we know that Fluent Bit is the greenest project in the CNCF in terms of power consumption. Awesome. Yeah, we're releasing Fluent Bit 3.0 next year at KubeCon in Paris, so we're going to have all this information together then. Awesome, great, thanks. Thanks. I have a question around support for Kubernetes events. I think you went over the input plugin for getting the Kubernetes events, but I just wanted to ask about the output.
Do you support sending the events out through the existing output plugins, or will we have to write new plugins to export the events somewhere else? OK, the question is: you have created a new Kubernetes events plugin that collects all the events, but now how can I send them out? So Fluent Bit will get the events, right? It will scrape the events from the API server, the kube-apiserver. But we have a custom plugin that we use today to write logs, like pod logs, somewhere else. So can we use the same output plugin for events as well, or will there be a different output plugin we'd have to write specifically for events? Yeah, you can use the same one. Fluent Bit can run as a DaemonSet or a sidecar; it can connect to any remote endpoint and send the data out, or write to a local file, without any major problem. OK, thank you. That's what I thought. Thank you. Thanks. Hey, how's it going? First of all, great talk. My first question is about transforming data. In OpenTelemetry, because there's a standardized format for telemetry data, they have schemas and things of that nature, right? I'm new to Fluent Bit, but I'm assuming there's not something similar or analogous in the Fluent Bit ecosystem. Is that an accurate understanding? Yeah, and the main question is how we accomplish data transformation and processing. Fluent Bit has historically provided such features to enrich, collect, or filter out data for years. Now, we didn't dive into that today because sessions are a bit shorter right now, but tomorrow we have one session much more focused on demos around data processing with logs. So if you can attend tomorrow, you will get all the details: how do I do processing? How do I remove keys, add keys?
Or how I can, I don't know, do fancy stuff: we have a SQL stream processor inside Fluent Bit, so you can run SQL inside Fluent Bit too. There are a bunch of things that are really interesting, but yeah, if you can join us tomorrow, it will be great. That's wonderful. And then the second question I had was around a similar idea: aggregation. For example, if I have services generating tons of errors, and maybe they're not using something like a log4j BurstFilter type of setup, is it possible to use a Fluent Bit transform to aggregate that, turn it into a metrics counter, and then potentially drop the underlying log line itself? Yeah, so: how can we reduce logs by taking advantage of the metrics features? We have a filter called log_to_metrics. What it basically does is take a bunch of records, or events, and you tell it: this key holds the important value, this is the metric name, the description, the labels. And for all of them, we generate a counter or a gauge, so you can reduce the data. It's usually used with web servers, so it's there already. Awesome, thank you. That's covered tomorrow too. Hello, thank you for a nice talk. With all of those nice features, what room and what reason does that leave for Fluentd? Where is Fluentd today? Fluentd is written in a language that is really hard to scale for today's needs. It's written in Ruby, but Fluentd is running everywhere; it hasn't gone away. I would say most of the changes in Fluentd today are around maintenance and security updates, but everything that is innovation around metrics, traces, and performance: our focus there is mostly on Fluent Bit. So there's not much movement on the Fluentd side, because the architecture it has doesn't scale for today's needs. So pretty much that means it's in maintenance mode, and for everything you start anew, just use Fluent Bit.
I would say that Fluent Bit is the next generation of the Fluent ecosystem. Not just because I say so: if you look, for example, at the journeys of AWS, Google Cloud, and Microsoft Azure, all of them used Fluentd in the past, and all of them migrated to Fluent Bit. So imagine that: they have a different need, a different scale, and yeah, they're using Fluent Bit and contributing back to Fluent Bit. Actually, today at KubeCon we have maintainers from Google and Microsoft, too. Got it. Thank you. You're welcome. What is the name and time of that talk tomorrow? Sure, let me check my calendar. Or just the name. Oh, I forget the name, man. It's at 4:30: the logging deep dive and best practices session. OK, thank you so much. We're going to continue the Q&A off the stage. Thanks.