Yeah, but these stats are only from our own public repositories; we don't have any stats from the cloud providers, and they just tell us the real numbers are higher than that. So from our project's perspective, this is really good. Now we get to a more interesting topic: metrics. You could say Fluentd was born for logs, and Fluent Bit was born for logs, but we were always a project that is data agnostic. We didn't do anything special besides serializing data and optimizing memory buffers, and if you abstract from that, the payload could be anything. So this year the Fluent Bit and Fluentd projects decided to jump into the metrics space, pretty much because we were already there. As I said at the beginning, this started on embedded Linux, and we collected metrics before: we have plugins to collect CPU usage, disk I/O, network I/O, and all of that. But we collected them as structured logs, not as metrics, and there's a big difference. When you handle structured logs, the data has a structure, but maybe no fixed schema, which is totally different. So how do they compare? Logs are structured or unstructured messages; metrics have a fixed data model. In logs you need filters to enrich data, and data reduction or aggregation is optional; in metrics, it's just aggregation. In logs, you don't have a predictable size for the data, so the more data you have, the more you have to optimize memory, performance, everything; in metrics, you pretty much know the maximum size of each metric. You cannot know how many metrics you are going to get or create, but you have more control than with logs. And in logs you have maps, booleans, integers, floats, any kind of data type; metrics are just counters, gauges, and histograms, so you can narrow down the use cases. So we started to think about what kind of value we could bring to the metrics space, right?
And every user, even people coming from this conference, from KubeCons years ago, says: I don't want to manage multiple agents, one for metrics and one for logs. Can we have a more unified experience? So we started to think: okay, what is the current standard in the industry right now? It's pretty much Prometheus and OpenMetrics; that's what the industry is using today. So we said, okay, let's align with that. But since we are also vendor agnostic, meaning we don't get married to a specific backend, we can be flexible enough to support what the industry is using today and adapt to what the industry is using tomorrow. And we started a small project because, if you're going to jump into something, you want to know how to take advantage of the ecosystem. We already had the whole engine for logs, routing, and buffers, but metrics were something new. So how do you approach the problem? We created a very lightweight library written in the C language, called CMetrics, that manages all of these counters, gauges, untyped metrics, histograms, labels, and atomic operations. It was highly inspired by, almost a copy-paste of, the Prometheus Go client. We took the same approach: there are years of experience there, we don't need to reinvent the wheel, we just need to reimplement it for our own purposes. And one of the benefits of the CMetrics project is that it allows us to create a metrics context, serialize the data to send it over the network, and also convert a metrics context to Prometheus exposition format, to InfluxDB, or to Prometheus remote write. So it's agnostic about how we handle the data. And this is the difference: one thing is handling content, and the other thing is handling transport. If you separate the two problems, you can come up with a better solution. CMetrics allows us to do that.
This is pretty much how it looks in C code with CMetrics: you create a context, get a timestamp, create a counter, you can attach labels, and then you can just increment it, retrieve the value, and print it to standard output or do something with it, right? It's the same thing we do with the Prometheus Go client; I don't think the APIs are that different, and it's pretty much the same number of lines. But with the same CMetrics context you can also say: hey, I'm going to send this data to InfluxDB. Okay, just use this API and convert it to Influx line protocol, and you get the output shown at the bottom. So we reduce the complexity for users of handling the metrics payload. Same for the Prometheus exporter: use a different API function name and just export this to Prometheus. So we are bringing all this knowledge of data conversion, which we have been doing with logs for many years, into the metrics space. And as a summary of how we jumped into the metrics space with CMetrics embedded: the major pain for our users in the metrics space is that they want a unified engine, but nowadays they use Prometheus Node Exporter, a tool that collects metrics from the host. They told us: hey, why don't you create a Fluent Bit input plugin that gathers metrics the same way Node Exporter does? Great. So, as I said, we kind of reimplemented, we cloned the Node Exporter project as a plugin for us, in C, using CMetrics. Our Fluent Bit metrics now use CMetrics, and all of this information can be routed out to InfluxDB, the Prometheus exporter, Prometheus remote write, and also forwarded to other agents in case you want some kind of HA. So what is the value here?
It's that if you were using Node Exporter, your metrics could only be scraped by Prometheus. Now, using the same interfaces, you can expose them to Prometheus or send them to different backend databases. So what's next for Fluent Bit? We talked about metrics; I would say that was the primary work of the first months of this year, but we still have a lot of things to do. What's next, for the next quarter and part of the first months of next year, is to implement the same Node Exporter functionality, but for Windows. You might be surprised, but from a Calyptia perspective, all of our customers need the same metrics collection on Windows; they are asking for the same thing. So we took the approach: we are going to write this plugin for Windows for everybody. We are going to do the same thing to collect NGINX metrics. In the Prometheus world you have exporters and metrics collectors for different sources or services; we are going to integrate that into the same Fluent Bit bundle, in C code, for NGINX, collectd, and StatsD. We are also going to create an option to convert logs to metrics. Many applications ship metrics inside their logs as a JSON payload, but we don't have a way to say: hey, this is a metric, please create a counter based on it. That is coming up shortly. Also, we are going to start implementing all the connectivity for Splunk metrics, Datadog metrics, and CloudWatch metrics. The last thing people are asking for, we even got a ticket today: hey, please, can we write plugins in Rust? You know, Fluent Bit is written in C. And why Rust? Because they say it's cool. And that's fine, right? But we get requirements to write plugins in Go, in Rust, in different languages, even in JavaScript. So we are going to implement a WebAssembly layer on top of Fluent Bit. We tried to do this this year, but it was too much work, so it got deferred to next year.
And so we are going to provide the option for users to create their own filters and their own plugins in their preferred language; with WebAssembly they can use Java, JavaScript, C, Go, Rust, or whatever they want. So that's pretty much the status of Fluent Bit and our roadmap for the next period. And if you have any questions, please just raise your hand. Thanks. Do we have one question? Yeah. Yeah, I think that would be great. Actually, when we started this year, we asked: okay, what are the two things we can do that will bring the most value to users, the community, and companies? And we got two, right? WebAssembly and eBPF. But we were also working on metrics, so we had to take one step at a time. We started with metrics; eBPF is actually on the roadmap. I didn't want to mention it because we haven't yet clarified what the first steps are. I think eBPF is mostly about metrics, because at the end of the day you want to sample syscalls, right? What is the rate and the time spent on syscalls, for your own purposes? So for us, I think we are still in that learning phase, but it's something we want to do for sure. Yeah, thank you. Okay, you have a question. Is OpenTelemetry integration ready today? The answer today is no. And what is the answer for the future? Well, with OpenTelemetry, are we talking about logs or metrics? In general, OpenTelemetry is a framework; it aims to cover logs, traces, and metrics. The only GA part at the moment, as far as I'm aware, is tracing, right? Metrics has not hit GA. Once it hits GA, the Fluent Bit project will create its own connector to receive metrics from applications that use that kind of telemetry system. For logs, it's hard to say. I think the logging area is a bit delayed on this; I hear from some cloud providers it might take one or two years.
So as of now, we integrate with what the industry is using, which is Prometheus, but when OpenTelemetry is ready, we will be ready too. First, I'm going to repeat the question for the audience. The question is that the Java SDK already has a collector for the logs SDK, so when that gets ready and hits GA, will Fluent Bit and Fluentd be ready for it? The answer is yes. Yeah. That is the transport protocol, if I'm not wrong, right? Yeah, that is the way we work. Actually, we just try to be protocol and data agnostic. So everything that hits GA, we will start implementing right away, because we want to be this kind of middle layer that can help in these architectures, but without being a replacement or a vendor lock-in solution. We are going to implement all these protocols as we have done for years, from syslog to forward to the Logstash protocol. For Fluentd, I'm not sure; if there's a Ruby gem that does it, there's probably something in the works. You have to consider that the Fluentd project is quite big; we have thousands of plugins available, so we don't personally track each one of them. So there might be something; in Fluent Bit, which is more focused right now, we don't have that implementation yet. Okay, thank you so much. If you want to talk more about this roadmap, we can talk after the session. Thank you.