Hello, everyone. Welcome to This Week in Cloud Native, where we dive into the code behind cloud native. I'm Paulo Simonez, and I am a cloud native ambassador. Every week we bring a new set of presenters to showcase how to work with cloud native technology. They will build things, they will break things, they will answer your questions. Join us every Wednesday at 3pm ET. This week we have Anurag, who will talk about Fluent Bit. Also join us at KubeCon + CloudNativeCon Europe, virtual, May 4-7, to learn the latest from the cloud native community. This is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please be respectful of all your fellow participants and presenters. With that, we will hand it over to Anurag to kick off today's presentation. Anurag, nice to meet you. I'm very proud to have you with us on our show. It's a way to show the code and answer questions from the community. Nice to meet you. Good to have you. Yeah, definitely. I'm happy to give you an intro. Hey everyone, I'm Anurag, and I'm glad to represent the really awesome Fluent Bit community today. I'm one of the open source maintainers, and I've been primarily focused on building out the features. What are we doing next? Getting ready for KubeCon Europe and our FluentCon event that's co-located with it. We just had a really big release, February 14th, to go alongside Valentine's Day. I thought I could talk a little bit, for those who aren't as aware or fully immersed in the Fluent community, about what the project is, some of the use cases that we have, some of the new features, and then of course a lot of demos so we can walk through some of the really cool stuff that we're trying to build, and of course get feedback from this awesome cloud native community. So with that, let me go ahead and share my screen.
Anurag, just before you start your presentation: I'm very excited about this project. I was reading and studying a little bit about it, and we can see that it's written in C. That's amazing, because C is an amazing language and it's not very common to have a project in the cloud native community written in C. Why did you choose C as the language for this project? Yeah, such a good question. When we think about Fluentd, which is the large graduated CNCF project, and Fluent Bit, one of the primary things that users would ask is: how can we make this as lightweight and performant as possible? And if we look at the timeline, Fluent Bit was created in 2015, and the use case was not really for containers at the time, but embedded Linux, Raspberry Pis, IoT devices, and naturally C is just so portable. You have years and years of experience with C being written for IoT devices and embedded Linux. So we started with Fluent Bit and said, let's write it in C, let's make it super lightweight. The community is pretty large for C, of course. And when we look today, you have this influx of Go, you have Rust. So some of the ways that we've tried to cater to that and make it easier to contribute is adding Go plugins to Fluent Bit. The way Fluent Bit works is you have many sources and many destinations, and you can write those plugins in Go. And some exciting stuff that's coming: there's this large momentum with WebAssembly going on. Cloud Native Wasm Day is happening at KubeCon Europe this year, and we're happy to be participants in that community. One of the objectives for Fluent Bit is that, with C, we can potentially have WebAssembly plugins. So we've done Go, but now people want to write in Rust, they want to write in Python, they want to write in JavaScript.
How can we make the developer community as broad as possible, but keep that lightweight efficiency that is so useful in container environments, so useful in embedded Linux and cloud native environments? I saw that the broader Fluent project, which is a CNCF project, was written partly in C and partly in Ruby. And Fluent Bit is different: it's pure C. Was that decision made to have a leaner, more performant project? Yeah. So Fluentd, which is the larger project, is written in Ruby. But you're absolutely right, there are parts that are written in C: anything involving transformation uses CRuby. And that's a little different from some of the other log shippers that were out at the time, which were written in JRuby and Java, where you have to have a whole JVM. When we looked at that package, which was still around a couple hundred megs of memory, we asked: how do we make it lightweight for some of these embedded environments where every meg of memory counts? And the same thing may be true of containers. C allowed us to get to less than a meg of resident memory and micro-CPU usage; it was such a lightweight profile that it just made sense to leverage that. We don't have any runtime environment that needs to come along; we can just deploy it. It's a binary, it's compiled, it goes and runs. And over time, of course, more folks are adding on to this; we're seeing more users that want to deploy it and essentially replace some of their existing applications with this new lightweight C project. So it is getting a little larger, but the core tradition of lightweightness and performance is still there. And as you said, the principal idea was to embed Fluent Bit inside equipment, any equipment. So we can imagine that, in the near future, we may find Fluent Bit in many of the gadgets we work with today in home automation, enterprise automation, or industrial automation.
It makes sense: imagine Fluent Bit, a great cloud-native open source project, spreading the word of automation. I think that's a great thing for the project and for the community. Yeah, and some sample use cases that we've seen, which is so awesome to watch as part of the community, include robotics. Folks are embedding Fluent Bit in robots. IoT devices, just like you mentioned: home automation, lights, any of those switches. We have folks doing this in wind turbines, so environmental monitoring and power generation. And a lot of the core competencies that Fluentd had as a project, we brought to Fluent Bit: how do you handle network connectivity not always being available? How do we buffer data? How do we retry? How do we handle errors? So the project itself has evolved to make sure that if you do want to deploy it in some of those smaller embedded Linux environments or containerized environments, we have the packaging and it's able to do that, yeah. Oh, amazing. I'm very excited to see this project running. Go ahead, go ahead. I will be quiet now. Thank you. Sure, sure. And of course, I love to have the discussion, so anytime folks have questions, throw them in the chat and I'm happy to answer them. And we'll go through fast; I think the more exciting stuff is the demo, so we'll keep the slides to a minimum. For folks who are joining and may not know what Fluentd and Fluent Bit are, let's talk a little bit about it. When we look at data today, it's definitely different than it was a few years ago. You have so many different sources, formats, and outputs. You have these challenges: how do you handle things like network outages? If you're going to deploy Kubernetes on the edge, how do you deal with limited file systems, limited network connectivity? How do you deal with high volumes of traffic? You might be at risk of data loss.
How do you deal with these challenges? And especially in newer environments, you might have ephemeral pieces that live for less than a second, but you still need to capture all the data: the logs, the information, the telemetry about what's running. And of course, these all have different formats. Compared to 10 years ago, today there are many more applications that are now mainstream. Look at the CNCF ecosystem: it's an enormous plethora of apps that each have a unique way of formatting their logs and their application data. And all of these things come together. What the folks at Treasure Data, the original inventors of the Fluentd project, saw was that there needed to be some way to collect from many sources all across these different environments and route the data to multiple destinations. And so that was the birth of Fluentd. Here you can see we have things like application, container, operating system, security, and network logs, sending to a variety of locations. It might be Elasticsearch, Kafka, Splunk, Amazon S3, cloud services, open source tech. You might even have tons of destinations that are yet to be invented. A more recent example, Grafana's Loki, is another popular destination that's growing, and being able to just add plugins means you can keep using your existing logging infrastructure without having to replace an agent for every destination that you might be running. And actually Fluentd turns 10 this year, so this is pre-Kubernetes era even: the project was created in 2011. It's been solving logging problems at scale. It's been embedded in a ton of enterprise places like Azure, Google Cloud Operations, Cloud Foundry, OpenShift. There are thousands and thousands of users, and it's still downloaded hundreds of thousands of times per day.
So when you look at Fluent Bit, we said, hey, there's all of this traction, but folks need something more lightweight, more embeddable. Containers are coming out. How do we make things that are smaller, more compact, more performant? And we started this project in 2015. It's a sub-project under the umbrella of Fluentd, which sits next to Kubernetes and Prometheus as a graduated project, a status that CNCF projects reach after a certain amount of maturity. And now this is something that is preferred for cloud environments. We have folks that have been contributing from Amazon, from Google, from Microsoft, and folks contributing for their specific endpoints like Datadog and New Relic. This has really created a nice ecosystem: a really lightweight project, fits under 10 megs, that contains all the capabilities to both read data from all the sources and send the data to multiple destinations. So what are the actual use cases here? Who cares? If I'm just sending data from point A to point B, why would I want to use Fluentd and Fluent Bit? I've tried to separate this into five main reasons. One, you might want to reduce costs. In today's cloud era we're seeing a ton of egress charges, and log volumes can be quite expensive depending on your back end. Two, you might want to enrich data: if you want to enrich an IP address with geolocation, or with some security information, these are all things that Fluent Bit does and has the capabilities for. You might want to format that data in a different way. You may want to redact and anonymize. In the next few years privacy will continue to be at the forefront of folks' minds, with GDPR and California's Consumer Privacy Act, or CCPA, and you might want to redact and anonymize before you send data to a back end. And last but not least, and I think this is just true of all CNCF projects, is neutrality.
You want to have a vendor-neutral space where folks are able to decrease their dependency on a single vendor and be able to say, hey, I use both Elasticsearch and Grafana Loki and I want to send data to both of them. I use S3 and I use another destination. How do I make sure that I can send to all these locations without having to worry that functionality will change, or that one vendor controls everything? And I think that's just true of all CNCF projects. So what's in the new release? Sorry to interrupt; you started to talk about the new release, and you mentioned GDPR and the redaction of sensitive data. This is very, very important. Some days ago we had a problem with a lot of bank account information being leaked, so it's a very important feature. How easy is it to do this redaction with Fluent Bit? How is this feature implemented? I don't want to cause trouble, but imagine you deployed this on a bank cash machine, et cetera. How is this implemented? It's a really good question. There are a couple of ways that we support these types of redaction. The most obvious is: if it contains X, remove it. So if you find a credit card number, a first name, an address, then with something very simple like a regex you can just remove that entire message. Now, that's a very easy way to do some redaction, but it's not the most powerful way. There might be cases where you want to detect that a credit card was found but still see all the other information that the log message has. So what Fluent Bit has had for a while is this concept of anonymization, where you can take a salt and hash the entire field with something like SHA-256, a hashing algorithm, and then the person looking at it won't be able to say, oh, this is a credit card number.
But they'll be able to see that a credit card existed and here's the additional metadata. Now, there are all sorts of other ways that you could do this type of redaction and anonymization. You could take that data and say, okay, the security team needs to see it, and send it to the security team while the remainder of the fields get separated out into another record. Instead of salting and hashing, you might just append that field with, say, a warning or error marker and have folks alert on top of that. So there are endless possibilities in terms of logging, but some of the more popular ones are hashing, anonymization, sanitizing, and removing. Thank you. Thank you so much. Amazing. Our friends here have done a great job on security. Thank you. Go ahead. Thank you. So the newest release, 1.7, we released 10 days ago, so it's a bit new; we've already had another upstream version since then. It's fully focused on performance. What we saw before was that Fluent Bit could handle 5,000 or 6,000 events per second, which is not too bad, but people want more. People have tons and tons of data. And now we're seeing an enormous amount of data being processed and handled by a single instance of Fluent Bit. We actually changed the crypto libraries: we used to use mbedTLS, and we realized through performance analysis that OpenSSL was much faster, so we decided to change that, and it gave us a big performance boost. We noticed that aspects like our I/O optimization were not as robust as they could be, so we made changes to how we handle I/O, and that matters for things like resiliency: making sure that if Fluent Bit dies, you don't lose your data. And then, of course, the one that everyone had been asking for for so long: give us multi-workers. Let us deploy Fluent Bit on a 64-core machine and actually use our 64 cores. And now we're really excited.
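As a rough illustration of the simplest "if it contains X, remove it" approach described above, a Fluent Bit filter along these lines can drop a sensitive field before records are shipped; the field name cc_number is just a placeholder for this sketch, and hashing or anonymization would instead be done with a scripting filter such as Lua:

```ini
# Sketch: drop a sensitive key from every record before it leaves the host.
# 'cc_number' is a hypothetical field name used only for illustration.
[FILTER]
    Name       record_modifier
    Match      *
    Remove_key cc_number
```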
We have a new setting in our output plugins, so if you're sending data to Elasticsearch, Splunk, or Amazon S3, you can specify a number of workers so those outputs can be processed independently. Now we're seeing around 40,000 to 50,000 events per second for a single process. That's really exciting to us, that we're able to increase the performance for everyone. And if you have the scale and the resources, you can now use that worker setting to maximize throughput and performance. From a plugin side, Fluent Bit comes pre-packaged with all its plugins. We added GeoIP as well as an HTTP input, and these are great ways for folks to ingest data from, say, serverless functions, or to enrich their data with GeoIP information. That's now possible using a GeoIP database file alongside, and we'll go ahead and filter with it. These join all the other plugins we have as well. So whether you're sending data to, say, InfluxDB, Splunk, Datadog, et cetera, these plugins just join the fray of all the capabilities that exist today. So yeah, I highly encourage folks to try it out. We'll go through some demos here to see the coolness of using Fluent Bit 1.7. So the next section I have is actually stream processing. This is functionality that has existed in Fluent Bit for a while, but it's something we're continually evolving. One of the things we've learned from the community is folks will say, hey, I'm doing all this routing and rules and regex, but how do I do something more like processing? How do I reduce or compute over the data while it's in flight? And if we remember, our goal with Fluent Bit was to be very lightweight and not use too many resources.
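The workers setting mentioned above attaches to an individual output definition; a sketch of an Elasticsearch output using it might look like this (the host name is a placeholder, not from the demo):

```ini
# Sketch: hypothetical Elasticsearch output using the 1.7 'Workers' setting
# so flushes to this destination are handled by multiple threads.
[OUTPUT]
    Name    es
    Match   *
    Host    es.example.internal
    Port    9200
    Workers 4
```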
But if we allow folks to do some stream processing on top, can we still keep that lightweight package, give folks the functionality they may want, and also give them the ability to route data in different ways? Anurag, let me interrupt a little bit more. One question that to me is very important: how do you see stream processing in the near future, the next five years? I think about the introduction of 5G networks, which will increase streaming traffic a lot. How do you see this, and how will Fluent Bit help us here? Yeah, absolutely. There are a lot of trends that we're seeing today and there's no way we'll be able to predict all of them, but at least right now, you're 100% right: 5G, telecom, folks are starting to increase the amount of data that they're sending and collecting. And honestly, not all of the data we're collecting is valuable, right? The size and amount of data that we collect doesn't necessarily translate into value. So a lot of folks are also asking: can we enrich these data streams with information about how to solve an issue that might arise? Can we enrich them with some analysis? Can we enrich them with GeoIP? Stream processing, as we think about the future of Fluent Bit, allows folks to do this in a way they might already be familiar with: we support SQL. We allow for predictions and functions. One of the not-well-known filters that Fluent Bit supports is TensorFlow. TensorFlow is a great project driven by a huge community in itself, and TensorFlow Lite can actually be used as a filter, so you can have a trained model and then just run the inference as the data comes through: is this an error or is this not an error?
And stream processing lets us take those types of filters, plus some very basic math capabilities like max, min, and even time series linear predictions, and give them to everyone. Fluent Bit, by the way, gets deployed about a million times a day, and that's been growing rapidly. That's just from our Docker container side; we're not even measuring the full extent including the Amazon packages, Ubuntu, Debian, Red Hat. At that scale, when we think about how prevalent this is across people's environments, we can just add SQL stream processing on top. It doesn't need to be a replacement for the big stream processing technologies that are out there; it can be something in addition to them, something you use for some really quick checks: hey, let me do a summary, let me do a max, let me do a minimum. It doesn't require a database, it keeps the same lightweight profile that you already have, and it's schema-less, so you don't need to define all the data ahead of time. You can take a file and run some stream processing, run some SQL on top of it. You can connect to some Kubernetes logs and do stream processing on top of those. And the basic gist is: you can use stream processing to route data where it needs to go, you can aggregate data to reduce costs, and you can predict, run functions, and run computations on top of this data as you process it. Thank you so much. This is very important for our future, and really the future is today, because everything running now will only increase. Thank you so much. Let's go to stream processing. Yeah, yeah, absolutely. So we talked a little bit about the use cases there. I think it's time just to jump into some of the demos, to be quite honest. So I'm going to go ahead and switch my screen here. And we're going to switch into... can you see my terminal? Is that large enough?
I didn't increase the text size. Oh, maybe too large. Okay. So on this terminal I have Fluent Bit already deployed, or installed, I should say. And what we're going to do is walk through a few examples. The first example is: how do I do some quick selections of data? How can I take, say, Apache HTTP access logs and select all the requests that have HTTP code 200? It's a very basic, simple example, but to showcase it, let me first show the configuration. So if we open the td-agent-bit configuration directory in vim, I have example1.conf. And this is the totality of my Fluent Bit configuration. I have a service definition that says how often I'm flushing data, which is every one second; the log level; the parsers file; and the streams file. The streams file is what dictates the actual query, the stream processing, that's going to occur. On the input side, I'm reading a file: /var/log/apache and anything with .log in that path, and I'm using the Apache parser. With Fluent Bit, we ship a bunch of parsers out of the box: Apache, Nginx, syslog (both RFCs), CRI logs, Docker logs. All those formats come out of the box. Then a tag. The way Fluent Bit routes events is generally through its tagging system, so since we're pulling in Apache data, we're labeling it apache. And then when we output that data, we're going to do something very simple: output to standard out and match all of the available tags. So now let's look at that streams file to see what stream processing job we're going to run. Here we define a stream task and name it http_200_code. We're going to create a stream, and we're going to use the tagging system: we create a new tag called http200. And here you're going to see SQL.
We're going to select star, the entire record, from the tag apache where the code is equal to 200. So only events with code 200 will show through when we run the stream processing job. Really simple SQL; a lot of folks who know SQL already can start to use this. So let's go ahead and run it. And there we go. It's a little voluminous because we have a thousand records in that file, but let's just highlight a few of these. Look at this last message: here we had a GET method and the code is equal to 200. Same thing over here, code equal to 200, and so on and so forth. So we've taken all the various codes that exist within that Apache HTTP access log and said: only send me the ones with 200s. Anything else, 400s, 500s, we don't care about. Now, is that the most useful thing? Maybe not, but it's the start of our stream processing journey. So now let's look at a slightly more complicated example. Next we're going to take a look at example two. This time, instead of sending all the 200s, we're going to aggregate our 404 errors. So instead of sending a thousand logs and selecting from those thousand logs, we're going to take those thousand logs, compute something from them, and then pump that out to standard out. This time I'm using a different streams file, streams two, so let's look at that file. Okay, here we're creating a new task called aggregation_http_404. We're creating a new stream, and we're going to select the count. The nice thing about our stream processing is that we include some functions out of the box: count, max, sum, min, all of those come as needed. We're going to call it total_404; a very easy-to-understand SQL statement where we take this count and give it a name. We're going to look at the tag apache, and we're going to create a window.
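Pieced together from the walkthrough of the first example, its two files would look roughly like the following; paths and names are reconstructed from the narration, so treat this as a sketch rather than the exact demo files:

```ini
# example1.conf (sketch) -- tail Apache access logs, print matching events
[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parsers.conf
    Streams_File streams.conf

[INPUT]
    Name   tail
    Path   /var/log/apache/*.log
    Parser apache
    Tag    apache

[OUTPUT]
    Name   stdout
    Match  *
```

```ini
# streams.conf (sketch) -- keep only HTTP 200 responses
[STREAM_TASK]
    Name http_200_code
    Exec CREATE STREAM http200 WITH (tag='http200') AS SELECT * FROM TAG:'apache' WHERE code = 200;
```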
So we're going to look at a window over all the events that come in every 15 seconds, and we're going to look for code 404. Now, 15 seconds might be a lot, so let's change that to, say, 10. I'll sudo here and change this to 10, to make things a little quicker. And then, last but not least, we'll run it: instead of example one, we run example two. So we're reading; we've registered the stream processing job. And if we remember, we're looking at a 10-second window, so we'll have to wait a few seconds. And there we go. Of those thousand records that we read, we had a total_404 of 45. So we're able to count. These are great ways to do some computations before sending to the back end, to enrich your data stream without having to throw away or reprocess any data. You could enrich every data stream with a computation if you so choose. Let's look at another example. Here we're going to go to example four. What we're doing here is saying: instead of telling the system, go count how many 404 errors you have, just give me a group-by. I don't know what codes are in there, I don't know what's happening; just group everything so I can easily see all the various codes I might have and the count of each. So let's look at streams four to see what that stream processing job looks like. Here we're creating a brand new stream, an HTTP aggregation with a tag; we're selecting the code, so whether it's a 200, 404, or 500, we're going to do a count as total_count, from the same exact data source as before. This time we're using a window of 10 seconds. And last but not least, we're grouping by the code. So we can quit here and run our fourth example. Same thing: we have to build up a window.
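Based on the descriptions of examples two and four, the two aggregation tasks would look something like this; again a reconstruction of the demo files rather than a verbatim copy, using the stream processor's windowing syntax:

```ini
# Sketch: count 404s over a tumbling 10-second window (example two)
[STREAM_TASK]
    Name aggregation_http_404
    Exec CREATE STREAM http404 WITH (tag='http404') AS SELECT COUNT(*) AS total_404 FROM TAG:'apache' WINDOW TUMBLING (10 SECOND) WHERE code = 404;

# Sketch: count every status code seen in the window (example four)
[STREAM_TASK]
    Name aggregation_by_code
    Exec CREATE STREAM httpcodes WITH (tag='httpcodes') AS SELECT code, COUNT(*) AS total_count FROM TAG:'apache' WINDOW TUMBLING (10 SECOND) GROUP BY code;
```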
We're ingesting all this data, it's all coming in rapidly, we wait 10 seconds, and there we go. Now you can see there were 905 200s, 38 301s, 12 500s, and 45 404s. So it's a great way to look at all this data coming in. You might not understand how all of it works; you want to do some sort of computation on top of it, group it a little bit, and all of this can be done in real time. I'm using a bit of a cheat code here, taking data from a file and reading it from the top of the file every time, but absolutely imagine you're tailing that file, tailing Kubernetes data, tailing your IoT metrics, and it's coming in continuously. Anurag, let me ask you one question. This configuration that you are showing us is for the agent, right? So the agent goes embedded in the container or the equipment, and you put this configuration in that component, right? Exactly, yes. So, not right now, but maybe at the end of your presentation: is there a case where you need a kind of continuous deployment of this solution, thinking beyond Kubernetes with containers, which is something we already know? Something like distributing this to many, many devices or pieces of equipment. Is there a use case where you do continuous deployment to keep this configuration up to date with the needs of the customer or the business case? Yeah, really, really good question. So absolutely, folks need to update the configuration, although we typically don't see folks update it a lot. You'll create these stream processing jobs, then you'll deploy them and they run. The project runs; it grabs the data, transforms it, merges it, et cetera. And one of the largest features that we're working on this year is actually live reload.
So right now the configuration is very static, but Fluentd, for example, has this capability of saying, oh, I have some new configuration, let me reload without interrupting my current stream. We want to bring that to Fluent Bit as well. Today, what folks will do is, just like you mentioned, continuous deployment, and understanding how the new configuration gets rolled into the environment is essential. Kubernetes can assist with that by allowing you to do rollouts, and our Helm chart uses a rolling update as the upgrade strategy, so it will take one container down and bring the new container up. So this is a key point: right now, if you're using it as a package installed straight on the OS, live reload isn't there; you'll have to use the CD deployment methods you already have. For Kubernetes, we have some that work out of the box, but you'd need to use the Helm chart or some other deployment method. And last but not least, live reload exists in Fluentd today and people are absolutely using it. Okay, so we can imagine in the future something like configuration stored in, for example, a central etcd, with the agents periodically pinging etcd to learn whether there is a new desired state for them. We can think about something like this? Yeah, yeah, we've been thinking about that a lot. So I invite folks in the community who are interested in this topic: come join us. We're trying to find ways to build a pipeline that allows things like remote configuration. And especially when we look at Fluent Bit deployed in embedded-type scenarios, those remote management characteristics are really important. The other side of this is that our stream processing is tied to the configuration right now. That makes it easy to deploy, but not as flexible when you just want to do data exploration.
So we are looking at ways that you can do live queries on top of streaming data, where you don't necessarily need to process the data at all; you can just say, hey, what does my data look like? And that's something we're really excited about as well. Thank you, thank you. Yeah, so the last example that I have with Fluent Bit is our time series prediction. One thing that's not as well known is that when we built Fluent Bit for embedded use cases, we created a bunch of input plugins for things like CPU, memory, thermal, process information, disk information, and network information. These aren't necessarily full metrics; they're more like log-based metrics, but they're included out of the box. And if you're collecting metrics with Fluent Bit, you can use the stream processing to do some time series predictions. So let's start with example three. In this example, we have our two inputs: we're creating a CPU input, and we have an input for memory. They don't have much configured. And lastly, we're doing an output, and we're saying: we're not going to output all the metrics, only things that have the tag forecast. So now if we go to our streams... good. It's not YAML. Yeah, our configuration is not YAML-based; it is its own format of inputs, filters, and outputs today. That has been something else we've been looking at: how do we conform more and make things easier? So YAML, JSON config, if folks have preferences there, we're always open to hearing them; definitely something we've toyed with in the past. So here we have another stream task, of course. This one we're calling forecast_mem. We're creating a stream and selecting the average memory used, so this will be an average over a window. And we're also using this other function called timeseries forecast. The time series forecast looks at the time included in the record.
It looks at the specific field, used memory, and then we're going to predict 10 seconds out into the future. That's nothing too crazy far out, but for the purposes of this demo, we'll go with a 10-second forecast. And we're going to forecast from the memory stream of memory records. We're doing window hopping of 10 seconds: every 10 seconds we compute a window, and we actually advance that window by one second, so you can continually do a bit of forecasting. So there's that. And we're going to go ahead and run our third example. This one, again, is going to take around 10 seconds while we build up all these memory records. And as soon as that window is full, we're predicting, every second, what the next 10 seconds look like. And you can see here, it's pretty flat, nothing too exciting: you have your average memory used, and you have your forecast. So yeah, these are functions that we're continually building out. SQL has made it easier to pick up and learn, because a lot of folks are familiar with SQL. And it's meant to complement many of the larger stream processing engines that exist out there. The way I like to think about offloading is: I might have 1,000 cores dedicated to stream processing, but if I'm already deployed across 1,000 distributed nodes, I might just use 1% of CPU on top of each of those nodes. It's a sunk cost, and you can extract out some of the work that's already being done. So distributed stream processing, I think, is a really cool use case. Again, this is vendor neutral, something lightweight that runs on the edge. And of course, we're looking for feedback on it. Yes, I can say that I'm very impressed by this project and I love it. Yes, it's amazing. I've worked with these kinds of distributed computers.
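The ForecastMem stream task he describes, an average plus a 10-second forecast over a hopping window, could be written roughly like this in the streams file. This is a sketch based on the Fluent Bit stream processor's SQL dialect; the field name (used), the source tag, and the exact clause spellings may differ across versions:

```
[STREAM_TASK]
    Name  ForecastMem
    # Create a new stream tagged 'forecast' so the [OUTPUT] Match rule
    # can pick it up. AVG gives the mean memory used per window;
    # TIMESERIES_FORECAST uses the record timestamps to project the
    # 'used' field 10 seconds into the future. The hopping window is
    # 10 seconds wide and advances by 1 second, so a new forecast is
    # produced every second once the first window fills.
    Exec  CREATE STREAM forecast WITH (tag='forecast') AS
          SELECT AVG(used) AS avg_used,
                 TIMESERIES_FORECAST(used, 10) AS forecast_used
          FROM TAG:'mem.local'
          WINDOW HOPPING (10 SECOND, ADVANCE BY 1 SECOND);
```

The roughly 10-second delay he mentions before output appears corresponds to the first window filling up; after that, the one-second advance keeps the forecast continuous.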
So, part of my experience was in a telecommunications company, working with telecom machines, telecommunications equipment for call recording, call switching, et cetera. It was a challenge to work with the live behavior of all calls, all the time, 24 hours a day, seven days a week. It was really difficult. This kind of time series feature, with the capability to embed it, can be very useful to get the current state of these machines doing their work around the country, around the globe. It's amazing. Congratulations. I'm very impressed by this project. I really appreciate that. We still have a lot to do, so the work is never-ending. As with all CNCF projects, we invite folks to join the community. We have a Slack that's quite active, with around 5,000 users, at slack.fluentd.org. We also have a channel in the CNCF Slack, the Fluentd channel, with around 500 folks there. We have a Discuss forum that we just released, so we're moving away from Google Groups: discuss.fluentd.org. That covers both Fluentd and Fluent Bit. So there are a lot of areas and places where we can absolutely use the community's help. If you're interested in contributing to open source, or you want to get started with open source, we have a lot of stuff that folks could pick up. Otherwise, the feedback and the participation are phenomenal. Oh, I'm sure that we will have many folks trying to contribute and participate in these projects. I have someone in mind. But can you show us where your community is? I know that there is a Slack from the Fluentd organization, but maybe the GitHub too. And I will invite you and the folks from Fluent Bit to return any other time to prepare a hands-on, maybe a hands-on lab on contributing to Fluent Bit. I think that's amazing because we want to bring the community in to contribute. So I want to invite you to do this, if possible, of course. We'd love to. We'd love to.
I think as we make stream processing easier, we want more folks to use it, and we want more feedback. And I think, yeah, this would be a fantastic place to do that. And at the next KubeCon Europe, what can we expect from Fluent Bit? Yeah, yes. We have a couple of really, really good things in the works. The first is our FluentCon event. FluentCon is going to be alongside KubeCon Europe, and I highly recommend folks register for that. We're reviewing sessions right now, and they're looking pretty excellent, really interesting. The next bit is that we're also looking to align ourselves with more of the ecosystem. When we look at metrics, OpenMetrics and Prometheus are the standard. The way we do metrics is a bit archaic, but can we help conform and enrich the experience for folks who want to do things like Prometheus and OpenMetrics? OpenTelemetry, of course, is a large project that's gaining a ton of momentum. Can we also, from a project standpoint, help folks who are looking to take their logs and enrich them with metrics and traces and have that trifecta of observability, if you will? So that's another place where we're looking to invest and have integrations and conformance. That's what I'm looking toward at KubeCon Europe, and hopefully we'll have some good announcements to go alongside it. In the short term, we're also working very hard on multiline support. With the change from Docker to the new containerd and the new logging formats, we want to make sure all of that is pleasant and you don't have to worry about it, and make multiline a really nice experience for users who want to handle things like Java stack traces or Python stack traces and send them to their analytics backend. Yeah. Well, Anurag, just my last question, sorry, but I'm very interested in the use cases and the adoption of this project.
Of course, without asking for any details, do you have any case you can mention, without revealing any inside information, just whatever you can offer: your most important or biggest adopters using it in production? Do you have any case to talk about? Yeah. I think most of them are actually public, which is great. Some of the largest users include Amazon, Microsoft, and Google. For example, Google Cloud has this agent called Ops Agent, which actually combines collectd and Fluent Bit. So if you deploy that and you need to route those logs on Windows or Linux, that uses Fluent Bit today. Similarly with Amazon, there's a lot of documentation and blogs about Fluent Bit; Amazon contributes a lot to the project itself, and they've published articles about how to use it with CloudWatch, et cetera. And from an end-user perspective, there are quite a few financial companies routing logs from 200,000-plus servers. There are folks using this as part of their streaming pipelines, and folks using it to do fraud detection. And yeah, there are a lot of use cases. I'm hoping that if you're watching and you have a good use case and you want to present or write about it, we would love to have it, and I would love to have you on the Fluent Bit blog, maybe something like this. We can talk about it more. Oh, great. Anurag, I look forward to the next chapter of this story, because this presentation is amazing. Thank you so much. Everyone here has the link for FluentCon; that will be an amazing event, co-located alongside KubeCon. And we don't have any more questions; I've asked all my questions, so I'll hand it to you for your last-minute comments, please. Yeah. Yeah, I think again, we're really happy to be part of the CNCF. We really enjoy talking about the project, but even more, we love folks to give us feedback and participate.
You know, we have a real community to build off of, so join our Slack, join our Discuss forum, join us at FluentCon. We're open to working together and hope that folks want the same. Thank you so much, Anurag. And thanks, everyone, for joining us today for this episode of This Week in Cloud Native, our CloudNative TV show. It was great to have Anurag talk about Fluent Bit; it's an amazing project. We also really loved the interaction that we had today and the audience that we had. Thank you so much, everyone. We bring you the latest cloud native code every Wednesday at 3 p.m. ET, and next week we'll have more amazing projects to show you the code. Thank you so much, everyone, for joining us today, and see you next week. Thank you, Anurag.