So with that, hey everyone, my name, again, is Anurag, part of Calyptia, and I'm joined here by Eduardo. We are the creators and maintainers of these projects, and we founded a company, Calyptia, around them. We really concentrate on what we call first-mile observability: you're collecting all that data, and you're intrinsically doing things like parsing, enriching, redacting, and routing it to end destinations. Along that journey, a ton of stuff comes into play as we evolve into microservices, into high-performance data streams, into data that doesn't necessarily look like anything we've dealt with before. So in this session, we're going to talk a little bit about that problem, what it inherently is, how to go and solve it, some workarounds you can use today, and of course some places where Calyptia can come in and help with what we've built. With that, I'm going to pass it off to Eduardo to talk a little bit about this.

Thank you, Anurag. So as Anurag said, we started this company called Calyptia to solve all these problems and challenges. From the open source perspective, yeah, we can address some of them, but when you go to the enterprises, there are even more problems. And to understand the problems, we need to understand two things: one is the pain, and the other is how we tackle the problem at a low level, technically speaking, from the implementation side. Scalability is a continuous problem. We always have more data, more data, more data, and how do we optimize? The quick answer is, hey, add more nodes, more clusters. But sometimes that adds other problems on the side. Now, if we talk about how things work, we said we get data in, which is called an input, right? We process the data, then we send that data out. But it's more complex than that.
Internally, dealing with an input means opening files, performing a lot of syscalls, creating sockets and listeners, allocating, reallocating, and freeing memory. For metrics it's the same thing. So it's a pretty complex and tough job. After that, if you're doing logging, we need to parse the information, filter it out, serialize it, right? Because you're not going to deal with just text; you're going to have binary data. You have to deal with buffers. And as I said, we have physical limits, right? Memory is not unlimited. And then when you've processed and filtered all this data, you want to route it, create schedulers, and if something goes wrong, you have to retry. So with all this complexity, you say, okay, if I want to scale up all my data management, how do we solve this in the low-level part? Because I cannot get rid of syscalls. I cannot get rid of memory handling. I have to do that. That is a continuous need. But you also have to deal with the networking. Because if you're going to send the data out, we have our best friends called DNS servers, right? DNS is always complex: DNS lookups, connects. And you also want security, right? We don't want to transfer the data in plain text. You have to do TLS, handshakes, round trips, certificates. You have to format the data, because the destination won't understand our binary format: Elasticsearch will have its own JSON format, Splunk its own way to get that data, and that transformation of the data also takes time, right? You can scale up the services with some threads, some mechanisms, but at some point you need something else. And when delivering things over the network, we have to validate, and that also takes time. So if you think about how we scale this up, what do you tell the users? The user will tell you, hey, I've got this problem. I cannot scale up. I have terabytes of data. Putting on my OSS hat, I cannot tell them, hey, take a look at the syscalls, take a look at the memory buffers. That won't scale, right?
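To make that parse, filter, serialize, and route-with-retries flow concrete, here's a minimal sketch in Python. All the function names are hypothetical, and this is a toy illustration of the stages described above, not Fluent Bit's actual implementation (which is written in C):

```python
import json
import time

def parse(raw_line):
    # Parse a raw log line into a structured record. Real pipelines use
    # configurable parsers (regex, JSON, LTSV, ...); this is a naive split.
    level, _, message = raw_line.partition(" ")
    return {"level": level, "message": message}

def keep(record):
    # Filter step: drop noisy debug records before they ever hit the network.
    return record["level"] != "DEBUG"

def serialize(record):
    # Serialize into the destination's wire format. JSON here; Elasticsearch,
    # Splunk, etc. each expect their own shape.
    return json.dumps(record).encode("utf-8")

def deliver(payload, send, max_retries=3, base_delay=0.1):
    # Routing with retries: on a transient failure, back off exponentially
    # and try again, up to max_retries times.
    for attempt in range(max_retries + 1):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

def pipeline(lines, send):
    # Wire the stages together; returns how many records were delivered.
    delivered = 0
    for line in lines:
        record = parse(line)
        if keep(record):
            deliver(serialize(record), send)
            delivered += 1
    return delivered
```

Even this toy version shows where the cost hides: every record pays for parsing, filtering, serialization, and a network round trip, and failed deliveries multiply that cost through retries.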
That's not a solution for a business. They need something ready to go that can approach and tackle the problem in a different way. So as OSS maintainers, you've got this problem, right? It goes from left to right: from the data collection standpoint, through processing, filtering, and buffering, to the final destination, which is your database or your cloud service. And as a company, we came up with this concept of first-mile observability. You have a huge pipeline, right? But we are experts on the first fraction of it. So we said, let's try to provide a solution for that part. Let's stop talking about syscalls, let's stop talking about memory allocations, and let's bring things to a higher level for the enterprise, where you likely have thousands of servers. You want something that you can put in your architecture and that works and scales right away. And so our expertise here is really in a lot of the cloud native pieces. And we thought, how would we architect something like this from an open source perspective? How have folks solved this today? The first thing is, you have all of this data flowing, and you might need multiple processes or instances to take care of that. So you might need multiple containers, right? If someone's running in Kubernetes, you might want five or six different containers that are reading that data, processing it, and sending it. You might want automatic load and traffic balancing. We have this great idea of ingress routes within Kubernetes: it can take that data and load balance it automatically for you, and it's something powerful that comes out of the box with many Kubernetes distributions. And then you want simple scalability, right? The idea with Kubernetes is you can treat these workloads somewhat as ephemeral, and you can scale them up, you can scale them down, and you can configure them from, say, a remote or API-type location.
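The load-balancing idea above amounts to spreading incoming records across N replica collectors. A minimal round-robin sketch of that distribution (hypothetical, just to illustrate what an ingress-style balancer does for a scaled-out set of pods):

```python
def distribute(records, replicas):
    # Round-robin each incoming record to one of N replica workers,
    # roughly what an ingress-style load balancer does in front of a
    # scaled-out set of collector pods.
    buckets = [[] for _ in range(replicas)]
    for i, record in enumerate(records):
        buckets[i % replicas].append(record)
    return buckets
```

With 3 replicas and 9 records, each replica ends up handling 3 records, which is the whole point: adding replicas divides the per-instance load without any one collector having to scale vertically.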
So this is where we said, how can we solve this using what's available today? And for us, we thought of the operator. We have the Fluent Bit operator; there's a session on that later today. There are ways you can do this just running on top of Kubernetes. And then, as Calyptia, we wanted to package and bundle all of that as well. So we created a new product called Calyptia Axel, which essentially takes Fluent Bit as a service but allows you to deploy it within your data center and have it remotely managed and configurable. We give you some automatic monitoring of how things are flowing through: how many events per second, bytes per second, are you meeting the throughput needs? We can scale up and scale down, leveraging that rich ecosystem that already exists within Kubernetes scaling and autoscaling. And then, if you're going to be doing this from an enterprise perspective, we make sure you can manage this across clouds, data centers, or different environments. So this is something that we've gone ahead and developed. And if I just switch over to, say, this UI, we can do some very simple things. We take in a configuration that we're so used to deploying on thousands of nodes, and if we want to scale, say, this specific pipeline (maybe we're doing some security logging, maybe some syslog logging), we can just increase the replica size and save that. And just as we'd expect with any of these cloud native architectures, with how controllers work, it goes and looks at the new configuration and starts to apply it. We can come back and see, okay, we have a brand new replica size that matches what we specified. So this is our way of saying, hey, there are all these problems with scalability, and we can solve them using a lot of the cloud native architecture that exists out there. But doing it yourself is a lot of manual putting-together.
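The "scale up, scale down" decision Kubernetes autoscaling makes follows the Horizontal Pod Autoscaler rule: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A quick sketch of that arithmetic applied to pipeline throughput (the events-per-second numbers are made up for illustration):

```python
import math

def desired_replicas(current_replicas, current_eps, target_eps_per_replica):
    # Kubernetes HPA scaling rule: scale the replica count proportionally
    # to how far the observed per-replica metric is from its target.
    current_per_replica = current_eps / current_replicas
    return math.ceil(
        current_replicas * current_per_replica / target_eps_per_replica
    )

# Two replicas handling 90,000 events/sec total, with a target of
# 30,000 events/sec per replica: the autoscaler would grow the
# pipeline to 3 replicas.
```

The same formula scales the pipeline back down when traffic drops, which is what makes treating these collectors as ephemeral workloads pay off.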
And if you want something more turnkey, we have a product that can do that. And of course, we're looking for folks who are interested in trying that out and building that vision with us. So yeah, thank you all for your time. We're going to go ahead and break for the coffee break now, and then we will come back to this session at about 10:15, 10:20, so we'll get that 30 minutes of coffee in. We'll be up here if there are any questions; otherwise, we'll also be in the Slack where folks are asking questions and leaving comments. But other than that, please have a good coffee break, and see you soon.