Okay. Hey, everyone. My name is Anurag. I'm part of Calyptia, the company behind Fluent Bit and the Fluent ecosystem, and I'm also one of the maintainers of the Fluent ecosystem. With telemetry pipelines, the first question I like to start with is: how have we sent data? How do we ingest data, collect it, and route it to where it needs to go? If you look back about 10 years, it really started with a vendor-provided agent and an observability backend. If you're sending data to vendor XYZ, the vendor provides an agent, the agent collects that data, and it routes it to wherever it needs to go. Now, the challenge with that, and the problem we're starting to face as an industry, is that we're getting locked into that agent. We're collecting a ton of data that's only useful for that particular backend. There's a lot of data, a lot of cost, and you might have different folks with different permissions for deploying those agents and making changes. So it's not as flexible as we'd all like it to be.

So how have we as an open source community tried to address that? With the Fluent projects, we had this idea of telemetry pipelines and a forwarder-aggregator architecture. You have a bunch of different options, but the idea is that you separate the collection tier from the processing tier. In the Fluent architecture, you could substitute the aggregator side with things like the OpenTelemetry Collector, Data Prepper, Logstash, or other notable projects. You collect that data with something very lightweight, stream it to a central place, and from that central place do the processing: maybe remove data, maybe enrich it, and really give the team full autonomy to go in and enact changes, processing rules, or whatever else they need to route that data successfully.
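To make the forwarder-aggregator split concrete, here is a minimal sketch in Fluent Bit's classic config format. The log path, tag, and aggregator hostname are illustrative assumptions, not anything from the talk:

```
# --- Forwarder (edge agent): collect lightweight, stream to a central place ---
[INPUT]
    Name  tail
    Path  /var/log/app/*.log       # illustrative application log path
    Tag   app.*

[OUTPUT]
    Name  forward
    Match app.*
    Host  aggregator.internal      # hypothetical aggregator hostname
    Port  24224

# --- Aggregator (central tier): receive the stream, do processing here ---
[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224
```

The aggregator side speaks the Fluent forward protocol, which is also how you could sub in another receiver as the central tier while keeping the lightweight edge agents unchanged.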
Now, the benefits this architecture can provide: number one, you can save costs. If you're being charged for how much data you ingest, reducing that data obviously results in saved dollars, or saved euros. From a productivity and time perspective, it can also mean you're no longer waiting on a service ticket to enact a change across a large fleet. You're reducing mean time to resolution if you're able to add context that makes the operator or practitioner more aware of what's going on. You can save resources: instead of deploying processing across a thousand different nodes, you can centralize it in a place you can scale up and scale down. And last but not least, you're looking at how to reduce manual mistakes. If you're deploying in one central place instead of across an entire fleet, telemetry pipelines can start to help with that.

Now, where are telemetry pipelines going? I think this is super interesting, especially in this room. There are a lot of good projects out there, like the OpenTelemetry Operator and the Fluent Operator, and even enterprise solutions like our own Calyptia Core. The big idea is that you're going to be able to manage everything from a central place: distributed telemetry pipelines like agents, as well as telemetry pipelines that are centralized. You can deploy this within your infrastructure wherever you have workloads; it doesn't have to be limited to just Kubernetes. You might have something running on Red Hat 6, on a cloud VM, or even on your laptop. How can you make that centralized and part of that entire story? Things like load balancing, self-healing, and autoscaling, really the cloud native paradigm, start to provide the means to go and do that.
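As a sketch of the kind of centralized processing rule this enables, here is a Fluent Bit `modify` filter that would run once on the aggregator instead of on every node, both dropping data to cut ingest cost and adding context for faster triage. The field names and values are hypothetical:

```
[FILTER]
    Name    modify
    Match   app.*
    Remove  debug_payload            # hypothetical noisy field, dropped to reduce ingest cost
    Add     environment production   # hypothetical enrichment so operators see context at a glance
```

Because the rule lives in one central config, changing it is a single deployment rather than a rollout across a thousand agents, which is exactly the manual-mistake and time savings described above.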
And then we're starting to see really cool stuff coming from the community and from folks working on telemetry pipelines: user interfaces, centralized governance and management policies, and all of these different capabilities. For us, we're looking at this problem and trying to build a solution on top of it. That's where we have what we call Calyptia Core, which builds on Fluent Bit. The idea is to provide things like a simple UI, easy data processing that you can test and try, and also telemetry for your telemetry data: how can you take your telemetry signals and provide telemetry on top of them? Fleet management is part of that story as well if you have thousands and thousands of nodes. And last but not least, one of our new open source projects: as telemetry pipeline maintainers and folks who work on this, we have found it really hard to work with telemetry data without having to ingest that data into a particular backend. So we created a really quick project, now Apache 2 licensed, called Vivo. You can route telemetry data, syslog data, whatever you want, and just view it. Think of it as a visual standard out for logs, metrics, and traces. And it deploys as a package of less than 100 megabytes, so if you want to run it on your laptop, run it locally. This is a new project, and we're happy to show you anything and everything. Thank you so much.