Hi, everyone. Today we'll talk about flexible, open, and easy observability for developers. I'm Zain Asgar. I'm the GM and GVP of Pixie and Open Source at New Relic, and I was a co-founder and the CEO of Pixie Labs. Later in this talk you'll see a demo from Michelle, who is a principal engineer at New Relic and was a founding engineer at Pixie Labs.

So, what does the observability landscape look like today? What we typically see is that software is getting more decoupled, which supports the high velocity, agility, and scale of modern software development. This takes various forms: there's a move to multi-cloud, and there's standardization around technologies like Kubernetes, which makes it easy to deploy microservices. These decoupled pieces of software are much easier to maintain and scale. However, they introduce a new problem: we need to analyze all the data they generate. Ultimately it turns into a data problem, where you have to wrangle, analyze, and capture all this data in order to manage, secure, and debug your applications.

So, where does time get wasted? The first area is the rigid, predetermined data collection pipeline. The standard way to capture data is either to add a language-specific agent, or to add a lot of boilerplate, language-specific instrumentation to your code base. This leads to the interesting scenario where you end up with more instrumentation code than actual business logic. We aren't saying that manual instrumentation isn't helpful, but a lot of it can be done automatically, which frees you from instrumenting your entire code base and lets you focus on the cases where manual instrumentation adds real value around business-specific use cases.

The second area is that modern backends generate lots of data, and you can capture even more if you have the kind of debuggability we'll talk about in a few minutes, but most of this data isn't actually useful. Most of the time your application works as you expect, but in a few cases, say 2% of the time, this data is invaluable and you want access to all of it. What if we changed the way modern systems work and ran collection within the cluster to capture what's interesting, then relayed that information to the cloud as necessary?

With that in mind, let's talk about Pixie. Pixie is a new open-source project that we at New Relic are donating to the CNCF. Pixie is an instant, code-driven observability and debugging platform, and it supports a variety of use cases, from APM to infra metrics to network performance monitoring and profiling. It comes fully fledged with a UI, a CLI, and a complete experience around those use cases.

What's special and unique about Pixie? Pixie moves the entire observability stack, really the whole platform, to the edge. This lets it run completely within the Kubernetes cluster, which reduces the complexity and cost of maintaining the software while giving us a few benefits. The first is that we've moved from rigid, predetermined, manual collection to code-driven, on-the-fly, no-instrumentation collection. A lot of this is enabled by new kernel technologies like eBPF, which are supported natively inside Pixie.
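To make the earlier boilerplate point concrete, here's a minimal sketch of the kind of manual, per-request instrumentation that this no-instrumentation, eBPF-based collection avoids. It uses the OpenTelemetry Python API, which we'll come back to later in the talk; the handler, `process_order`, and the request object are hypothetical stand-ins.

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def handle_checkout(request):
    # Every traced operation needs an explicit span and its attributes.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("cart.size", len(request.items))
        try:
            # process_order is a hypothetical stand-in for the actual
            # business logic, which is dwarfed by the tracing plumbing.
            result = process_order(request)
            span.set_attribute("order.id", result.order_id)
            return result
        except Exception as exc:
            span.record_exception(exc)
            raise
```

Multiply this span-and-attribute plumbing across every handler in every service, and it's easy to see how instrumentation code can outgrow the business logic.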
The second big bucket is that we've moved from cloud-only storage and AI/machine-learning workloads to a system where you spread the work between the edge and the cloud. You can do a lot of the heavy lifting on the edge, where you have access to a large, rich data source, figure out what's interesting, and then ship the interesting bits and pieces over to the cloud. Lastly, there's a move away from manual, siloed user interfaces toward making Pixie completely API-driven and easily accessible from various tools.

Pixie does quite a few things. Our core team has been pretty focused on building out the debugging experience, with a focus on APM for performance debugging. The goal is that once Pixie is installed on your Kubernetes cluster, within a couple of minutes you get access to all this baseline visibility, completely agnostic to the language stacks in use. You can use this information, especially in certain languages, to dig really, really deep and get code-level insights. If you find a slow request, you can go all the way down and figure out where in the code the slow request is being made, and it will help point you in the right direction. Further, there are other use cases, like network performance monitoring, infra health, and security, that are enabled by Pixie but aren't our current focus right now; they are being built out by the community by extending our PxL script interface, a Python dialect for interacting with the Pixie database. One thing to keep in mind is that Pixie runs entirely within your cluster, so no data actually leaves your cluster, but that comes with the limitation that you mostly access live data from within the last day.

With that, I want to re-emphasize that at New Relic, we love open-source software, and we're betting big on open source, both with Pixie and with other open-source products we'll talk about. In particular, we're donating Pixie to the CNCF so it can become a core part of the ecosystem. With that in mind, I'd like to hand it off to Michelle for a quick demo of Pixie, and we'll wrap up with some more information afterwards.

Thanks, Zain. Let's hop right into the demo. We've made Pixie easy and painless to deploy to your cluster. You can deploy Pixie using our CLI, which you can download with the simple bash script that we feature on our main page. There are other ways to get the CLI as well: we offer it as Debian and RPM packages, or, if you don't want to deploy through the CLI, you can deploy Pixie directly to your cluster using our YAML manifests or Helm charts. For the sake of this demo, let's use the bash script. When you run the command, it asks you to accept the terms and conditions, asks you where you want to install the CLI, and downloads the CLI to your system for you. It will then ask you to authenticate with Pixie, and at this point you can create your Pixie account or log in.

Okay, now that we've logged in, let's jump back to the CLI. To deploy Pixie to your cluster, all you have to do is run one simple command, px deploy. So let's do that right now. Pixie runs some checks to make sure it can actually deploy to your cluster, confirms that this is the cluster you want to deploy Pixie to, and then automatically starts deploying all the resources necessary to start collecting your data.
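Condensed, the flow just shown is only a few commands. This is a sketch: the install URL and subcommands match Pixie's public docs from around this talk, but treat exact flags and prompts as version-dependent.

```bash
# Download and install the Pixie CLI via the bash script from the main page.
bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"

# Authenticate with Pixie (opens a browser to create an account or log in).
px auth login

# Deploy Pixie to the currently configured Kubernetes cluster. This runs
# preflight checks, asks you to confirm the target cluster, then deploys
# the collectors.
px deploy
```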
Within minutes, with just one simple command, Pixie is deployed to your cluster and has automatically started collecting data about it. Pixie uses eBPF to collect telemetry data at the Linux kernel level, and that allows us to provide rich, instant, out-of-the-box visibility into your system.

This is just a demo cluster, and there's actually nothing deployed on it yet, so let's use the Pixie CLI to deploy a demo application. It's a simple e-commerce application made up of 11 microservices that talk to each other over gRPC. Now that it's deployed, let's hop into the Pixie UI to see what data we've already started collecting. You can see that although we only just deployed Pixie and the demo application to our cluster, Pixie has already started collecting a lot of interesting data about it.

If you open our script editor, you can see that the Pixie UI is powered by what we call PxL scripts. PxL is our language, very similar to Python's pandas, and it gives users programmatic access to our data so they can explore the network, application, and infrastructure data we've collected. This is our px/cluster script, and it gives you an overview of what your cluster looks like. Pixie has figured out all the different nodes, namespaces, services, and pods running on your cluster, and has also found some interesting metrics about them, such as CPU usage, and information about requests, such as latency and throughput. At the top of the page, we've generated a service map from the network traffic we've collected, so you can now see which of your services talk to which.

Hopping into our px/http_data script, you can see that we're also able to collect data about what those requests actually look like. Using eBPF, we're able to capture HTTP/2 and gRPC, and in fact a lot of different protocols from your cluster, including database traffic, and the http_data script shows you all of the requests that have been flowing through your cluster. Because eBPF collects data at the kernel level, we're also able to capture traffic that was originally encrypted. Looking into this particular request, you can see interesting details such as who the requester and responder were, and you can even see the HTTP body if you want.

Pixie also offers what we call deep links. Scrolling to the right, you see this blue text; deep links give you a way to navigate through our UI and look at specific resources in your cluster. Here you can see the pod column. Say we want to look into this carts pod: clicking it takes you to our pod view, which gives you an overview of that pod's health. You can see that we've collected network traffic, CPU metrics, and memory metrics for this particular pod. All of this happened instantly just by deploying Pixie to your cluster, and you get a lot of the baseline metrics we mentioned before.
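To give a feel for PxL, here's a minimal sketch of the kind of script behind these views, loosely modeled on the px/http_data script just shown; the table and column names are illustrative and may vary across Pixie versions.

```python
import px

# Pull auto-collected HTTP events from the last five minutes.
df = px.DataFrame(table='http_events', start_time='-5m')

# Resolve the Kubernetes pod for each request from Pixie's context.
df.pod = df.ctx['pod']

# Latency is recorded in nanoseconds; convert to milliseconds.
df.latency_ms = df.latency / 1.0e6

# Keep just the columns we want to inspect, pandas-style.
df = df[['time_', 'pod', 'req_method', 'req_path', 'resp_status', 'latency_ms']]

px.display(df, 'http')
```

Because PxL is a dataframe dialect, narrowing to just the slow requests is one more line, for example `df = df[df.latency_ms > 250]`.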
But additionally, Pixie also offers code-level visibility into your application. Scrolling down to the bottom of this pod view, there's a flame graph. Pixie is actually running a profiler on your system, and from that data it generates this flame graph, which shows how much time this particular pod spends in each function: the wider the bar, the more time the pod spends executing that function. That lets you figure out where your problem spots are so you can optimize your code and debug.

So, as you can see, Pixie was quick, easy, and painless to deploy, and right out of the box you get a lot of interesting data about your system. That includes all the baseline metrics, such as your network data and your infrastructure data, but you can also get a deeper look into your application, like the profiler here. All right, let's take it back to Zain to wrap up this talk.

Okay, thanks for the demo, Michelle. The other thing I wanted to mention is that at Pixie, we love OpenTelemetry. At New Relic in general, we're betting really big on OpenTelemetry as the way to interact with various data collectors. Pixie supports both ingress and egress of OpenTelemetry data. That means you can collect data from any source that supports OpenTelemetry, in addition to Pixie's native data collectors, but you can also export to other OpenTelemetry data sinks, like New Relic, where you get long-term retention of your Pixie data and visibility across many different clusters using the very scalable New Relic One backend (there's a short sketch of this export path at the end of the talk).

We'd love for you to check out Pixie's GitHub and try out the project. We also have a pretty active Slack community that would be happy to have more people join. In addition, please check out New Relic One, which comes with 100 gigabytes of data for free and allows easy integration with OpenTelemetry data sources and other open-source projects. Thanks a lot.
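For readers of this transcript who want to try the OpenTelemetry egress mentioned above, here's a hedged PxL sketch of exporting Pixie's HTTP latency data to an OTel sink such as New Relic. It follows the shape of Pixie's px.export / px.otel interface, but treat the exact field and metric names as assumptions that may differ by version.

```python
import px

# Read recent auto-collected HTTP events and resolve the owning service.
df = px.DataFrame(table='http_events', start_time='-1m')
df.service = df.ctx['service']

# Ship latency as an OpenTelemetry gauge metric to the configured OTel
# sink. The resource and metric names here are illustrative assumptions.
px.export(df, px.otel.Data(
    resource={'service.name': df.service},
    data=[
        px.otel.metric.Gauge(
            name='http.server.latency',
            description='Request latency observed by Pixie (nanoseconds)',
            value=df.latency,
        ),
    ],
))
```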