So I'm really excited to talk about this new feature we've added, which we're calling distributed bpftrace deployment. Let me tell a little story first. My colleague Sean and I work a lot on BPF, and we write a lot of custom BPF code. A lot of the data you see in the Pixie platform is driven by that code: it's what collects the HTTP data and the stats you're seeing. But it takes a lot of work for us to do that, and it's our code, hidden away down in the guts of Pixie. So we asked ourselves: wouldn't it be cool if we could really open this up to the community? Let others write BPF code, get access to all sorts of interesting data, and dynamically plug it into the Pixie platform. We thought that would be really neat, and achieving that is what we focused on with this feature.

The idea is that you can take BPF code you already have, existing probes from bpftrace, for example, stuff that's checked into the bpftrace repo. As you can see on the slide, there's already a rich set of tools out there in the community. You write us a simple script around one of those probes and we'll run with it: we'll take care of all the orchestration, deploy it on all the nodes in the cluster for you, start collecting the data, and push it up into the Pixie platform. Then you can use the power of the Pixie platform to query that data and get at all the information the probe is pumping out, and we'll do that all automatically for you. That was the dream, and we're now ready to give the first demo of it.

So what does this mean for you in the audience? Some of you are probably less familiar with BPF. For you, this feature means there will be an open source collection of scripts, backed by the community, that rely on these bpftrace sources. You'll be able to run them and get all sorts of useful data to debug different scenarios or figure out what's happening in your cluster. That's the story for a good chunk of you. For the power users, the people who work with BPF more often, it means you can use bpftrace to write your own scripts. You can say: hey, Pixie doesn't have the data I want; there's really interesting data in the kernel, say what's happening in my TCP stack, and I already have a bpftrace script for it, or I want to write one from scratch. Go for it. Write it, and we'll deploy it across the cluster for you and collect all the data, so you can visualize what's happening with your own custom scripts. That, in a nutshell, is what we're doing with the distributed bpftrace deployment feature. And if there are no questions, I'll go into the demo.
Let me show this off. It's pretty fresh, but I'm going to show what we've been able to do. First of all, as I mentioned, there's already a rich set of bpftrace scripts out there in the open source community, and this is the one I'm going to demo with: the TCP retransmission script. It's a probe that goes into the kernel and monitors for TCP retransmissions whenever they happen. That's useful for figuring out whether something is misconfigured in your network, or whether there's simply too much traffic and you're getting a lot of retransmissions because buffers are backed up. Dale Hamel has already written this; it's really cool, and it's out there. So, can we plug it dynamically into the Pixie platform?

Let me switch over to the Pixie platform. Say we're starting from the front page, looking at all the namespaces. We now have a script called TCP retransmits, and it doesn't do anything initially, since we haven't run it yet. Let's take a quick look at the Pixie script. You can see I've pretty much just embedded the bpftrace program into our Pixie script. We had to tweak a few things here and there, but for the most part it's copy-paste. The first half of the script is the bpftrace program, which is what we want deployed on all the nodes of the cluster. The second half is our native query language, which lets us work with the resulting data.

Let's run it and see what happens. Actually, I want to change a few things first. To make this a completely fresh run, I'm going to rename the probe, so it's a brand new probe pushing data to a brand new table. And because I know this probe generates quite a bit of data, I'm going to narrow down to the traffic I'm interested in. I have a web app called Online Boutique, the same e-commerce site James was showing earlier, where we're capturing its HTTP data, and I just want to see what's happening for that particular service, so I'll filter for that.

Okay, so we run it. Behind the scenes, the Pixie framework is taking that bpftrace code, distributing it to all the nodes in the cluster, running it, and starting to collect the brand new data being pumped out from those probes. That data gets pushed into tables, and we then use it to run queries and create visualizations like this one. Recall that the script measures the number of TCP retransmits. On the left we see the pods: a recommendation service, a checkout service, and a frontend service. On the right half we see the services they're talking to, that is, what the recommendation service, the checkout service, and the frontend are actually talking to. And what we see is that there are some retransmits happening in the system.
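To make that concrete, here is a minimal sketch of what a PxL script like the one in the demo might look like: the top half embeds a simplified bpftrace program, loosely modeled on the community tcpretrans tool, and the bottom half queries the resulting table. The probe name, table name, output columns, and TTL are placeholders, and the pxtrace calls follow my reading of Pixie's tracepoint API, so treat this as illustrative rather than the exact script from the demo.

```python
import pxtrace
import px

# Simplified bpftrace program (raw string so bpftrace, not Python, interprets
# the \n escape). It fires on every kernel TCP retransmit and records the
# destination address; the real tcpretrans.bt captures more fields. Pixie
# infers the output table's columns from the "name:%fmt" pairs in printf.
program = r"""
#include <net/sock.h>

kprobe:tcp_retransmit_skb
{
    $sk = (struct sock *)arg0;
    printf("time_:%llu dst:%s\n", nsecs, ntop($sk->__sk_common.skc_daddr));
}
"""

table_name = 'tcp_retransmits_table'  # fresh table name, as in the demo

# Deploy the probe to every node in the cluster; Pixie handles the
# orchestration. The '10m' TTL means the probe is removed after ten minutes.
pxtrace.UpsertTracepoint('tcp_retransmits_probe',
                         table_name,
                         program,
                         pxtrace.kprobe(),
                         '10m')

# Bottom half: query the freshly collected data with Pixie's DataFrame API.
# (The demo additionally filtered this table down to the Online Boutique
# service before visualizing it.)
df = px.DataFrame(table=table_name)
px.display(df)
```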
There were nine retransmits from the recommendation service, three from the checkout service, and one from the frontend. Now, that was a snapshot: I ran the query once, right after deploying, so it had only collected a little bit of data before sending it up for us to visualize. I'm going to rerun it now that it's had time to collect more data, and it will refresh with the updated view. Interestingly, it's the same three pods, the frontend pod, the checkout service pod, and the recommendation service pod, talking to a bunch of different services. It's always these three pods that are having retransmits, and they're having quite a few, which is indicative of something going on; this amount of retransmission is not normal. So this is helping us visualize what's actually happening in the cluster, and it's a great example of what I mean: we start with bpftrace and get the whole power of the Pixie platform, which takes that raw data and turns it into visualizations like this, so we can look at it and ask, visually, what's going on in this cluster? Where are the problem points and the bottlenecks? And we definitely see these three services having a problem.

Now, to cut to the chase: for the sake of this demo, I went into one particular node in the cluster and put a little gremlin in there that introduces packet loss. It corrupts 5% of the packets coming out of that node, so it represents a faulty node in the system, something that's having a problem. And it's no coincidence which pods are mapped onto that node: Online Boutique has ten or so pods, and the three scheduled on that node are exactly the frontend service, the checkout service, and the recommendation service. So it's no coincidence that these are the three pods we see experiencing a lot of retransmits. They're sending out corrupted packets, the receiver sees they're bad packets and discards them, the sender eventually times out, and the TCP stack retransmits those packets. You can use these sorts of scripts to debug scenarios like this, or other issues, say performance bottlenecks where you're sending too much traffic, socket buffers back up, and you get retransmits; all sorts of different scenarios.
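For reference, a fault like the demo's gremlin can be reproduced with Linux traffic control (tc) and its netem qdisc. Here is a small sketch of applying and removing 5% packet corruption on a node; the interface name is an assumption, and the helper is hypothetical rather than what was actually run in the demo.

```python
# Hypothetical helper to reproduce the "gremlin": corrupt 5% of outgoing
# packets on a node using tc/netem. Must run as root on the target node;
# the interface name "eth0" is an assumption.
import subprocess

def add_packet_corruption(interface: str = "eth0", percent: int = 5) -> None:
    # netem's "corrupt" flips a random bit in the given fraction of packets,
    # which causes checksum failures at the receiver and forces TCP retransmits.
    subprocess.run(
        ["tc", "qdisc", "add", "dev", interface, "root", "netem",
         "corrupt", f"{percent}%"],
        check=True,
    )

def remove_packet_corruption(interface: str = "eth0") -> None:
    # Remove the qdisc to restore normal traffic after the experiment.
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

if __name__ == "__main__":
    add_packet_corruption()
```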