So yeah, I'm Natalie. I'm another one of the engineers at Pixie, and I'm going to demo the data source plug-in that we built out for Grafana. I just want to note that Vishal actually did most of the work here, so I'm just demoing it. For those that are unfamiliar with Grafana, it's a really popular dashboarding tool, and it also has a really rich plug-in ecosystem that allows you to pull in data from various sources and visualize it all in one place. We wanted to give Pixie users who also use Grafana the option to easily pull Pixie data into their dashboards, and that's why we've made this plug-in. I'm going to show you a little bit about how it works, and while I do, I'm also going to show you a little bit about how to write queries, so even if you don't use Grafana, you might still find it somewhat relevant. We're still waiting for approval from Grafana to submit this as a public plug-in to their repository, so for now you're going to want to download the plug-in from this repo and manually put it in your plug-in directory. I've already gone ahead and done that, but just to demonstrate it, what you would do is go into the releases section, click on the zip file, and download it. Then you would put that inside the plugins directory that you have configured. You can see that we've unzipped it and put it in there, and all of the relevant binaries are present. So that would be the process for doing that. I've already deployed a Grafana server, but once you move the plug-in into this directory, you would just want to restart Grafana like that. One more thing: I've also already deployed Pixie to my Kubernetes cluster, so you'll want to have done that too. So, moving on to the plug-in: I have Grafana hosted here, and I'm going to use the flow to add a data source. This should be pretty quick.
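As a rough sketch, that manual install amounts to something like the following. The paths and the zip filename here are illustrative assumptions; use the actual zip from the repo's releases page and whatever plugins directory your Grafana config points at.

```shell
# Assumed paths and filename -- adjust to your setup.
cd /var/lib/grafana/plugins              # your configured plugins directory
unzip ~/Downloads/pixie-datasource.zip   # the zip from the repo's releases page
systemctl restart grafana-server         # restart Grafana to pick up the plugin
```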
I'm going to scroll down to the bottom and select the Pixie data source plug-in. There are two pieces of information that this plug-in needs to get set up properly. The first is an API key: when our plug-in makes requests to your Pixie instance, Pixie needs to know that it's allowed to do so, and the API key is used for that. So let's grab one of those right now. We'll go to the admin page, go to the API key section, and copy a key by clicking on these three dots and then clicking "Copy value". We'll stick that in here. We also need a cluster ID: you can have multiple Pixie clusters in the same account, so the plug-in needs to know which one to talk to. We'll go to the cluster section. I'm going to be demoing this on my cluster, and it's important that we get the full cluster ID for this setup process, so you're going to want to hover here and copy the full value that shows up on hover. You can also use the CLI, which will just print out the full thing anyway. Okay, we've saved this, so now let's try it out and build a dashboard using Pixie. What you can see here is that it's a pretty simple plug-in: all you have to do is put a PxL script in here, the data gets plotted, and you can select different types of visualizations like table, bar chart, or time series. Let's do a really simple example. I'm going to compose it over here, build it up over time, and then use it to make a panel in this dashboard. Let's start really simple: let's count the number of HTTP requests that we're seeing in a particular namespace. Every PxL script starts with import px, which imports the Pixie module. Then we create a DataFrame by loading the http_events table, which is Pixie's data source for all the HTTP events in your cluster.
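As a sketch, that starting point looks like this in PxL. The http_events table name follows Pixie's documentation; the '-5m' start time is just a placeholder until we wire in the Grafana time range.

```python
# Every PxL script starts by importing the Pixie module.
import px

# Pixie's data source for all HTTP events in the cluster.
df = px.DataFrame(table='http_events', start_time='-5m')
px.display(df)
```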
One thing that we've added in order to make PxL scripts work well with the plug-in is the ability to use macros. What a macro does is let you put information from Grafana directly into your PxL script for convenience. One such macro is the time range, so that you can use this time picker and the PxL script you put in will automatically pick that up, which is more convenient than changing the PxL script every time. To enable that, we set start_time equal to the __time_from macro. These macros and all this information are documented, so you can check the reference for it later. The next thing we want to do is bin these events by time, so that we can plot how many HTTP events we're seeing every second or every minute. We'll assign a new column using the px.bin function, where we take the time column, which is at nanosecond resolution, and bin it according to a particular interval like a minute or a second, like I said. The interval is actually another macro, so we can inherit the interval being passed by Grafana in the PxL script. So we'll just do it like that. Next, we'll group by timestamp, because we want to count the number of requests at every time, and we'll specify that count as an aggregate as well. And, similar to the plug-in reference, the PxL reference docs are also on our website, so if any of this is confusing or you don't fully understand the syntax, we have tutorials so that you can better understand PxL. For time series, time_ is a special column, so we add that to the script, and px.display will basically output the table. There's one more thing, which is that I'm actually only interested in a particular namespace right now, so I'm going to add a filter to my PxL script that basically says I only want to look at HTTP events that are part of the px-sock-shop namespace.
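Putting those steps together, the script built up so far would look roughly like this. The macro names __time_from and __interval follow the plug-in's documentation; the aggregate column names are my own choices for illustration.

```python
import px

# __time_from is substituted by the Grafana plugin from the time picker.
df = px.DataFrame(table='http_events', start_time=__time_from)

# Only look at HTTP events in the px-sock-shop namespace.
df = df[df.ctx['namespace'] == 'px-sock-shop']

# Bin the nanosecond-resolution time column into Grafana's interval
# (__interval is also substituted by the plugin).
df.timestamp = px.bin(df.time_, __interval)

# Count requests in each time bucket.
df = df.groupby('timestamp').agg(request_count=('latency', px.count))

# time_ is the special column used for time series.
df.time_ = df.timestamp
px.display(df[['time_', 'request_count']])
```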
So let's hope the live coding gods are blessing me today, and put that in here. Great, they did. We can see that we're plotting the HTTP request count over time, and we can do things like change the time range. It looks kind of similar, just steady traffic. It maybe failed to query just then, which can happen sometimes, but let's put it back to 15 minutes. I think I jumped too soon praising the live coding gods, but I think you all saw it there, so let's move on to the next step. It's kind of interesting to see the number of HTTP events over time, but it would probably be a bit more interesting if I could see that broken down by service. I want to see this line chart, but with one line per service in my cluster, because the number of requests alone doesn't tell me that much. So we're going to pull the service out as a new column using this thing called ctx, which we actually already used to filter on namespace. ctx is cool because, for all the data you have, it lets you understand what pod it was running on, what node it was running on, etc., which lets us do richer queries. So we're pulling out the service, and we're also going to add the service to the group by, so that we're grouping by both timestamp and service. We'll copy and paste that in. And now, for all the services in our namespace, we can see the number of requests that they have. Grafana allows you to do a lot of configuration: you can turn these into bars instead of lines, and you can stack the requests so that the chart looks cumulative rather than each series plotted on its own. So there's a lot of cool configuration you can do here. There's one really cool feature of Pixie that I want to show, which is the ability to actually cluster the endpoints of your service, even when there are variable URL parameters in the path.
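The per-service version only changes a couple of lines in the script above; as a sketch (ctx['service'] follows Pixie's documentation, the column names are mine):

```python
# Pull the service out of the execution context as a new column.
df.service = df.ctx['service']

# Group by both the time bucket and the service,
# so we get one series per service.
df = df.groupby(['timestamp', 'service']).agg(
    request_count=('latency', px.count),
)
```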
I've already pre-written the script for this, so we don't have to walk through the PxL, but I wanted to show it off and show a different type of visualization that you can have. Let's add a new panel, and let's make this one a table instead of a bar chart. I'm basically going to show a table of all of the different endpoints that the catalog service has. We'll paste that in here and apply it. What we can see here is that Pixie has been able to automatically determine that there is a wildcard in the URL parameters for the catalog service, and so the script has actually bucketed everything under this one path, even though every single path here was actually filled out with a different item ID. Let's finish up with one more chart. It will basically be similar to the time series, but showing the latency of each of these endpoints over time. This is another one that I've pre-written so that we don't have to go through the exact PxL, but we can see it here. If I'm looking at my HTTP requests, a lot of the time looking just at a service is not that helpful, because there may be one particular endpoint that can tell me what exact problem I'm having, while the other endpoints are actually fine and drown out the signal from the endpoint that's having a problem. Using this type of script, we can support drilling down into endpoints. I guess that's not really specific to Grafana, but it might be the kind of thing that you'd want to visualize in Grafana if you were to use the integration. So, I think that's all from my end, and I guess we can move to questions now.
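I won't reproduce the pre-written scripts here, but a simplified latency-over-time sketch, without Pixie's endpoint clustering, might look something like the following. The service name, the use of px.quantiles, and the p99 plucking are my assumptions for illustration, not the exact script from the demo.

```python
import px

df = px.DataFrame(table='http_events', start_time=__time_from)

# Only requests served by the catalog service (name is illustrative).
df = df[df.ctx['service'] == 'px-sock-shop/catalogue']

df.timestamp = px.bin(df.time_, __interval)

# Latency quantiles per endpoint per time bucket; latency is in nanoseconds.
df = df.groupby(['timestamp', 'req_path']).agg(
    latency_quantiles=('latency', px.quantiles),
)
df.latency_p99 = px.pluck_float64(df.latency_quantiles, 'p99')

df.time_ = df.timestamp
px.display(df[['time_', 'req_path', 'latency_p99']])
```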