This tutorial will demonstrate how to write a custom script to export Pixie data in the OpenTelemetry format. To do this, we're first going to write a PxL script that exports Pixie data in the OpenTelemetry format, then we'll test that PxL script in Pixie's Live UI, and finally we'll configure a Pixie plugin to run the script periodically at a fixed interval. Everything I'm about to show you is available in a tutorial that I'll link below; you can also find this tutorial on our docs site.

For this tutorial, you'll need a cluster with Pixie installed; you can find directions for this in the tutorial. I have a minikube cluster set up and I've deployed Pixie to it, and Pixie's components live in the pl and px-operator namespaces. I've also deployed the Sock Shop demo application, so that we'll have some traffic in the cluster that we can export. Finally, I've deployed an OpenTelemetry collector to this cluster, which is where we're going to send the Pixie data; you can find directions for that in the tutorial as well.

Let's go look at Pixie's Live UI. One of Pixie's unique features is that it uses eBPF to drive its data collection. This approach enables Pixie to efficiently collect data in a way that requires no user instrumentation, so immediately after installing Pixie, you'll start to see data that Pixie automatically collects. Here you can see a service map of the HTTP traffic in our cluster, and we can see that the Sock Shop front-end service is talking to the catalogue, orders, and user services. We can also look at the actual HTTP requests: if I go up to the script drop-down menu and select the HTTP data script, this view shows us all of the HTTP traffic flowing through the cluster that Pixie has traced. Click on any of the rows to see the HTTP request and response. Here we can see that the orders service is talking to the shipping service, along with the request body and the response body.
Now that we can see this HTTP traffic in Pixie, let's try exporting some HTTP metrics in the OpenTelemetry format. Pixie uses scripts to query its telemetry data, and these scripts are written in Pixie's query language, PxL. We're going to use the Live UI's Scratchpad to develop a PxL script to export our Pixie data: go up to the script drop-down menu and select the Scratchpad, then replace the contents of the PxL script tab with the script from the tutorial. We run the script using the Run button here, and we can close the script editor using this button. We'll go into what this script does in a minute, but let's first validate that this data has been received by our OpenTelemetry collector by checking the collector pod's logs. My OpenTelemetry collector is just an example collector that outputs the data it receives to its logs, so that's how we'll validate that it's working. Great: we can see that some metrics were received when we ran our data export script.

Okay, let's examine how this script actually works. I'm going to open up the script editor again with this button, and we'll take a look at it. On line 1, we're simply importing px, Pixie's module for querying data. On line 3, we're populating a DataFrame with the last 10 seconds of Pixie's http_events table; this table stores all of the HTTP request and response pairs that Pixie has traced in your cluster. Once we have that DataFrame, we create columns for pod and service, which just attaches the Kubernetes metadata to each HTTP event in the http_events table. Then on line 9, we group these HTTP events by unique pod and HTTP request path, and for each pod and request path, we count the number of requests to determine throughput. OTel metrics also require a timestamp per record, so we grab the latest value in the window as our representative timestamp.
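The steps described above look roughly like this in PxL. This is a sketch based on the narration, not the tutorial's exact script: column names such as req_path come from Pixie's http_events table, and the aggregate names are my own.

```python
# PxL sketch: PxL runs inside Pixie's engine, not as standalone Python.
import px

# Line 3: the last 10 seconds of HTTP request/response pairs Pixie has traced.
df = px.DataFrame(table='http_events', start_time='-10s')

# Attach Kubernetes metadata (pod and service names) to each HTTP event.
df.pod = df.ctx['pod']
df.service = df.ctx['service']

# Line 9: group by pod and request path, count requests to get throughput,
# and keep the latest timestamp in the window as the representative one.
df = df.groupby(['pod', 'service', 'req_path']).agg(
    throughput_total=('latency', px.count),
    time_=('time_', px.max),
)
```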
On line 15, we calculate requests per second using our throughput value and the 10-second time window we grabbed the records for. Down on line 41, we call px.display, which simply tells the Live UI to display this DataFrame as a table. I'll comment this out for now. If we rerun the script, you can see pod, service, request path, throughput, timestamp, and requests per second.

So far this script just uses the normal PxL functions that you've seen before, but lines 17 through 39 introduce our new methods for exporting data. To export data, we call px.export with the DataFrame we've just created as the first argument and the export target as the second argument. This export target describes which columns to use for the corresponding OpenTelemetry fields.

Next, we have the endpoint parameter, which describes the endpoint and any connection arguments necessary to talk to an OpenTelemetry collector. The endpoint URL must be an OpenTelemetry gRPC endpoint, and if that endpoint is not secured with SSL, you can set insecure=True. Optionally, you can also specify headers to pass to the endpoint, since some OpenTelemetry collector providers look for authentication tokens or API keys in the connection context.

Next, we have the resource parameter, which defines the entity producing the telemetry data. This parameter is defined using a dictionary mapping attribute keys to the string columns that populate the attribute values. The PxL configuration expects service.name to be set, but all other attributes are optional. Keep in mind that when you create new attribute keys, OpenTelemetry has a recommended naming pattern that you should follow to maintain broad compatibility with OpenTelemetry collectors. Next, we have the data parameter.
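Putting these pieces together, the export call might look like the sketch below. The endpoint URL is a placeholder for wherever your collector is reachable, the metric name is my own choice, and the Gauge in the data list anticipates the data parameter described next.

```python
# Line 15: requests per second over the 10-second collection window.
df.requests_per_s = df.throughput_total / 10

px.export(df, px.otel.Data(
    # Connection settings: the URL must be an OTel gRPC endpoint (placeholder here).
    endpoint=px.otel.Endpoint(
        url='otel-collector.default.svc.cluster.local:4317',
        insecure=True,  # only because this demo collector is not behind SSL
        # headers={'api-key': 'REPLACE_ME'},  # optional, for collectors needing auth
    ),
    # The entity producing the telemetry; service.name is required,
    # other keys should follow OTel naming conventions.
    resource={
        'service.name': df.service,
        'k8s.pod.name': df.pod,
    },
    data=[
        px.otel.metric.Gauge(
            name='http.throughput',  # hypothetical metric name
            value=df.requests_per_s,
            attributes={'req_path': df.req_path},  # scoped to this metric
        ),
    ],
))

# Line 41: render the frame as a table in the Live UI while developing.
px.display(df)
```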
This parameter allows you to specify a list of metrics or spans generated from the DataFrame. In this example script, we specify a single Gauge metric for the requests-per-second column, and we also specify an attribute for the metric: the request path. Each metric and trace type supports a custom attributes field, and metric and trace attributes work similarly to resource attributes, but they're scoped only to the specific metric or span. Pixie currently supports a limited set of OpenTelemetry signal types: Gauge and Summary metrics, and Span traces. We also support a subset of the available fields for each instrument. You can see the full set of features in our API documentation, and if you'd like us to support other fields, please open a GitHub issue.

Now that we have a PxL script that exports OpenTelemetry data, let's set up the plugin system to run the script at a regularly scheduled interval. First we need to make sure that a plugin is enabled, so we'll go to our admin page and then the Plugins tab. We're going to use the OpenTelemetry plugin, but the script we wrote will work with any of Pixie's long-term data retention plugins. I'll click the toggle to enable the OpenTelemetry plugin. I need to put in my custom export path; if you're using the demo collector from the tutorial, you'll find this path there. My demo OpenTelemetry collector doesn't support TLS, so I'll turn that off and save.

Now that we've enabled a plugin, we need to configure which data to export. To do this, you can follow this link here, or you can go to the data retention icon in the left sidebar. The OpenTelemetry plugin comes with several preconfigured OpenTelemetry export scripts. I'm going to disable these scripts for now, so that we can be sure the data the collector receives is actually from our custom script. Let's add our custom script.
If you scroll down to the custom scripts section, you can click to create a script. We're exporting HTTP throughput, so I'll put that in the script name, and then I'll replace the contents of the PxL script field with the script we developed.

We need to make a few modifications to this script. First, instead of hard-coding the start time, we'll replace it with px.plugin.start_time; this value can be configured using the summary window field here. We'll also remove the endpoint parameter, since we configured this endpoint when we enabled the OpenTelemetry plugin. Finally, we'll remove the px.display call, since that was only used to display the data in the Live UI while we were developing our script. If you scroll down, we set the plugin to the OpenTelemetry plugin. Again, this data export script will work with any of our long-term data export plugins, but we're demoing the OpenTelemetry one. Finally, click Create.

You can see that our HTTP throughput script is here and enabled, and it has a 10-second summary window, so the plugin will run the script every 10 seconds. At this point we should be exporting our HTTP throughput metrics to OpenTelemetry, and we can validate that by looking at our OpenTelemetry collector logs again, since our collector simply outputs the data it receives to its logs. From these logs, we can confirm that the OpenTelemetry collector is receiving metrics every 10 seconds.

To recap, we've learned that we can write a PxL script to export Pixie's automatically collected telemetry data in the OpenTelemetry format, and that we can configure one of Pixie's plugins to run this OpenTelemetry export script on a regular schedule. We used the OpenTelemetry plugin, but you can use the script with any of Pixie's data retention plugins.
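After those modifications, the plugin-ready version of our sketch would look roughly like this (same assumed column and metric names as before; still a sketch, not the tutorial's exact script):

```python
import px

# The plugin fills in the start time based on the configured summary window.
df = px.DataFrame(table='http_events', start_time=px.plugin.start_time)
df.pod = df.ctx['pod']
df.service = df.ctx['service']
df = df.groupby(['pod', 'service', 'req_path']).agg(
    throughput_total=('latency', px.count),
    time_=('time_', px.max),
)
# Note: if you change the 10-second summary window, adjust this divisor to match.
df.requests_per_s = df.throughput_total / 10

# No endpoint argument here: the plugin supplies the endpoint
# configured on the admin page when we enabled it.
px.export(df, px.otel.Data(
    resource={'service.name': df.service, 'k8s.pod.name': df.pod},
    data=[
        px.otel.metric.Gauge(
            name='http.throughput',
            value=df.requests_per_s,
            attributes={'req_path': df.req_path},
        ),
    ],
))
# px.display(df) removed: it was only for viewing results in the Live UI.
```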
If you run into any issues writing PxL scripts, please reach out on our community Slack channel; we'd be happy to help. And if you're interested in contributing to open source, we're seeking contributions for preset OpenTelemetry export scripts.