In this video series, we will use Pixie to estimate the cost of our Kubernetes services. We will be calculating CPU, memory, and network costs for our services, and in the last video we'll compute the amortized cost per request for each service.

Pixie is an open-source observability tool for Kubernetes applications. Pixie uses eBPF to automatically collect a lot of data out of the box, such as infrastructure and network metrics, application profiles, and full-body requests. Pixie provides a scriptable interface, so you can modify existing scripts or write custom scripts to do the analysis you're interested in. In this video, we will write a custom script to get visibility into the cost of our services.

This first video focuses on CPU cost. CPU cost can be calculated by multiplying the amount of time the CPU has been utilized by the number of CPU units and by the price per CPU hour.

I've got Pixie's live UI open, and in this HTTP service map you can see all the microservices that are talking to each other. I'm going to open the scratchpad script; I have the custom CPU cost script pasted into the editor. Let's run this custom script using the run button. Here we can see the two tables that this script outputs. The first table shows CPU cost broken down by service, and you can sort it. The first row, which doesn't have any service listed, is the CPU cost associated with pods that don't belong to a service. The bottom table sums the per-service CPU costs into a total cost.

Now let's open this script using the script editor button and take a closer look. The script loads the last hour of Pixie's process stats table into a DataFrame. The process stats table includes CPU usage for every Kubernetes process in your cluster. CPU time is tracked as a cumulative counter for each UPID (unique process ID). So to figure out how much CPU time each service spent in the last hour, we take the difference between the max and min of each counter, which gives us the CPU usage over our time window.
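The counter arithmetic described above can be sketched in plain Python. This is not the actual PxL script, just an illustration of the same logic: for each UPID, the CPU usage over the window is the max counter reading minus the min, and those deltas are then summed per service. The sample data and the `checkout`/`cart` service names are hypothetical.

```python
from collections import defaultdict

# Hypothetical (upid, service, cumulative_cpu_time_ns) samples from the
# time window. Each UPID's counter only ever increases.
samples = [
    ("upid-1", "checkout", 1_000_000),
    ("upid-1", "checkout", 4_000_000),
    ("upid-2", "checkout", 2_000_000),
    ("upid-2", "checkout", 3_500_000),
    ("upid-3", "cart",     5_000_000),
    ("upid-3", "cart",     9_000_000),
]

def cpu_time_per_service(samples):
    # Collect counter readings per UPID and remember each UPID's service.
    per_upid = defaultdict(list)
    service_of = {}
    for upid, service, counter in samples:
        per_upid[upid].append(counter)
        service_of[upid] = service
    # max - min of each cumulative counter is that UPID's CPU time used
    # inside the window; sum the per-UPID deltas into per-service totals.
    totals = defaultdict(int)
    for upid, counters in per_upid.items():
        totals[service_of[upid]] += max(counters) - min(counters)
    return dict(totals)

print(cpu_time_per_service(samples))
# → {'checkout': 4500000, 'cart': 4000000}
```

Pixie's real script does the equivalent with DataFrame group-bys over the `process_stats` table, but the accounting is the same.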
Then we sum the CPU time of all UPIDs belonging to each service. This CPU time is in nanoseconds, so we convert it to CPU-hours and multiply by the CPU cost per hour. Note that the hourly CPU cost will vary depending on your provider; we've chosen an estimated value, but you can try the script out with your own. We then multiply the hourly CPU cost by the number of hours in a year in order to estimate the yearly cost.

All of these scripts are public, so you can try them out by following the link in the video description. If you want to play around with this script, it should be easy to modify it to slice cost by namespace or pod. In the next video, we'll do a similar analysis for memory usage, so be sure to check that out.
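The unit conversion and cost extrapolation can be sketched as follows. The $0.04-per-CPU-hour price is an assumed placeholder, not the value from the script; substitute your provider's rate.

```python
NS_PER_HOUR = 3_600 * 1_000_000_000  # nanoseconds in one hour
HOURS_PER_YEAR = 24 * 365

def yearly_cpu_cost(cpu_ns_last_hour: float,
                    price_per_cpu_hour: float = 0.04) -> float:
    # Convert nanoseconds of CPU time used in the last hour to CPU-hours.
    cpu_hours = cpu_ns_last_hour / NS_PER_HOUR
    # Cost of one hour of this usage, then extrapolated to a full year.
    hourly_cost = cpu_hours * price_per_cpu_hour
    return hourly_cost * HOURS_PER_YEAR

# Example: one core fully busy for the hour is 3.6e12 ns = 1 CPU-hour,
# so at $0.04/CPU-hour the yearly estimate is 0.04 * 8760 = $350.40.
print(round(yearly_cpu_cost(3_600_000_000_000), 2))
```

Keep in mind this extrapolates one hour of usage to a year, so it's only as representative as the hour you sampled.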