So Redis observability has been a top request from you all, and for those of you who don't know, Redis is a popular in-memory key-value store that's often used for application caching. As of today, Pixie traces Redis requests made to in-cluster Redis instances, as well as Redis instances outside of your cluster. Like the rest of the protocols Pixie traces here on the slide, you don't need to add any instrumentation to get Redis visibility. If your cluster has Redis requests passing through it, Pixie will automatically trace them for you.

Pixie also provides deep introspection. For Redis, we're able to see not only who is talking to whom, but the contents of those requests: the Redis commands, the command arguments, and the full response. This data is pushed to a new Redis table, and you can use PxL queries to access it with the Live UI, the CLI, or the new API that we discussed in a recent meeting.

Now I'll show you some of the scripts we provide for out-of-the-box Redis observability. I've got the Live UI open here, and I've got my cluster selected in the top left. I'm going to go to the script drop-down menu and type "redis". Here are three scripts you can look at, and today we're going to start with the flow graph. I'll click that script; it has a required argument for namespace, so I'll type that in and rerun it.

Here you can see a graph of all of the traffic using the Redis protocol within our cluster, along with latency and throughput information. If you've used our DNS flow graph, which is a really popular script, this is very similar, but for the Redis protocol.

This cluster is running a modified version of Google Cloud Platform's microservices demo. It's a web app where users can browse products, add them to their cart, and purchase those products.
This app, as the name implies, has a bunch of microservices, including a cart service that's responsible for managing a user's shopping cart on the website. The cart service uses a Redis database to cache each user's cart, and to allow for a larger data set we've partitioned this Redis database into three shards.

Jumping back over to the Live UI, let's untangle this graph a little. Here we can see our cart service talking to three Redis leader instances, and each of those leader instances has a follower instance, which is a replica.

In this graph, the color of the arrows between the pods represents latency, and if you hover over an arrow you can see more detailed latency and throughput information. For example, here you can see that communication from our Redis follower pods (the replicas) to their leader instances is much slower than communication between the leader instances and the cart service. We can also read traffic volume from the thickness of the arrows: in this example, redis-cart-0 is getting much more traffic than redis-cart-1 or redis-cart-2.

The table below the graph shows the same data that powers the graph. Whoops, sorry. If we scroll down here and sort by the request throughput column by clicking on the column name, we can confirm that redis-cart-0 is seeing about 10x the throughput of the other leader Redis instances, our other Redis shards. Generally, it's best to set up your application and database so that traffic is evenly distributed across your Redis partitions. Remember that the Redis database in our example is caching user carts, so overwhelming a single Redis instance could lead to a poor user experience for the customers whose carts are stored there.
We can also investigate this traffic imbalance between our three Redis partitions by taking a closer look at the Redis pod with the highest traffic, redis-cart-0. For this example, we'll assume we've already checked our Redis configuration and confirmed it has adequate resources, and that we're primarily investigating the load on this one Redis instance.

To do that, we're going to look at some actual Redis requests with the Redis data script. Up at the top in the script argument, I'll type "redis data". This script shows all of the Redis requests traced in your cluster. If I expand one of the rows, we can see all the data from that row in its JSON representation. The most important columns here are the Redis request command, which in this case is GET; the arguments passed with that command, which is a key (in our case, the user ID); and the response.

Okay, let me close that. My goal here is to see a breakdown of the Redis commands being sent to our highest-traffic Redis pod. So I'm going to open the editor using Ctrl+E or Cmd+E, copy in a script I've already written, and run it to see what it does.

On line 15, this script loads the last five minutes of data from the redis_events.beta table into a DataFrame. If you're writing scripts with this table, note that eventually we're going to rename it to redis_events, so just be aware of that. We're also using the context function, which provides extra Kubernetes metadata based on the existing information in your DataFrame. In this case, the redis_events table has a UPID column, and we're using that information to infer the pod name. The pod name in this Redis request data represents the pod being sent the Redis requests, so we want to filter to only the requests going to that high-throughput Redis instance.
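The loading and filtering steps described above can be sketched in PxL roughly like this. The table name redis_events.beta and the context-function usage come from the narration in the demo; the pod name (including its namespace prefix) is a placeholder, and I haven't verified the column names against the current Pixie schema:

```python
# PxL sketch of the steps described above (names are assumptions, not verbatim
# from the demo script).
import px

# Load the last five minutes of traced Redis requests.
df = px.DataFrame(table='redis_events.beta', start_time='-5m')

# The context function infers Kubernetes metadata (here, the destination
# pod name) from the UPID column in the events table.
df.pod = df.ctx['pod']

# Pod names are prefixed with their namespace; 'default/redis-cart-0' is a
# placeholder for the high-throughput pod in this demo.
df = df[df.pod == 'default/redis-cart-0']

px.display(df)
```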
Note that pod names in Pixie are prepended with their namespace, so just be aware of that when filtering.

Then, because I want an overview of what requests are being sent to this pod, to see if we can figure out why it's getting so many, I'm going to group the Redis requests by the pod each request is being sent to and by the Redis command being sent in that request, and then count the number of requests for each unique pod/command pair. I'll run this using Ctrl+Enter or Cmd+Enter.

Here you can see that in the last five minutes, which is the time window specified in the top right, our high-traffic pod redis-cart-0 has been sent these five Redis commands. If we sort by count, we can see that the large majority of the Redis requests to this pod are HGET requests. HGET is the Redis hash-get command, which retrieves an item from a hash table.

So let's drill down, look at only these HGET requests to this pod, and see which keys are being requested. We can use the same script for that; we just need to modify it a little. First, we filter on the HGET Redis command. Then we grab the key, which represents our user ID, from the command arguments column using the px.pluck function, which extracts a field from a JSON object. In addition to grouping by pod and command, we also add the key to the grouping, and then count the number of times each key was requested across all of the Redis requests in the last five minutes. We'll run that with Ctrl+Enter or Cmd+Enter again.

And here we can see immediately that we have a hot key: one of our user IDs, and therefore that user's shopping cart, is being accessed at a much higher rate than the rest of the keys on this Redis instance during our five-minute time window. These keys represent user IDs, and the values are the users' shopping carts.
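The modified drill-down script might look roughly like this in PxL, under the same assumptions as before (the column names cmd and req_args, and the 'key' field inside the JSON-encoded arguments, are my guesses from the narration, and the pod name is a placeholder):

```python
# PxL sketch of the hot-key drill-down (schema names are assumptions).
import px

df = px.DataFrame(table='redis_events.beta', start_time='-5m')
df.pod = df.ctx['pod']
df = df[df.pod == 'default/redis-cart-0']  # placeholder namespace/pod

# Keep only the HGET requests.
df = df[df.cmd == 'HGET']

# Extract the requested key (the user ID) from the JSON request arguments.
df.key = px.pluck(df.req_args, 'key')

# Count requests per (pod, command, key) tuple to surface hot keys.
df = df.groupby(['pod', 'cmd', 'key']).agg(num_requests=('latency', px.count))
px.display(df)
```

Sorting the resulting table by num_requests descending is what surfaces the single hot key described below.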
So in this example, we're retrieving the cart of a single user much, much more frequently than those of the other users on the site. With just a few lines of code, we've been able to get insight into how the keys in our Redis database are being accessed, and we can use that information to determine how to best configure our Redis database and partition the cluster so that we're providing the best user experience to that customer, assuming they're not a malicious bot.

To wrap things up: Pixie's Redis tracing requires no instrumentation. It traces not only who's sending Redis requests to whom, but also the contents of those requests, down to the full Redis command, the command arguments, and the response. You can access this data using the Live UI, as we've done here, our CLI, or our new API.

Finally, I want to thank Yasheng, who actually built this feature. And I want to note that this is a very early feature preview, so please, please, please reach out if you run into any issues, either on GitHub or Slack, and let us know how it goes for you.