Good morning, good afternoon. We are presenting a cloud native benchmark index today. I'm Ritika Ganguly from Intel, and my partner in crime is Lee Calcote from Layer5. This is a cloud native value measurement, and it's pretty exciting. So let's get started, Lee.

Ritika, as you know, we're missing some performance characteristics. We are. Any number of you in the audience have a lot of metrics that you use to track your environments, maybe too many. If you think about it, it might actually take you a little while to articulate and characterize the performance of your environment. If you had to boil that down to a single metric, or maybe a handful of metrics, what would those be? The golden signals, maybe some signals from the RED method? If you had to boil it down to one? I don't know that we have a clear and concise way of characterizing the performance of our environments, nor do we have a single true-north measurement. We're also quite frequently overlooking business performance, which is in large part why we're running the infrastructure in the first place. We usually talk about performance in cold, hard, quantitative speeds and feeds. Instead, I would submit to you that performance should absolutely be measured in terms of speeds and feeds, but it's a lot more meaningful to layer in, and quantify, the value that your infrastructure is providing. We're really missing the business performance aspect of what we're tracking and how we're characterizing it. So the discussion that we're having today falls under the umbrella of a CNCF project, the Service Mesh Performance project.
At its core, this project is a specification for capturing the details of your environment in a uniform, consistent way: capturing your infrastructure configuration, your service mesh configuration, and the characteristics of your workloads, and doing so consistently so that you can baseline your environments, benchmark them in a consistent way, share with others, and maybe compare with the performance that others are seeing. To the extent that it's codified, you can have system-to-system exchange of this information. SMP has been a CNCF project for, I guess, a little less than a year now. There are some research aspects to what Service Mesh Performance is as well.

Sure. And we have all been involved with the SMP project for more than a year now, with people on our teams contributing, and it's an extremely useful set of tools. As part of that, we introduce MeshMark. MeshMark, like I mentioned, is a cloud native value measurement. With value, you're essentially trying to measure whether the performance of your infrastructure matches what you want to get from your deployment: the business value you want it to deliver. For example, if you have some key performance indicators, you may want to measure whether the MeshMark value is directly responsible for how, say, your video or your image gets loaded on a particular webpage. Often when a YouTube video is loading, you'll see only the text and not the video. You do not want that kind of experience; you want the video and the images to be loaded first. As you see in this video of the Etsy site, if you click on something, you may often see the text get rendered first and then the video. The load latency of the video traffic is what impacts what you see visually.
And so if the deployment of your cloud native environment can be indexed through a MeshMark ratio, your load latency will be directly proportional to that. Then the number of resources you're using to deploy this environment to achieve a particular latency, lower or higher, is a usage metric, and that's directly proportional to your MeshMark as well. The TCO of your application hence becomes directly related to MeshMark. So how do we go about defining MeshMark? Here is the definition that we propose and are working toward building up. Essentially, you have utilization of different kinds of resources, and these can be categorized into utilization classes: for example, compute utilization, network utilization, or any other resource utilization could be a utilization class. Within each class you will have some efficiency metrics, which we call MUEs (mesh utilization efficiencies), and we'll define those. You attach weights to those MUEs, take a set of one to N metrics with their weights, and sum them up as a ratio of the number of MUEs. That becomes your MeshMark. Through the weights, you can give higher priority to utilization one or utilization N, and you have MeshMark, a single index that can tell you how well your infrastructure maps to your value vector. As a next step, let's look at one specific MUE. So what is an MUE? It's a calculated ratio of measured platform resources to assigned resources. If you are able to measure what your assigned resources are, in whatever form, and also monitor the resources actually used, you can compute this ratio. A very simple example is CPU performance: you want to see whether CPU performance, as a ratio to the available resources, is a loss or a gain. So CPU performance loss over total CPU is our MUE one, and that's just one minus CPU utilization over 100, a very simple ratio. And if you look at the right-hand side, the graph shows you that as latency increases, your MUE decreases.
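The formula described above can be sketched as a small calculation. This is a minimal illustration of the definition as stated in the talk, not the SMP project's actual implementation; the example MUE values and weights are hypothetical:

```python
def cpu_mue(cpu_utilization_pct: float) -> float:
    """MUE 1 from the talk: CPU performance loss over total CPU,
    i.e. 1 - (CPU utilization / 100)."""
    return 1.0 - cpu_utilization_pct / 100.0

def meshmark(mues: list[float], weights: list[float]) -> float:
    """MeshMark as described: the weighted sum of the N MUEs, taken as
    a ratio of the number of MUEs. Higher weights prioritize one
    utilization class over another."""
    assert len(mues) == len(weights)
    weighted = sum(w * m for w, m in zip(weights, mues))
    return weighted / len(mues)

# Hypothetical example: a compute MUE at 35% CPU utilization and a
# network MUE of 0.80, with the compute class weighted more heavily.
mues = [cpu_mue(35.0), 0.80]      # 0.65 compute, 0.80 network
weights = [1.5, 0.5]
print(meshmark(mues, weights))    # ≈ 0.6875
```

Raising the weight on a class pulls the index toward that class's efficiency, which is how a single MeshMark number can be tuned to reflect your value vector.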
And so that's a very good indicator that the efficiency of your infrastructure is poor, because your latencies are increasing as your QPS increases. It's then very easy to take action based on "my MUE is low, let me take action," and that's something we can address in future discussions. Like this, you can measure and create other MUEs. Now let's look at how you can visualize this within an environment. So let's look at the demo, Lee.

Sure, let's jump into a sibling CNCF project called Meshery. Meshery is a cloud native management plane. Users of Meshery can configure their Kubernetes deployments and any and every service mesh, as well as onboard and off-board their workloads onto any given mesh. Let's take an example workload, a Consul application, loaded into the visual designer. Take a look at the service splitting functionality of Consul, and note that in this case we're assigning a weight of three, and we can change that to four, to derive its MeshMark: a mesh utilization efficiency calculation of the efficiency with which that network function is being performed. We could also take a look at Consul's service intentions and examine the efficiency of that network function.

Now that you've seen the demo, we want to go ahead and publish the results, and we call on everyone to get together and give us feedback. And if you have a unique use case, join us. What say you, Lee? What else? Yeah, we are continually looking for people to give descriptions of their workloads. We want to make sure that the workloads under test are fairly accurate representations, interesting representations, of what matters to you. There are a few different predefined tests being run by some of the contributors in the project within the CNCF labs, so within controlled environments. At the moment, upwards of 40,000 test results have been collected, and that's data to be analyzed and published.
So part of the goal here is to begin publishing some MeshMarks. Ritika, thanks so much for engaging in the definition of MeshMark. This has been amazing.