And I'm Rob. Hello, everyone. Today's session is on metrics that matter. It's a beginner session, and it's essentially work that both Raj and I have done in a pretty highly regulated federal environment, so we're going to share some of that with you today. I would say there is no single set of measures and metrics that fits every domain; what we hope to do through this discussion is expose the things that worked for us, things you might want to focus on. Somebody told me a wonderful quote: they just came from a session that was all about "don't track anything," and they asked, is that fine? Is that what I want? So let me set the context: very, very large federal programs, lots of regulation, Congress, and so forth. We have to have some way of tracking our work. So that's the context and the agenda. First and foremost, as we get started, let me ask you: what does "done" mean in today's lean, agile, DevOps-inspired world? We are, of course, talking in a DevOps context. What does it mean? Shippable. Shippable to where? Yeah, all the way to production, putting code in the hands of the customers. And what are the implications of that notion of being done, of saying I am done when my code has gone from idea all the way into the hands of the customer? Yes, all the usual things: value is the only thing that matters, it's in the hands of the customer, feedback, all that good stuff. But from a delivery, testing, and operations perspective, the implication is that we need to be sure how the code is going to behave when it hits the customer. Is it really ready? Software development is inherently ambiguous; there's a lot of uncertainty.
Now, when your definition of done in a DevOps context is code in the hands of the customer, the question really is: how do we minimize some of those uncertainties? There are uncertainties, of course, and dealing with them is one of the biggest delivery and operations challenges. One way to minimize those uncertainties is through some form of measurement. Measurement is essentially about collecting data and observing a certain problem space in order to draw insights. Measurement cannot solve the problem for you; all it can do is shed some light on a certain area. The insights, and the actions you take from them, determine how you react to those measures. A quick contrast here, because these terms are often misused and misrepresented: measurement comes in two forms. Measures are concrete things, usually quantitative, and they apply to one thing. I have five apples. Number of production incidents. These are examples of measures. Metrics, on the other hand, are comparative; a metric essentially brings in a second dimension. Looking at the same measure, "I have five apples," a metric might be "I have ten apples more than yesterday." Or, in terms of incidents, instead of a raw count, a metric might be the number of severity-one incidents since the last release. When you're comparing across more than one dimension, you're talking about a metric. Now, metrics come up in lots of sessions, and people often talk about metrics as if they were evil. From your perspective, how many of you are doing some sort of measures or metrics in your organization? Lots of both, right? So, are they all evil? Can you give me an example of an evil measure?
It's targeting an individual. Right, metrics that target an individual. There's lots of debate about metrics being evil. Can they be gamed? Absolutely. Metrics can be misused; metrics can be gamed. No matter what, you can make a case for a metric being evil. But that doesn't mean metrics are all bad; it's all in the context of how you use them. What's an example where you could potentially game a metric? Code coverage. Code coverage! Thank you, right? If leadership comes down from on high and says, I'm going to fail your build if coverage doesn't reach 95%, what do you think is going to happen? Yeah, an extra round of worthless tests, tests without real assertions, just to hit a number. It's very easy to game numbers. So there is a way to use these metrics where it's not that they can't be gamed, but you try to use them in a manner that's actually better for the organization. This is one of the challenges we faced, and we'll walk through some of the methods we used to try to make sure our metrics weren't gamed or misused. Goodhart's law speaks to a lot of this. It says that when a measure becomes a target, it ceases to be a good measure. What Goodhart's law is really saying is that if the measure, the metric itself, becomes a goal in itself, it has completely lost its purpose. That's a powerful concept, and something you ought to keep in mind. If your test coverage is the end goal, it's useless as a metric; a metric has to point toward something larger. And that should be the context as we design metrics for our use. So, having said that, what makes for a good metric? Someone earlier mentioned a metric they found valuable.
I don't want the actual metric itself; what was it about that metric that made it valuable? It helps you learn? Okay, something that helps give you... yes, it generates insights. It generates insights; that's the point of metrics. It helps identify a potential problem; it indicates a general area of disturbance, as they call it, and then we have to find the key element within it. So, some characteristics of a good metric. First and foremost: you can understand it. If you require a PhD to decipher the metric, it's not very useful, right? Especially for leadership, you want metrics that make sense, because oftentimes there's a tension between the development team and the leadership who support them, and the metrics have to cater to both. Second: comparative metrics. We find that good metrics are a ratio of some sort; they always balance a measure against another dimension. Maybe it's over time; maybe it's per user. Holding up a tension between two dimensions, to us, feels like a much more relevant measure than a plain number. As an example: "I have 10% more unit test coverage than last sprint" is meaningful, isn't it? Whereas "our coverage number is 32%" on its own doesn't give me the context of whether we're progressing or regressing. Comparative metrics are more useful. Last but not least, the most important criterion when designing and using metrics: is it going to change the behavior of teams and individuals? This has to be one of your primary criteria when you begin choosing your metrics. So those are some characteristics of a good metric.
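To make the measure-versus-metric distinction concrete, here's a small Python sketch with invented numbers: a measure is one concrete count, while a metric compares measures across a second dimension such as time or releases.

```python
# A measure is one concrete number; a metric compares two measures
# across a dimension (time, releases, users, ...). Numbers invented.

def delta_metric(today, yesterday):
    """The same measure compared across time: a metric."""
    return today - yesterday

apples_today = 15        # a measure: one concrete count
apples_yesterday = 5     # another measure
print(delta_metric(apples_today, apples_yesterday))  # prints 10

# Incidents: the raw count is a measure; scoping it to a comparison
# window ("severity-1 incidents since the last release") makes a metric.
incidents = [
    {"severity": 1, "release": "2.4"},
    {"severity": 3, "release": "2.4"},
    {"severity": 1, "release": "2.3"},
]
sev1_since_last_release = sum(
    1 for i in incidents if i["severity"] == 1 and i["release"] == "2.4"
)
print(sev1_since_last_release)  # prints 1
```

The point of the second example is the filter, not the arithmetic: adding the "since last release" dimension is what turns a plain count into something comparative.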
I can tell you, at least from the experience we've had, that you can have all the beautiful metrics you want, but if they're not aligned to some larger business goal or outcome, they're useless. Meaning: you have to start by identifying the specific business goals or outcomes you're trying to reach, and then design your metrics, because then they tell you whether you are progressing towards that goal or falling short. That has to be your starting point when you start designing metrics. Having said that, here is some guidance we'd like to share on the key dimensions across which you might want to design your metrics, and how those dimensions complement each other; we'll go through each dimension with some sample metrics. Thanks, Raj. So, as Raj stated, I'm sure most folks have seen this before: moving essentially left to right from a pipeline perspective. We've organized it this way because this is DevOps day here at Agile India. We've tried to provide a topology covering some key drivers and metrics. We'll break this topology down into four success factors and dive into specific metrics that you can take home, reuse, and that hopefully stimulate some further thought. First is business success, one of our drivers. What does success mean? If we look at the image here below, in a lot of new companies and startups there's a lot of impetus, a lot of action, just to do something, on the theory that something good will happen afterwards: I'm going to build some awesome thing, and something will happen. Measurement is typically an afterthought, done looking backward.
Again, to Raj's point, we want to think about what we're trying to measure from the business perspective. What objectives are we actually measuring against? It could be something like market share, or something like new business service enablement. These are key indicators of success from the business perspective, and they should be tied to some measurable objective. Too often, even mature companies, as we've seen on certain projects and initiatives, fail to tie any metric or measurement to what they're starting off with, and then in phase two there's a big question mark: what happened? What did we actually accomplish? How did we get there? So here is some data we'd like to share. The data we're going to show you are examples we've derived from a pretty highly regulated federal environment, to help illustrate some of these factors. This first one is a simple, straightforward example you're all familiar with: a burn-up chart. Our business motivation was to digitize a bunch of paper forms, as many as we could and at high quality, for the general public. And the question really was: we had 50 developers on one specific project, on a shared code base, so how do we really track the work being done? We started with a basic burn-up chart. But what we noticed was a detached interaction from our leadership, as well as the teams, with that burn-up chart. Yes, we have a number of stories, we have our sprints, things are happening, each sprint the line goes up. And what we realized was that this was not really helping anyone a whole lot.
All it was doing was presenting more data. So we took an example from Spotify, the music streaming people, and it was essentially this: every time we produced a burn-up chart, we also, in our program area, answered three quick questions against it, with live data all the time. The three questions were: first, when will all the planned stories be completed? Second, how many stories will be completed by a given date? And third, can we deliver all the planned stories by a given date? If you look at these three questions, the first is a fixed-scope question: when will all the planned work be completed? Based on the trend lines, the kind you've all seen on your burn-up charts, we could say, okay, it's going to be done in sprint nine. The second question is the fixed-date version: if I have to deliver by a given date, what will get done? Based on the same trend line, we could answer that we'd get about 120 stories done by the given date. And the last question is the fixed-scope and fixed-date question, which is usually what management is most interested in; and clearly, based on our trend line, the answer was no, this is not possible. Making little adaptations to simple charts that most of us are familiar with completely changed the dynamic of our conversations with management, because it was simple for them to understand. It's still a burn-up chart, but it also opens a really good conversation: okay, we can't get it all done, but what can we get done? Meaningful conversations, just by making these small changes. So again, just to wrap up on business success.
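Assuming a simple linear trend line, the three burn-up questions above can be sketched in a few lines of Python. The sprint history, total scope, and horizon here are invented for illustration, not the program's real numbers.

```python
import math

# Illustrative burn-up data: stories closed in sprints 1-4.
completed_per_sprint = [12, 15, 14, 13]
done = sum(completed_per_sprint)          # 54 stories done so far
scope = 150                               # total planned stories
v = done / len(completed_per_sprint)      # average velocity: 13.5/sprint

# Q1 (fixed scope): when will all planned stories be completed?
sprints_left = math.ceil((scope - done) / v)
print(f"all {scope} stories done in about {sprints_left} more sprints")

# Q2 (fixed date): how many stories will be done N sprints from now?
n_more_sprints = 5
projected = done + round(v * n_more_sprints)
print(f"about {projected} stories by that date")

# Q3 (fixed scope AND fixed date): can we deliver everything by then?
print("feasible" if done + v * n_more_sprints >= scope else "not feasible")
```

The real program presumably read these answers off the chart's trend line rather than computing them; the sketch just shows that all three questions fall out of the same velocity estimate.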
I mean, we're talking about business outcomes and identifying what the key indicators of success are. They can vary, depending on the maturity of your company and the maturity of your product, and those key indicators can drive which way you're going to move. If you're a new company, you're probably going to focus more on stability and getting things rolling; if you're more mature, you may focus more on efficiency of your resources and being more innovative. Having these outcomes and key indicators aligned properly can help drive the future state. The next driver in our topology is customer/user success. Everybody, I'm sure, has dealt with customers at some level, has had some interaction with service desk responses. From a DevOps perspective, some of these inputs could be data from our customers, for example A/B test results, actually putting something out for specific testing, or canary releases, receiving feedback from our customers. Customer ticket volume is also a very good measure, one we use pervasively, not just from the development perspective but from the operations perspective. We have to rely on a group or team to receive these tickets and requests, and ultimately, from a development perspective, we're the folks who end up having to fix those issues. This is something we want to track and maintain, ensuring we have the capability to keep driving those volumes down, or driving response times down. The data we're showing here is again from a pretty highly regulated federal environment.
As you can see, we were tracking pending tickets; this is about improving responses to our issues. The challenge was that we were getting a lot of tickets on this project, and they weren't being resolved in any sort of quick manner. So we went to the actual development teams, instead of proxying this off to some tier-one support crew, and said: devs, moving forward, you own this work as it moves to production. Not only that, you're going to put a pager on your belt, and you're going to be tier one in some fashion. So the dynamics changed: these folks had to organize themselves around a queue, setting up schedules so they could respond on the front line to any issues. And as you can see from both of these products, as we progressed with the you-build-it-you-own-it mentality, our pending tickets dropped dramatically over time, to the point of near-real-time resolution of those tickets. Question from the audience: doesn't that take away capacity to build new features, if the team can no longer work in isolation without worrying about defects manifesting downstream? Sure, and there was of course a lot of pushback from teams that were already stretched. But if the tickets are telling us that a couple of aspects of the product need improving, perhaps we've got to do something about it. By the same token, we can't just say: build all the new features, and, oh, by the way, you also own support for the features you're putting into production. Teams have the capacity they have; you have to make sure they're balancing this kind of work along with the feature work they're doing. And that's self-management.
And again, using some sort of data to illuminate what the team is actually doing helps: if you want to improve customer satisfaction, perhaps you've got to write that work down and decide whether to take it on or not. From efficiency, from staffing, from hiring the right talent, these are all indicators of success, so there are a lot of different metrics you can move forward with. I'd like to take the opportunity to mention something I read recently from Google, in their discussion of the customer reliability engineer, something that spun out of their Site Reliability Engineering book. It's about customer anxiety, and the goal of trying to reduce that customer anxiety to zero. That's something we should all strive for. And then reliability in general: as we heard in the talk this morning, as we migrate into the fine world of multiple containers and multiple objects, as well as the service providers we now rely on, customer anxiety and reliability become very important for customer success. Just thinking about how we can get some level of metric from our customers that feeds back to us becomes very important; for a service provider like Google Cloud or AWS, these things are critical. Now, I'm sure lots of folks have dealt with tire fires and trash-bin fires in the world of operations. Anybody here done operations in the past? Awesome, a small handful of folks. So, 3 a.m. calls, always a fun time. This is our third driver, and as you've seen, I've actually labeled it deploy, release, and operate, because these are all areas that operations is now responsible for as we move forward. In a historical, or maybe classical, context, a lot of operations success metrics were derived from network, platform, and latency, things of that nature.
As we move forward, relying on more cloud providers, CSPs, and more objects to look at, the metrics we're trying to capture are starting to change. They've changed, and they've also grown. From a classic perspective, say you have two nodes, two hosts: the typical number of metrics might be around 150. Now say on those same two nodes you run 100 containers: the number of metrics would be over 10,000. So you can see that as we move forward, the number of metrics we're receiving is growing dramatically, on the same hardware. Again, this is data we've pulled. The driver behind this data was that we were trying to keep our main line deployable; that was the goal. So what we did was start to track the simple success or failure of jobs as pull requests came in and kicked off our build process. As you can see, there were a lot of challenges up front: as the number of jobs increased over time, our failure rate came up, and we ended up having to look at other data to figure out why, with the number of jobs increasing, our deployment success rate was suffering and we were having a lot of failures. What we ended up doing was diving into the various test suites, and we found that a lot of our regression tests were bloated; they were failing; there were challenges. The manual testing we essentially had to rip out. So diving into other data and other information forced us not only to readjust how we were moving forward with our pipeline, with this goal in mind, but also to refactor our regression tests. The key ratio: number of failed deployments to total deployments.
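The failed-to-total deployment ratio described here can be sketched roughly like this in Python, with made-up job records standing in for the real build data.

```python
from collections import defaultdict

# Invented build-job records: each has a period and a pass/fail flag.
jobs = [
    {"week": 1, "ok": True}, {"week": 1, "ok": True}, {"week": 1, "ok": False},
    {"week": 2, "ok": True}, {"week": 2, "ok": False}, {"week": 2, "ok": False},
    {"week": 2, "ok": False},
]

totals = defaultdict(lambda: [0, 0])      # week -> [failed, total]
for j in jobs:
    totals[j["week"]][1] += 1
    if not j["ok"]:
        totals[j["week"]][0] += 1

# The metric: failed/total per period, comparable week over week.
for week, (failed, total) in sorted(totals.items()):
    print(f"week {week}: failure rate {failed / total:.0%} ({failed}/{total})")
```

The ratio, tracked per period, is what makes this comparative; a rising failure rate while job volume grows was the signal that sent the team digging into the test suites.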
So this really helped us: the data points weren't directly telling us what the problem was, but they helped us investigate, close in, and isolate it. So again, back to our metrics topology. From an operate perspective, covering deploy, release, and operate, we've included some metrics here that we've found very useful. The purpose of this slide is really to illustrate that, from a DevOps perspective, the ownership of looking at and investigating these metrics has grown beyond latency, availability, and my-ping-says-it's-up-and-running. We now need to take a step back and look at a broader scope: what is happening in our releases? What's happening in our deploys? What level of predictability do we actually have? What is our deployment frequency? And then we match that up with our development success. This is, in a sense, the portmanteau from a metrics perspective: where development and operations come together. So when we talk about development success, develop, QA, and deploy are the three main themes from a metrics and topology perspective. When we think about DevOps, we've heard "shippable," and typically that just infers speed: fast, quick. We tend to leave out the quality perspective. Development success, across develop, QA, and deploy, should include, just as importantly, good-to-high quality. Now, a lot of times, as we've seen and as certain contracting environments illustrate, there is reward for speed and zero reward for quality. This happens all too often.
So essentially you need some of these metrics in place, highlighted in a meaningful manner, to provide feedback to the managers of those contracts, to help adjust what those folks reward. As we indicated on a previous slide, this is not a small project: you have hundreds of developers committing to what we call the main line. It is a challenging environment. We put this visual chart up for everybody, in a common area, for leadership and teams, and it pretty much shows how the teams are utilizing their capacity. As you can see, it's not all about cranking out new features and cranking out code. There are technical debt concerns that need to be addressed; there's a question on that, and we'll get back to it. There's refactoring and improvement versus new code. And in our case, because we moved towards you-build-it-you-own-it, team-line support is part of it too: the production support for the things the team has to deal with. And you can see we were dealing with lots of production problems. Having these metrics in place at least provides the visibility, to both teams and leaders, so leadership doesn't keep pushing for new features, or whatever else your context is, without seeing how the capacity is being allocated. The most powerful part was that management and the teams understood that there's only whatever capacity you have; you balance against it, and that makes things easier to deal with. And fortunately, we're in an environment with strong leadership pushing for DevOps, which gives us the support to experiment with the kinds of things we've been doing and talking about.
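A capacity-allocation view like the one on that chart can be sketched as a simple percentage split per sprint. The three categories match the talk; the point values and the bar rendering are invented for illustration.

```python
# Invented sprint data: points spent per category of work.
sprint_work = {
    "new features": 60,
    "technical debt": 25,
    "production support": 15,
}
capacity = sum(sprint_work.values())

# Show each category's share of total capacity as a crude bar chart.
for kind, points in sprint_work.items():
    share = points / capacity
    print(f"{kind:>20}: {share:5.0%}  {'#' * int(share * 40)}")
```

Plotted sprint over sprint, shares like these make visible exactly the conversation the talk describes: every point of new-feature pressure comes out of debt paydown or production support.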
So again, just to sum up from a development perspective: as part of that portmanteau of being responsible, ideally, all the way to production, we want to emphasize these key metrics and drivers across development, QA, and deploy. From a traditional CI/CD perspective, these metrics provide a lot of data to the dev team, as has been illustrated, and also give them the ability to build more of an operations mindset as they move forward: they're responsible for the deploy, and they get feedback immediately, so they can do something about it. On Wednesday we attended Evan's session on his no-projects experience, and his goal was really focusing on business outcomes. So Raj and I, just for fun, said, hey, why don't we take a pass at this ourselves and see what it looks like? I'll give Evan all the credit here, because this was done during his talk and we've just modified it a little. The goal is to show what I was talking about earlier: a metric ideally needs to be tied to some sort of business objective. Our outcome title here is that we're trying to provide a common platform for CI/CD, a platform-as-a-service or container-as-a-service. Our measure is team adoption rate. From an owner's perspective, we're taking this as the provider, the landlord of this service, and we wanted to ensure we're tracking adoption rate. Our current baseline is about one team a month, and because we weren't setting any targets, we omitted the target piece. So how do we actually measure team adoption rate? What does that look like? Raj and I noodled around and figured the events are important to folks, the form factor is probably something we want to keep track of, and the frequency.
Below is a bit of mock data we put together, and as you'll see, what makes this two-dimensional, what makes it a metric, is the date; we want to keep track of the date. The event could be something as simple as a log-on, or actually pushing an image stream into this PaaS or CaaS system; the form could be their IDE, or maybe they're hitting the GUI to do it; and then the actual count. So this is just a representation of how, as you move forward, you can take a business objective or outcome you're trying to achieve and run a simple exercise to draw out the metrics that will get you there. Question from the audience: were you part of that session? Yeah, and we pulled the target out because, at this stage of what we were trying to accomplish, we didn't want a target. A target can be very, very important, I agree, especially if it's tied to some sort of financial aspect; but for this, we just wanted to measure our ongoing adoption. So, tying it all together, from business success through to customer success, the two key pieces that tie it together are development success and operations success. Again: balancing speed, risk, quality, and cost; setting our key indicators of success; reducing the time to obtain a response for our customer. We've aligned all of this on a takeaway slide that hopefully you can leverage: a bunch of different metrics that help promote success. There are lots of metrics out there, but make sure the ones you choose align to the different drivers and hold them in tension. Why do you want more people to adopt your platform? It's about deliberate decisions, and making sure you take the time to make them.
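The adoption mock data described above can be aggregated into a comparative metric along the date dimension. Everything here, dates, event names, form factors, and counts, is illustrative, not the real data from the slide.

```python
from collections import Counter

# Invented adoption events: date, event type, form factor, count.
events = [
    {"date": "2017-03-01", "event": "login",        "form": "GUI", "count": 4},
    {"date": "2017-03-01", "event": "image-stream", "form": "IDE", "count": 2},
    {"date": "2017-03-08", "event": "login",        "form": "GUI", "count": 9},
    {"date": "2017-03-08", "event": "image-stream", "form": "IDE", "count": 7},
]

# Roll events up per date; the date axis is what makes this a metric.
per_date = Counter()
for e in events:
    per_date[e["date"]] += e["count"]

dates = sorted(per_date)
for prev, cur in zip(dates, dates[1:]):
    change = per_date[cur] - per_date[prev]
    print(f"{cur}: {per_date[cur]} events ({change:+d} vs {prev})")
```

Without the date column this is just a pile of counts (measures); rolled up and compared period over period, it becomes the adoption-rate metric the speakers describe.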
So, as we talked about earlier, as we move forward with new technology, with the ubiquitous movement to more virtualization, more cloud, more containerization, the number of objects we need to track, monitor, and obtain metrics from is increasing, and it's very complex. And if we couple that with all of our other success areas, that's a lot of data. This is a framework; the source of this image is The Art of Monitoring by James Turnbull, and we've hacked it a little to add the customer success areas showing where this data comes from. Essentially, we're looking at data sources; then a router, a transport mechanism, aggregating all of these events, which can be quite a large volume; and then the destinations. Is it just storage, maybe security data or log data that needs to be stored away and maybe looked at at a later date? Is it something we need real-time or near-real-time action against? Is it alerting data, thresholds that need to go off? Whatever the case may be. This framework is well used: folks such as Etsy and Spotify, even Google, have their own manifestations of it. And as for what that framework ends up looking like from a display and alerting perspective, we're going to show you one from Capital One Labs called Hygieia. It's a representation of multiple data sources providing metrics into a single dashboard pane of view. This is potentially your CI/CD dashboard, the health of what's actually happening in your pipeline. At the top we're aggregating data from JIRA, which gives us our sprint information; we're getting our Git or GitHub repo information; Jenkins information.
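The sources-to-router-to-destinations shape of Turnbull's framework can be sketched minimally like this. The event names, the threshold, and the routing rule are all invented for illustration; a real router would be a streaming system, not two lists.

```python
# Destinations: cold storage for later analysis, and near-real-time alerts.
stored, alerts = [], []

def archive(event):
    """Storage destination: keep everything for later inspection."""
    stored.append(event)

def alert(event):
    """Alerting destination: act on it now."""
    alerts.append(f"ALERT: {event['name']}={event['value']}")

def route(event):
    """The router: every event is stored; threshold breaches also alert."""
    archive(event)
    if event["name"] == "error_rate" and event["value"] > 0.05:
        alert(event)

# Two events arriving from different data sources.
for e in [{"name": "latency_ms", "value": 42},
          {"name": "error_rate", "value": 0.12}]:
    route(e)

print(len(stored), len(alerts))
```

The design point is the separation: sources don't know about destinations, so you can add a new dashboard or alert rule at the router without touching the emitters.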
The dashboard also pulls in any static code analysis that's going on, and potentially performance data and deployment data. So we have a good snapshot that's easy to consume, coming from multiple disparate sources. This again is from Capital One Labs; you can actually download it, it's an open source tool. And if you already have these tool sets in your pipeline, it's very simple to plug them in. You can also create your own widgets and your own data sources and integrate them into the tool, whatever makes sense. So, with a few minutes left, a few takeaways. As I mentioned, there are lots of metrics out there. A metric is useful if it's used in the right context and if it engenders some sort of behavior change. As the old quote goes: measure me in an illogical way, and expect illogical behavior. If all you talk about is velocity, velocity, velocity, give me more story points, here's what's going to happen: I'll just make my estimate 20 points whether that's right or wrong, and I'll get high velocity. If high coverage is all you care about, here's what's going to happen: I can write lots of worthless tests and increase coverage. So that's important. But don't be intimidated: there are lots and lots of metrics out there; think about what makes sense, and start with a small set of metrics that's targeted. That's what we did, and you can continue to add new metrics as well. And never target individuals. It's always a larger business outcome that you're after; the metric just shines a light on it. Make sure the measures are across different dimensions. We've presented four different dimensions, all the way from business success onward. A metric is never about one thing; it's always about balancing multiple drivers.
Make sure you design your metrics in that fashion. And I would strongly suggest you use comparative metrics. Why? Because when you compare, you're capturing the tension across different drivers, and that provides a deeper insight into the real health of the project, which is important. And finally, a metric that nobody interacts with is sort of useless. You want metrics to be simple, you want them displayed, and you want leadership and teams to interact with them. Without that, it's pointless.

Finally, a couple of things if you're just starting out. Start with the business outcomes that you're after. Take one or two metrics, design those metrics, let the data provide the insights, and the insights themselves will generate the action. Metrics are like streetlights: the point of a metric is not to pinpoint the problem, it's just to shed some light in the general area of the disturbance, and then you use those insights to drive action. That's it, thanks guys.

So the question is how we tracked technical debt. This is what we did. First and foremost, we had to make the technical debt visible. The developers were complaining about it, but nothing was really being done about it. So first we just listed all the technical debt items that the team had, and once we had that, we estimated the level of effort to address each one. We then balanced the value proposition against the effort it would take to address that particular issue, and once the business prioritization was made, we made sure that the technical debt items the architects and technical teams cared about were fed into the team's backlog, so a certain capacity each sprint went toward paying them down. Because the product owner often doesn't know that debt exists, it's our job as a technical team to surface it.
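The value-versus-effort balancing described above can be sketched as a simple ranking. This is an illustrative scheme with made-up item names and scores, not the exact formula the team used:

```python
# Hedged sketch: rank technical-debt items by value relative to effort,
# so the highest payoff-per-effort items surface first. Names and
# scores below are invented for illustration.

def rank_debt(items):
    """Sort debt items by value/effort ratio, best payoff first."""
    return sorted(items, key=lambda i: i["value"] / i["effort"], reverse=True)

backlog = [
    {"name": "flaky integration tests", "value": 8, "effort": 3},
    {"name": "legacy auth module rewrite", "value": 9, "effort": 13},
    {"name": "slow CI build", "value": 5, "effort": 2},
]

for item in rank_debt(backlog):
    print(f'{item["name"]}: ratio={item["value"] / item["effort"]:.2f}')
```

A ranking like this gives the product owner and architects a shared, visible basis for deciding which debt items earn sprint capacity.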
So that's what we're doing: make it visible, estimate the work, and get it into the planning cycle so it's treated as real work alongside the product work.

Yeah, sure. This is something that came out of Google's Site Reliability Engineering work. In October of last year they started talking about a customer reliability engineer. The idea is they wanted more success and partnership with their customers on Google Cloud. So they brought up this concept of a customer reliability engineer, and one of the things they wanted to start measuring was anxiety: what is that customer's anxiety? Their goal was to drive that score down essentially to zero, and in doing so deepen the partnership with the customer and increase their success. To the point where, I think, they're experimenting, where if the customer is willing, when the customer is ready to do a release they'll actually go through a production readiness review together, and it would then be up to Google to say yes or no.

That's interesting. So the question really was: a lot of this data comes from the development teams, and the concern is you're asking them to do too much. In terms of the measures that we took, the data really all lives in JIRA already, so we weren't asking teams to do additional work. We have ways of collecting that data from JIRA automatically, so the teams really aren't being asked to do anything extra. If anything, it protects the team: the data makes their work visible rather than adding work on top of it.
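The CRE practice mentioned above is built on shared SLOs and error budgets from Google's SRE work. As a rough, illustrative sketch of that arithmetic (the SLO and downtime figures are hypothetical):

```python
# Illustrative error-budget arithmetic from SRE practice: a 99.9%
# availability SLO over a 30-day window leaves a small budget of
# allowable downtime; releases pause when the budget is spent.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed unavailability for a given SLO and window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

budget = error_budget_minutes(0.999)   # about 43.2 minutes per 30 days
consumed = 30.0                        # hypothetical downtime so far
print(f"budget: {budget:.1f} min, remaining: {budget - consumed:.1f} min")
print("release allowed" if consumed < budget else "release frozen")
```

Framing the joint release decision this way gives both sides an objective number instead of an argument, which is the partnership effect the speakers describe.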
So let me give you an example of that with the test coverage metric, since we're ready for it. Last one. If you start by setting a high watermark, which is what we did, that's a big mistake. We said: from now on, we want high test coverage, 85% or whatever the number is. And in hindsight, that was a bad way for us to go about it, because the framing is failure: you're failing if you don't beat 85. The adaptation we made instead was, rather than setting a high watermark, we started where we are. Say right now we're at 40%: what we said is we will never let our coverage drop below that, and over time we want to see it constantly improving. It's really the same data, but we changed the context of how we look at it. Instead of a high watermark, we never let coverage fall below where we are today, and over time that 40 becomes, maybe by next sprint, 45, so we constantly push the needle further and further up, without setting goals that invite gaming and drive the wrong kind of behavior. Does that help at all? Well, thank you so much for your time.
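The low-water-mark approach described here is often implemented in CI as a "coverage ratchet." Here is a minimal sketch of that idea; the baseline-tracking mechanism is an assumption for illustration, not the speakers' actual setup:

```python
# Minimal coverage-ratchet sketch: fail the build only when coverage
# drops below the best level already achieved, and raise the baseline
# whenever the team improves. Baseline storage is left abstract here.

def check_ratchet(current_pct: float, baseline_pct: float):
    """Return (passed, new_baseline): never let coverage regress,
    and ratchet the baseline upward when coverage improves."""
    if current_pct < baseline_pct:
        return False, baseline_pct        # regression: fail the build
    return True, max(baseline_pct, current_pct)

# Starting at a 40% baseline, an improvement to 45% raises the bar...
ok, baseline = check_ratchet(45.0, 40.0)
print(ok, baseline)    # True 45.0
# ...and a later drop back to 42% now fails against the new baseline.
ok, baseline = check_ratchet(42.0, baseline)
print(ok, baseline)    # False 45.0
```

This captures the talk's point exactly: same coverage data, but the goal is "never go backward from where we are" rather than an arbitrary high watermark that teams are tempted to game.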