I'm a developer at ThoughtWorks, and today I'm going to talk about some of what I learned while working on an analytics offering for GoCD: metrics that matter and can help you improve your continuous delivery process, as well as some metrics that don't as much. The way I'm delivering this is inspired by this graph. It represents two things I spend a lot of time thinking about. One is software, being at my keyboard writing code, and the other is trying to get out into the mountains, and especially here in Denver, I figured I would bring some of that into this talk.

Peter Drucker has this famous quote: if you can't measure it, you can't manage it. We see that with business, with software, and, right now I'm wearing a Fitbit, even with the quantified-self movement as a way of understanding progress and improvement. Similarly, in software we sometimes see KPIs that kind of make us want to throw ourselves off a cliff. They don't feel useful. They don't feel relevant. All of the metrics I'm talking about are hopefully ones where you can see the value for yourself, ones that represent things you might actually want to track.

As I said, to make this a little more engaging, let's start with something about mountains. The forever pace is this concept from mountaineering, or from thru-hikes, where you need to break things down into a sustainable pace. Software, like mountains, is often harder than it first seems, so we want a way to make incremental progress. One thing you might track is the number of deploy-ready builds you have: how often are you committing code that can be released to users to provide value?
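As a minimal sketch of counting deploy-ready builds, assume you can pull a list of build records from your CI server; the record shape here is illustrative, not a real GoCD API:

```python
from datetime import date

# Hypothetical build records: (finished_on, passed_all_stages).
# In practice these would come from your CI server's history.
builds = [
    (date(2018, 7, 2), True),
    (date(2018, 7, 2), False),
    (date(2018, 7, 3), True),
    (date(2018, 7, 5), True),
]

# A build is "deploy-ready" if every stage passed, meaning it could be
# released to users as-is to provide value.
deploy_ready = [finished for finished, passed in builds if passed]
print(len(deploy_ready))  # 3
```

Tracked over time, a rising count suggests work is being chunked into routinely committed, releasable pieces.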
Tracking this metric facilitates collaboration across developers, operations, and product people, because you need to chunk your work into pieces that are routinely committed and tested but that also provide user value, so that if the scope of the work changes, you can still deliver something to your users that represents progress and improvement.

Another thing I think a lot about with hiking is how much goes into the lead-up to the fun part where you're actually on the mountain. There are all these logistics: getting in your car, getting packed, all the things you have to worry about if you want to be out there safely, but that aren't actually the parts you enjoy. In software, this is like the time it takes to go from a commit to getting feedback. A lot goes into cycle time. There's how you structure your tests so you can get feedback early and learn quickly whether things are green. There's also whether you're parallelizing builds so you get feedback as early as you can. I know personally this is something that's caused a great deal of frustration when I've been places where it's four or eight hours before you know if the code you've written actually works. There's a lot of room for optimization here, to let you focus on the part you like, writing software, as opposed to sitting around twiddling your thumbs.

Gear is another really important part. You will inevitably have to replace your gear, but, like, I had a friend who recently lost his sleeping bag on a multi-day backpacking trip because he tied it to the outside of his pack. Ideally, if you take good care of your gear, you don't lose it. Similarly, you are going to have some builds that go red. Certain things will fail.
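Cycle time, in the commit-to-feedback sense used here, can be sketched as the average gap between a commit and the moment its build reports a result. The timestamp pairs below are made up for illustration; real values would come from your CI server:

```python
from datetime import datetime

# Hypothetical (commit_time, feedback_time) pairs for individual commits.
runs = [
    (datetime(2018, 7, 2, 9, 0),  datetime(2018, 7, 2, 9, 25)),
    (datetime(2018, 7, 2, 14, 0), datetime(2018, 7, 2, 14, 40)),
]

# Minutes from each commit until the developer learned whether it was green.
feedback_minutes = [(done - commit).total_seconds() / 60
                    for commit, done in runs]
mean_feedback = sum(feedback_minutes) / len(feedback_minutes)
print(mean_feedback)  # 32.5
```

Restructuring tests to fail fast or parallelizing build stages shows up directly as this number shrinking.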
If your CD system is actually running tests and catching problems, you will have some failed builds. But can you extend the period between them, so that most of the time you're green, working routinely, writing code? This relies a lot on strong local builds: builds on your developer machine that are representative of the actual state of your CI server, your CD pipeline, or production. That will let you prolong the time between failures and mostly focus on writing code.

Hiking is hard. Software is also hard. You should be able to get out of the parking lot, ideally, before you get tired, and you should be able to get back out there pretty quickly. So when you do have a failure, when your legs do get tired, how long does it take you to get back into a green state? This one's a pretty direct analogy: when one of your builds goes red, how long does it take you to correct it? Are you piling up compounding failures, or can you identify the root cause of your issues early on and make progress and improvements? Your mean time to recover might be affected by the amount of monitoring you have, or by whether you follow good practices like not committing when things are red and prioritizing fixing the build, so you can address the problem quickly and move on to developing new functionality.

But not every metric matters equally. Sometimes maybe the weight of your pack is what's holding you back, but other times maybe your base fitness isn't good enough, or it's any of those other metrics I talked about. So it's important to reflect on what's actually relevant in your context. In software we see the same thing. A lot of times you'll get people tracking things like story points, and if you don't buy in, there's a really easy way to game it: you just make bigger estimates. I think everyone's familiar with that concept.
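Mean time to recover can be sketched from a chronological build history as the average span between a build going red and the next green build. Again, the records are hypothetical stand-ins for what a CI server would report:

```python
from datetime import datetime

# Hypothetical build results in chronological order: (finished_at, passed).
results = [
    (datetime(2018, 7, 2, 9, 0),  True),
    (datetime(2018, 7, 2, 10, 0), False),   # build goes red
    (datetime(2018, 7, 2, 10, 45), False),  # still red
    (datetime(2018, 7, 2, 11, 30), True),   # back to green
    (datetime(2018, 7, 2, 15, 0), True),
]

# Collect the duration of each red streak: first failing build until the
# next passing one.
recoveries = []
red_since = None
for finished, passed in results:
    if not passed and red_since is None:
        red_since = finished
    elif passed and red_since is not None:
        recoveries.append((finished - red_since).total_seconds() / 3600)
        red_since = None

mttr_hours = sum(recoveries) / len(recoveries)
print(mttr_hours)  # 1.5
```

The same loop's `red_since` gaps, inverted, give you the green streaks, i.e. the time between failures mentioned above.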
So as you go to use metrics: these are four that we think are valuable, and that we heard from a lot of people are valuable, but reflect on the ones that actually impact your organization. Maybe try one of these out, or maybe try something else that reflects the pain you're feeling, because if you don't believe in the impact of the metric you're tracking, you're just going to try to game it. If you're interested in either metrics or mountains, come find me; I'll be at the GoCD booth. I'd love to hear what's worked well for you, and maybe some metrics that haven't as much. Thank you all for your time.