Thanks for coming to the talk. Today I'm here to talk about the broken state of process improvement in software development. I use a strong word, broken, for a reason: I want to use this talk to change how you think about process in software development. At a conference like this we talk about process quite a bit, so I want to try to shift your thinking.

First, a little about me, because why should I be the one talking to you about this? I'm the managing director of ThoughtWorks Studios, the product division of ThoughtWorks, so I work on building things. I've been a product manager by trade for the last several years, but I've also managed projects, done QA and BA work, et cetera, and I've been coding since I was six. In my job I see a lot of my customers' processes. As a ThoughtWorker, with a consulting organization that deals with many, many clients all over the world, I see the processes people run and the things people talk about.

I'm also an unfortunate Six Sigma Black Belt. I didn't choose to get certified in Six Sigma, but I did anyway, and I learned a lot from that; some of this talk comes out of it. Six Sigma is interesting because it lost to Lean. Just in this conference you hear people talking a lot about Lean and not about Six Sigma at all, and I think that's largely because Six Sigma had a very rote, process-driven, methodology-driven approach to improvement that didn't actually work very well. But Six Sigma has a great lens and some very good tools in the box, so some of this talk is informed by that.

And lastly, I'm really fascinated by what I call process wonkery. You see it a lot in software development shops: a lot of people are genuinely geeked out by process. That starts to create weird patterns, because when people are into creating process, building out process, designing process, maybe sitting in a tools and methodologies group, they start to do things because they're into the process itself. I'm fascinated by that.

I had a conversation many years back, which was actually the genesis of this talk, with an analyst at Gartner. She said: I'm an old-school project manager. We used to manage projects by looking people in the eye. I knew when so-and-so was lying to me, and that's how we managed projects. You guys today have all these numbers and charts and things like that, but you don't know what you're doing. So prove to me that you young guys know what you're doing. And I thought, hmm, that's a very interesting challenge.

So here's the goal of this talk, before I go into the broken bits. If you take nothing else home with you, take this: the same approach you would use for delivery, incremental and thoughtful building of things, you should use for your processes. So far in this conference you've seen the big process rollouts people do. How does a big-bang rollout of agile square with the fact that we deliver incrementally, that we try things and learn? If you don't remember anything else from this talk, remember that bit.

So again, I assert that how we think about process improvement in software development is fundamentally broken. Why is it broken? It's because people like me get up at conferences like this and tell you what you should do. We say, oh, I've done this before.
I've seen this process, this methodology, and because it worked for me, you should do it. I think that's a little odd. We've become obsessed with the latest framework, the latest methodology, the latest set of practices. Someone will say, hey, work in small batches, worry about your queues. Someone else will say do CI, do CD, do TDD, all these things. And these practices, frameworks, and methodologies are all great. They're good guideposts, good things for you to learn about, for me to learn about, for people to talk about collectively. But what's the point of all these things in the first place? What's the point of process? So I'm going to take a step back, talk about process at a high level and why we have it, and then get into how that affects your software delivery.

Why do we have processes in the first place? Think about it historically: the first and second industrial revolutions really changed how people worked. If I was a blacksmith or a farmer or some other tradesperson, maybe I helped my kids learn the trade. There was a lot of nuance to how to make a sword, for example: how I heated the kiln, what I needed to do to get the metal to the right temperature. I passed that on in an apprenticeship, and the apprentice learned those skills and added to them. But when I made a sword, I made my sword, and that person made theirs. The big change in the industrial revolution was that people collaborated. Adam Smith, the famous economist, one of the forefathers of modern economics, observed of a pin factory that one person did this job, another person did that job, another did the next, and that this was a real change. And when people collaborate, there's a lot more room for problems to happen, because now we have to work together.

So here's a simple definition: a process is there to reduce variation in a business outcome. I have something I want to do, whether that's writing software, answering a phone call at a call center, or testing something, and people are collaborating around it. For me to do it consistently, I need something to put it under control, as you'd say in the Six Sigma way of thinking. Your process shouldn't get in the way of your outcome, but a lot of the time it does, because we've disconnected the outcome from the process. By definition, then, a bad process is one where you don't achieve your outcome, or where you don't reduce the variation in that outcome. Those two things matter for later; they're groundwork for how processes affect your delivery.

That begs the question: what's a good outcome? Again, this is where people like me get up at conferences and tell you what your outcome should be. I submit to you that you know your outcome better than anyone like me, and that your organization, your team, should be the one working that out. I really like what Jim Highsmith has done here with what he calls the agile triangle. The old iron triangle of cost, scope, and time sits on one side as a constraint.
The main thing you're trying to do is deliver value to your end user, to your customer, and then you have quality as a piece as well. That leaves a lot of flexibility for what your outcomes are. You could be building a trading app, and a trading app may have performance as its highest value because that's the highest value to the customer, so all your team's activity may revolve around performance. You may be in a regulated environment, say health care or something similar, or you build car software; then maybe compliance is the most important thing to you, not speed. There are all kinds of things that shape what we're trying to do, and you have to decide what your outcomes are. Because if you design your processes without regard to your outcomes, you have a problem.

There's a book Dean mentioned yesterday called The Principles of Product Development Flow, by Don Reinertsen; I hope I pronounced his name correctly. First of all, if you haven't read it, it's great. Read it. There's a chapter on variation in particular that I think is very good. I won't go into all of it, because I've got enough theoretical bits in this talk already, but he talks about payoff functions and the likelihood that variation creates upside for you, and about payoff asymmetries, where the expected value of a payoff is greater than the expected cost. So go read it, check it out; I think it's important.

But some of it is wrong, and Don does two things that I think are not so great. The first is that he, again, tries to tell you what your outcome is. The first part of the book argues that profitability has to drive product development, that it's the most important thing, and that the best way to measure it is cost of delay. So you hear people talking about cost of delay, cost of delay, cost of delay. It's good. Cost of delay is a good measure if profitability is your most important outcome. But if you look at a company like Apple, only the CFO owns the P&L; everyone else is concerned with building great products for their customers. That's their outcome, and they're the most profitable company in the world. So if you drive yourself by what's important to you and your customers, it's not necessarily just profitability.

The second thing he does, which is more germane to process improvement, is to say that reducing variation is kind of bad because we've become obsessed with it. That's not quite true; there's a nuance there. Reducing variation without regard to the outcome is bad. Reducing variation itself can be a great thing, because you want a consistent outcome; that's the reason you have a process in the first place. It's reducing variation without regard to the outcome that's bad.

OK, so now we have a general framework for what a process is and why you have one. Now, how do processes affect your outcomes? This is where it gets into the stuff that makes you go, uh-oh, I didn't realize that. Let's talk about throughput to make things easy. There are lots of different outcomes you might want to measure, but in software we tend to think about how much work we get done, so let's use throughput. First: the marginal benefit of any additional process declines over time. The benefit goes up at first, right? If I have no process, just chaos, and I create a process, I get some good output from it.
And as I do more, assuming the steps are discrete and small and additive, there's a decline in how much each addition helps me.

So, can I have five people come up? I had ringers, people who were supposed to come up, but I want to illustrate a game. Nobody wants to do it? I'll just say what the game is. Five folks? Yay, OK, awesome. I'm going to jump down here for a second; sorry, camera person. In the US, growing up, you play a game called Telephone in school, where you whisper in someone's ear and see how garbled it comes out at the end; it's meant to show that communication is hard to pass from one person to another. So we're going to do that, but with this principle of mine. Come here, and I'll try to cover my mic.

[The group plays Telephone.] So what did you end up with? Yes, that's pretty close. It's "the old lady washes lots of different things." Pretty close. You said washes? Yeah, washes, and it got to "watches." OK. So that's a good example: our outcome wasn't quite right. We collaborated, and at the end of the day we didn't get the thing done. How could we fix that? There's a simple way: we could have each person say the thing twice, and that will probably improve our accuracy. Let's try it again. Yes, see, right. Now, I could keep doing that. If accuracy is really important, you could say it three times, four times. But what does that do? It just doubled the time the work took. So if you think about the outcome, which in this case is getting the message right over time, it got better when I added one more repetition. But if I kept doing that, the benefit would fall off; there's some accuracy point you hit where it doesn't help anymore. Thank you, guys.

So when you add process steps, when you add complexity, at some point it stops helping you get any better. Yet you go into organizations and say, wow, this is the most complex process I've ever seen, and they say, well, that's what we need. And actually, at some point it negatively affects you. You know that: when you were doing waterfall, you probably had processes so arduous that you barely delivered anything. We see that all the time.

The next weird thing is that all variation gets reduced. This is the one that's mind-boggling, that people don't talk about. Usually people put processes in place to try to cut off the bad end, the low bits. But when you create a process, you squeeze the high bits as well. (It's not really a sine curve, how people actually vary, but it looks pretty and it's easier to talk about.) When you put a process in place, you reduce the chance that a team gets it right on the first try completely. That group of people up here might have gotten the message right the first time, and I made them say it twice just to get the accuracy up, then three times. What if there's an awesome set of people who get it right every time? I've just ruined their productivity by imposing a process.

Then there's what I like to call institutional mediocrity. That's where you create a process, but the outcome is so bad that you're consistently bad. You can see it here: you're below zero, but you're nice and tight. The process keeps the variation really tight, in sucky land. It's not good. You've made a process that makes you bad, but you've reduced the variation, and that's what Don is talking about in the book.
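Going back to the Telephone game for a second: here's a minimal sketch, with made-up numbers of my own rather than anything from the talk's slides, of the trade-off it illustrates. Each extra repetition is one more process step; it buys less and less accuracy while the cost grows linearly.

```python
# Toy model of the Telephone game: five hand-offs, and each repetition of the
# message has some chance of being misheard. Saying the message r times per
# hand-off multiplies the time spent by r, but only helps accuracy up to a point.
# (Illustrative numbers only; nothing here is from the talk.)

HANDOFFS = 5
P_GARBLE = 0.20  # chance that any single repetition is misheard

def accuracy(repetitions: int) -> float:
    """Probability the message survives every hand-off, assuming a hand-off
    only fails if all of its repetitions are misheard."""
    per_handoff_ok = 1 - P_GARBLE ** repetitions
    return per_handoff_ok ** HANDOFFS

for r in range(1, 6):
    cost = r * HANDOFFS  # total repetitions spoken, a stand-in for time spent
    print(f"say it {r}x: accuracy ~{accuracy(r):.1%}, cost {cost} units")
```

With these numbers, going from one repetition to two jumps accuracy from about 33% to about 82%, the third gets you to 96%, and after that each extra round buys a few points at most while the cost keeps climbing; that's the declining marginal benefit being described.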
So here's a real example, because people like to say, oh, I'm doing Kanban, so I'm efficient. But this has nothing to do with your methodology. I was talking with a real customer who came into our office asking ThoughtWorks for some wisdom on how to help them be more agile. They said, one of our problems is that our developers have very little discipline. They don't do their unit testing, and we want them to do their unit testing. So here's what we're going to do; we've got an idea, we want to run it past you and see what you think. Every time they check in, we're going to make them fill out a form as a pre-check-in hook, and then we're going to ask them to take their unit tests and attach them to the story card in the tracking system, and then we'll know that they did their unit tests.

And I said, well, OK, I understand you have a problem, but let's really think about that. How many developers do you have? Three hundred. How long is the program of work you're working on? All year. How many times do your developers check in a day? About three. And let's say it takes ten minutes to do that activity. Multiply that out: 300 developers, three check-ins a day, ten minutes each, over a year of working days, and that's about 35,000 hours of attaching unit tests to a card and filling out a form. Is that valuable? And they said, no, it's a terrible idea. Right, because you thought of a process step without respect to the outcome. You thought, I'm supposed to build software, but I have this problem, so I'm going to create some solution, and it doesn't matter whether it makes sense in the whole scheme of things. And people do this kind of thing all the time.

So there's another ugly bear, and it's called standardization. It's special, because when people talk about scaling agility, or scaling anything, standardization is where they leap to. I want to talk about it because, again, people don't realize the cost of standardization, so they standardize everything, because it feels like the right thing to do; that's what everybody talks about. Make it standard, make your scrum teams the same. That's scaled institutional mediocrity. Team one was doing a pretty good job at what they do. I have a bunch of teams that maybe aren't doing such a great job. Now I've made them all work the same way, and I've made the team that was doing well worse, and the teams that were doing poorly maybe a little better. That can work well if all your teams are doing poorly. But even then, at some later point in time, hopefully they're doing better because you made some changes, and you're still keeping all your teams in a zone that may be mediocre, one that cuts off the upside of having some high-end variation.

And we see this all the time. We're going agile, or we've been agile for a while, but now we're talking about scale, and we want to make all the teams the same, doing all the same things. We want to make people interchangeable. We want to have the sprints lined up. We want to have all these things the same. Why? Why do we want to do that? What if your teams have different outcomes? I see this all the time too. I see teams that deal more with operations and production-like work, where Kanban probably suits them better, being told to do Scrum because everybody else is doing Scrum. And then there's always the special team. If you have a special team, that's a smell.
A team that doesn't have to follow any of the rules is an acknowledgement that your rules probably don't work very well. If your process is such that you're granting a special exception, maybe your process doesn't work well at all. And what if you have a team that's supposed to get a new product out? They're supposed to move fast, there's a time-to-market issue, and your general processes are slow. What do you do then? Often you see people say, this team over here, get that product out the door, don't follow any of the rules. That means there's something wrong with your process.

What happens if your outcomes change? What if one day quality is the most important thing a group of people is supposed to be working on, but the next time it's speed, or performance, with the same set of people? If your teams' structure and the practices they follow are oriented toward one thing and then you tell them to do the other, but they're supposed to keep working the same way, they have no feedback loops to change what they're doing. When I first learned about agile, the guy who tutored me sat me down and said: look, I'm teaching you all these things. I'm teaching you about pairing, about TDD, about stories, and they're all reinforcing. I'm not teaching you these things to say this is the way. I'm teaching you these things so that you understand some bits and then use feedback loops to change what I've taught you. Four years from now, what I'm teaching you now will be irrelevant, but you'll learn things on projects and you'll change. Why do we restrict teams from doing that?

And then you see, and I've seen it in some of the experience reports here, that the org rollout is a big bang. We're going agile, we're now scaling it to teams. Do we touch it after that? We've got the processes, we've told people what to do, they're doing it, it's working pretty well. Do we go back and say, actually, these ten things aren't working so well, let's redo them?

Standardization also creates a responsibility disconnect, and I see this often as well. Rarely do people say that the process we just rolled out, when we went agile and did all this stuff, is the reason this team is underperforming. Usually it's: the team is underperforming, this team isn't working well. But wait, you made everybody work the same way. Maybe it's the process's fault. Maybe it's your fault as a manager.

Now, I'm about to say something on the next slide that might offend some folks in here. Who's in a PMO or a leadership role? They might be offended a little bit. OK, good. Reporting is not an outcome. Reporting is not an outcome. Reporting, for you and for management, is not an outcome. Good reporting is a side effect of good process. But if you're making people jump through hoops and do a bunch of stuff for your reports, or you're standardizing people for your reports, without respect to the outcome, you're doing it wrong. We're here to build software for some outcome, whether that's speed, profitability, whatever. If you say, hey teams, build me this report because it helps me manage you, well, hopefully the overhead of doing that is very small. But I see people on teams all the time spending hours doing things so that someone can have a report.
I had a customer call a few months back where someone who ran production, the real ops side of things, took over development. Her new policy was that all the teams across the entire organization were to have the same reports. It was completely irrelevant for the dev teams, but it made things easier for her. So you're making people work in a way that isn't productive, just to make it easy for you to see what's going on. That's not an outcome.

Scale isn't an excuse for process complexity. A lot of the time companies say, we're big, we're bad, we're ugly, we've got scale, so we're complicated, and therefore our processes need to be complicated. The issue is that you wouldn't build a product that way. If I told you to go build a product for your customers in a complex domain, you wouldn't make it as complex as possible. If you had doodads all over the screen and crazy workflows, your customers would say, huh? So if you have complexity in your organization, which you probably do, your processes should still hide that complexity away, particularly from the folks doing the work. You don't want to expose people to complexity that makes them slower. But we try to do exactly that a lot when we scale.

So why do we standardize? What's important about standardization? You've heard people talk about it this week: a common language across teams, fungibility so I can move people around, reporting. These are all good things. Management convenience is a good thing. Being able to make investment choices is a good thing. They're all important. But if those things sit over and above your outcomes, you're probably building a process that doesn't work very well.

OK, so I've bashed things; I've said all these things are wrong. What are some solutions? First, a couple of thought questions you can ask yourself, and then we'll get into the build, measure, learn type of loop.

How complex is your process? A good proxy for whether your processes are too complex is to ask a random person on the team, someone who has to live with these processes, to explain them. What do you do? What do we do? How does it work? You see it when someone has a complex portfolio or program management process: people say, oh, the work breakdown is this, and then it goes to this, and then this status happens. If someone can't explain it to you, they can't execute it very well. That's a smell, and it's a good thing to check: do we have complex processes that people don't understand?

Are your processes any good? We talked about what makes a good and a bad process, so: are you hitting your outcomes? If you're there to be responsive to the business, you can come up with a measure for that, like the lead time from when a business request is made to when we start working on it. That's responsiveness. If that's what you're there to do, you can check it and ask: are we really responsive? And what's the variation in our responsiveness? Are we really responsive sometimes and not responsive other times? Have our queues backed up, so that sometimes we respond in two days and start building, and sometimes it takes three weeks?
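Here's a minimal sketch of what checking that could look like. It assumes you can export a requested date and a started date for each work item, which is my assumption rather than any particular tool's feature, and the numbers are made up.

```python
# Minimal sketch: responsiveness as lead time from "business request made" to
# "work started", plus its spread. Assumes you can pull these two timestamps
# per work item out of your tracker (an assumption, not a specific tool's API).
from datetime import datetime
from statistics import mean, median, stdev

requests = [
    # (requested_at, started_at) pairs; illustrative data only
    ("2014-03-03", "2014-03-05"),
    ("2014-03-04", "2014-03-25"),
    ("2014-03-10", "2014-03-12"),
    ("2014-03-11", "2014-03-14"),
]

lead_times = [
    (datetime.fromisoformat(start) - datetime.fromisoformat(req)).days
    for req, start in requests
]

print(f"median lead time: {median(lead_times)} days")
print(f"mean lead time:   {mean(lead_times):.1f} days")
print(f"spread (stdev):   {stdev(lead_times):.1f} days")  # the variation question
```

With this toy data the median lead time is a couple of days while the spread is more than a week, which is exactly the sometimes-two-days, sometimes-three-weeks pattern the question is probing for.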
Do you have feedback loops across your organizational hierarchy? This is something I see very, very often as missing. You roll out a process, you're doing all these things, and the people on the ground hate the processes. They hate them. You can come into an organization, talk to people, and they'll tell you they hate the processes, but people at other levels of the organization don't know the processes are broken, and there's no formal structure connecting those levels. If you look at, say, the SAFe diagram, there are a lot of different levels in it, there are release trains running. Where is the actual mechanism for someone to say, hey, this isn't working over here, can we do something different? If those feedback loops aren't there, how is your approach to things agile? Do you have agility if things are broken, people can't say they're broken, and you can't fix them? Do your standards change often? If they don't, hmm. Is there a visible feedback mechanism you can point to and say, this is what people do when things are broken? If there isn't, that's again something to question and think about.

How much time do you spend administering your process? Do you do what I call working the spreadsheet? If you're a scrum master or a program manager and you're at your computer all day, massaging data, putting things in place, trying to manage the process, that's a smell. On the first big Scrum project I was on, we had these crazy, elaborate Excel spreadsheets that we used to track our work, the same across all the streams of work on a big program, and we'd literally spend hours a day making sure things weren't broken in the spreadsheet. Those were hours I didn't spend with the team and the product owner understanding what should be built. I was sitting there massaging data, and then before the sprint meetings where we said what we'd built, I'd spend hours putting together a PowerPoint that was the same massaging of the data I'd spent all week keeping from breaking.

So I think we have a good mechanism for thinking about this differently, and I'm calling it try, measure, and learn. On the left-hand side you see the build, measure, learn loop from Lean Startup. It's basically the scientific method: have a hypothesis, test it, measure the outcome, rinse and repeat, applied to a similar set of ideas around process. Again, my point to you today is that if you're trying to be incremental and agile in your delivery, you need to be incremental and agile in how you design, manage, and think about process.

So what are experiments in the Lean Startup sense, and how would they apply to processes? You want a clear hypothesis, a problem statement. Take the story I told you earlier about people having a problem with unit testing. If we have a problem with unit testing, we know it. So what's a solution? Well, maybe the solution is some sort of shaming function, a naughty light that comes on when people check in without unit tests, and maybe you have somebody write some code to detect that. So you say: OK, so-and-so, write some code so this can happen, let's put some build lights out in the team area, and when someone doesn't write unit tests we'll turn those lights on and see if it shames people into the behavior. Now, that solution versus what they had proposed: big difference, right? And that solution might not work.
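As a rough sketch of how cheap that experiment can be: assuming a repository layout where tests live under tests/ (my assumption, not the customer's actual setup), the "did this check-in include tests?" question is a few lines of scripting rather than a form and a card attachment.

```python
# Minimal sketch of the "naughty light" experiment. Assumes tests live under
# tests/ in the repository; the light itself is faked with a print statement.
import subprocess

def commit_touches_tests(rev: str = "HEAD") -> bool:
    """Return True if the given commit changed anything under tests/."""
    changed = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", rev],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return any(path.startswith("tests/") for path in changed)

if __name__ == "__main__":
    if not commit_touches_tests():
        # Stand-in for switching on the build light in the team area.
        print("This check-in has no test changes. Naughty light: ON.")
```

Hook something like that into whatever already runs on every check-in and you have the experiment; the interesting part is measuring whether behavior actually changes afterwards.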
But you can try it, you can test it, you can see: our benchmark for how many people were writing unit tests is here; we suppose that if we do this, we'll get a 50% improvement. Then, once you've done it, can you actually see a 50% improvement? Maybe you got a 40% improvement; is that enough?

The other thing is that if you have business outcomes, you need to constantly monitor them, because the result you're trying to get, better unit-testing discipline, may not be related to the fact that you want to be speedy. If you're implementing something that affects your business outcomes and you're not monitoring them, you can end up in trouble. Say cycle time matters to us, and now we're going to change our process and do a bunch of stuff over here to address a problem we have with requirements quality. If we fix the requirements-quality problem but it hurts our cycle time, that's a fail. So we need to constantly ask what things matter to us as business outcomes and monitor those as we change the processes themselves. And things that hurt those outcomes: stop doing them. A lot of the time there's a well-known problem in an organization, like, hey, the way we're structured sends us on death marches a lot. Well, there are things you're doing in your processes that are creating those death marches. Stop them.

And then, learn a bit about statistics. This is one of the things I find most problematic about the fact that Lean won out over Six Sigma. Again, I don't actually like Six Sigma, but there are some good things in it, and one thing we see a lot is people acting on data that's invalid. OK, so you made a decision based on that chart; that's probably not statistically significant. Really? You made that choice because the chart is trending up. Is it really trending up? Is that a valid trend? Or you'll see things like, hey, we want to roll up some data. OK, let's look at that roll-up: are those really comparable things you're rolling up? Is that a real roll-up? Basic statistics are a good way to start validating those choices: were these real choices, is the data really telling me that? My point is not that you should go out and get your Six Sigma green belt or black belt, but there are some great tools in there for thinking about analytics. And then, again, rinse and repeat.

I had a talk with one of my own product teams where they were justifying their behavior. They were doing some Lean Startup kind of activities with customers, and they justified what they did by saying the data went up afterwards. And I said, well, how do you know that wasn't just a blip on the radar? You can't just look at a chart, see a trend going up, and conclude that it actually means something. So really think about that when you're running experiments.
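To make the "is that a valid trend?" question concrete, here's a minimal sketch, with counts I've invented for illustration, of the kind of basic check being suggested: a two-proportion z-test asking whether the share of check-ins with tests really changed after an experiment like the build light, or whether the apparent improvement could plausibly be noise.

```python
# Did the share of check-ins with tests really change, or is the uptick
# plausibly noise? Two-proportion z-test, standard library only; the counts
# below are invented for illustration.
from math import erf, sqrt

def two_proportion_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    """Two-sided p-value for the hypothesis that both proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation

# Before the experiment: 18 of 60 check-ins included tests (30%).
# After the experiment:  26 of 58 check-ins included tests (about 45%).
print(f"p-value: {two_proportion_p_value(18, 60, 26, 58):.3f}")
```

With these counts the p-value comes out around 0.1: the chart would show a healthy-looking jump, but with samples that small you can't rule out that it's just the blip on the radar.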
To borrow from Lean Startup again: there's minimum viable product, and there's minimum viable process. If you have an outcome you want to get to, what's the least set of things you can do to get to that outcome? Put everything else aside: I've got to build something that ships next week; what would be the minimum, tiniest set of things I could do, with all the hand-offs and the people involved, to get that built? Once you have that and you know it works, and you could pilot this on teams, there are lots of ways to experiment with it. Maybe you can't actually do it, especially if you're in a regulated industry, but the thought experiment is at least useful. Then you say, OK, what do we need to add on? We do need to standardize some things because we have to be part of our organization and we need to communicate. Oh, you know what, we do need to roll up some reports that other people look at. OK, we do need this step to help the ops group as we go to production. So layer those things on while keeping the base minimal. And be careful about it, because, as we saw, marginal additions of process don't necessarily lead to positive outcomes. Jez Humble pointed out to me that the improvement kata from Lean is actually helpful for doing some of this. I haven't used it myself, but I thought it was relevant to bring up in the context of the talk.

Minimum viable standardization is the same thing. If you have to standardize, and there are completely legitimate reasons to standardize, start with the minimum set that gives your teams wiggle room to do the things they need to do to get their outcomes. Give your teams the feedback loops and the flexibility to really be agile: to say, you know what, these four states we use for handing off work aren't helpful to us, we need another one; or, we're going to limit work in progress a different way than everyone else. And make sure that as you do that, you're measuring the outcomes.

So again, all of that is to say that none of the things I mentioned earlier are bad in themselves. Reporting isn't bad; standardization isn't bad in its own right; doing them without respect to the outcome is. And we see that a lot in software development, particularly because people like me get up at conferences and talk about practices and things you ought to do, and people say, oh yeah, let's do that. But adding things isn't always the best way to get to your outcome.

Now, a shameless plug in the middle: ThoughtWorks Studios has just open-sourced our product called Go. Go helps you do continuous delivery. We're looking for people to use it, have fun with it, get help from it, and also to contribute code. So as you go through your continuous delivery journey and you have challenges, contribute stuff, because we're not going to build everything in the world for continuous delivery, but we want to help people do it. I'll just throw that in there since I'm up here; it's open source, so it's not a shady product pitch.

So if there are three things I could pull out of the talk that I think are important, if you're going to scribble down and make some changes, they are: the try, measure, and learn loop for process design; the acknowledgement that processes themselves have an inherent cost and an inherent structure to how they affect your outcomes; and that you should know your outcomes and target the variation you want to reduce and the outcomes you want to hit. If you don't do that, and you add process ad nauseam, you won't be successful in getting to the productivity you want to reach. So once you do that, have fun with all the frameworks, methodologies, and practices; again, they're not bad.
But just doing them because someone said to do them, without respect to the big picture, is not effective. That's it.

[Audience question.] How do you ensure what? Well, again, I think other people's best practices are guideposts. They should inform your thinking. I would never take something that someone else said and apply it wholesale to what I'm doing. Maybe if I had done it before on a project or something like that, I might try it. But the key thing is to set up a measurement framework for yourself so you know: OK, I did this; is it helping me? Is it creating value for me? And is it helping me be somewhat consistent in what I do? If it fails any of those tests, then you use root cause analysis, or something like the theory of constraints, to figure out where your bottlenecks or your problems are. If you can identify that there's a problem because of something you did, you have the ability to fix it. Does that help? Any other questions? Thoughts, eggs, rotten fruit? OK, cool. Thanks, everybody.