Okay, so today's talk is sort of part two of a series that I started yesterday. Yesterday I talked a little bit about getting started, your first couple of years, and transitioning towards effective agile. This is picking up where teams often, pretty consistently, get to at about two years into a tech-first agile transition. In the terms of the Agile Fluency model, this is when you've got good fluency at level two. So we're going to talk about what that looks like, and then I'm going to talk about the area that teams are exploring these days. All the way up through level two, teams pretty much get the same result: you do a fairly similar set of practices, you get a fairly consistent result. So I'm going to talk about what that result is. Level three is where teams are exploring today; there are some parts of it where teams are getting consistent results, and there are some parts that are a wide-open field where people are still exploring. That'll be the second half. So the first half is discipline, the second half is make awesome. The first thing I want to point out is that most people have never seen two-star success, so I'm going to be telling everything in terms of stories. I want to talk about my first agile transition. This was way back in the day, when the Extreme Programming white book had been written but no other XP books had yet been written, fairly early on. Like most teams that were making transitions at that point, we did it because we were hosed. The company was going to die. Our board of directors had told us, literally, that we needed two signed contracts at the end of the E3 game expo in May, or they were going to shut the company down. In order to do that, we needed to put together a first version of the product to be released in May. We started the planning and development for that in September.
By January, I looked around at the dev team and said, who thinks there is even a 1% chance of success here? And no one raised their hand. The company was going to die, because the product wasn't going to launch and the board had told us it would shut us down. So the second question was: who's willing to try something extreme? And we pulled out the Extreme Programming book and started our transition. Fortunately, we had a guy on the team with a physics background, a laboratory physics background. He said, well, we don't have time to mess around. We don't have time to do it wrong. So we need to have short iterations, we need to run a lot of experiments, we need to gather data, and we need to intelligently change direction. And we also don't understand this space, so we can't go off and invent our own. What we need to do is assume that the book is right. We're going to assume, for the first while, until we really have evidence that the book isn't working in our context, that any mistake we make is because we aren't understanding it correctly and aren't applying it correctly. So we ran one-week iterations: we'd experiment, look at our results, find a problem, then design another experiment along with the data we were going to use to measure it, and iterate. Every one of those experiments was trying to find the ways in which we didn't yet understand XP. We did that for three or four months and got a pretty good process. And between the middle of January and May, we not only completed the whole project that had been scheduled for twice that long, we completed it a month early. Then we went to the customer, to our biz dev guy who was going to be doing the contracts, and asked him to go out and find the killer feature for every person he was going to demo to. And we implemented one killer feature for each of the 50 demos he was going to do, each one custom-targeted for that conversation. And the company did not die.
So that is two-star agile success: the ability to just deliver, to execute successfully. The first indicator I see of a team that is a two-star success is zero bugs. The numbers seem to be right around one bug written per fortnight per team, and most of those bugs are found within an hour by the devs who wrote them. But a bug, the way I'm counting here, is anything that surprises somebody in a non-positive way, no matter who that person is, in code that has been checked in to main. So that's a very open-ended definition of bug. It doesn't matter whether it's found in testing, by development, or by the live users: it's a bug if it's a negative. It also doesn't matter what severity it is: it's a bug if it's a negative. Teams that are doing this are writing one of those every two-ish, two to three weeks, for the whole team. The Microsoft online store has a good example of this. They have a small team there that calculates their bug metric as days since last industrial accident. They consider a bug to be something that hurts people: it hurts the users, or it hurts the dev team, or whatever. So they identify it as an injury accident, and in the corner of their board they have the number of days and the number of stories they've shipped since the last time they had an industrial accident. When you're at zero bugs, maintenance becomes boring. Because you're writing so few bugs, you can fix not only all the bugs you write, but any of the bugs you had from before. So if you look daily at the size of the bug database, the median size will be zero and the maximum will be one. Which means a lot of level-two-fluent agile teams don't have an electronic bug database, because they don't need to prioritize bugs and they don't need to hold them for very long.
What they have is a corner of the whiteboard that says "the current bug is", and it's usually blank. Smilebox does exactly that. They typically run at zero bugs. Once in a while they'll have a bug that they choose to not do anything with for a little while, but that's pretty rare; usually it's right around the holidays, when they have something else they need to get done. But even then, they'll get to two bugs and then deal with it. When Smilebox sees something unexpected, their response is: this is one of two things. Either it's unexpected and it pleases people, pleases users, in which case it's a feature. It's an unanticipated feature, but it is a feature. And it's a feature without tests, so we need to write tests around it; it gets a card to finish the feature by writing all the tests. Or else it's a surprise and it's negative, in which case it's a bug and it needs to be fixed, tests put in place, and a root-cause analysis done to ensure that nothing like this ever happens again, that this entire category won't arise again. Either way, it's a card, and it preempts everything else. That's how they keep it at zero. These teams are insanely predictable. The amount of variance between tasks is tiny. There isn't technical debt anywhere in the system, so the amount of time it takes to do something is based on what that thing actually requires, not on the random expense of working in whichever part of the system you happen to be touching. And they have enough data, over a long enough period of time, that they really do know their velocity. They can make medium- and long-range projections. When I was at Blue Tech, we stopped calculating costs for anything, because there just wasn't any point. When we made an estimate, it was going to come true. And in fact the stories tended to be about the same size, which is also pretty common.
From smallest to biggest, the ratio was about 2x, so it really wasn't worth estimating. You could just count the number of stories and see when things were going to happen. The result was that the sales people and the customer support people had complete control over what was going to happen. They'd pick the order of things, they'd pick when things were going to occur, and they'd know what the result would be. They never had any need to push the team, because they didn't feel like they could squeeze the team and get a 15% boost in productivity. No, we were going to get the result we were going to get, and it was going to be the maximum that was possible, every single time. Which allowed us to prioritize based on value alone. Corey Haines is also doing this at his current company, whose name I have forgotten; otherwise I'd have the company name up here. When you're working in this way, cost stops being an issue. In fact, what I find is that the cost of work tends not to follow a Pareto distribution; it tends to be vaguely linear. The difference between the most expensive feature and the least expensive feature ends up being about 2x, right? There's not that much difference. Whereas value is distributed far more widely: the high-value features tend to be distributed exponentially in terms of their value. Which means that if I'm trying to optimize for ROI, well, I've got one thing that's a linear distribution and one that's an exponential distribution, so I can ignore the linear one. I optimize for ROI by optimizing for R, just for the return. And if everything is exponentially distributed, that's really easy to tell. The crowd-pleasing features are the exponential ones. The competitive catch-up features are linear. And the hygiene factors are all flat; they don't really have a return. So it's pretty easy to identify which ones are the exponential features that are so much better than everything else.
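That return-only prioritization is easy to sketch. Here's a toy illustration, with invented feature names and dollar figures (none of this is from the talk): since costs only vary by about 2x, they're ignored entirely, and you just sort by expected return.

```python
# Toy sketch of "optimize for R, not ROI". All names and numbers invented.
features = [
    # (name, estimated return in dollars); cost is deliberately ignored,
    # since it varies ~2x while returns vary by orders of magnitude.
    ("share-to-social", 120_000),     # crowd-pleaser: exponential
    ("bulk-import", 90_000),          # crowd-pleaser: exponential
    ("match-rival-export", 4_000),    # competitive catch-up: linear
    ("settings-cleanup", 500),        # hygiene factor: essentially flat
    ("one-click-demo", 300_000),      # crowd-pleaser: exponential
]

# Sort by return alone; the exponential winners surface immediately.
priority_order = [name for name, ret in
                  sorted(features, key=lambda f: f[1], reverse=True)]
print(priority_order)
```

Once the top of the list stops being dominated by a few huge returns, the rest are roughly interchangeable, which is exactly the situation described next.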
And you just do those first. When you run out of those ideas, all the rest are roughly the same, and so you can pick them at random or on the basis of preference until the next time you find something that is clearly and obviously better. This is what that company does, and what a number of others that I've been at have done. Every work item can be done by anyone. You see a lot of pairing at these companies, pairing or mobbing or whatever. And the reason is not the impact it has on productivity, not the impact it has on bugs and the like. I mean, those are advantages. It's the impact it has on learning. If you go look at Menlo Innovations, they've got, at any given time, four or so different projects in flight. They're a software outsourcing consulting company. Those four or five projects each have a team allocated to them, and every week they rotate people through those teams. Every single one of those projects rotates like a third of its people through, so you're on a project for like three weeks and then you move around. They'll bring in new projects from new companies that are just totally unrelated. And on every single one of those projects, every single task can be done by any arbitrary pair. They don't need to allocate it to anybody, because everyone has the skills to do any of the tasks for any of the projects that Menlo Innovations takes on. The reason is that they're so good at spreading knowledge that the instant one person on the team knows something, within a few hours everyone on the team knows that thing. Productivity doubles every two years. There are two facets to this one. First, the industry as a whole doubles productivity every two years. So that which costs me $100,000 to develop today will cost me $50,000 to develop two years from now, just on average. Some teams are able to keep up with that rate; many teams are not.
So the way most of the industry advances at that rate is that teams come into existence at the high end of the bell curve and then improve with a doubling time of four years, which means they are slowly sliding toward the bottom of the bell curve until they're no longer competitive and that company ceases to exist. All those people go form a new company, with a new way of working that's at the high end of the bell curve, and continue working their way down. So the industry is outpacing almost all of its members, and companies just live for a little while and then drop off the bottom. But there are some companies that are actually able to keep up with that pace and not drop off the bottom. Hunter Industries is a fine example. A few years back they made the change to mob programming; I'll talk a little bit more about that. One of the impacts was that they went from annual releases of two to three products a year, so two and a half, to releasing twice a day and shipping 25 products a year on average. A 10x improvement in productivity, same products to the same customers. People talk about legacy code as a debt, code debt, and a cost. Working in legacy code is harder than working in other code. And in fact, you can ask someone the question: how much harder? Assume you have the same task to do, and two different ways you can do it. You can implement it in your own code base, using everything that's there today, integrating with the stuff that's there today, as an add-on feature. Or you can implement it in a brand-new project, make it a console app or something. You're not allowed to use any of your existing libraries or anything you've built up, but you also don't have to integrate with anything. What is the cost difference between those? For many teams it's as high as three times as expensive in legacy code, using all of their existing stuff, as it would be to just do it as a console app. So legacy code is a cost; I have a debt there.
And I have to pay a tax when I implement every feature. James Shore had a team a couple of companies ago for which the legacy code was an asset: when they measured, they found they were about three times as productive in the legacy environment as in a green field, because there was so much advantage, so much code already existing in their legacy system that they could take advantage of. It was an asset that paid a dividend on every feature they built. Take when they needed to add internationalization. They were producing an application that the customers had promised them would only ever be in English. It was only for the US market, and that was easy, so they'd built with that assumption and hadn't really thought about internationalization. And then the customers came back and said, actually, we just got a new investor and we want you to support English, French, and Arabic. And it should just work with this RTL thing, figure it out. And they asked, so how long is this going to take? The team thought about it for a little while, scratched their heads, and said, okay, we need to run a spike so that we can give you an answer, because we know internationalization can be hard. So we're going to go figure that out. They came back an hour later and said, we'll have it done by the end of the day. And they did. Because they just happened to have factored all their code well, so that even though they hadn't ever thought of this requirement, all of the things about string handling and language and layout and position were each located in one class. They had three classes to update, the updates were obvious, and the code was well tested. At Facebook, more than 90% of the code that they ship is not their code. Facebook has identified that there are one or two things that determine their business. They have a key competitive advantage, and they keep those all internal.
Originally it was that Facebook was able to do compression of large sets of photos better than anybody else, so they could have more photos stored than anyone else could store. They've got a couple of things like that that are proprietary, and they build and code those. And they code nothing else. Everything else they do is shared, it's open source, it's whatever else. Some of it they write and then share out, and then other people contribute. Some of it they consume. But they are very good at analyzing: what is our key competitive advantage? Let's code that. And what isn't our key competitive advantage? Let's not code that; let's turn that into commodity software so that it can't be anybody's key competitive advantage. These teams ship fast. It's not necessarily that they ship on a high cadence, but they have the option to ship on a very high cadence. And when I talk about the option to ship, I mean like the company my wife was at, where when the president wanted to do a demo for a new customer, he would just grab the latest build from the CI server and throw it in front of the customer. He didn't worry about which build it was, and there wasn't any concern that there might be something broken, because he could take any random build, show it to a prospective customer, and not lose the sale. Flickr ships, my data is out of date, but three or four years ago they were shipping a couple hundred times a day. Amazon ships every 12 seconds. Now, Amazon's 12 seconds is across the whole company, so it's not that any one team in Amazon is shipping that frequently. But teams at these companies are checking in to main. Typically they won't go more than an hour, often 15 to 30 minutes, between commits to main, and every one of those commits is either fully shipped live, or fully shipped through the CI system and packaged up so that someone else could decide that this one goes live.
Stakeholders are all involved from the start, throughout the whole process. At Merrill Lynch, and this is now several years ago, they set their traders and their software people working together. Every experienced trader was sitting there doing his trades at his computer, and sitting right next to him was a pair, a pair of developers. The developers were virtually connected into teams of eight, and their job was to build any piece of software that would help that guy make more effective trades right now. Typically, they would produce products with like a one- or two-hour turnaround time. The trader is doing some trading, and he says, it would be really handy if I could do this sort of analysis. The programmers build that, and three hours later he's got a tool to do that sort of analysis. The stakeholders are involved all throughout that process. The trader, as he's trading and doing stuff, asks for the analysis; the coders code for 20 minutes and say, so, is this what you're thinking? He looks over: yes. Back to trading. These teams are transparent, and they invite collaboration on decisions, right? It's not just that they report status out and are obvious about what they're doing. It's not just that they will allow people to come to their stand-ups and interact, or come to their planning meetings and interact. These groups are working out loud. They are actively engaging everybody who is interested in those discussions. They're making it very easy for people to find out what's going on, to get up to speed, to throw in some information, and to choose to participate as much as they want in a decision, or not. They can drop by and leave, they can drop by and stay; it's up to everybody how much they want to do. Yammer was purchased by Microsoft a couple of years ago, and as part of that, they've gone from being a small San Jose startup to part of a large corporation.
And one of the things that was added there was annual budgeting. They'd never done budgeting before, because they were not a publicly traded company, so who cares? When they joined Microsoft, the requirement came down to do budgeting. Now, Microsoft wanted Yammer to keep Yammer's culture as much as possible, so it wasn't initially placed as a requirement. It was a pitch: we think that annual budgeting will be useful to you for the following reasons. And the leaders heard those reasons and said, yeah, that makes a lot of sense, okay. So at that point, the management of this division of Microsoft is bought into: yeah, we need to do annual budgeting. The question is, what do they do? How do they roll that out? At most companies, what they do is start creating budgets, and then they invite people into the budgeting process and so on. At Yammer, instead, what they did was post to the Yammer Eng leadership alias, which is the one that everyone's on. There are more people on that public group than there are in Yammer. There are people who have moved to other parts of the company who are still in that group. There are other people from Microsoft in that group. This is the primary mechanism whereby low-level management conversations and decisions happen, and it is completely public. So when the budgeting thing came up, the leaders popped into that group and said: annual budgeting has been suggested, and we think it makes sense for the following reasons, and we think it also has the following downsides. So, who is interested in working with us to figure out how to amplify those benefits and decrease those downsides in how we're going to roll out budgeting? And should we actually roll it out? And they started that conversation.
And they didn't actually reply back to Microsoft that, yeah, we're going to do this, until everyone in Yammer had figured out that, yeah, it actually makes a lot of sense for Yammer. So, bringing these all together: these are average two-star results. There are two-star teams that exceed them, and there are two-star teams that are a little bit below them in any given category. These are not the results that most people are used to. Most people have never seen a two-star-proficient, fluently executing agile team. And this is what gives you the ability to change direction on a dime with no sunk costs. It's not just that agile planning allows us to change direction and embrace change late. It's that when you're working in a way that gets these results, with this level of discipline, you can change your mind on a whim, right? You can welcome late-changing requirements as an advantage because you have no sunk costs; you might as well. If there's new information, let's use it. So now let's talk about making awesome. Everything before was discipline. That level of result is what you can get by just running the formula. You get it from out-of-the-book Extreme Programming without really modifying it, except that you might have one or two contextual things that you need to nuance a little bit. But pretty much, that's what you're going to get, and teams get it over and over and over. Once you've gotten that, where can you go? The first one I want to talk about is, again, Yammer: metrics-driven development. This is an interesting case, because Yammer is at this point moderately large. I consider them small, they're a couple hundred people, but many people would consider that larger. They're a single product, and they're trying to make that product better. And they have no real central planning. They do have a number of bodies coming up with ideas and whatever, but they don't actually need central planning.
What they've done is they value metrics, and they do metrics-driven development. They are continually gathering information about the customers, and they're using that information to do two things. First, they run a whole lot of analysis to identify: what are the measures that correlate with future revenue? What is the lead time of each measure? And what is the correlation coefficient of each measure? So that, out of the many hundreds of things they measure, they can identify which are the ones they should really be paying attention to as indicators of future success. And they're continually re-evaluating that model, because they don't assume it's stable: before Microsoft versus after Microsoft, joining and working with Office 365, any of those things could dramatically change which measures actually drive revenue, and many of the features they ship could change which measures drive revenue, too. So they're continually re-analyzing the measures they'll use to drive the business, on the basis of what predicts revenue. And then the decisions about what to implement are all driven by what moves those indicators. So when they go decide what to build, it's not a popularity contest, and it's not people stating their opinions. It's people coming up with ideas and saying which measure they expect each idea to move, and then they run it and see whether it moves the measure. And they are often surprised by which things have an impact. It turns out changing the font to Segoe and changing the color to blue produced a 1% improvement in their engagement metric, which was one of the most significant improvements in engagement they had seen. It completely shocked them. No one has been able to explain why, but the fact is it works better for customers, so, whatever. This is an example of hypothesis-driven development. The canonical example of hypothesis-driven development is really the Lean Startup stuff: build, measure, learn, everything at IMVU.
When you're doing hypothesis-driven development, you don't assume that you're building features. You assume that you're testing hypotheses. The purpose of every story is to increase the amount of knowledge we have for predicting which stories will be useful. Secondarily, if the story is useful, then by all means we'll keep it out there; and if it is not useful, then we won't. So in this hypothesis-driven environment, we're really focused on the knowledge gained from doing the work, not on the implementation and what the system turns out to be. The system is a side effect. It's a result of gaining knowledge about the customers. Another direction people are going is, again, related to figuring out what we should build. You'll notice that a bunch of these make-awesomes are about figuring out what we should build and how we should understand and interact with the customers, whereas a lot of the stuff before was about execution. These things that people are exploring are three-star agile. So the next one is customer development, design thinking, and a number of other related disciplines, many coming out of user experience design, many coming from other fields. The Nordstrom Innovation Lab is a great example of this. If you have a chance at some point, search for Nordstrom Innovation Lab on YouTube. They've got a really great five-minute video that shows a lot about what they do. In the video, they move their dev team from the lab out into a Nordstrom store, and they set up an environment out there, with their whiteboards and everything else, in the middle of the floor, and they're testing out ideas. The purpose of this lab is: people from all over the company have some idea for some product, some software product, that they think will help retail sales. And this lab's job is to test and see which of those are worth making a significant investment in and which ones are not.
The ideas that are worth making an investment in then flow into the regular development pipeline, and they make an investment. So this group is testing out one of those ideas. It's an iPad at the sunglasses stand that allows you to take a selfie with a couple of different pairs of glasses on and then compare them easily side by side, so you can decide which you want. They build that over the course of a couple of days, iteratively. They're sitting there coding, and when they get a deploy that they like, they take it out to the sunglasses stand and use it with a bunch of customers. And they aren't just doing that in the spirit of, here, let's test this out, let's demo it. They're actually very intentionally doing design thinking. Through the whole process, and this you can't really see in the video because it's condensed, they're very carefully paying attention to when they're doing divergence and when they're doing convergence. They've got different activities and different ways of engaging with the customers to gain different types of information. They're following a very rigorous process of finding out what they should build through the process of designing and building that thing. And the nice thing about these techniques is that they're established, they're documented, they work. Much like we have techniques that work for getting to two-star agility, these are techniques that work, once you've got that ability to execute, for getting some of the three-star advantages. Co-opetition, this is another interesting one. Derek Neighbors' company, and again I've forgotten the name of the company and I apologize, found that they had one chief rival in their market: them and one other competing for the market. And they did something that was really odd. They went and bought a building that was twice the size of what they needed, and then invited their chief rival to lease the other half.
And they put the cafeteria in the middle, so that everyone would eat together and talk together. The whole point was that they wanted to get conversations going between them and their chief rival. Engineers would swap ideas, and they intentionally reduced the barriers. They told all their engineers: feel free, talk to them about all of our tech, go to it. And they told the sales people: talk to them about sales, even future sales and things that are in the pipeline, go to it. The result was that these two companies started swapping ideas. They started seeing where each of their technologies had an advantage over the other, and all the techies were able to see that, and then they could tell their own sales people, and the sales people could act on it. And over the course of some number of months, they switched from showing up at a prospective sale as two competitors to showing up with both of them able to do more of a consultative sale: ask what the person needed and say, okay, it sounds like about three quarters of this is stuff that we do, and for the other quarter, you should go talk to these guys. They were referring business to each other. The total market size goes up a lot, no other competitor in the field can really compete against the two of them cooperating together, and it was better for both. And it is still competition: they had a couple of products where they were in direct head-to-head competition. But they were doing some of each. Mobs. Hunter Industries is the canonical example of this; I mentioned them before. At Hunter Industries, they do not program solo. They don't even program in pairs. The whole team, which is seven people, sits in front of one computer, dual-screen projector, all day, and they program single-threaded. At any given time there's one typist, and the typist is not allowed to think. The typist rotates through every five to 15 minutes.
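The rotation mechanics are simple enough to sketch. This is a hypothetical illustration, not Hunter's actual tooling (in practice a kitchen timer does the job); the names and the ten-minute turn are invented.

```python
from itertools import cycle

# Hypothetical seven-person mob; the names are invented.
mob = ["Ana", "Ben", "Chi", "Dee", "Edo", "Fay", "Gus"]
turn_minutes = 10  # anywhere in the 5-to-15-minute range described

def typist_schedule(mob, turns):
    """Who sits at the keyboard for each of the next `turns` turns."""
    seats = cycle(mob)
    return [next(seats) for _ in range(turns)]

# Six ten-minute turns per hour cycle most of the mob through the seat.
schedule = typist_schedule(mob, 60 // turn_minutes)
print(schedule)
```

The point of the fast rotation is the next sentence of the talk: the typist just types, and everyone else in the room does the thinking.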
The other people in the room think, discuss, and command the typist what to do. And the other people in the room are all sorts of people. There are some programmer-like people, there are some tester-like people. Now, of course, since they're mobbing, everyone has developed all of those skills, but there are different mindsets. You've got the couple of people who really like tearing systems apart, and the couple of people who are really creative, and the couple of people who are really analytical. And actually, since it's a mob, it no longer requires programming skill to be part of the programming team. Hunter Industries produces software for factory control automation. The vast majority of the company are factory workers out on the floor doing stuff. So at any given point, they rotate one factory worker into the mob for a couple of weeks at a time. That person is a full programmer, a member of the mob, for those couple of weeks. They're bringing in all the knowledge and experience of: yeah, I mean, that seems like a great idea, but you need to understand the way these things are laid out in 3D on that factory floor, right? It's a different set of knowledge. And that person is fully able to participate, because you don't need any particular set of skills in order to be part of the mob. That's what has resulted in Hunter just not having bugs anymore. They shipped 65 products in a row without any defect ever being found, in testing or in live. It's just an amazing story. Now, mobs, again, are a brand-new technique. I can't say this is well tested. I can say that a whole bunch of people have had some really interesting experiences; it's anecdotes, but you might want to try it. Responsive organization. This is, as you're starting to get even beyond three stars, you're starting to build a culture that thinks differently in your company.
It's fundamentally a decentralization, not just of decision making and power, but of the way you think about things. There are a number of movements coming together under the responsive organization banner: the Working Out Loud movement, Management 2.0, and a number of others I don't remember. But they're all looking at different ways that businesses can be run when the amount of information increases, the ability to execute increases, the lead time and transaction cost of changing direction decrease, and the amount of transparency increases dramatically. All of which is happening in business today: you've got the ability for people to see all the code across the company, to see all the decisions across the company before they're made, to see the information about all the customers across the company, and to communicate easily. Red Robin is a perfect example of this. Red Robin, a restaurant chain in the US, had forever had a centrally designed menu. It was built at headquarters, and the reason was supply logistics: they need to ensure they can send everyone the same sort of food and everyone can build the same sorts of things. And the customers also want a uniform experience. They want to show up at any Red Robin and get about the same thing. So: a centrally designed menu that would then get distributed, and that happened two or three times a year, I don't remember exactly. At some point they started using Yammer. Yammer was originally rolled out in the central office as a replacement for email, where it went over like a lead balloon. Nobody used it; there was no value, no point. So it stopped getting used in the central office. However, it made it out to the periphery. It made it out to all the Red Robin stores.
And what people there found was that whenever a customer would complain about something, the wait person would pull out their phone and say: here, could you just type that into the Yammer thing? And they'd put it in there. And whenever a waiter had a problem trying to figure out how to get something out of the kitchen, they'd type it into Yammer: hey, has anyone else seen this? All of the stores across the US were in Yammer together. So some customer would have some problem in New York, and a store in Cincinnati would say: yeah, actually we had that problem, we solved it this way, and it seemed to help. And so they'd apply that in New York. Ideas just started flowing at the wait-staff level, all over the company. The wait staff at a restaurant are the people in contact with your customers, right? They're the ones who actually have the most information about the most important resource in your company. So the next time the big menu comes out from central, there's a fairly quick response from a number of the stores. It's: we recognize and value the ideas that you sent us, and most of them are pretty good. It looks like you're making the following four changes. Well, those first two we already implemented on the basis of customer demand, and it turns out the first one does sell pretty well; the second one doesn't, and we already discontinued it. This third one is an interesting idea; we tried these other three things that were pretty similar, and we think it's probably better if we go in this direction, so we probably will unless you correct us. And that fourth one is new; we'll go ahead and try that. The stores had so much more information and data. That was the last time the menu was centrally planned, because now they had the ability to decentralize the planning of that menu.
And to make the decisions on the basis of more information. All the people at the restaurants were also fully aware of the logistics concerns and all of that, because they've got the cooks there who are dealing with it, right? So they could figure out that they needed a uniform menu, and how to do that. The only thing they really needed from corporate was: if we're going to add this ingredient, who's the supplier? Can you get a buyer on this? So this is the shift we're seeing toward responsive organizations. I won't say it's a groundswell, that everyone's going this way, but we're starting to see them come into existence: organizations that are making fundamental, strategic business decisions on the front line, with the people who have the most information, not the people who have the most authority. And that means the old power structures end up changing. In fact, as the responsive organization people put it, such an organization learns and responds rapidly through open communication, experimentation, and working as a network. The relationships between people in the company stop being about power and authority and start being about moving information around. Your ability to work is based on the information you have, and on how much you can share that information with others and get information from others.

So, the too-long-didn't-read version: awesome discipline is awesome. Most people haven't seen it, but it is actually pretty darn amazing what you can get just by doing things by the book with high discipline and never accepting slacking, right? Never agree to something that you're not going to commit to. If it's worth doing, it's worth doing to excess. And if you do that, you get amazing results. Few teams do that. Back when I was going out and getting jobs, the number one criterion I would rate a team on was whether they were willing to be the best in the world.
If a team was not willing to be the best in the world, I wasn't going to work there. And it's not whether they want to be; it's what they're willing to do: whether they're willing to maintain that level of discipline, to be constantly questioning themselves and really improving themselves. Because if they are, you can get these results every single time. Most teams think they are doing pretty awesome and have great discipline, until they see one of these teams. Some teams really do have this discipline; this is not just one or two teams getting these results. When I go talk to XP teams, I show them some of the papers on bug rates. Nancy Van Schooenderwoert's paper, "Embedded Agile Project by the Numbers with Newbies," is a great example: a full, detailed analysis of every defect they had over the whole three-year project. And XP teams go: yep, that's about right. That's normal. And the teams that are really pushing the boundary today are the ones that did that and then built on it. So the key thing is: when you hear about continuous deployment, when you hear about Lean Startup, when you hear about all those things, be inspired by them. But if you want to go execute them, make sure you've built the discipline first. Build the discipline, get the strong results, get the ability to turn on a dime without any sunk costs. And then leverage that, just like the people who are exploring continuous deployment and Lean Startup and Build-Measure-Learn, and really go for it. Thank you very much. I'm happy to keep the conversation going and tell more stories. If you'd like to do questions here, we can, but come on up. So the first one: that's the average for teams that haven't started that two-year transition to discipline. When they get there, it's more like 99.99% of the time is spent on value, and once in a while they have some stabilization.