Okay. So, this is my generic title. I use it for every talk I give, because it means nothing. It's good to be semantics-free sometimes. And as I said, rather than one long boring talk, I'm going to do two short boring talks instead. So, if you don't like the first one, just nap for a while, and then you'll see if the second one is any good. For the first one, I'm going to look at what I see as the essence of Agile software development. We're here at an Agile conference. We hear a lot of talk about Agile. When we started doing this stuff, we made a big thing about Agile software development, because we were kind of the rebels. And now, Agile is everywhere. Well, actually, Agile isn't everywhere. Pseudo-Agile is everywhere. But I think there are still some benefits that we've seen, and I'll talk about that as we go. In particular, I think it is at times worth reflecting on what the essence of Agile is about. And it really has to start with thinking about where software was in the 1990s. Lots of problems with all sorts of projects. As a consultant, I ran into these kinds of situations time and time again. I'm not saying this stuff has gone away, but I think it has been significantly reduced. At the time, though, this was certainly seen as a big problem, and many people felt they knew what the cure was. The cure was the big plan-driven processes. We needed everything highly defined up front: a big methodology with a capital M and a capital E and all the rest in double caps, and very defined approaches to doing things. That was seen as the cure to the software ills of the time. If you weren't talking about this in the late 1990s, you were clearly unprofessional and not really a part of the future of software development. But there were some people who had a different kind of approach. 
And that's where Agile was born, very much in reaction to this notion that the true way of doing things was these defined processes. Now, the manifesto, obviously, is one way of trying to describe what tied these people together. Because there was a bunch of different people here: the extreme programming crowd, the Scrum crowd, Alistair Cockburn, the pragmatic programmers, various different people who came up with this notion of Agile as an umbrella to cover the different ideas that were around. Just before we had this meeting, about a year or so before, I wrote an article called The New Methodology, which was my attempt to look at what was going on in this scene, together with some other stuff like open source software, and pull out what I felt were the common characteristics. The article is still around, or at least an updated version of it is. And basically, I said to myself, there's a difference between what came to be called Agile methods and this plan-driven, defined, engineering-oriented process that we saw. And it really came down, I felt, to two interesting separations. The first one was the attitude towards planning. In the plan-driven world, the idea was that if you wanted to carry out some kind of project, the first thing you do is come up with a plan. That was both a project plan — how long the project was going to be, who was going to work on it, et cetera — and also a plan in terms of the architectural design for the software. You would have a group of people doing that, and then that plan would be handed over to some separate group of people to execute and produce the software. There was definitely a big difference between the plan and the execution. And what this was modeled on, of course, to a large degree, was how people imagined that real engineering was done. 
You know, you have the architect drawing up the diagrams and the blueprints for a building, then handing it over to a separate company to actually build the thing. That kind of model was often talked about. I remember someone who said: software isn't really very valuable; as long as we've got the designs, the architecture for the software, we can hand it over to any old company to build it. The actual code is unimportant. Now, I think of this as a kind of predictive planning approach, because what you're trying to do with that plan is predict how things are going to go. And it also has a consequence for how you measure success. Success means everything went according to plan. And that model you still hear even in supposedly agile circles today. When anybody talks about a project being successful or not based on whether it went according to plan, that is plan-driven thinking. It most certainly isn't agile thinking. Agile thinking is much more about: have we delivered value to our users? Are our users happy? Are they able to be awesome? That outcome is what really matters in agile thinking. But in a plan-driven world, it's much more: well, did everything go the way it was supposed to? Now, why did we push against this notion of planning? Because actually, it is quite a strong human desire to say, well, we can plan what's going to happen and then things will just go according to it. Lots of people like that way of thinking. Well, the problem is that this kind of approach really depends on you coming up with stable requirements. If you don't know what it is you're going to build, then you can't really plan in any stable way how you're going to build it. And that's not just true of the plan; it's true of the entire software development approach that you follow. 
But of course, what we saw and what we see now is that there was a big question of whether you could get the requirements stable. And so plan-driven approaches came up with all sorts of techniques to try to stabilize requirements: all sorts of meetings and sign-offs and all the rest of it to try to reduce that requirements churn or requirements creep. The agile community reacted against this by saying: well, one of the things we ought to know about software is that if we're dependent upon something that's very unstable and difficult to work with, maybe we should think about breaking that dependency. And hence the idea that we don't need stable requirements; we need to come up with a process that's tolerant of requirements that are changing. It's much the same as when you're dealing with large distributed systems today: you have to assume that failure always occurs, and you have to be able to recover from failure. Here, you have to recover from churn in the requirements. And in fact, you can take it further than just surviving instability in the requirements. Hence one of my favorite phrases of Mary's, saying that a changing requirement is actually an advantage. Another phrase I rather liked: we encourage requirements to creep around until they actually find something valuable for people. The point is that we react very well to change, and we use the whole process as a discovery process. And that, to me, is one of the big shifts that the agile world beckoned in. When I look at pseudo-agile stuff, you often see this notion that we need to have some kind of stability: there is perhaps a grand backlog that has to be managed, and changes to the backlog are seen as a bad thing. But if that's the case, that goes back to plan-driven thinking again. 
We're constantly altering and shifting things. It's okay to have a plan, but that plan has a different role. It's now a baseline to assess: if I make this change, what consequences does it have? It acts as a thinking tool to manage the consequences of change and to be able to make decisions — do I take this path or that path? — instead of acting as a kind of sign of goodness, a sign of health for you. So that's the first shift that I basically looked at and summed up, and it's probably the one that is most talked about. But the second shift is perhaps even more important: the role of people and process. In the plan-driven world, we're actually following the lead of this guy. How many people recognize this picture? A few people do. This is one of the most important people of the 19th and 20th centuries. He had more influence upon 20th-century life and work than most politicians you've heard of. His name's Frederick Taylor. He came up with the idea of scientific management. And at the heart of scientific management is the notion that the people who are doing the work are not the best people to figure out how to do the work; instead, you need a separate group of planners who figure out how things should be done. So if you look at these plan-driven, defined methodologies, one of the things they do is say: what are all the tasks that you need to do? What are all the deliverables that you need to come up with? What are all the roles that need to be played in a project? They define all of this stuff and what makes a good project execution. Then, when it actually comes to running a project, you grab a bunch of people — you know, five developers, four analysts, three testers, and a project manager in a pear tree — and you slot them into these roles. 
The agile thinking twists this around and says: no, instead the teams should decide what process they're going to follow, because it's the people who are actually doing the work who are best qualified to understand what's going on. And software projects vary too much. They vary in the pressures of exactly how the needs of the users have to be met. They vary depending on the different technologies that you're using. And, quite frankly, they vary depending upon the team personalities. Different people like to work in different kinds of ways, and therefore they should decide how best they're going to do things. Certainly it's helpful to listen to clever, loud people like myself who witter on about how things should be done. Often we come up with okay ideas that can be brought into the project. People can try them, see how well they work, and sometimes decide, yes, they're useful and we can go with that. But it's the team's decision how they need to operate. And whenever you see someone saying, oh, we've got an agile process that's going to standardize things across different parts of your organization for you, that's an immediate warning sign, because that's immediately going against the autonomy of teams that's such a vital part of how agile thinking operates. This is usually where I say something rude about SAFe. You know what SAFe stands for, right? Shitty Agile For Enterprises. But I don't want to do that this time. Because actually we should be grateful to SAFe: thanks to SAFe, we now think that Scrum certification wasn't so awful after all. So that's how I see the distinction between the agile world and the plan-driven world; that is the heart of where things come from. And the wonderful thing is that agile has moved so much beyond the kinds of things that we were thinking about at that ski resort in Snowbird where we came up with the manifesto. Lots of people have come into the agile community since then. 
They've come up with all sorts of ideas. The whole world of lean has come in. We see database evolution. We see the UX world integrating in. The whole DevOps thing has exploded out of that. None of this was foreseen by us, and rightly so. All we were trying to do was make it safe for people to explore this territory and to see where it could lead us. And I'm sure that if we look again in ten years' time, we'll see all sorts of other nice things that have come in as well. The whole point is we can't predict that future. I'm often asked, you know, what's after agile? What's next for agile? I haven't got the foggiest. All we were trying to do, and all we can do, is allow a space for us to experiment. Not only do teams decide how they're going to work; they decide how things change. Individual teams experiment with ideas, pulling interesting things in. And then, when they find things that work, they talk about it and the knowledge spreads around. That's really the most important thing a conference like this can do: allow people to exchange ideas and pass them on. I'm a kind of conduit for that at ThoughtWorks. I go around, listen to people, listen to what they're up to on different projects, and then try to pull together the more interesting things. My first choice is then to get somebody else to write about them for you. But if nobody else will, and I think it's interesting enough, I do it myself. But that is the way that we should be working: channeling each other's experiences. So one of the things that particularly struck me — not from any colleagues at ThoughtWorks, but extremely good nonetheless — was the idea of fluency in the Agile Fluency Model. This was developed by Diana Larsen and James Shore, two people I've known for quite a long time who are very active in the agile world. 
What they did is observe the projects they'd been helping and the people they were talking to, and they noticed that there were typically different styles of how people operated within agile projects. Commonly, teams passed through a set of stages as they went. And the notion of fluency was to say: this is what people do under pressure, when they have to revert to their default way of operating. It's one thing to learn how to do something, but you're only really fluent in it when it's your default when pressure comes up. And I like their model a lot, because it resonated very well with what I'd heard and seen in terms of how projects operated. So the one-star level focuses very much on what we might call the management practices, where people are using things like sprints and backlogs and kanban boards and things of that kind. You might also think of this as Scrum in its default state, without any technical stuff thrown in. What they saw is that although a lot of people sniff at this, particularly us more seasoned agilists, you actually do see benefit from it. People find that they get greater visibility into what's going on, and people begin to think about progress more in terms of business value than in terms of following that plan checklist. And the plurality of projects that they saw — nearly half the projects they ran into that called themselves agile — they reckoned operated at this level. But it still took a certain amount of time to get there. And this is, I think, a generally important point: getting to know agile techniques takes time, and even operating at this first level is an exercise of many months. But this is only the first step. Ideally, you want to move a step further, and that step further means bringing in a whole bunch of additional practices which are much more to do with the technical side of things. 
This is where things like automated testing, continuous delivery, refactoring — techniques like this — come into play. In many ways, you can think of this as like the extreme programming picture. In extreme programming, there was no difference between management and technical practices; they were all just practices. And one of the things I've always loved about extreme programming is that it had things on both sides: both the technical and the management stuff together. Now, once you bring in this second step, you begin to get some very nice benefits. In particular, you see your productivity go up, but you also very noticeably see defect rates go down. Now, one of the great annoyances to me is how so many organizations seem to think that if you want to move quickly, if you want to be responsive and agile, you have to tolerate lower levels of quality. You've probably heard of this thing around these days called bimodal IT or two-speed IT — or, as I sometimes refer to it, bipolar IT. The idea is that you've got a fast, responsive, but unreliable element, which is kind of the agile bit, and your back-end systems have to be reliable and as a result can't move quickly. But anybody with any experience in doing this knows that if you want to move fast, you have to have high quality. You have to have high reliability. You can't move fast if you're tripping over defects all the time. So, in fact, one of the most interesting things for many people is to realize that using these kinds of techniques — this two-star agile level — drastically lowers defects. I was talking with a colleague of mine who was doing some work, actually fairly close to here, for an agency, where the corporate-wide reporting people came in and said: well, you've got a problem with the way you're giving us your reports, because one of your numbers is clearly wrong. 
You're saying you've got no defects carrying over from month to month. Zero. That's obviously an error, because if a defect appears and you spot it on the last day of the month, it's obviously going to carry over to the next month. And our team said: no, not an error. We fix every defect the day it's spotted. We're that fast because we use continuous delivery. We have the tests. We're able to quickly find what the problem is, push the fix through to production, and get the whole thing cleared within the day. That's the kind of thing that is needed in order to operate effectively in an agile way. But it takes investment. It takes time. Teams can take a couple of years to reach this level of fluency. It's not an easy thing to do. And that's one of the things to bear in mind: learning how to do software development is not easy. Learning how to do it well takes time and effort and dedication, and it's not a thing to expect to see instantly. Now, another way of thinking about these first two fluency points is that the first requires a shift in culture — a culture that shifts you from following a plan to this notion of, oh, we've got to be value-focused and talk about success in terms of delivering useful things. The second is a shift in terms of skills. It says we've now actually got to up our skills considerably, and that's why, of course, it takes so much longer. The third level of the fluency model — although I don't like using the word levels, it's hard to avoid — takes things even further. This is actually heading into the territory of what one might call the modern agile approach that Josh talked about yesterday. Because now we're getting into a situation where the team themselves are making more decisions about what to do; they're tracking metrics; they're using things like A/B testing and sophisticated monitoring to find out to what extent they're delivering that value. 
Now, when they came up with the fluency model, which was a couple of years ago, only a very small number of projects had reached that point. And it took a long time to get there. Part of the reason is that this requires broader organisational changes than just a single team. In order to pull this off, it takes a lot more effort. It's easier in smaller companies, start-ups, things of that kind, because the organisation hasn't acquired all the scar tissue of life over the years. But in any case, it can still take a long time. And they posited a fourth level which took it even further. But they hadn't really seen it; it was more hypothetical. Although, talking with James more recently, he says he's beginning to get a better picture of what that further level looks like. The important thing about thinking in terms of this fluency model, though, is not that one level is worth more than the rest, but that it's a common progression for teams. Not all teams will follow through those steps, but most seem to. And it's important to realise the degree of time required to take you through it. This is not a rapid process. It takes time, and it requires patience and dedication and resilience. But it's also a very doable and possible thing. A lot of people, in the conversations I have here and in other places, get frustrated with the large amount of pseudo-agile out there — which, let's face it, is probably 80% of what's out there. Agile software development is as subject to Sturgeon's law as anything else. Sturgeon's law: 80% of everything is shit. That's true of agile too. But the important thing for me about the wider acceptance of agile software development isn't so much that. It's that it has given teams who want to work in this way the ability to work in this way. It's a lot of work. We used to have to really hide that we were doing agile stuff most of the time. 
It was a constant battle to do things like testing and continuous delivery and things of that kind. Now people want us to do it. We're able to find places. We don't succeed all the time, and we fall down from time to time and place to place. And, you know, you've still got arguments between should we use sprints or should we use continuous flow. But we're far more often working in this kind of style than we were 10, 15 years ago. And that's the most important victory for me that the agile movement has had. So, if you want more on this, the New Methodology article and the Agile Fluency articles are on my website. Don't bother with the slides, because they make no sense without me speaking. But there are videos of me giving this talk elsewhere. In general, if you want background for any of my talks, go to the videos page on my website, and there are links to supporting articles and things. So that was the first one. For the second one, I thought I'd move into something a bit more programming-oriented. How many people here call themselves programmers? As in, you generate stack traces on a regular basis. Well, this is not just for you; for everyone who didn't put their hand up, this is important to know about as well, because what this is about is a vital part of programming: the process of refactoring. And it's something that, even after all of these years, is not as well understood as it should be. Many people, when they get introduced to refactoring, get it introduced through test-driven development. You've probably heard of the red-green-refactor cycle of test-driven development: when you're writing some new functionality, you start by adding a test. Your test suite then goes red, because it fails — you haven't written the new functionality yet. You then make it pass, and everything goes green. And then, at the beginning, people will often skip over the next step. 
But actually, the vital step is then to use refactoring on the code base to make that passing code clean. And you constantly go through that cycle. Now, the interesting thing to bring out of this is that as you're going around this cycle, you're switching between modes of operating. This was characterized by Kent as the two-hats metaphor. He said: when you're working, you can either be adding new functionality or you can be refactoring. You can't do both at the same time, and there's a change of mode when you switch from one hat to the other. In the red-green-refactor cycle, when you add the new test and when you're making that test work, you are changing what the overall system does. But when you're in that refactoring step, you're in a slightly different mode. You're not changing what the system does; you're just changing how it does it. And you use different techniques and tools. In particular, you should be following the refactoring credo of taking small steps, none of which changes the overall behavior of the program. Very small steps. If you went to any of the talks on refactoring during this conference, hopefully that came through loud and clear. I would say the biggest surprise most people have when I show them refactoring is how small the steps are that I take. There's a bunch of articles about refactoring on my website where I actually go through chunks of these, and the common reaction is: wow, I didn't think you'd take such tiny steps. But I do, because that's really the essence. Refactoring is all about taking very small steps that, if you keep them small, compose cleanly to allow you to make big changes. This style of refactoring — refactoring in the context of TDD — is only one of the ways that refactoring can fit into somebody's workflow. A more common way, perhaps, is when I'm looking at some code and I go: oh, that's pretty awful. 
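To make concrete just how small those steps can be, here is a minimal Python sketch of a single Extract Method step on a made-up billing function. The names and data are hypothetical, not from the talk; the point is only that observable behaviour is identical before and after the step:

```python
# Before: a calculation buried inside a larger function.
def print_owing_before(orders):
    outstanding = 0.0
    for order in orders:
        outstanding += order["amount"]
    return f"Amount owing: {outstanding}"

# After one tiny step (Extract Method): the calculation now lives in
# its own well-named function. The behaviour has not changed at all.
def outstanding_total(orders):
    return sum(order["amount"] for order in orders)

def print_owing_after(orders):
    return f"Amount owing: {outstanding_total(orders)}"

orders = [{"amount": 10.0}, {"amount": 5.5}]
# The step preserved behaviour -- the essence of a refactoring.
assert print_owing_before(orders) == print_owing_after(orders)
print(print_owing_after(orders))  # Amount owing: 15.5
```

A big restructuring is then just a long chain of steps this small, each one checked against the tests before moving on.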
I do this frequently, because I work on my own code, and so I see lots of awful stuff. You were supposed to laugh in a sympathetic way when I said that. And awful may mean all sorts of different things. It may mean some convoluted logic that just seems messy. But often it just means you read some code and you go: oh, that was a stupid way to call this; I should have named this method differently. A vital part of refactoring is that as soon as you spot something that's not right, you deal with it. I think of this like picking up litter: as soon as you see it, you get rid of it. If there's already a lot of litter on the ground, you just shrug — it's dirty, there's trash everywhere. But if you constantly keep the place clean of trash, then you're more inclined to spot litter and clean it up next time. So an important part of refactoring is that as soon as you see a problem, you fix it. And that's true across all of the code base that your team is working on. That, by the way, is one of the reasons to be wary of branches that live for more than a day or so, because that kind of approach tends to discourage this kind of activity — people don't like the consequences of a merge if you're doing that kind of cleanup. But that constant cleanup is an important part of what makes refactoring work. Now, a slight variation on this is when you look at some code and you don't immediately understand it. If you're ever in a position where you're looking at a bit of code and you can't quite see what it does and you have to think it through a little — it might be for 30 seconds or so, it might be a few minutes — at some point you go: now I understand what's going on in that code. At that point, you should refactor. Because what's happened is that some understanding of the software has appeared in your brain. 
Ward Cunningham likes to say: whenever that happens, you've got to take it out of your brain and put it into the code, so you won't have to puzzle it out again. I think it was Steve McConnell who said, you know, reading something and puzzling out what's going on — that's a good thing in a detective novel, but it's a bad thing in code. Code should be bloody obvious all the time, and any time you have to think to puzzle it out, that's a sign that you can improve it. I think of this as comprehension refactoring: as soon as you understand something that's going on in the code, you put that understanding into the code itself so that it's clarified. Now, there's a common flow to both of these approaches. You don't necessarily do the refactoring right away, but you always have to ask: do I do it right now? If you're in the middle of adding some function that's a bit tangled, you might not want to do it straight away. In that case, you might finish, get to a reasonable stopping point where you've got a green bar and the tests are all passing, and at that point carry out the refactorings. A lot of people like to make a little note of refactorings that need to be done — on a card as they're working, or a little list in a buffer. It's important to get to them quickly, though, because if you don't get to them quickly, you'll forget to do them. On the other hand, if you're in a position where you're not too tangled, just get quickly to a point where your tests are green, do the refactoring, and then carry on and finish the feature you're working on. Sometimes this might mean stashing what you're working on: you're in the middle of something, so do a stash, get back to a green point, do the refactoring work, and then reapply the feature work you were doing. 
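The comprehension refactoring idea can be sketched with a small hypothetical Python example (the discount domain and all the names are my invention, not the speaker's): once you've spent thirty seconds puzzling out what the cryptic version checks, you move that understanding into the code itself so the next reader doesn't have to repeat the work.

```python
# Before: you have to stop and think to work out what this checks.
def chk(d):
    return d[1] > 3 and d[1] < 5 or d[0] == "p"

# After: the understanding you gained lives in names, not in your head.
PLATINUM = "p"

def is_discount_eligible(customer):
    tier, years = customer
    is_mid_tenure = 3 < years < 5
    return is_mid_tenure or tier == PLATINUM

# Behaviour is unchanged for every case -- only comprehension improved.
for customer in [("s", 4), ("p", 1), ("s", 7)]:
    assert chk(customer) == is_discount_eligible(customer)
```

Nothing about what the system does has changed; the refactoring only recorded, in names, the understanding that had appeared in your brain.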
A useful part of this, by the way, is that if you're using a distributed version control system like Git, you should be committing locally lots of times. My rule of thumb is: every time you get a green bar, commit. Green bar, commit. Write a little thing, fix a little thing, do a little bit of refactoring, extract one method — commit. I'm committing all the time when I'm working in this way. Then, before I push to master, I'll squash the commits together and make a logical change out of them. But the great thing about a distributed version control system is that you can constantly make checkpoints. You can never make too many, really, because you can always squash them together before you share them. And that way, should you make any mistakes, you can always roll back quickly — because if you ever get into trouble in the middle of a refactoring, the best thing to do is to roll back to your last green bar. If you've got the commits, that's easy to do. So another common case is when you've got a new feature to add, and before you even start adding it, you look at the code and say: if only I'd written this code differently, it would be so much easier to add this feature. If you ever get that feeling, then refactor. I think of this as preparatory refactoring. Essentially, what you're asking is: is the existing code base a good fit for this new change I want to apply? If it is, fine — apply the feature. But if it isn't, refactor it so that it is easy to add the feature. Kent sums this up really well. I should have included the tweet on the slide; I forgot to do it. He says: when you want to make a change, first make the change easy, then make the easy change. That is a really useful technique and should be a regular part of how you work. It's certainly something I do regularly when I have to modify my own code: I look and say, okay, how can I make the change I've got to make easy? 
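Kent's line can be illustrated with a small hypothetical Python sketch (the export scenario is my invention, not from the talk): step one is a pure refactoring that makes the format pluggable without changing behaviour; step two is the now-easy change of adding a new format.

```python
import json

# Before: the CSV format is hard-wired into the export logic, so adding
# a JSON export would mean threading new conditionals through it.
def export_csv(rows):
    return "\n".join(",".join(str(v) for v in row) for row in rows)

# Step 1 -- the refactoring ("make the change easy"): pull the per-row
# formatting out into a pluggable function. Behaviour is unchanged.
def format_csv(row):
    return ",".join(str(v) for v in row)

def export(rows, fmt=format_csv):
    return "\n".join(fmt(row) for row in rows)

# Step 2 -- the now-easy change: the new format is one small function.
def format_json(row):
    return json.dumps(list(row))

rows = [(1, "a"), (2, "b")]
assert export(rows) == export_csv(rows)  # refactoring preserved behaviour
assert export(rows, format_json) == '[1, "a"]\n[2, "b"]'
```

The refactoring step is verified against the old behaviour before the feature goes in, which is exactly why the two hats stay separate.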
Why was I stupid last year when I originally did this and didn't foresee this new thing that I want to do? Well, I'll do the refactoring. And the reason we separate refactoring from adding new features is that refactoring is a much smoother process. Because you're not adding anything, all the tests keep working as before. By following a sequence of these small refactoring changes, you can make quite big changes at a relatively low stress level and with much less chance of screwing anything up. That's why I want to spend as much time as I can with my refactoring hat on: when I'm refactoring, everything's cool and easy. It's when you're adding new things that things get a bit uncertain and awkward. So, when making a change, I really like to get it so that it's nice and straightforward to put the new change in, and then go in and do it. And as I said, you may only discover that halfway through adding the feature, in which case don't be afraid to roll back, stash your changes to one side, make the refactoring, and then make the easy change instead. Another kind of refactoring that people do a lot is where you have refactoring cards on your wall that are all about making changes to the system, and you schedule them, plan them as part of your work. I'd actually say that a good team should hardly ever have to do this. Refactoring should never be big enough to need a story card, or a task, or something of that kind, to capture it. Refactoring is a constant process that you're doing all the time. Every time you pass through some code, you should make it that little bit better. Stopping for even a story's worth of work is not something a good team should be doing. But I'm not saying it's always a bad thing to do. Particularly when you're learning how to do this process, sometimes you'll find you've built up some crud in the software and you need to clean it out. 
And, you know, that happens. Even good teams occasionally run into this. But ideally you should minimize the amount of planned refactoring. Most of your refactoring should operate as a regular part of what you're doing. That doesn't mean you can't sometimes see some large-scale refactoring. I'm inspired by one that was done many, many years ago on a ThoughtWorks project, where people got unhappy with the dependencies between various modules. They had a short design meeting, over lunch or whatever, and they sketched out how things ought to be. But they didn't stop work in order to refactor from one state to the other. What they said was: here's the desired state, we'll stick it on the wall in the team area, and over the next few months, any time you're in code that doesn't match the new model, refactor it in that direction. You don't have to get all the way there, but make sure you're taking a step in that direction. And over the course of several months, while they were shipping features and doing all their regular work, the code slowly moved in that direction. At some point the tech lead said: oh, we're nearly there, I'm just going to spend a couple of days and finish it off. That's how to do a long-term refactoring: lots of little steps over the course of time. Because one of the beautiful things about refactoring is that if you are actually refactoring, you can't break the code. You shouldn't be breaking it. Refactoring, by definition, does not change the observable behaviour of the software. If somebody ever tells you the code is all broken for a couple of days because they're in the middle of a refactoring, the only thing you can be sure of is that they're not refactoring. They're doing some rewriting or restructuring.
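That definition, behaviour preserved at every step, is something you can actually check. Here's a small invented illustration of one extract-method step, where the before and after versions must agree on every input:

```python
# Invented example: a single extract-method step. A genuine refactoring
# leaves observable behaviour identical, so "before" and "after" must
# agree on every input we throw at them.

def price_before(qty, unit_price, tax_rate):
    # original: the base price is computed inline, twice
    return qty * unit_price + qty * unit_price * tax_rate

def _base(qty, unit_price):
    # the extracted method
    return qty * unit_price

def price_after(qty, unit_price, tax_rate):
    # same arithmetic, clearer structure
    base = _base(qty, unit_price)
    return base + base * tax_rate

# the tests that were green before the step stay green after it
for args in [(1, 10.0, 0.2), (3, 7.5, 0.0), (0, 99.0, 0.1)]:
    assert price_before(*args) == price_after(*args)
print("behaviour preserved")
```

Each step in a long-running refactoring like the one above is this small, which is why the code keeps working the whole time.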
But refactoring is all about lots of little changes, and at any point you can stop and everything still works. Another thing about refactoring is that a lot of the time you don't necessarily have a vision of exactly what the end state looks like. Often when you're refactoring, you just have a sense that you can make things a little bit better: name things a little more clearly, clean up some convoluted logic here or there. Particularly when I'm working on some messy code, I might just refactor for a while without any clear idea of where I'm heading. But I'm making the code a bit better, a bit clearer, and once I've done that for a bit, I can say: oh, now I know where I need to take it. And then, over the course perhaps of weeks, I'll actually get there. Refactoring is all about these little steps, and it's all about making the code that much clearer, reducing the number of times you've got to look at the code and scratch your head trying to figure out what's going on. And if you don't do this, you'll never get to that two-star level. Because refactoring is the key to being able to shift and change, to react to new things as they come in: new demands, spotting how the users' work could be made better by your software. You can't have that reaction if you're not able to refactor. But despite this, people still ask: well, is refactoring rework? And my answer really comes down to a little pseudo-model, a pseudo-graph I made a while ago, under the perhaps rather ugly name of the design stamina hypothesis. If we plot a pseudo-graph of cumulative functionality against time, most software projects that don't put much effort into architecture, design, and refactoring follow a curve where the more you grow the software, the harder and harder it becomes to add new features.
And you feel that you're slowing down, getting slower and slower as the project goes on. How many people have been on a project like that? Usually most people put their hands up at this point. A lot of you didn't, which is a bit suspicious, but that's okay, it's late. But you do run into projects where the reverse happens, where you in fact speed up as you go on, because to add a new feature, well, I'll just plug in this, plug in that, make a few changes over here with some refactoring, and everything slots together and I can deliver quickly. It seems like I'm speeding up rather than slowing down. How many people have been on a project like that? Fewer hands, but there are still some hands going up. That's what refactoring does for you. It allows your ability to add code quickly to come through. It gives you that ability to be fast in the medium to long term. And medium to long term really means anything from weeks to months to years. And that's the essence of why refactoring is so important. When you are justifying refactoring, you have to be wary of a bit of a trap. A lot of people, when asked why we should bother with refactoring, with clean code, will say: well, it's the right thing to do, it's being a proper professional. You might even wear a little badge around your wrist saying: I'm a true professional, I refactor, and to ask me to write code that isn't well structured and well factored is wrong. That's sort of sinful. If you use that argument, you're screwed. You've lost. Because management and customers don't care how clean your code is. It doesn't matter to them. It's just irrelevant. The answer, the only justification for refactoring, is the economic argument. It says: we refactor because that way we go faster. We build more features over time. We're able to be more effective and efficient, more responsive to change.
That's why we refactor. We refactor because it's cheaper, because it allows us to go faster. Remember that comment I made early on when I was criticizing bimodal IT: it's all about the fact that in order to go fast, we need good quality. And in order to get good quality, a key part is regular refactoring. So remember that. Whenever you think of justifying refactoring, always use the economic argument. And on that note, I'm done.