Right. Good afternoon. I'd like to talk a little about individuals and interactions over processes and tools. It's a tagline that gets thrown around, and it's listed as the first value, the first comparative value, in the Agile Manifesto. And there's a big question as to what it really means, because in many cases I find that when people are pursuing agile, either specifically because they have something in mind, or even broadly because we feel we ought to because other people are (so not necessarily a good reason), there is a surprising reliance on the second category. For some people, agile development seems to be associated with a very strict interpretation of process. For other people there's a little thought bubble that goes on: they think of the tools, they think of their testing frameworks, they think of their continuous integration frameworks. They tend to associate it with these various mechanical features. Now, these are useful. These are good. I'm not going to say they're bad; that's definitely not the idea of the comparative. But there is this question about understanding a little bit more about the individuals and, indeed, how they interact. So I want to explore that a little.

A brief bio: I'm Kevlin Henney, and I walk around the stage a lot, so I'm moving this out of the way because otherwise there will be damage to me and to this. Right, and my back has just done the right thing. Okay. I've been involved in software-type stuff for a while. Probably the most relevant things here are a long-term interest in patterns, and the relationship that patterns have not simply to design but to the way that people reason, discuss, view and frame problems and their solutions. And, curiously enough, the crowdsourced, open source book 97 Things Every Programmer Should Know, which, as its editor, I managed using everything I'd learned from agile development.
How do you get a bunch of disparate volunteers who've never met one another to contribute and write well? That's an interesting problem, and it turns out that a number of the techniques I'd learned were very relevant to it. However, let's go back to this, which I hope everybody has good sight recognition of and understands the message. But that first one: individuals and interactions. We have to appreciate the relationship between these people and the tools that we're talking about, particularly when we push really, really hard.

I had a chat with Brian Marick, one of the original signatories, a few years ago. He made a point, and I think he made it again in a blog or elsewhere, that he felt the first value should, with hindsight, have emphasized teams more. I'm going to come out and say: I know what you're saying, but I disagree. Because teams are made up of people. And that's where teams come from: they come from individuals and interactions. And whilst we do value the team, unless you understand why a team is a team, and why things that aren't teams are not teams, you'll be forever perplexed. Individuals and interactions give you dysfunctional teams. Individuals and interactions give you self-organizing teams. They give you all of these. If you don't understand the parts, you have no chance of understanding the whole in this case. So I think the emphasis is correct.

But then we are confronted with some things. This is the standard presentation of the scrum process, taken from Mike Cohn's site. We've got elements of artefact. We've got elements of time boxing and process. And it's a nice simple picture. It's a very simple picture; that is its value, its simplicity. When people draw pictures that are hard, they tend to turn us off. They tend to lose communication rather than increase it. However, there is clearly more to scrum than just this. If everything were this simple, it would be fantastic.
This is simpler than a clock, a good old-fashioned wind-up clock. But people are not this simple. Organizations are not this simple. Products are not this simple. So this is clearly a simplification; we have left something out. And it turns out that scrum is a very interesting game. It's a very challenging game. Let me step outside scrum for a moment and briefly compare it to a game of chess, which is not like scrum at all; I'll come back to that in a moment. Chess is very simple to describe. The rules are very simple. My seven-year-old can play chess. Fortunately, I can still beat him. But that's the point: what makes a great chess player? It's not just the rules. There's clearly something about how you work the rules. It's the spaces between the rules, and how you flow through them. And this we see often with scrum: flat scrum, which is the basic interpretation (here are a set of practices, here are the artefacts we focus on, here are the time boxes), versus the subtlety. And the subtlety will be different in each case.

Now, we're then confronted with this rather interesting challenge that there is a certification scheme for scrum. And Mike Cobmire put this rather well: scrum can't have its cake and eat it, too. It can't be a simple framework that is not prescriptive and then start certifying people on how to do this stuff. There is a contradiction. How can this be true? If all the subtlety is in scrum, if scrum provides you with a framework in which you improvise, explore and do these various things, then clearly the bits between the rules and beyond the rules are immensely subtle and are likely to be highly context specific. How do we certify people on that? Certification plays a very particular role. It plays a role not of competence, but of guaranteeing a certain minimal level of basic knowledge in a domain. Quite frankly, with scrum, that's one side of A4: everything you can write down that is consistent and constant.
All the other stuff you can fill books with. And that's not two days. It's not three days. It might be three years. It might be three decades. I'm not entirely sure. But it's difficult to certify. It doesn't mean that there's not knowledge to be had, but it's not that kind of knowledge. So clearly we have a little bit of a challenge.

Taking it from the source here, Ken Schwaber made a very, very good point, which is somewhat in contradiction to certification: the challenge that we face in software development is that we do not undertake defined processes. Actually, that's not strictly true; I'll come back to that in a moment. The defined process control model requires every piece of work to be completely understood: given a well-defined set of inputs, the same outputs are generated every time. We know this. They're called compilers. I put the same code in, I get the same stuff out, and it does the same thing every time. That's a solved problem. The problem is that we don't actually know what the inputs are for a software development project. We're not entirely sure what lies within the box. And we're not entirely sure about the outputs either. It's very much not defined.

Now, I will say that I did work on a project that could be considered a defined process, but it's worth understanding the conditions under which it operated. First thing: four days long. Not four years, not four months. Second thing: one developer, me. Third thing: strong customer interaction. She was called Yvonne, and she was about that far away from me, where we sat at desks; probably about 20% of the time was spent discussing stuff. Fourth thing: the domain. The problem being solved was a defined problem: a lot of GIS data in one format, and a desire to have that GIS data in another format. In those four days, the formats did not change. The staffing did not change. Company politics did not change. It was massively defined. It's a very rare occurrence. But you cannot scale that four days to four months.
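The compiler comparison can be sketched minimally. A defined process is just a deterministic transformation: well-defined inputs always produce the same outputs. The record format below is invented for illustration, loosely echoing the GIS conversion example, not the actual formats from that project.

```python
# A minimal sketch of a "defined process" in the compiler sense:
# a deterministic transformation of well-defined inputs.
# The 'lat,lon,name' record format here is hypothetical.

def convert(record):
    """Convert a hypothetical 'lat,lon,name' record to a dict."""
    lat, lon, name = record.split(",")
    return {"name": name, "position": (float(lat), float(lon))}

source = "51.4545,-2.5879,Bristol"

# The defining property: run it as many times as you like,
# the same input yields the same output, every time.
assert convert(source) == convert(source)
print(convert(source))  # prints {'name': 'Bristol', 'position': (51.4545, -2.5879)}
```

That property is exactly what most software development lacks: we don't know the inputs, the box, or the outputs well enough for the same run to ever happen twice.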
You cannot scale one person to ten. It doesn't work like that. Most of what we undertake in software is not that simple. It is inherently undefined, and better tackled through empirical process models. Expect the unexpected. Frequent inspection and adaptation. Keep your eyes on what you're doing. Don't assume you're going to know everything. Whenever you offer something, do it in the true spirit of empiricism: be empirical about stuff.

The purpose of a sprint is not the delivery of story points. It's not the delivery of features. That is a purpose, but it's not the purpose. Each sprint offers you a period of time in which you are confirming a hypothesis. And that hypothesis, if you go for traditional scrum where you have a sprint goal, is that you can achieve the goal of the functionality, if you've chosen to split it out into stories. Given what we know, with the people we have, the tools we have, and the time available, we can do this. That's a hypothesis. At the end of two weeks, you get some feedback on that. That hypothesis may be confirmed. It may remain unproven. It may be contradicted. You're then invited to consider: what else might work? Do we need a change of design? Do we need a change of practice? Do we need to back off a little from our feature-driven approach, our feature obsession? Do we need to redefine our relationship with our customer? And so on. I don't know what the solution is. That's the point. It's not defined. I can't tell you why it is that it didn't quite work out the way you expected or hoped for. But it's not a failure. It's definitely not a failure, because what you did is formulate a hypothesis.

Now, if scrum projects were run like this, I think I'd hear of a lot fewer scrum failures, but most are not run like this. They are run as delivery engines with no sense of confirmation, no sense of observation, no idea that we're going to posit something and then seek confirmation through the process. What does it tell us? Does it confirm it?
Deny it? Is it neutral on it? So it's an experiential thing. There is a lot of subtlety here, because you can't tell everything in advance.

Now, why is scrum, and indeed any other agile approach, not like chess? What distinguishes them? This is a question I was asked a few years ago, and this is the answer that I offered. What is the difference between approaches that are iterative and/or incremental, and agile approaches? Haven't we done this all before? I said, I think the difference is not simply in the degree of incrementalism and iteration, although certainly different agile approaches differ there. It's not so much that they are all based on this to some degree or another. It's this bit: whether or not they're nomic.

Now, nomic is a peculiar word, if you've not come across it before, and that's the real reason I put this up. Peter Suber, in the 1980s, created a game called Nomic. And he did this because he wanted to explore something, so he included a very particular rule: you can change the rules of the game. One of the rules of the game is that you can change the rules of the game. We don't do that with chess. We don't do that with poker. Oh, there are a lot of variants of poker, but for each variant there's very definitely a set of rules, and when you don't play by them, you find out very rapidly. Okay. These games are still complex at one level, in terms of the behavior they can create, but they are bounded in a way that constitutions, political systems and legal systems are not. That's what Peter Suber wanted to explore: just as a legal system can change its own rules for how it interprets its own laws, he wanted to do this in a game. And that was his original intention: the original Nomic (you can find the rules online) is a very boring game. One of the first things you'll want to do if you play it is change the rules to make it more interesting. So can you see the comparison with Scrum?
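The nomic idea can be sketched in a few lines of code: a game where one of the moves is rewriting the rules themselves. Everything here is invented for illustration; Peter Suber's actual rule set is far richer.

```python
# A toy sketch of a nomic-style game: one of the things players can
# do is change the rules. Names and rules are illustrative only.

class NomicGame:
    def __init__(self):
        self.rules = {
            # The ordinary, deliberately boring rule: score one point a turn.
            "score": lambda points: points + 1,
        }

    def take_turn(self, points):
        return self.rules["score"](points)

    def amend(self, rule_name, new_rule):
        """The nomic rule: the game may rewrite its own rules."""
        self.rules[rule_name] = new_rule

game = NomicGame()
points = game.take_turn(0)        # boring original rule: 0 -> 1
# The players vote to make the game more interesting, as Suber intended.
game.amend("score", lambda points: points * 2 + 10)
points = game.take_turn(points)   # amended rule: 1 -> 12
print(points)                     # prints 12
```

The game you finish playing is not the game you started with, and that self-amendment is the property being claimed for agile processes here.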
It's a very boring game. It's really simple. It's a basic idea. It's a starting point. It's a framework. You're not supposed to end up with that. It's a starting point. Your job is to find the game that works best for your product and your team. Now, if after however many months you're still playing the same game, either you got lucky, or you've flatlined and you're not changing. In other words, you're not being nomic. So the process involves its own change. That's what makes a thing agile. It's not the speed. It's not the branding. It's not any of this stuff. It is the fact that it is self-adapting. That's a very important but overlooked element.

Now, it turns out that people are part of this. So how are people going to learn? Well, I want to dispel one thing first of all, because it keeps coming up and it annoys me: "Failure is a far better teacher than success." Where was this from? The Financial Times. A respectable paper. But this is wrong. Failure is a lousy teacher. It's a terrible teacher. It's a shockingly bad teacher. This does not mean that failure does not have a role to play, but as a teacher, I'm not impressed. If you're going to learn a musical instrument, how do you learn it? You don't learn just by failing. My older boy, a while back, was learning the piano; he's now going to learn the guitar. It's quite easy to get things wrong, particularly on a guitar. There are lots of people who can't play the guitar. They fail all the time. Most people who pick up a guitar get taught very badly. There is a way to learn the guitar, ways to learn the guitar, that are built on success. Failure is only meaningful when it is given a background, a backdrop, of success. So here's how not to teach a child to play a musical instrument, say the piano. Here's some sheet music. But I can't read music. It doesn't matter, just hit the keys as you see them. Bang: wrong. Bang: wrong. Bang: wrong. Are we learning yet? Bang: right. Brilliant.
Which one was that? We have no context. We have no meaning there. Now, I can't really play the guitar particularly well, but a couple of times when I've sat down with Stefan with the guitar, one of the things we've done is listen to the original piece. Then I've played it through, then he plays it through. He knows what success looks like. He knows what its characteristics are. He knows when he's not doing it. I'm disinclined to call that failure. It's not getting it quite right, but failure is standing up in front of a bunch of people and saying, I'm going to play the most perfect piece of music ever, and then failing to play it. That's probably failure. But getting something wrong during practice? I don't think that's failure. And again, I think this is a dramatic word, an overly dramatic word, for the simple notion of the nudge that errors can give you, particularly when you have an idea of where it is you want to be.

Now, do we have any other observations on this? Well, again, advice that gets peddled. Christopher Walken: great actor, terrible pedagogically. If you want to learn how to build a house, build a house. Don't ask anybody, just build a house. I don't think so. How much complexity is there in a house? How many ways can you get a house wrong? And it's one of those cases: you built this house yourself. Do you know what a right angle is? There is this other thing that also happens when people learn or do something themselves, when they are classically self-taught, as in they have had no mentoring and have not sought advice. The fact is, there is a vast body of knowledge out there about house building. It turns out we are not alone. And I'm going to say the same for software development. It turns out that one or two people have done this before. We know stuff. Our challenge is: how do we get other people to know it? Not by denial.
Of course, we have to let people experiment a little, but this is not the right kind of experimentation. Building a house, first of all, has safety-critical concerns. You know, is it safe? But there's another thing, something I've noticed; it took me a while to notice it. When people are self-taught, without having had any outside input, any form of feedback, any form of guidance, they often pick up some really weird ways of doing things. I think we may have all seen this in code somewhere. We may all know the person's name. They pick up really weird ways of doing stuff. And they often say, well, you know, it works for me, it's how I do it. It's part of their personal style. In other words, it becomes part of their personality. It's part of the way they do it, their style: I'm kind of breaking away from the mainstream and all those boring standardized doors. What happens is you become attached to your mistakes. They become part of your expression. You are reluctant to give them up. So I find it a little concerning that we, as a profession, spend a lot of time being concerned about the propagation of knowledge, and then... I'm citing Christopher Walken, who is an actor and not a software developer, because somebody tweeted this a couple of years ago, somebody I respect very much, as a software developer. And I said, that's terrible advice. We've got lots of developers that do this already. We need fewer. This does not mean that we don't want to let people do their own thing at some level. But this is not the best way to start.

So, in other words, we need to understand how individuals work. Let's go inside the brain for a moment. The assertion that we can learn something from every failure is often heard. A study by Miller and colleagues at MIT tested the notion by looking at the learning process at the level of neurons. The study shows how brains learn more effectively from success than from failure.
Brain cells keep track of whether recent behaviors were successful or not. When a certain behavior was successful, cells became more finely tuned to what the animal was learning. After failure, there was little or no change in the brain, nor was there any improvement in behavior. This is really important, because it means that we are, by default, not hardwired to learn from things that do not work out. Now, we may often say: well, hang on, what about retrospectives? What about reviews? What about all this reflection that is so commonly emphasized within the agile community? Well, that's why we do it. If it were part of our wiring, we wouldn't have to say it. If it were naturally in there... The fact that we can have books and talks on things like retrospectives, and how to offer constructive feedback, and how to learn, and how to get a group of people learning, tells us that it's not one of the things we're necessarily good at. It tells us we are capable of it, but not in a vacuum. That's the point. We can help each other. I find the idea that we should not help each other rather peculiar: you can fail, and I'll tell you "I told you so". What did you learn from it? It's very easy to learn from failure where things are very constrained, in other words, where there is a right answer. Software development is not a domain like that. It is a complex domain. It turns out that we are equipped to do something with failure, just not quite in the way we hoped.

Now, I thought I'd go to the other extreme and give some redemption. I haven't listed it there, but this is also from the Financial Times. It has become commonplace to suggest failure is good for entrepreneurs. In this view, a failure that comes early in a founder's career can teach some important lessons about doing business and harden them up for the next startup attempt. This is orthodoxy. This is standard belief. Laws in different countries are arranged to allow people to fail, based on this belief.
It turns out the reason those laws work is not because of the belief; it's for different reasons. But this is standard orthodoxy, and if you try to challenge it... I think if this were a conference of economists, I'd probably be booed off the stage at this point, because I'm challenging orthodoxy. However, it's good to get some numbers here. I haven't got the exact numbers, but in the UK the evidence is that novices are neither more nor less likely to be successful. So if you've had a failed company, it doesn't mean your next one is going to be any more likely to succeed. Because what you'll probably do, as with a friend of mine who ran a failed company, is come out of it with that belief: you know what, that didn't work; we just need the right people at the right time; we just need to do everything just a little bit harder. We'll just do the same thing, but with a lot more emphasis. In other words, you narrow down instead of broadening out. I like the fact that more thorough work has been done in Germany: it's clear that those whose business had failed had worse-performing businesses, if they restarted, than the novices. So basically we're saying that somebody who has no background in doing this is actually better off, from a probability perspective, than somebody who has a failed company behind them, regardless of what the reputation of that company might have been. So it turns out there's a whole bunch of other stuff on how we learn. There are certain behaviors, certain situations, in which we end up with error-correcting feedback, but the default assumption no longer stands.

Now, why does this matter? It matters because the individuals are at the heart of the process, but also because a few years ago I noticed the language of scrum shifting. This is taken from the Scrum Primer.
"Scrum works by making visible the dysfunction and impediments that are impacting the product owner and the team's effectiveness so they can be addressed." The good thing is that the paragraph does go on to talk about resolving problems in short cycles and experiments. This is good, but there is a challenge I want to make to that idea: making visible the dysfunction. If you've ever been in a long-term relationship, has it ever improved your relationship to make visible the dysfunction of your partner? You know, you're doing that wrong, and you could do it slightly more subtly... You know that really doesn't work. And how good are we at picking this up, making visible the dysfunction? It turns out we are shockingly bad at it. Shockingly bad. Projects can fail in very consistent ways, and the people inside the projects are still oblivious to why such failures occurred, because they are human beings. They are naturally drawn towards a particular world view.

So, a project I had some technical involvement with a few years ago ultimately failed for reasons well beyond the technical, though the technical reasons certainly contributed. I remember thinking that had the project involved public money, I would have been inclined either to back away or to blow the whistle. There was no way this project could be successful. But the guy who got me in, the process guy, was very optimistic; we had both worked together on another project, but this was a different political climate. And every two weeks they failed to meet their sprint objective. Every two weeks they had technical challenges; they had team challenges. Now, I don't know about you, but that sounds like feedback to me. You are being told: this is not working. We may not know what's not working, but we know that something is not right.
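That "sounds like feedback to me" point can be made concrete: treat each sprint as an experiment and notice when the same hypothesis keeps being contradicted. The function and threshold below are invented for illustration, not a record of the actual project.

```python
from collections import Counter

# Each sprint tests a hypothesis: "with the people, tools and time
# we have, we can meet this sprint's goal." Outcomes are illustrative.

def read_the_feedback(outcomes, window=4):
    """Flag when recent sprints consistently refute the plan-as-hypothesis."""
    recent = Counter(outcomes[-window:])
    if recent["contradicted"] == window:
        # Change the experiment: design, practices, scope, or staffing.
        return "change the experiment"
    return "keep observing"

# Bad news every two weeks for a year, as on the project described:
print(read_the_feedback(["contradicted"] * 26))   # prints "change the experiment"
print(read_the_feedback(["confirmed", "contradicted", "confirmed", "confirmed"]))
```

The point is not the threshold; it's that the signal was there every two weeks, and the response was to dislike the messenger rather than change the experiment.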
There was a suggestion of continuing; in other words, the experiment, the experiment that hypothesized that, given the people that we have, with the skills that we have, the architecture that we have, and the feature set that we are going to undertake, we are doing it right and this experiment is going to work. Let's try that out. And every single experiment was coming back: no, no, that's not right; that's not working for you. Eventually the management decided to pull the plug on this project, or rather pull the plug on the agile process, and not because they sat there and went: you know what, the dysfunction is highly visible, we need to solve it. They said: we don't like hearing bad news every two weeks. This project overran to twice its original one-year length. What, you want all your bad news in the second year? You could have had it early. You used to have it every two weeks, telling you: you can't do it like this; this isn't going to work. Of course we don't like to hear that, but that's the nature of the individuals in the whole process, because that's how people think. You don't like to hear bad news. Why would you like to hear bad news? Why would you like to have your work criticized, even by something as abstract as a process? But that's not how you read it. You read the situation, and you misattribute it to other things. So it turns out that people, unfortunately, matter. Why is that unfortunate? Because people are really difficult.

An observation from Nassim Taleb: people overvalue their knowledge and underestimate the probability of their being wrong. This governs us in all walks of life. It governs us in software development when people talk about estimation; it also influences our ideas of the precise nature of the architecture that we might adopt for a particular system. It's all over the place. But at the same time, if you get people right, they can turn out to be surprisingly smart. Now, the phrase "the wisdom of crowds" gets thrown around an awful
lot. The book came out ten years ago, and it's worth reading before you quote it, or quote the title, because it is a good, catchy title. James Surowiecki is very, very careful and very clear in outlining what he considers to be the preconditions for an intelligent group of people, because we also know that a group of people together can act with remarkable stupidity, a kind of stupidity an individual would never undertake. So how can a group of people be smart? Well, it's to do with the interactions.

Diversity of opinion. If you end up breeding a monoculture, if your team is filled with people who think the same way, then you're not going to get the wisdom of crowds, because you are all fairly close to one particular point of view. You've already started down the groupthink route. Now, that may mean that for the things you are right about, you are right; it will also mean that you are very weak at recognizing incorrect assumptions. A friend of mine ran a startup, and it was really interesting the way that he interviewed: he interviewed, basically, for people like him and the other founder. They went out of business eventually. I remember a couple of times when I met him in the pub and he'd say, oh, we just had to let somebody go. Oh, why? Because, well, he was a bit different to the rest of us, and he had these other ideas. And I'd seen the work; the work was good. But clearly he just didn't fit in. And why? He did not fit in, not because he was a bad person, I don't think, and not because he was toxic to the team; it's just that he didn't quite agree. He countered; he challenged some of the design decisions and the approaches. You know, that turns out to be surprisingly healthy.

Independence. People do have to be thinking in a way where they are not massively influenced by other people. When you get people in a group and you ask them something, for example a show of hands on something (I'm not going to do it now), people look around. If
you ask for group estimation, if I get five people together and I say, how long do you reckon this feature is going to take to implement, whoever breaks the silence first has the greatest influence, and if they happen to be the most senior person at the table, they have massive influence. Other people don't realize it. They don't think, you know what, I'm going to think independently of him. That's not how humans work; we respond immediately. If somebody says, I think that's going to take two weeks, people don't independently think, I was thinking it was one day, or one month. They are now thinking: do I think it's more or less than that? Humans work in pairwise comparison. It turns out we're very good at comparing the value of two things, but we're not very good at giving the absolute value of something out of the blue. And we are also influenced by other people: it's a priming effect. The priming effect is related to something known as the anchoring bias, and there's a beautiful experiment from a few years ago. I forget the details, but it concerned the influence of arbitrary numbers on our subsequent use of numbers. Lots of these experiments are done on American first-year undergraduates, so we know a great deal about the psychology of American first-year undergraduates. But it was along the lines of exposing people to a series of numbers, whether they were interest rates or numbers related to something utterly unrelated to the question they were then asked. The question they were then asked was: how many nations in Africa are members of the United Nations? And it turns out that when people were exposed to numbers around the tens, twenties and thirties, they gave their estimate of the number of African countries that are members of the UN as around the tens, twenties and thirties. When they were given higher numbers, in the sixties, seventies and eighties, as their priming set, they gave
numbers of countries that were of that order. In other words, we are influenced by numbers that are entirely unrelated. If I remember correctly, the answer is 45 or 46; I can't remember whether that includes South Sudan, but I'm sure some people will check on Wikipedia. It was right in the middle. We are easily influenced. We don't like to acknowledge it, and we don't believe we do it, but we are. So: some degree of independence.

Decentralization. The idea that people are able to draw on separate knowledge bases, separate experiences, separate backgrounds. That's really quite important. It's not simply a case of independence, and it's not simply a case of diversity of opinion: genuinely, people look out at the market in different ways. If you're looking at product development for mobile phones, you don't just want people on your team who are pure software developers, but you also don't want pure software developers who are only interested in mobile phone products. You want people who have interests all over the place. And if you're doing Android development, make sure you get an iOS person on the team, because they're really annoying, but they will give you something else, and vice versa.

And aggregation. None of this means anything unless you have a way of bringing it together; otherwise you end up with islands of opinion and knowledge. So, in other words, the wisdom of crowds is a very delicate creature. If you wish to harness that intelligence, you have to be very careful, because time and again we have discovered there is little correlation between a group's collective intelligence and the IQs of its individual members. We have seen super-bright teams fail. However, one interesting observation, which may come as a surprise to many of you, but not to a few of you (okay, it would be nice if it came to be less of a surprise): this was a side effect of some very different research. It wasn't the original intent of the research to prove
or disprove or discover this, but it was an observation, and the experiment was sufficiently carefully run that when they looked at the results: wait a minute, this is interesting. And I have to say, ever since reading this a couple of years ago, whenever I'm running workshops, I have made sure I divide up the teams very differently as a result of it. I think it's unfair for the women to get an advantage, so they have to help the other teams. But really, what I believe is going on here is to do with styles of communication. You are necessarily including a very different style of communication, or, to be precise, there's a whole bunch of stuff about what you might say, what you won't say, what you would say. Styles of communication matter, and this changes the landscape. So it's a very subtle observation. Again, this is not something you can get certified in.

So let's go to something else here. Let's talk about this other thing: working software. I want to go back to this question of the way that people often focus on value creation. Now, of course, if you're not developing software because it has business value, then it probably has some other kind of value, otherwise you wouldn't be doing it. It probably has some kind of intrinsic motivation. It might be open source software; that's a very different effect. You might be doing it for a number of other reasons. But I am a little concerned when people just say, you know, that's the most important thing here: value creation. Is that really why we get out of bed in the morning? I mean, I can't really say that I got into software development because I was looking forward to adding value to somebody else's product. It just doesn't sound right. It doesn't sit well with me. That's not how human beings talk. This is kind of how business people are supposed to talk in a caricature of business people; it's not even how they talk. But one of the things I thought was
interesting was this observation: client and stakeholder value is not the only thing that is important; agile methods balance two things, one is maximizing value creation, the other is maximizing the chances of actually delivering something. That's a very subtle distinction. In other words, what you are trying to do with a group of people is to actually try and deliver something, and it may be that the thing that has the greatest value is the thing that has the lowest probability of being delivered; in other words, there is massively high risk. So there is a balancing act here, and the observation is that these two are sometimes in conflict. This is why, in the past, I must admit I've had a little bit of a gripe when people talk about prioritization by business value. Okay, how do you prioritize by business value? How do you know what the value is? How do you know in advance what the value is going to be? It's an estimation exercise, and like any other estimation exercise it is prone to error. We believe that if we add these features we will get more users. Well, that's a belief. It's certainly one that we can go out and try, but it's still an estimate. It's an estimate that it'll be worth this much; it could be worth this much; it might be worth nothing, and nobody will buy it. So basically, prioritizing by an estimate of value is a different thing. And then it gets more interesting: value is not the only way we can prioritize; we should prioritize by risk as well. In other words, prioritize based on reducing risk; prioritize based on things that we know are minimal effort and minimal risk, but potentially big wins. Well, let's prioritize those. They may not be as big wins as other things, but the idea is that there is a balance here. Those priorities don't form a line; they're not linear; there's a landscape. Now that sounds really hard, and that's because it is. There's a landscape that balances many factors, and perceived and estimated business value is only one of them. Now this
idea of actually delivering something: I want to refer to a piece of research a friend of mine, Allan Kelly, directed me to a couple of years back, on alignment. It turns out that being aligned with the business but not effective is not actually that good: our costs, our spending, go up, and our growth, well, it doesn't, it shrinks. On the other hand, what if we try to become better: let's do this IT thing, let's do this software thing, let's become better at doing software, improve our skills, improve our infrastructure. If you take a team of people that are currently, as it were, shielded from the flow of the organization, so they're not particularly aligned with the organization and they're not particularly effective in what they're doing, and you throw them into the path of the oncoming train, the alignment train, it turns out they flounder. They struggle. They're not really given an opportunity to learn. They get plenty of that failure we were talking about earlier, but they don't actually get a chance to improve; they're in constant firefighting mode, and it turns out firefighting is a very expensive activity, which is indicated there. It turns out that, rather than moving in elegant, short feedback cycles, they feel like they're being constantly harassed. They never have time for anything; they lack the time to consider, well, how would we do this right? Every deadline is sitting over them like a vulture. On the other hand, if you continue, so as I said, not doing the wrong thing better: continue whatever it is that you were doing, the way that we deal with the rest of the organization, but let's see if we can improve something else. Continuous integration, testing, team togetherness, meetings, how we've laid out the tables socially, whatever it is. Let's have some lunchtime learning sessions, because clearly not everybody is up to speed, and just because people don't have skills doesn't mean they can't have them. So there is this idea that when you get a group of people who are, eventually, incredibly skilled at this, when
you ask them, you know, that wasn't quite the right requirement, they're able to change it easily. It turns out it's easier to change, a lot easier. I experienced this one recently: it's easier to take a well-structured code base, that is, you know, tested and all the rest of it, a well-structured code base that does slightly the wrong thing, and make it do the right thing, than it is to take a messy code base that does slightly the wrong thing and make it do the right thing. In fact, it turns out it's easier to do the first case than it is to take a messy code base that somehow manages, by luck and good fortune, to do the right thing, and now you want to do another thing with it. Well, that's a big ask. It turns out that's a very expensive thing to add; it's a very precarious balance. So the concentration, the emphasis on skills, the ability to increase the probability of actually delivering something, has a reflection in the way that we want to emphasize skills and learning. Now there is another interesting observation: how do people get motivated? Dan Pink makes this observation: if you want people to perform better, you reward them. But that's not what happens. He makes an observation based on a specific experiment, but we also have some very interesting confirmation from the financial sector: you've got an incentive designed to sharpen thinking and accelerate creativity, and it does just the opposite. It dulls thinking and blocks creativity. It turns out that people in knowledge work have a very different reward system: once they reach a particular level of remuneration, it's no longer money that matters. The thing that people do care about is comparative. Remember I said earlier on that people are very good at pairwise comparison? Within an organization, software developers look across at what UI designers are getting paid, and they'll get pissed off, but they won't look across at software developers in another organization and necessarily make the same comparison. So it turns out
that money is not the driver once you reach a particular point. Another study had something to say on this one. This was a study of six hundred managers across a number of companies, managers of people engaged in knowledge work, so not necessarily software development, and it found some very interesting things. What the managers ranked the highest was recognition. Okay, recognition for good work, that was number one. Sadly, they're wrong. When they asked the people themselves, what they found is that progress is the most meaningful. What motivates people is progress, coupled with a sense of purpose and some degree of control over their own work: autonomy. When you sense that you are making progress... if you've ever been on a really large project that hasn't got any obvious progress indicators, you feel like you're stuck in the doldrums, literally: the project is going nowhere. You like to feel progress, either problems solved or other forms of progress. This is why half of the things that are advocated in the name of agile development work; the reason half of them work is that they are visible indicators of progress. There's a profound satisfaction in moving a card from one part of the board to another, especially the column that says done. There's progress: you actually got up and physically did it. You run some tests, that was good: look, I've got more tests than I had yesterday. That's progress, and they're green today, which is even better. In fact, sometimes, and this is an interesting one, if people have a test suite that runs fast and they've been away from their desk for a while, when they come back they sometimes just rerun the tests. If they've left their IDE open, they will rerun the tests. I've done this a number of times, because, I mean, what could have changed in the last few minutes? Nothing. But there's a satisfaction: look, progress, I've done something, and it's more than what we had before. All of these are subtle indicators of progress. Now you have to be very careful
with certain indicators of progress. Let me pick on the burn-down chart, and how very careful you have to be about how you apply this one, because clearly it gives a formal view of progress: the amount of work left to do against time. You see progress; if you don't make progress in the amount of work left to do, then it stalls, sort of. But there are a couple of other things you need to take into account to make this work; we have to avoid some of the very common pitfalls. I noticed a few years ago that there was a strong recommendation that people should have estimates, in hours, of the items in the current backlog that they were actually going to burn down. Now, I regard hours as unnecessarily precise for something that I don't know about. It turns out that when you use a particular unit of measure, people will estimate and measure according to that measure. To give you a very simple example: if I say I will see you tomorrow, or I'll see you in a day, you kind of know what I mean. If I say I will see you in 24 hours, I'm not actually saying the same thing. If I say this will take about a month, I reckon that's different to saying this will take 22 working days; this will take about a year, versus this will take 250 days, or something like that. In other words, there is a difference there in how we do this. If you say this will take this many hours, people will track you by hours, and that seems, as I say, unnecessarily precise given our inability to estimate correctly. But there's something else I want to draw your attention to in the graph. If we're measuring time in days, and we're measuring work left to do in hours, what's this graph a graph of? It's a graph of time against time. If you did a numerate degree, and I'm sorry, mathematics doesn't quite count, computer science certainly doesn't, one of the things you had beaten into you was dimensional analysis: check what you're doing. What's time against
time? What is that actually a measure of? I mean, currently, based on my inertial frame of reference, I'm undertaking approximately one second per second through spacetime, but I don't think we're talking Einsteinian relativity here. What we're talking about is utilization, because if you say a standard working day is eight hours, then I would expect, at an individual level, for a hundred percent utilization, a bad idea, that everybody's burning through eight hours a day, and if they're burning through less, and I obsess about numbers, then I might get worried. So what's this a measure of? It might be a measure of the goodness of our estimates; it's not entirely clear what we're doing with this. Particularly when sometimes people will actually play it like this: they will take the hours estimated and subtract the hours worked. I estimated this was a 10-hour task, I've worked five hours on it, therefore I am halfway through it. It is entirely possible that I estimated 10 hours, I've worked five hours on it, and I've got 12 hours to go. Good grief, how did that happen? Well, reality struck. I'd have been better off saying I estimate a couple of days, I've done the better part of a day, and I still think it's a couple of days. It's the work remaining that matters, but it's easier to measure the work that you've done, so I've found some people micromanaging this one and obsessing over the numbers. A more meaningful measure is this one: hours of work remaining. It looks different. However, it turns out that if I have a number of stories, say 10 stories, and we equate a story to a task for a moment, I can do 80% of my estimated work across all of them and still have finished nothing, which is kind of a problem we need to solve. It turns out that, strictly speaking, you shouldn't mark it off until you've done it, until it's reached done. Now, what I'm describing here are very common failure modes that I've seen when people have not really understood the
questions that a burn-down chart is supposed to raise and the values that it's supposed to show. And I've taken a very isolated environment: I'm not even worried here about requirements creep, or about re-estimation to any great degree, and yet still we find that there are problems with the current visualization, the way that people normally exercise it. For me, a build-up chart is always slightly more positive, because the graph goes up. Now, people got into the habit of calling these burn-up charts a few years ago, which I think is terrible. You know, I'm both a father and someone interested in space travel, and in both contexts burn-up is really bad: if a child is burning up, it means they have a fever; if a spacecraft is burning up, oh, that's really bad, you don't want that. So build-up is more positive: we build up, yeah, it's a much better connotation. Now again, there is the orthodoxy that assumes that, although this is a progress indicator, a straight line is the right thing. Humans aren't very good with straight lines; we struggle with them. But also, if we go back to the idea that the development we are undertaking is genuinely new, it's not a new system that's like an old system that we did before, it's not continued work on a system that is well understood; if we are undertaking new work then, you know what, we have no idea. So with one of my clients, and this was really interesting, one of the things we did was separate out the things that are familiar, that we have some kind of confidence in, from the things where we really have no clue, where we don't even know the answer to the question because we're not even sure what the question is. We separated those out so that one number didn't mess with the other, and that's quite important, because when you're doing that kind of work in the second category we actually have no idea whether or not this optimization, this technique, this feature will work, is implementable, or has any appeal to
any of our A/B testers. We have no idea; it's a complete experiment, genuinely open, and we can't tell you how long it's going to take. And it's very, very likely that your natural profile should look like this, with a kink in it, or in fact even a curve like that, because you're learning. Slowly you tend to work a little better: don't rush things, explore a little bit. You're not making any meaningful progress in terms of functionality, but you're probably gathering a lot of knowledge; there's a lot of time spent thinking around things. The progress you're making is not as visible, perhaps, as other forms of progress, but then, if you measure the functionality in terms of delivered work, it'll rise as you get the knowledge. So it's a very different anatomy to the way that people, if you like, reason. And in full awareness of the fact that I am about to go over time, I'd like to consider one more element of how our views on our practices are often distorted. It's a thing called hyperbolic discounting: organisms tend to value the present far more than the future. In other words, you take a reward that is immediate rather than a long-term benefit that is abstract; if it's immediate and concrete, you will take it. This affects us in a very obvious way. Kent Beck highlighted this: if all you could do is make a long-term argument for testing, you can forget about it. Some people would do it out of a sense of duty or because someone was looking over their shoulder, but as soon as the attention wavered or the pressure rose, no tests get written and everything falls apart. In other words, even though you may have instilled in people the understanding that, you know, testing is a good idea in the long term, if that is the only thing you have been able to do, and you've had to do it by active supervision and box-ticking, it is very likely that people, in spite of intellectually knowing that there is a benefit, will take the short term over the long term. I love this quote: economists dislike hyperbolic
discounting because it is irrational; they prefer exponential. The fact that it's what people do doesn't seem to disturb them. Which might explain an awful lot about the economy. But the very fact that it is uncomfortable is something we need to take into account: it is normal for people not to value long-term, abstract gains, especially when they're coupled with probabilities. So on that note, I'd like to end with an observation from Helen Sharp of the Open University. She undertakes quite a lot of research on, as it were, the people side of stuff, and she gave a keynote a few years ago at a conference that she simply titled: software development, a social activity with technical practices. And that draws us right back to the emphasis on individuals and interactions. The processes and the tools help glue us together, and they also come out as byproducts, but without the emphasis on the individuals and interactions we will be left in a constant state of mystery. Thank you very much.