So let's get started. I haven't heard the bell ring yet, but I don't know if that happens at this hour. Anyway, this is it: the last day of the conference. Soon we go home. I'm feeling kind of sad to leave; it's been a great week. Was anyone at the talk I did yesterday? No? Great, no overlapping audiences, which means I'll be making new acquaintances. So this is kind of a mysterious title, but I will explain it in a moment: the leprechauns of software engineering. Who here is a software engineer? Not so very many of you. All right, so maybe the label is superfluous. But still, a lot of what we do in software development is seen as belonging to that particular discipline, software engineering. And many of the things we talk about in the agile community take place within the context of a debate which started in the late sixties. 1968 was the year when people came up with this notion: there is such a thing as software, and there are disciplines concerned with the engineering of various things, so why not apply the notion of engineering to software? Thus was born software engineering. That was not something which came naturally; initially the phrase "software engineering" was seen as provocative. But that's another talk, one I've given before, on the history of software engineering and the continuity between that history and the history of agile. I just wanted to briefly bring some of that background into this talk. Now, a different kind of history. When I was a kid, I used to read the comics. Any of you remember that? The kiddie books and comics you read as a kid. It was not all adventure comics or funny comics; there were also things of a more educational interest. And one of the things I used to like a lot was the "did you know" section. I don't know if that rings a bell for you. Interesting factoids, things about the world that might awaken the curiosity of a young person.
And one of the things I came across was this notion that we only use 10% of our brains. Did you know scientists have shown that people use only 10% of their brains? Another fun one: did you know that the Great Wall of China is the only man-made structure you can see from space? Actually, that one has inspired other vocations. But the key thing, of course, about those two so-called facts is that they are not actually facts. Neither of them is true. We don't know enough about neuroscience to even make such statements as how much of our brain we actually use; we use all of it. The brain is kind of a holographic thing, so that statement doesn't really make sense. It was probably derived and distorted from something that somebody said at some point, which transformed into something else. The same goes for the story about the Great Wall of China: there are many man-made structures you can see from space if you interpret "space" as a certain altitude, and you can see nothing if you go high enough. These are the kinds of things I used to be tricked into believing and then found out, much to my annoyance and embarrassment, were not actually true. So here is a great resource at any point in your life, because to my disappointment and surprise I keep learning that some things I believe are false, even at the middle age of 40: I periodically check things on Snopes.com, a site devoted to debunking urban myths. What's interesting is that I think we need a Snopes.com for software engineering: for things that people will tell you, but which in fact turn out not to be true. So I'm going to be telling you today about things I have come to call leprechauns. Why leprechauns? I'll get to that in a moment. But I want to start with a concrete example. Isn't this something we would all like to know: where do bugs come from?
I advanced a theory, a hypothesis, about that yesterday in my other talk: that bugs in the source code tend to come from bugs in the human brain, those we call cognitive biases. But there is a different kind of explanation which goes around, and has been kicking around for many years now in the software engineering community. It goes something like this, and you can probably use Google to find a very close approximation of the quote: 56% of all bugs identified in projects across the whole industry originate in the requirements phase. That's something I've read time and again. Of course, the thing to remember is that this is folklore; it's myth, not fact. Actually, it cannot even be a fact. It's not possible for this to be a fact, but I will come back to that, and I'll tell you the results of my investigation. I think when you come across something like that, a sweeping generalization with a number in it, the number would seem to indicate that some kind of research was done at some point, and that this is some kind of aggregate result from many projects. You would think there were studies, and that several of these studies were summarized into that one number. So a very useful and healthy thing to do when you come across such a claim is something that has been encapsulated in the Wikipedia community as "citation needed". That's the phrase they use. And there is this very nice illustration from the comic XKCD. If you don't read XKCD, you might find it interesting: funny at times, funny in a tragic sort of way. The author is someone who really gets what it is to be a geek, and his work has geek culture all over it. Now, politicians are very prone to that kind of thing. They will say something with poise and assurance: "I say that jobs in this country are being destroyed because of blah, blah, blah." And nobody usually thinks to ask them in the moment: wait, how do you know that? Do you have proof? Do you have evidence?
So that's the kind of thing the XKCD author is dreaming about: someone waving a placard saying, where is your evidence? Give us a citation. But the more disturbing thing, I think, about the 56% claim is that when I personally express disagreement with that kind of thing, I tend to get a response which goes something like this. I've made the claim starker, more clearly false, so that you can see the logic. Say someone is telling me: 56% of all bugs in our programs are in fact introduced by a leprechaun called Murphy. And sometimes it feels a little like that. Where did this bug come from? Certainly not from me; it worked on my machine. So this may even feel like a correct explanation for bugs: they are introduced into our source code by gremlins or leprechauns or something like that. But then of course we wake up and say no, no, clearly that's not the case. So you would raise your hand and say: what? No, this is a false claim. And then the person you're talking to says: oh, you think it's false. So you think it's more than 56% that comes from leprechauns, or maybe less than 56%? And of course the answer to that is no, it's not the number that matters. There is no such thing as leprechauns. That's the analogy that came to mind when I had this kind of discussion, and that's the reason I coined the term leprechauns. So I used this skeptical-thinking reflex when I was confronted with the claim that 56% of all bugs come from the requirements phase. This is not just, you know, something people say that doesn't matter. It turns out to be a pretty big deal, because if you think about it, there were two things which underpinned the initial justification for the waterfall cycle. One of them, and I will come back to the other, was: if you don't catch a mistake in the requirements, it's going to cost you a lot more to fix it later, when you get to the coding phase.
And the other was that most of the mistakes in fact come from requirements. So if you were convinced of those two things, you would be very well justified, it would be a smart conclusion, to decide that we must take requirements work very seriously. We must invest most of our effort there, and certainly not rush to programming. Do a lot of work up front in the up-front phase, because studies have shown, researchers tell us, well, people tell us that researchers say, that's where the mistakes happen and that's when it's cheapest to fix them. That's a legitimate conclusion if the premise is true. And because it's so important, it's important to check where that premise comes from. Where is the evidence? Where is the original study? So I started digging around using the citation-needed reflex, and it turns out that most of the people who are currently saying this are referencing someone else. There are some citations, but they are usually citations to someone who's citing someone else again. So what do you do? As good programmers, you recurse. You want to see where that thing bottoms out, so you go up the chain of citations, and eventually you find someone who says: one study by James Martin showed that. So it's no longer "many researchers have worked on this and come up with that one number as a summary"; it's one person, James Martin, in one book. But that at least has the benefit of narrowing things down. So you go and try to get that book. That's not easy, because it's been out of print for decades. I forget the exact title; it's a book by James Martin, from the sixties I think. The nice thing now is you can actually order used books on Amazon, and you can get them very cheap, because they are used library books returned to circulation through Amazon Marketplace by booksellers. Books that libraries are getting rid of, basically. So I got this one for, I think, two euros.
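The "as good programmers, you recurse" step can be sketched in code. This is a toy illustration only; the citation chain below is entirely invented, not a real bibliography.

```python
# Hypothetical citation chain: each claim points to the source it cites.
CITES = {
    "blog post (2010)": "textbook (2002)",
    "textbook (2002)": "survey paper (1995)",
    "survey paper (1995)": "James Martin (one bank, n=1)",
}

def primary_source(claim, cites):
    """Follow the chain of citations until a claim cites nothing further."""
    if claim in cites:
        return primary_source(cites[claim], cites)
    return claim

print(primary_source("blog post (2010)", CITES))
# → James Martin (one bank, n=1)
```

The point of the exercise: whatever the length of the chain, it bottoms out somewhere, and that is the thing worth reading.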
So there I am, flipping through the pages, trying to find the source of this observation. How many studies, how many scientists were involved, how many projects? What do you think was the sample size, to use a technical term, for the study which showed that 56% of all bugs in projects come from requirements? Take a guess. This is a statistical result, so the bigger the sample size, the more convincing it should be. Any guess? Ten, OK. If you came across a study that said "we studied 10 projects and we came to that conclusion", would you find it very convincing? Not a lot. Not a lot. How many would it take to convince you? Something in the hundreds, maybe. I don't know why, because it depends on so many other things, but we would probably tend to be more convinced if someone came and told us "we studied 100 projects across many different kinds of enterprises and this is what came out". Actually, I'll spare you the hunt for the exact detail. The sample size was one. It's one company that James Martin was working with, a bank, and he's not a scientist; he's a consultant. So that very widely cited figure of 56% turns out to be almost, not quite but almost, made up. And yet it's used as a critical justification for many things in software engineering. So, we're going to keep track of Murphy. I'm just taking you on a tour; we only have a short span of time, so I can only give you a few highlights. Hopefully we'll have some time at the end to take questions. But be aware that there are more sights to see on this exploration of leprechauns in software engineering; I'm just giving you some highlights. Next: the cost of change. How many of you are familiar with Kent Beck's book on Extreme Programming? A few. So you will remember this: the famous, fabled cost of change curve.
Now, the interesting thing in Extreme Programming, in the theory of Extreme Programming, is the claim that XP flattens the cost of change curve, but I'm not going to talk about that. I just want to focus on this curve and where it came from. So what does it say? If you look at the curve in terms of what it means, what's on the bottom axis? The phases of the life cycle. One thing to notice is that the curve is smooth, continuous, as if the bottom axis were something continuous like time, but it's actually a discrete scale. And the y-axis is a cost. So the unit would be what? Money, except of course that in the software business we don't count cost in terms of money most of the time. So that's already something where we would be interested in the details of how this was measured, because this is a graph, so it presumably came out of some studies. It would be interesting to see if the data points came from hours or dollars or some other unit. Those are just a few of the questions that might pop into your head when you see something like this. Now, Beck is very honest in his book. He says: I drew this from memory, from my university days, so I got this from a teacher, and maybe I'm not drawing it quite right, but you get the general idea, which is that it costs more to make a change in a software project as time, or a surrogate of time, namely the phases of the life cycle, goes on. That's where I first came across the curve, because I'm a fairly young developer. Some of the people who have been around for longer probably saw it originally in a different form. So I'm just going to show you the oldest form of that curve. Do you notice the difference? This one is linear whereas the previous one was an exponential, but of course this is a log graph, which is why: a straight line on a log graph corresponds to an exponential. And this one is more interesting because you can see there are error bars.
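That remark about log graphs, that a straight line on a log axis is an exponential on a linear axis, is easy to verify numerically. The cost figures here are hypothetical, chosen only to show the arithmetic, not measured data:

```python
import math

# Hypothetical cost-to-fix figures rising tenfold per phase.
phases = ["requirements", "design", "code", "test", "operation"]
cost = [1, 10, 100, 1000, 10000]

# On a log axis an exponential becomes a straight line: the difference
# of log10(cost) between consecutive phases is constant.
log_cost = [math.log10(c) for c in cost]
slopes = [b - a for a, b in zip(log_cost, log_cost[1:])]
print(slopes)  # constant slope (here 1.0 per phase) ⇒ exponential growth
```

So when someone shows you a "linear" plot, the first thing to check is whether the y-axis is logarithmic.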
That's good, right? It's a scientist being honest: we measured, but we are giving you a summary of those measurements and there is some uncertainty; we saw measurements that ranged from this to that, but overall it seems to be a very nice fit. The originator of that curve is Barry Boehm. But there you probably notice another difference. Anyone spot it? This is not the cost of change curve; this one is about fixing defects. So in 1976, what Boehm was saying was something slightly different, or maybe significantly different depending on how you interpret it, from what Beck was saying in 2000. The original finding was: if you take the average cost to fix a bug when you detect it in a late phase, and compare it to the cost to fix the same bug when you detect it earlier, say in the design phase, you get a ratio. So that curve is a plot of ratios; that's why it's a relative cost-to-fix curve. And you see this everywhere. There are tens and maybe hundreds of citations of that finding, in various forms. Here is one of the more exotic forms: a pyramid interpretation of the cost-to-fix curve. Yes, it's a little like the food pyramid. That's 20 years after the original finding. And this one is even more elaborate: something published by the US Federal Highway Administration in 2007, inspired by Steve McConnell, who has done a lot of work to popularize some important ideas in software engineering, and in particular some ideas from Boehm. What this one shows, and I think it's interesting for that reason, is a matrix between when the error was introduced and when it was detected. Of course, if you introduce an error in the coding phase, it doesn't have much time to blow up exponentially. But if you introduce a defect in the requirements phase, then it has time to blow up to more than 100 times the original cost to fix.
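The matrix idea, cost as a function of the phase a defect was introduced versus the phase it was detected, can be sketched like this. The per-phase growth factor here is invented purely for illustration; it is not from Boehm or the Federal Highway report:

```python
# Toy version of the introduced-phase × detected-phase cost matrix.
phases = ["requirements", "design", "code", "test", "operation"]
growth = 3  # hypothetical cost multiplier per phase of delay

def relative_cost(introduced, detected):
    """Relative cost to fix a defect, by where it entered and where it was caught."""
    gap = phases.index(detected) - phases.index(introduced)
    return growth ** gap

# A requirements defect has four phases to "blow up"; a coding defect only two.
print(relative_cost("requirements", "operation"))  # → 81
print(relative_cost("code", "operation"))          # → 9
```

This is exactly the shape of claim the talk is questioning: the model is easy to write down, which is part of why it spreads; whether real defect data fits it is another matter.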
Yes, that's an excellent question. So, you guys know bugs, right? And you know that bugs come in all sorts of varieties. Have any of you had this happen: you spent one hour, maybe two hours, maybe days arguing with the customer over whether something was a bug or not a bug? Has that ever happened to you? Right. And we tend to do that for many things; it's not just once in the course of a project, it's a recurring discussion. We spend a lot of time arguing about those things. But the scientists who studied this must have had a clear sense of what was and what wasn't a bug; otherwise they couldn't do the research. So there's an interesting debate there, because the original research was in the seventies. There were no agile projects then, or at least they were not talked about much. And the model most people would have in mind, and anyway the way the curve is framed, suggests that the projects studied were waterfall-type projects. But what's more interesting is that when you come across the curve itself, if you have the citation-needed reflex, you're going to check the data. What you want to say is: show me the numbers. Where are the numbers? Raw numbers, right? So this is... do you think this is the same graph? I see some of you squinting; it doesn't quite jump out at you. This is one of the graphs from a paper by Boehm, published a few years later, which looked in more detail at exactly the same project that the curve purports to describe. The TRW survey, I don't know if you can read that, the black dots: those points are supposed to come from this data, more or less. Or rather, it's not quite clear, because there are two different papers, so it's an inference on my part that they refer to the same project. But there is a lot of text in addition to the pictures describing where he got the data from, and it turns out those data points are actually students.
There were students that Barry Boehm was teaching, and he took some measurements of how long it took them to do various kinds of work. So my question to you is: do you see an exponential curve here? Neither do I. I came across this paper, and it's a lot of research to find these things, and I was hoping to see data that corresponded to an exponential rise. Maybe not exactly, maybe with some error bars, but I was expecting to see that. Instead it's kind of ups and downs. And that's not the only series of data points behind the curve; there were others. So again you go looking: any time the curve is used in a serious book, you will usually see a lot of citations, and you can follow the chain of citations back to the source. One of the more serious sources I came across was a fairly recent paper referring to studies at Hughes Aircraft. Also a very serious software installation. It's not your rinky-dink start-up where you could always say, yeah, but these guys are not very disciplined, what they do is not representative. No, this is a big corporation. And this is interesting because it's the most detailed account of the raw data that I could find; probably the best in terms of quality of the original data. There may be all sorts of problems in doing the measurements, and that's also part of my issue with the whole thing, but at least you have numbers, so you can check whether the numbers tell the same story as the curve. Now, I'm not going to ask you to infer that from just looking at the numbers; I'm going to show you the curve. But it's not in the paper; it's a curve that I made using a Google spreadsheet. So, same question as before: do you see an exponential there? No. It's actually cheaper at Hughes Aircraft to fix a defect in maintenance than it is in functional testing.
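A crude way to ask "do these numbers actually look exponential?" is to check whether successive ratios are roughly constant. The two series below are invented for illustration; they are not the Hughes Aircraft or student data:

```python
def looks_exponential(series, tol=0.25):
    """Crude test: an exponential has roughly constant successive ratios."""
    ratios = [b / a for a, b in zip(series, series[1:])]
    mean = sum(ratios) / len(ratios)
    return all(abs(r - mean) <= tol * mean for r in ratios)

smooth = [1, 3, 9, 27, 81]  # a textbook exponential
bumpy = [5, 4, 7, 6, 5]     # ups and downs, like the kind of data described above
print(looks_exponential(smooth), looks_exponential(bumpy))  # → True False
```

It's a blunt instrument, but even this level of checking is more than most citations of the curve ever do.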
That's contrary to the received wisdom, and to everyone who cites the curve. And it's actually more expensive to fix a defect in the architecture phase than it is in the design phase. So no, that's not the same story. I've only plotted one data series here; if you remember, I can show you the original data. It's the same matrix we saw in the Federal Highway graph: they are actually tracing the cost of defects based on when they were introduced and when they were detected. You might ask: how do they know that a defect was introduced in requirements? I don't know. I'll come back to that; it's an interesting observation. Let me first show you what the source data said, and we'll come back to this question of personal experience and what we all know. So here again, this is the other data series, and to be frank, to be honest, I have to admit that at least one series does seem to fit more or less an exponential curve: the red one. It actually goes off the chart at the top, but it's the only data series that does that. Look at the other ones: one series is the earliest defects, the ones introduced in requirements, and the cost of fixing them according to when they are detected; the red is the defects introduced in architecture, and the cost of fixing them according to when you detect them; and the same for design and code and so on. So there is one which is an exponential, and for the other ones, well, we can see that it's a little more expensive in the later phases, but we don't see the nice, smoothly rising exponential that everyone is talking about. So my conclusion was that I had more or less been taken for a ride, tricked into believing something which, at the very least, in the best interpretation, means the traditional curve claims a lot more than the actual data supports. You could probably say that there are many problems in interpreting the actual data, because what's a bug, how do you measure the cost: all of
these things are hard. So I would expect to see much more than one study before I came to believe in an exponential curve. Just to boil this down to a couple of takeaways for you: don't just demand citations; don't just say "if you show me a citation I will believe, and if you don't, I won't". It's interesting to actually read what people are citing, and make your own decision as to whether it is convincing or not. And especially when what people are showing you is a chart, a histogram or a curve, what's interesting is not just the overall shape or the conclusions, but: where did this data point come from? What did you measure to get it? What does it mean? Because here Boehm is studying students, and maybe they do different things at different stages of the project, so maybe the way you measure the cost is not the same in requirements as it is in design and testing. Are these costs even comparable with each other? I don't know; we need more science. I'll go quickly over this next one, because I think it's also interesting; it's been used as an argument for agile. The cone of uncertainty. That's also from McConnell. This is a modern representation, something you would see today in a blog, for instance. Notice it's the same timeline; it references the life cycle phases. And there is a variant which says the best you can do is that your initial estimates will be between four times and a quarter of what the project eventually costs; that's the best you can do, if you have very good practices in place, and your estimate will remain fuzzy until the very end of the project if you don't have good practices. So that's the idea that good practices kind of shrink the cone. Do you see a problem with the cone as it is currently shown?
What do you think? So what this says, in terms of numbers, and it's also a ratio, is that you compare how long the project actually took, which is what you know at the end of the project when you ship, with your initial estimate. And what you find is that your initial estimate may be as much as four times what the project actually took, or as little as one quarter. So you could be very optimistic or very pessimistic in your initial estimate, and that's symmetrical. Yes? So let me ask you: how many of you have had a project where you were early, by a factor of, let's say, two? Anyone? I don't even need to ask about a factor of four, because nobody is early by a factor of four. How many of you were on projects which were late by at least 20%? Late by more than 50%? Some of you. Anyone who worked on a project that was late by more than twice the original estimate? So right now we have a, maybe not quite representative, sample, and you don't fit the cone. Why? Where did the original data come from, and why doesn't it match what we see when we poll a room like this? You might be more or less representative, but still. So I looked for the data, and interestingly enough, that was a short search, because there is no data. This is the quote I found in Barry Boehm's book Software Engineering Economics. He says, about the graph that came to be known as the cone of uncertainty: "these ranges have been determined subjectively". Do you know what that word means?
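The show of hands just described is itself a little experiment, and the cone's symmetry claim is easy to confront with data: for each project, divide the actual duration by the initial estimate and see whether the ratios fall evenly above and below 1. The project figures here are entirely invented for illustration:

```python
# Hypothetical projects: name -> (estimated days, actual days).
projects = {
    "A": (20, 26),
    "B": (40, 75),
    "C": (10, 11),
    "D": (30, 68),
    "E": (15, 33),
}
ratios = {name: actual / est for name, (est, actual) in projects.items()}
late = sum(1 for r in ratios.values() if r > 1)    # finished over estimate
early = sum(1 for r in ratios.values() if r < 1)   # finished under estimate
print(late, early)  # → 5 0  — all late: the asymmetry a symmetric cone hides
```

A symmetric cone predicts roughly as many projects below 1 as above; a room full of hands on "late" and none on "early" says otherwise.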
It means: I made it up. Well, based on experience. So we come back to that thing about experience. Isn't it the case that we know from experience and common sense that it costs more to fix a defect later? Sure, I could give you a thousand arguments why it's so. But then again, think about "we use 10% of our brains". We all know people that make us go: oh yeah, this guy, he's using 10% of his brain. Don't tell me you haven't ever thought that about someone around you, maybe higher up than you, I don't know. So many things which are convincing from a personal-experience point of view turn out to have no substantial basis. And actually there are many bugs which, even when detected very late in the process, are one-minute fixes: very quick to fix, very quick to deploy. But what happens to those? You forget about them; it's very easy. What stands out in your mind is different. That's a kind of bias, availability bias, and it's for exactly the same reason that many people fear flying in a plane. They think they're going to die if they ever set foot in a plane, so they drive to wherever they go on vacation, and they drive to work, even though driving is more dangerous by, I don't know how many times, 50 times? That's because you've heard about the plane crash. It doesn't even have to be 9/11; every time there is a plane crash, people get scared and prefer to drive instead, and more people die. That's the way we are wired. So, just to end that section on a positive note: you can get empirical about uncertainties in a project. This is something from one team. This is a burn-down chart; they are measuring how close they are getting to done, and there is some uncertainty, because there is the strict extrapolation of that curve, but it could be better to use an average of the previous velocities, or you could try to average the past three velocities. That's where the uncertainty comes from. Do you notice a difference between that and the cone, another significant graphical difference?
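The extrapolation just mentioned, projecting a done date from an average of the last few velocities, can be sketched like this. All numbers are invented for illustration:

```python
import math

# Burn-down extrapolation: how many sprints until the remaining scope is done,
# using the average of the last three sprint velocities.
remaining = 60            # story points left in scope
velocities = [8, 12, 10]  # points completed in the last three sprints

avg = sum(velocities) / len(velocities)
best, worst = max(velocities), min(velocities)

sprints_avg = math.ceil(remaining / avg)       # middle-of-the-road projection
sprints_range = (math.ceil(remaining / best),  # optimistic bound
                 math.ceil(remaining / worst)) # pessimistic bound
print(sprints_avg, sprints_range)  # → 6 (5, 8)
```

The spread between the optimistic and pessimistic bounds is the team's empirical uncertainty, grounded in their own measurements rather than in a subjectively drawn cone.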
They are not oriented the same way. It's interesting that this one is called the cone of uncertainty, but what it shows, what it says, is basically that the future gets more certain as we go toward it, which is kind of a strange thing to find convincing. The burn-down has it the right way around: we know the past and the present, but as we move to the right of the graph, our range widens. I just want you to think about that. And here is an empirical measurement. This is from a paper in IEEE Software which was very critical of the cone of uncertainty. They said: we actually did measurements in a company; we measured estimates given at various points and compared them to how long the project actually took, and we ended up with something like this. I am much more convinced by that, because it has the right asymmetry: we are optimistic most of the time, not pessimistic, about project duration. That's more like it; that's better data. But it took researchers who became critical of the cone, who became skeptical, to actually go looking for the data, and they found something that doesn't really fit a cone. And maybe there is a lot of follow-up research to do on that. Why is that area more or less empty? It's kind of strange, the way this plot looks, so it makes me want to ask more questions rather than feel the matter has been settled. Even though the message of the cone is something I agree with, which is: don't try, on day one of the project, to have a very precise estimate, because most of the time you will be way off. But you are not going to be way off in the way that the cone says. A final one... oh well, takeaways then. You know the saying, a picture is worth a thousand words. You must keep in mind that it goes both ways: it's a thousand times easier to lie with a picture than it is to lie with words. So when you come across a picture, a chart or a curve or a histogram, you have to ask yourself: what does it mean? How did they measure what they claim to have measured? Is that
measurement even possible? Is it even possible to measure the average cost of fixing a bug, given the variety of things we call a bug, and given the amount of time we spend arguing about whether something is a bug or something else? I don't think it's possible. And I have the same trouble with productivity. For any study you come across that claims to have measured the effect of something on productivity, the question that comes to my mind immediately is: what do you mean by productivity? Do you count the hotshot programmer who writes a thousand lines of code in a day, but then the rest of the team has to spend weeks of the project debugging his mess? Because in the eyes of the managers he's been hugely productive, and he kind of saved the project on the client side because he wrote that feature the client was expecting in one day, and the client was hugely impressed. Well, it doesn't quite work, but you know, there are just a few bugs, we'll fix them. Is that kind of person productive? I don't know. What does that word even mean? So it's really key to think about the meaning of those terms. Hyperproductive Scrum. That's an interesting one because it's closer to home, right? It's not a traditional software engineering leprechaun; it's one of our own. Oh my, do we have those too? Yes, we do. This is from "Controlled Chaos", one of the founding papers on Scrum. The Scrum methodology is "similar to the iterative methodology but assuming that all requirements are not known in advance", which is a good assumption; I approve of that assumption. Where I have a problem is with almost the next sentence, I just snipped a bit which was not relevant: "productivity gains of 600% have been seen repeatedly in well-executed projects". So it's not just one piece of anecdotal evidence; we've seen this repeatedly, and on average six times as fast, six times as productive. And not
just us, you know, it's not just the Scrum people saying that: it's Capers Jones. Have you guys heard of Capers Jones? No? Maybe not, and maybe you're lucky, because I'm not sure he's quite as relevant today as he used to be, a little like Barry Boehm. But these guys were hugely influential in academic software engineering; they are like the popes and cardinals of software engineering. So there are a few things you might be convinced by: the number, six times, that's good; "repeatedly", so it's not anecdotal; and there is an authority backing it. Let's do Scrum! You can't escape that conclusion. Scrum is the smart thing to do; if you do anything other than Scrum, you're responsible for failing to get six-times productivity, and you're going to get fired. And it's not just the Scrum people saying so; it's one of the cardinals of software engineering. I asked exactly the same question, and I'm glad you asked. Well, Capers Jones is still alive, so I emailed him. I couldn't find a source in the literature, so I sent Capers Jones an email: you don't know me, I'm just this guy, and you're the cardinal. Actually, he's very sweet, a very sweet person, and he took the time to write back to me. He's the pope of function points, I don't know if you've heard of function points; he's basically the guy who invented them. So he sent me tons of documents about function points, and he said: you should learn about function points. He was really sweet, but he told me: I don't remember that quote attributed to me. I almost certainly did not assert that Agile had 600 percent gains, because no method has ever done that, and I have looked at many. What?
Say again? No, I was not actually surprised at that point, because I was in the process of writing up my findings; I'll get to that in a minute. So, leprechauns: you find them everywhere. Those were the Agile leprechauns. I like this quote by Twain: it's not what you don't know that's going to kill you and your project, it's what you know absolutely, with total certainty, but which just isn't true. It's like those funny videos of people bumping into a glass door: there's nothing there, I can just go straight through... boom. That's what kills you: the things you think are true which just aren't. So if you had to take one key message from this, it's: think for yourselves. Have a reflex of doubt whenever you can.

I've written this up in a book called The Leprechauns of Software Engineering, where you can get lots more detail about a bunch of other things, but I really wanted to take the time to dissect a few examples for you. I have great readers; they write back to me and say, "In chapter 3 you say this and that; can you clarify where you got it? I'm not sure it's true." So even what I tell you, you might want to check. I've tried to make my arguments easy to check by including all the references and making sure you know where you can get the papers, because some people will cite a book or a paper at you, and it's an out-of-print book, or a paper you have to pay the IEEE $30 to get. That's not easy to check. I know that being skeptical is a difficult proposition; I have encountered that in practice time and again. It's as if someone had decided that all the knowledge in the world must be locked away behind a paywall, just in case. Oh, knowledge is a dangerous thing; it wouldn't do for all those practitioners out there to have too easy an access to the works of researchers and academics. They've worked so hard on this; why should it be easy for you to get? That's not the way progress is made. Scientific progress is a matter of generating knowledge so that you can be skeptical, but can also have justified belief in the things it is correct and legitimate to believe. So most of the time you will be able to find links to PDFs, and you can get them more or less legally. And that's about it. Thank you all. I notice we have time for questions.

So, that chart was not one of mine; it's something I grabbed from the web, from a team that was measuring their Scrum. That's a good idea: you can probably learn a lot from taking a scientific mindset with respect to your own project; at least you can validate the data. It's a burndown, so part of it is in the past: how many points did we have left in scope? It's not a very good burndown either; it's one of those that start out flat and then, ooh, there's a dip as people start to complete user stories. Maybe they're doing too many things in parallel. But what I found interesting in this one was the inversion: it's kind of flipped around, and they used what I suppose is a "best estimate". Yes, you have to be careful with measurements of estimate accuracy. Linda had a great talk where she told us what we need to know about why you cannot really estimate. At best, an estimate is something that gives you a way to think more closely about your project, but it's not a crystal ball, and among our cognitive biases is that we are very bad at dealing with predictions and probabilities. This is not going to be published in academia; it's a very low-level tactical tool for one team. But they're trying to be honest about their uncertainty, and that's why I liked this graph and included it. This other graph, though, and I don't mean to say anything mean about the people who are behind it, this graph is not honest. It's a caricature of reality, and it's not meant to have you think about reality; on the contrary, it is meant to discourage conversation. It is meant as a weapon against
managers, when they ask for estimates. So instead of having a conversation ("if I could give you an estimate, what would you do with it? Is there some other way we can satisfy, or allay, your concerns?"), which is a good conversation to have, what I used to do (I was taken for a ride, I was tricked) was tell my managers: no, I cannot, I will not give you an estimate, because of the cone of uncertainty. I was shielding myself, and my thinking about software development, behind someone else's authority: Steve McConnell says so, therefore I will not give you an estimate. That's a very unproductive, unconstructive thing to do.

Another question? That's an easy one: we don't care, and we especially do not care about the average cost of fixing a defect, because what does it even mean to take an average? One bug is going to take one minute and maybe ten cents to fix, and one bug is going to kill the company; that one would be a black swan. Why would we care about the average? It doesn't make sense to add up all the bugs and divide by the number of bugs. It's a mathematical operation, so yes, in that sense you can do it with a calculator, but it has absolutely no meaning whatsoever. So you might want to ask yourself: is the reasoning behind that contract actually based on validated experiment? I don't know, but you may be lucky in that respect. One problem that we face, and I will close with this, is that many of our contractual assumptions, many of the hypotheses, if you will, behind our contractual relationships, the very structuring constraints that we operate under, are maybe based on leprechauns. To me that's kind of a big deal: if we find out that we were operating under a false assumption, we will want to revise the way we do things. I'm going to close with that. I'm going to thank you again, and I'm around if you want to talk about this a little more, but it's time for me to let you go. Thank you.