I'm Glenn Vanderberg. I work for InfoEther with Rich Kilmer and Chad Fowler and Bruce Williams, who I suspect a lot of you know here. And I'm here to talk about real software engineering. I've been at all four of the Lone Star RubyConfs and really enjoy them. So to start off with: software engineering doesn't work. At least as it's currently taught in universities and in training programs at companies, if you happen to work for one of the few companies that cares about teaching such things. The techniques that are taught and that are called software engineering simply don't work. They don't reliably control costs. They don't reliably produce quality software. Sometimes, even when they're practiced rigorously by people who've been trained to do so, they don't produce working software at all. And that shouldn't really be surprising to any of us, because this seems to be common knowledge among working programmers all over this country, at least. But while it's not surprising, it is odd, because in every other field that aspires to the title of an engineering discipline, the term engineering is reserved for practices that work. In fact, that's as good a capsule definition of engineering, independent of any particular discipline, as you're likely to find: the set of practices and techniques that have been determined to work reliably through experience. And yet in software, we have this set of practices that don't work, and we call that engineering.

Now this has caused a lot of discussion, especially over the past year or so, about whether software development is really an engineering discipline at all, or whether it simply doesn't fit that metaphor: it's not engineering, it's more a craft, or an art, or gardening, or moviemaking, or various other analogies I've heard. Maybe engineering is just an inappropriate metaphor for the task of building software. I don't think that's true. I do think software is an art. I do think it's a craft, and I do think it is in some respects a science. But that doesn't mean it can't also be engineering. I say that the problem is that the people who defined the set of disciplines we call software engineering misunderstood two very important things: software and engineering. And that has resulted in software engineering actually being a caricature of an engineering discipline.

So I'm going to explain what I mean by this. To do so, I first need to explain what I mean when I say that software engineering is a caricature of engineering, and how it got to be that way. Then I need to explain what real engineering looks like. How many people in the room were trained as engineers, not as software developers? Okay, interesting. I'm especially interested in feedback from people who have more of a classical engineering background, because I don't; I've had to learn all this on my own. But I think I've come to a pretty good understanding of where software engineering got it wrong. And then we're going to look at what real software development looks like, and somewhere in the middle, I think we can develop a picture of what real software engineering is. The first time the term software engineering really got bandied about a lot was in 1968, at a conference in Garmisch, Germany, organized by NATO, of all things: the first conference on software engineering, sponsored by the NATO Science Committee.
At this time, people were dealing with what was called the software crisis. Software projects were unreliable and flaky and error prone and failure prone, with huge cost overruns, and people really didn't know much about how to manage them. And so they said, we need to grow up as a discipline and as a field and start becoming an engineering discipline, and so they had this conference. I've known about and heard about this conference for many, many years and never really knew any details about it. As I got interested in this topic, I went and read the proceedings, which are all online. I was expecting to find, oh, okay, this is where the madness started. And in fact, no, I didn't find that at all. The participants in this conference were by and large really smart people who were working software developers. There were a few academics, but the academics had good things to say as well. And by and large, the findings of this conference were entirely reasonable and reflected a great deal of wisdom about software and its state at the time. In fact, if you go read those proceedings, what will mostly impress you is how willing they were to admit what they didn't know. The proceedings are full of: we're just babes in the woods at this; it's a new field; it's different from the fields that came before; there's all kinds of stuff we don't know; we kind of think it might be like this. There was a lot of uncertainty.

Alan Perlis gave a talk at the end of the conference in which he summarized the few things the group was able to agree on about what software as engineering should look like. The first of the three things: "A software system can best be designed if the testing is interlaced with the designing instead of being used after the design." That's pretty reasonable by our standards. Number two: "A simulation which matches the requirements contains the control which organizes the design of the system." Now, the terminology has evolved since then, so that doesn't immediately seem to make a lot of sense, but I think if you combine it with the third one, it is clear what he means: "Through successive repetitions of this process of interlaced testing and design, the model ultimately becomes the software system itself. In effect, the testing and the replacement of simulations with modules that are deeper and more detailed goes on with the simulation model controlling the place and order in which these things are done." He was talking about unit testing and mocking and iterative design and development (there's a small sketch of this idea in modern terms below). Pretty reasonable.

There was a second NATO software engineering conference a year later. The tone of those proceedings is entirely different. During that year, everything went wrong. Within that year, software engineering turned into what it remained for many, many years: an entire field of processes designed in academia for practitioners. What went wrong in one year? I don't know. I was alive then, but I wasn't a computer programmer back then, so I wasn't involved in these events. But I have a theory about what went wrong, and I can illustrate that theory by talking about another place where some very reasonable statements were misconstrued and turned into completely unreasonable conclusions, a place where we know what happened: the birth of the waterfall process. Waterfall was introduced by Winston Royce in a paper at IEEE WESCON in 1970.
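Before getting back to Royce: in today's vocabulary, what Perlis described is recognizably test-first, mock-assisted, iterative development. Here's a minimal sketch of that idea in modern Ruby; every name in it is hypothetical, invented for illustration, not something from the talk or the proceedings.

```ruby
require "minitest/autorun"

# The "simulation": a stand-in for a tariff-lookup module that hasn't
# been written yet. It matches the interface the real module will have.
class FakeTariffService
  def rate_for(category)
    { standard: 0.10, reduced: 0.05 }.fetch(category)
  end
end

# The module being designed. It depends on the tariff service only
# through its interface, so the simulation can later be swapped out
# for the real implementation without changing this class or its test.
class Invoice
  def initialize(tariffs)
    @tariffs = tariffs
  end

  def total(amount, category)
    amount * (1 + @tariffs.rate_for(category))
  end
end

# Testing interlaced with designing: this test exists before the real
# tariff module does, and keeps passing as simulations are replaced,
# module by module, with deeper and more detailed implementations.
class InvoiceTest < Minitest::Test
  def test_total_applies_the_standard_rate
    invoice = Invoice.new(FakeTariffService.new)
    assert_in_delta 110.0, invoice.total(100.0, :standard)
  end
end
```

The fake service is Perlis's "simulation," and the test controls "the place and order" in which the real modules get built.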
Royce's paper was called Managing the Development of Large Software Systems, and Royce spent the rest of his career running around saying, no, no, no, I didn't mean that, I was misquoted. I had heard that story too. And when I went back and read this paper, I found that he was telling the truth: he really didn't recommend waterfall. But what I realized was that the reason people thought he was recommending waterfall is that this paper is a marvel of bad information design. If you wanted to study how to write a paper that conveys exactly the opposite impression from what you're trying to get across, you couldn't do better than to go read this paper.

So let's take a look at this for a minute, and assume that you're a manager of a group producing software in 1970. You're struggling with how to reliably estimate and produce stuff that works, and all these problems that have plagued software management since the beginning. And a friend or a colleague or maybe one of your employees drops this paper on your desk and says, this might help. Oh, Managing the Development of Large Software Systems; that's pretty good, that's what I'm interested in. And I see there's a diagram there on the front page. So let's start by looking at the diagrams and see what this is about; skim it first, and then read. That first diagram says you start with analysis and then you go to coding. That's kind of simplistic, and even I know that's kind of simplistic, so before I waste time reading this paper, I'm going to look at the rest of the diagrams and see if this guy has a brain in his head or not. At the top of the second page, I see this: the first time the classic waterfall diagram ever appeared. And wow, that's pretty good. System requirements, and from that you get software requirements, and then you move to analysis and design and coding and testing and operations. And look at the caption there: "Implementation steps to develop a large computer program for delivery to a customer." I can understand that process. That makes sense to me. I see how all those things work, and I've got this meeting to go to; I got what I need out of that.

Now look at the very next line: "I believe in this concept, but the implementation described above is risky and invites failure." The funny thing is that that sentence describes figure three, which is on the next page. To find out what Winston Royce has to say about figure two, you have to go back to the first page, where he says an implementation plan keyed only to these steps "is doomed." Well, let's be optimistic and assume that the manager comes back from his meeting, picks up the paper again, and tries to continue through it. He flips to the third page, and he sees this. To our eyes, that looks a little more reasonable: we can see feedback coming from later steps to influence earlier steps, as you learn more when you get down into the details. And then Royce goes on to say that the feedback isn't even that localized; sometimes the things you learn in testing have to go all the way back up and affect the software requirements. This is starting to seem confusing and messy and hard to control. Whoa, and gee, I don't know what this means, and all these blank boxes, and it just gets worse and worse. And then the final diagram, which is helpfully labeled "Summary." Summary? There you go; I think I'll go back to that earlier one.
And within just a few years, waterfall was an established standard in the industry. It's easy to kick waterfall, and most people have realized that's not the way to do things, but I've run into groups that religiously follow waterfall and don't see how anything else could possibly work as recently as three or four years ago. And as recently as the late 90s, the Department of Defense was still essentially mandating waterfall for all of the projects it paid for. It has taken a long time for this idea, which was presented as a way of doing things that was obviously doomed, to finally work its way out of people's consciousness. People like simple solutions; they hear terms and map those terms onto things they already understand, or think they understand, and latch onto that and run with it without thinking very carefully. That's just a characteristic of people. It happened with waterfall, and I think it also happened with the term software engineering. H. L. Mencken said, for every complex problem there is a solution that is simple, neat, and wrong. That certainly applies here.

Later software engineering, even as people began understanding the weaknesses of waterfall and moving beyond it, shared a bias with the waterfall model: an attachment to what's called the defined process model. This is a description of that model from Schwaber and Beedle's book on Scrum: "The defined process model requires that every piece of work be completely understood. A defined process can be started and allowed to run until completion, with the same results every time." The idea of process models was introduced by some chemical engineering researchers; the defined process model is one end of a continuum, and there's another end that we'll talk about later. But most of software engineering has been biased toward the defined process model. When chemical engineers learn this, they're often very amused to see that software has chosen a model so wildly inappropriate for the kind of things we do. This bias toward the defined process model has pervaded the software engineering literature. Here's an example from one of the giants of software engineering, David Parnas, where in a paper in 1986 he describes what he calls a rational design process: you establish and document requirements; you design and document the module structure; you design and document the module interfaces; you design and document the uses hierarchy; you design and document the module internal structures; and you write programs, and maintain. Now look what he says later on in that same paper: "The picture of the software designer deriving his design in a rational way from a statement of requirements is quite unrealistic. No system has ever been developed in that way, and probably none ever will." And yet his attachment to the defined process model was so strong that he persisted, and persists to this day, in claiming that the way systems are actually developed is, by implication, irrational, because he calls this the rational design process. (Audience: "And how to fake it.") Yes, the title of the paper is "A Rational Design Process: How and Why to Fake It," you're right. But the name is telling. And the tone of the paper is: it's a real shame we can't do things this way, because that would be way better, and by faking it, you gain benefits from this irrational, undisciplined way that we actually have to do it.
So yes, good point. But the implication is there that this is the only way, that this is the rational process and the way we should be doing things.

The bias toward defined process has led software engineers astray in other ways. This is the famous cost-of-change curve, from Barry Boehm in Software Engineering Economics in 1981. It was understood informally before then, but Boehm went out and surveyed a lot of real projects and gathered real data to show that this is the way things actually worked: the cost of finding and fixing errors skyrockets as you go through the life cycle of the project. It's relatively cheap up front, and it's horribly expensive out at the end. The software engineering solution to that was to say: well, if change is cheap up front, let's push as much of the change as we can up to the front, and eliminate change and errors from later in the process. But of course that has secondary effects. Quite often the processes introduced to identify all the errors as early as possible were themselves quite costly, and so they pushed the cost of the entire project up across the board, from beginning to end. Real organizations don't like that; budget pressures push back down on that curve, and it's like stepping on a tube of toothpaste: it squeezes things out at the end, and things end up taking a lot longer.

We know now that there was an unseen bias in Boehm's data, which is that all the projects he was measuring were waterfall projects. Nobody thought anything of that in 1981, because that's the way software projects were done; everybody knew that. Winston Royce said it in 1970. But it meant that Boehm's data was skewed by that assumption, and what we believe today is that he was not actually measuring the cost of finding and fixing errors as a function of your position in the software development life cycle. He was actually measuring the cost of finding and fixing errors as a function of the distance in time from when the error was actually made. In other words, he was measuring the cost of long feedback loops. If you were to measure projects that are iterative and that integrate and deliver working software every two or three weeks, you would find that that graph looks very different, because you're mostly finding and fixing errors within a very short time after you make the mistakes, and that's what makes them cheap.

Another way software engineering went astray was a focus on modeling, and what I call math envy. I've heard a lot of people say software development needs to grow up and start working like a real engineering discipline does. Three years ago (I'm picking on David Parnas a lot, but I've just read a lot of him and happened to hear him speak at several conferences), I heard Parnas say: in engineering, people design through documentation; the job of an engineer is to write documents; and the software engineering field needs to get with the program on this. Here are some examples of design documents that have been proposed in our field. The top left one there is the Z specification language. The bottom right one is one of Parnas's proposals, tabular mathematical expressions. Precise, maybe. Understandable?
The proponents of these notations claim that they are, given sufficient training and experience. (Audience: "In the long run, they're all dead.") Yeah. Oh, Lester, it's been a long time since I've seen you; I didn't see you sitting there. And is it clear how to map this to code? Well, I don't know. Is it any easier to get right than code? There have been all kinds of things in this vein, and as was pointed out, they're all now essentially dead.

But misconceptions about engineering still abound. This is something Bruce Eckel wrote in a blog post last year. To be fair, I think Bruce knows better than this; I think he was writing it as a throwaway statement on the way to another point he was trying to make. But nevertheless, he said it, so I get to pick on him for it: "This is not some kind of engineering where all we have to do is put something in one end and turn the crank." What's real engineering like? Well, let's get something straight right off the bat: there is no kind of engineering where all you have to do is put something in one end and turn the crank. Engineering is a creative activity. It's about designing and building and creating new things, and there are always blind alleys and missteps and mistakes and discoveries, and reactions to those discoveries, and adjustment.

We also know that different engineering disciplines are different. If you look at structural and chemical and mechanical and electrical and industrial engineering, they're very different. They work with different materials, different physical effects, different forces, at different scales. They have different degrees of complexity in the requirements, processes, and artifacts they work with. They have varied levels of reliance on formal modeling and analysis, on math, versus experimentation, prototyping, and testing. And they have varied degrees of reliance on defined versus empirical processes. We've seen the definition of the defined process model; here's the definition of the other end of the continuum, the empirical process model: it "provides and exercises control through frequent inspection and adaptation, for processes that are imperfectly defined and generate unpredictable and unrepeatable outputs." Chemical engineering tends to be heavily biased toward the empirical process model, and industrial engineering is often the same way.

Another thing we know about real engineering: I've often heard people talk about software development needing to grow up and become a real engineering discipline, and those same people will say things like "cost shouldn't be an object when it comes to doing it right." In real engineering, cost is always an object. I love this quote from a 19th-century railroad bridge engineer named Arthur Mellen Wellington. I've updated his language from flowery Victorian prose that's hard to read to something more modern, but this is what he said: engineering is not the art of constructing; it is rather the art of not constructing, the art of doing well with one dollar what any bungler can do with two. In real engineering, advances usually come from practitioners, not from academia. I want to illustrate this with the story of two bridge builders. The first is Robert Maillart, who built this bridge, among many others; this is the Schwandbach Bridge in Switzerland. Maillart was an early 20th-century Swiss bridge designer who was interested in a new material of the time called reinforced concrete, and in how it could be used.
Reinforced concrete was already being used in bridges at that time, but it was being used as a kind of better stone: a less expensive, stronger stone. So reinforced concrete bridges looked basically like older stone bridges, just made out of a different material. Maillart realized that reinforced concrete had different properties he could exploit to build different kinds of bridges, and he started building beautiful arched bridges like this: very lightweight, graceful structures that don't look anything like earlier stone bridges. Maillart was vilified and ostracized by the European structural engineering community. The reason is that he did not have mathematical models sophisticated enough to prove the soundness of these designs. He was called a charlatan and a thief: he was stealing from his customers, because he was taking their money to build bridges that would fall down; he was endangering lives, they said. Today, all but one of Maillart's structures are standing, and the one that's not standing was destroyed in an avalanche.

How do you think Maillart looked at himself in the mirror and slept at night, knowing that he didn't have mathematical models that could prove these bridges were sound? He slept the sleep of babes. Why? Anybody have any ideas what he did? He demonstrated the soundness of these bridges to himself, to his own satisfaction, through testing, prototyping, modeling. He built models of these bridges in his workshop and rolled barrels full of concrete over them and jumped up and down on them and invited friends over to jump up and down on them. He tested them every way he knew how to prove that they were sound bridge designs. Yes, models are not reality, but he built models at various scales and saw how things changed as he scaled up. He over-engineered the bridges according to the models he developed, and he also had a very good intuitive understanding of the statics involved and where the forces were being distributed. Later, other engineers developed math that was sophisticated enough to model these bridges.

Now, the next bridge builder I want to tell you about was a contemporary of Maillart's, also European. His name was Leon Moisseiff. He did not stay in Europe; he came to the United States, and he developed what was, at the time, the most sophisticated and accurate model of the mechanics of suspension bridges that had ever been devised. He built this theory of how suspension bridges worked at a time when suspension bridges were all the rage: there was a lot of big suspension bridge building going on in the United States and around the world, and Moisseiff became a star of that field. He consulted on the Manhattan Bridge in New York and the Golden Gate Bridge and several others, and finally he was given his own project. He took his theory, his mathematical model, which was called deflection theory, to the limit; he wanted to prove exactly what it could do. And so he built the Tacoma Narrows Bridge in Washington State, one of the biggest civil engineering disasters in US history. What went wrong? Well, as we've already observed, models are not reality. They're an approximation of reality, and why we use them in engineering is often misunderstood; we'll come back to that in a minute.
Deflection theory was believed to be much more accurate than previous models of how suspension bridges worked, and as a result, Moisseiff was able to make the deck of this bridge much thinner, relative to its span, than any suspension bridge deck before. The result was that the deck became subject to a force that had never been seen or noticed in suspension bridges, because no deck had ever been that thin: it began acting as an airfoil. Deflection theory accounted for all the known forces working on the decks of suspension bridges, which were downward force from gravity and the additional weight of things on top, sideways stress from wind, and various others. But it never accounted for the possibility of upward force from the deck acting as an airfoil. That is what started happening: the deck, because of its thinness, began oscillating, and within months of opening, the bridge tore itself apart.

I tell these stories to illustrate three points. The first I've already mentioned: important advances often come from practitioners, not from academia. They're later brought into the academic world and refined and so forth, but they come from practitioners. The second point is that software is not the only engineering discipline that occasionally loses the plot about what engineering is all about. And the third point is that mathematical models were introduced to engineering as a cost-saving tool. If you hear software engineering people talk about math and modeling, it would be easy to get the idea that it's the only way to do things right, that it's a tool for robustness, and that you don't know something really works unless you have math to prove it. I've heard all those statements. But that's not why modeling was introduced to engineering at all. It was introduced to save cost, because other engineering fields work with real materials, with things that require people and labor to build, and building prototypes and testing them, especially at scale, is extremely costly. If you can build a model, an approximation of reality, you can save money.

How does that work? Well, here's an example: one of my favorite Calvin and Hobbes cartoons. Calvin asks, "How do they know the load limit on bridges, Dad?" I've modeled myself as a dad on Calvin's dad: whenever my kids ask questions about the world and how it works, I always try to come up with some wildly inaccurate but humorous answer, and this is one of the best ones. "Well, they drive bigger and bigger trucks over the bridge until it breaks. Then they weigh the last truck and rebuild the bridge." This happens so much in our family that I photoshopped an actual photo of my wife into the final frame. So that's ridiculous; it's funny because it's ridiculous. And why is it ridiculous? Because that would cost too much. So do engineers never actually build real prototypes and test them? Of course they do. If you haven't seen the video of Boeing testing the 777 wings to the breaking point, I encourage you to seek it out and watch it. They built a full-scale Boeing 777 prototype, most of the fuselage empty, of course, but the wings built to spec, hooked the tips of the wings up to two winches in the ceiling, and started cranking them up, and they did it until the wings broke.
And if you watch the video, it's quite amazing, and it'll make you feel really good about flying in one of those planes, because the wings almost touch at the top before they break. (And no, they don't flex like that when you're flying; I saw that look back there. With the engines mounted, it's just like a normal plane.) When the wings do break, they break at exactly the same time, and all the engineers get really excited. Why are they so excited? (Audience: "It validated their models.") Who said that? Excellent: it validated their models. They thought their model was accurate, but they didn't know for sure, and the wings broke at the exact point their models predicted. Models are not reality. Models are approximations that save us the trouble of doing this as much, but they don't save us the trouble of doing it at all. Engineers still build prototypes and test them: aerospace engineers build a lot of prototypes and test them in wind tunnels; electrical engineers build and test prototypes all the time. Ask your friendly neighborhood aerospace engineer how much modeling he would do, how much math, if building a prototype of his design and testing it were effectively instantaneous and free. The answer is probably not "none"; he would probably still do some mathematical analysis. But the answer is "a lot less than I do now."

This is my favorite definition of engineering, from the Institution of Structural Engineers: structural engineering is "the science and art of designing and making, with economy and elegance, structures so that they can safely resist the forces to which they may be subjected." Notice the tensions in this definition. Science and art: it employs the findings of science, and yet is done with creativity. Designing and making: engineers don't sketch something on paper, validate it with math, throw it over the wall, and never look at it again; they participate in the building of what they design. Structural engineers have hard hats in their offices, because they go visit the site and talk to the construction people and work with them during the process. With economy and elegance: cost is always an object. Based on that definition, here's my definition of software engineering. I'm not claiming it's the best one possible, but I think it's pretty good: software engineering is the science and art of designing and making, with economy and elegance, systems so that they can readily adapt to the situations to which they may be subjected. I want to preserve those three tensions.

We know that software engineering will be different from other kinds of engineering; we know that because other kinds of engineering are different from each other. Software systems are usually more complex, in terms of the number of moving parts, than artifacts in other engineering disciplines. We're not working with physical materials; we work under different laws, and it's very rare for a software project to run up against the limitations of physical materials. And we knew software engineering was different from the very beginning. This is a quote from Royce's paper, the same paper that introduced the waterfall process: "The testing phase is the first event for which timing, storage, input/output transfers, et cetera are actually experienced as opposed to just being analyzed. These phenomena are not precisely analyzable."
"Yet, if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. In effect, the development process has returned to the origin, and one can expect up to a 100-percent overrun in schedule and/or costs." So what's the answer, then? Do we go back to those confusing diagrams that Royce proposed in 1970? I think the answer comes from a man named Jack Reeves, who pointed out the most important difference between software and other kinds of engineering. In a paper in the C++ Journal in 1992 called "What Is Software Design?", Reeves talked about the analogy we often make between software development and engineering. In engineering, you have engineers, and they produce documents, just as Parnas said they do, and that's the design of the system; then they hand it to laborers, who build the finished artifact. And so, if that's the way engineering works, then software engineering ought to look like this: engineers produce a design, and they hand it to laborers who sit in cubicles and build the finished artifact, which is the source code. That's the way many projects work today. A guy sent me a note a couple of days ago asking whether anybody had recorded and posted this talk yet, because, he said, "where I work, we build software as if it were bridges." The problem with this analogy is that we've tried and tried, and that manifestation of the process doesn't really seem to work. And the big problem is that second phase: we've never really found a good way to write down a software design in a way that is comprehensible and precise and thorough, and doesn't require those pesky laborers in the cubicles to redesign a bunch of it, which they're clearly not capable of doing, because they're not engineers.

Reeves looked at this problem and said: maybe we've got this analogy wrong, and since that's the troublesome part, let's get rid of it. Let's say that we let the software developers be the engineers, be the designers. And that thing over on the right, the source code, is not what the customers are paying for; they don't care about source code. That's really the design. Maybe not the only design; it's still helpful to have diagrams and documents that describe the structure from a higher-level view. But in reality, the source code is the detailed, thorough design of our software system, the one precise enough to actually build the system from. So if this is the way we draw the analogy, what corresponds to the laborers and builders? Compilers and programming language implementations. And the thing the customers are paying for is actually running code on systems, not files full of source code. If you draw the analogy this way, where's the model? Where's the document? (Thanks, Adam, but that's not what I was looking for.) Somebody said it just a minute ago: the source code. That's a document. Why hasn't anybody realized before that that's a design document? Well, there are fairly good reasons. One of them is that programming languages have only recently reached the point where it's possible to write code that is almost as easy to read as it was to write. So things have changed to make this clearer, but I think this is the right analogy: the source code is the model. Programming languages are formal languages. Some of them even have mathematically specified semantics.
But even if they don't, they're formal languages, and we understand the math behind them at least as well as your average structural engineer understands the math behind the procedures he was taught to use to validate his designs. Programs themselves are models of the solution to the problem we're trying to solve. Our very tools are mathematical in nature. I'm running a little behind, so I'm going to skip these quotes, but this was understood by people even back at the 1968 Garmisch software engineering conference.

Now, one more quick thing; I'm already a minute over, but I'm going to finish quickly. This is a simplified example of one of David Parnas's tabular mathematical expressions, which he promotes as the right way to write software requirements. I've simplified it a lot so that I don't have to take time explaining the details; it's fairly obvious how something this simple works. And it is precise, and it is a document, so it makes him happy. But let's look at what happens if you view code as the model and the document, and as the math that you need. Here's another specification that says exactly the same thing as Parnas's tabular mathematical expression. And here's another. And here's another. And here's another. I'm sure all of you recognize at least one of those representations. They are just as much documents specifying that operation as the tabular mathematical expression on the previous slide. But the tabular mathematical expression is merely a document. Test::Unit, RSpec, Cucumber, or FitNesse tests are no less documents, but they're not merely documents: they are documents that can run, and analyze and validate a system against the documented requirements. They're more than just documents (there's a small runnable sketch of this idea after this paragraph). And remember, when Jack Reeves wrote his paper, testing was still expensive: it was already very cheap to compile your design into a prototype, but it was still very expensive to test it. In the 18 years since then, we've learned a lot about how to make testing cheap. So return to my earlier question: how much math and formal analysis would your friendly neighborhood aerospace engineer do if building a prototype of his design and testing it were effectively instantaneous and nearly free? Because that's the situation we have in software.

This is a diagram of extreme programming, the granddaddy of agile processes, showing the dependencies between the practices. Kent Beck documented these dependencies as a way of arguing that, yes, each of these practices is flawed, but the whole works, because the other practices backfill for each one's failings. About five years ago, I started thinking about how messy that diagram was and wondering whether there was any way to make sense of it. What I learned was that you can take those practices (there are a few that aren't truly practices, they're more standards, so remove those), lay them out, and see that they apply to different scales of artifact and decision in your system. In pair programming, you're working mostly with statements and methods; in unit testing and continuous integration, you're working at the level of classes and interfaces and, to some degree, the design; and all the way up, short releases are about validating that you've built the right solution to the customer's problem. And additionally, they map well to different time scales.
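To make "documents that can run" concrete: the tabular expression from the slide isn't reproduced in this transcript, so here is a hypothetical stand-in rather than Parnas's actual example: a small piecewise function, specified by a Test::Unit-style suite (written here with Minitest, which supports the same style). Each assertion plays the role of one row of a Parnas-style table: a condition on the input and the required result.

```ruby
require "minitest/autorun"

# A hypothetical piecewise operation standing in for the one on the
# slide: the classic sign function, defined by three cases.
def sign(x)
  if    x.positive? then  1
  elsif x.zero?     then  0
  else                   -1
  end
end

# The executable specification. Each test is one "row" of the table:
# a condition and the result the specification requires. Unlike the
# table, this document runs and checks the system against itself.
class SignSpec < Minitest::Test
  def test_positive_inputs_yield_one
    assert_equal 1, sign(42)
  end

  def test_zero_yields_zero
    assert_equal 0, sign(0)
  end

  def test_negative_inputs_yield_minus_one
    assert_equal(-1, sign(-7))
  end
end
```

Running this suite takes milliseconds, which is exactly the fast end of the feedback spectrum those practices map onto.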
For the smaller decisions and the smaller artifacts, we gather feedback at the level of seconds and minutes and hours; for the larger decisions and the larger artifacts, feedback is more costly to gather, so we gather it at the scale of days, weeks, and months. Agile processes are economical, cost-tuned feedback engines. This is about as empirical as process design gets, but it's no less disciplined, no less rational, for being empirical rather than defined.

Traditional software engineering is based on a bunch of assumptions. Some of them were once true but are no longer true: code is hard to read; code is hard to change; testing is expensive. Others were once widely believed but were never really true: software engineering is like structural engineering; programming is like construction; modeling and analysis are about correctness rather than about controlling costs. The reality of software engineering is that it is very unlike bridges and buildings. The additional complexity we work with hinders up-front requirements, design, and approval. Source code is a model. Building and testing our interim designs is effectively free, certainly compared to the other engineering disciplines. And empirical processes are, in fact, rational processes for software development. So if we want to grow up as an industry, as a field, the answer is not math, it's not models, it's not documents, and it's not copying other disciplines. The answer is to learn from practitioners, bias toward empirical processes, encourage continued innovation in processes, and do what other engineering disciplines do: take the practices that work, and call those engineering. Software engineering, today, is called agile. Sorry for running over time. Thanks.