So, I'm Gerard, Gerard Meszaros. Some of you may know me for probably the biggest book ever written. Holds open doors and is used for adjusting monitor height all around the world. Don't recommend carrying it with you in your bag on the bus because it's a little bit too heavy for that. So, I've been asked to talk about unit test craftsmanship. And I just want to get a little bit of an idea here of where we're at in terms of this. Who here is doing unit testing? You can keep your hands up for a second. Okay, it's a little bit of exercise, you know. Now keep your hand up if you think you're doing it well. Okay, we lost about 80% of the people there. Okay, well, good to know. Thank you. So, what I'm going to be talking about is what does it take to be successful at writing unit tests? Because basically what we're doing is writing code to test other code, we need a little bit of experience writing software. We need some experience using the unit testing framework of choice, which we colloquially refer to as xUnit, you know, put whatever in there instead of the x for your language of choice. And we need some experience thinking up test cases, or things to try out. Some people call this not testing but checking. I'm ambivalent; I use the terms interchangeably. You'll probably notice that even in my slides. If we put these three things together, do we end up with robust automated unit tests, or unit checks? And the answer is unfortunately no. This is not enough. So, something to think about: if you do a good job of this, you're going to have an awful lot of code, because you're probably going to have as much test code as production code. So think about that for a second. How do you write all this extra test code and not increase the effort? I mean, this stuff is supposed to be helping you work faster, right? So the challenge is how do we prevent this from doubling the cost of writing our software and maintaining our software? 
Because every time we change our software, we're agile, right? So we're going to be changing the software regularly. We're going to have to change our tests. So if we compare our objectives when we're writing production code versus test code, it's important for a production code to be correct. But it's even more important for our test code to be correct because if we write the wrong tests and then because they're failing, we change our production code. We're actually putting bugs into our production code and we're feeling really good about it. It's important for production code to be maintainable, although it often isn't, but we don't really have a choice. We have to maintain it. But it's more critical for test code to be maintainable because if we don't have maintainable test code, eventually we're going to abandon it. And it's important for production code to be fast. Test code, not so much, not as critical. Production code, we typically want reusability of our production code. For our test code, for our tests themselves, we don't want them to be reusable. They should be testing something very specific. We may want to have some reusable test code to make our tests easier to write. And I'll show you what I mean by that in a moment. And we want our production code to be flexible. Our tests should not be flexible. If your tests are accommodating of different circumstances, you're actually writing multiple tests in one, and you don't know which one's going to execute. And our test code needs to be simple because we don't have tests for our test code. You can write tests for automated tests, but then how do you test the tests for your automated tests? It starts getting really messy and complicated. Our production code is often not easy to write. Our test code needs to be very easy to write. And when we read our production code, it's often going to be very hard to read and understand. 
If we have that in our test code, we're not going to be keeping our test code up to date. So what it comes down to is that the tests are going to need to be maintained along with our production code. And if there's going to be all this test code, we need to make it really easy to understand, really easy to write, and really easy to change. Otherwise, we're just going to say, forget it, it's just too much bother. We have to write production code; no one's actually forcing us to write test code. And we're going to give up on it because it's just too much effort and bother. So the critical success factor seems to be writing your tests in a maintainable style. So what do I mean by that? If this is our development effort over time, and now we're going to add writing tests, automated tests or checks, on top of that, we've just increased the amount of effort involved. We're doing extra work. Now early on, we're going to have to do a bunch of learning, and that's going to create this hump. But over time, our cost should come back down. And this is the incremental cost of writing the test code. And we'd better have some savings to compensate for that. Otherwise, our overall cost down here is going to be more than writing code without tests. And the question is, how do we keep this savings here at least as much as what we're spending on the tests? Because otherwise, the tests are giving us a net cost. If we don't save as much, then we are behind the eight ball to start with. And if our tests cost us more and more to write and maintain, then we're going to end up in a situation that's not sustainable. So that's what we're trying to deal with. So enough talk about theory. Let's look at an example. Suppose we're building some little billing system, and we're going to produce some invoices. And here's a bunch of tests that we have for this invoicing code. So let's drill in on one of these tests here. 
Suppose we've got a test that looks like this. Anyone have tests that look like that? I should hope not. I see at least one honest person in the crowd. These tests are going to be very hard to understand. They're very complicated. And you're going to find yourself, when you're forced to work with tests like this, questioning the value of these tests, because they're just so hard to produce, and they're so hard to understand when you need to come back and look at them because they broke. Which, in theory, is their job, right? They're supposed to break and tell you when something changed. But with tests this complicated, it's really hard to tell whether the problem is in the code they're testing or whether the test itself is written wrong. So let's take a look at the core part of this test and see what we can do about a test like this to make it more understandable. Just kind of looking at this test, it appears that there's a bunch of stuff here that's being set up at the beginning. It says set up fixture. And so this is kind of establishing, I think, the preconditions of what we're about to test. And then down here, very conveniently highlighted in blue, which doesn't happen in real life, is the actual code that we're testing. Now often you'll see several of these one after another at different points in the test. And this thing just goes on and on. And it's really actually much harder to tell which is the thing that you're actually testing, which is setting up preconditions, and which is checking outcomes. So this is actually much simpler than many tests that I've seen in real life. And then down here, we have a bunch of asserts. And the asserts kind of give us a clue that we're checking outcomes. Although I've seen asserts in the front part of the test as well, so it's no guarantee; sometimes you get these misleading things going on in the test code. So let's see, what can we do about this code? 
Now this code is hard to read, I apologize. You've got to squeeze it all on the screen here. So let's zoom in on some of this code and see if we can make it easier to understand. So that's the exact same piece of code, just in a slightly larger font. What is this doing here? That took way too long. If you had to spend that much time figuring it out, that is not well-written test code. What that is saying is we should fail the test. So why are we saying assert true, blah, blah, blah, false, when we could just say fail? Wouldn't that be a lot simpler? If I had a nickel for every time someone coded that wrong, well, I wouldn't be rich, but I would definitely have more than enough money to buy a very expensive coffee. So let's take a look at some of this other code here. There's a bunch of numbers hard coded in here. What do these numbers mean? If I'm new to this code, I'm not going to know what those numbers mean. Because I made up this example, I know exactly what they mean. It's a 30% discount off of a certain price and blah, blah, blah. But you're not going to know that as a reader. One of the issues with that code is there's an awful lot of asserts. And when you go look at all those other tests that were in that list at the beginning, they all look like this. So there's an awful lot of repetition from test to test. And one of the things that we want to do here is get rid of all this repetition from one test to the next test, and so on. What we can do there is, instead of comparing things literally inside the verification part of each test, we can create the object that we expect and then assert on that object. And so that's what we're setting up here to do. We've constructed this expected object, and now we're going to go and do an assertion on it. Now, we're comparing against individual fields of that expected object, but this is still too verbose. We can just grab all this code and do a quick extract method. 
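The expected-object move described here might look like this sketch in Java. The `LineItem` class, its fields, and the helper names are invented for illustration; the slide code itself isn't reproduced in the transcript. Note how one comparison against a whole expected object replaces a pile of per-field asserts full of unexplained numbers.

```java
import java.util.Objects;

public class ExpectedObjectSketch {
    // Minimal value object with equals(), standing in for the talk's LineItem.
    static final class LineItem {
        final String product;
        final int quantity;
        final double totalPrice;
        LineItem(String product, int quantity, double totalPrice) {
            this.product = product;
            this.quantity = quantity;
            this.totalPrice = totalPrice;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof LineItem)) return false;
            LineItem that = (LineItem) o;
            return product.equals(that.product)
                    && quantity == that.quantity
                    && totalPrice == that.totalPrice;
        }
        @Override public int hashCode() {
            return Objects.hash(product, quantity, totalPrice);
        }
        @Override public String toString() {
            return product + " x" + quantity + " = " + totalPrice;
        }
    }

    // One assertion against the whole expected object, instead of many
    // field-by-field asserts with hard-coded magic numbers inline.
    static void assertLineItemEquals(LineItem expected, LineItem actual) {
        if (!expected.equals(actual)) {
            // In a real xUnit framework this would be fail(...) / assertEquals(...).
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        LineItem expected = new LineItem("Widget", 5, 34.97); // discounted price
        LineItem actual = new LineItem("Widget", 5, 34.97);
        assertLineItemEquals(expected, actual);
    }
}
```

Plain `AssertionError` is used here only so the sketch runs without a test library; in JUnit the same shape would use `fail` and `assertEquals`.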
Do you know the shortcut for that in your IDE? In Eclipse it's Alt+Shift+M, and so on. Very important thing to learn how to do. And look at that. Our test just got way shorter and easier to understand. So this is a custom assertion. It's just an assertion that we wrote with the help of the IDE instead of coming as part of our test library. Very powerful tool, very simple to use. Saves a lot of code. And that piece of test code is now testable, because I can write unit tests for my custom assertions to make sure they're working properly. So let's collapse this code and look at what else we can do to this. What's this if statement doing here in the test? What's wrong with having if statements in your test? It means we have conditional logic. We don't know which path in this test we're going to go through. And depending on which path we follow, we may or may not know what, in fact, we're testing. This is actually just checking to make sure that the right number of line items are on the invoice. We can replace that very easily with a guard assertion. And it will just fall through if the size is right, and we'll continue on with our assertions. So slowly but surely, we're making this test code simpler. Now we've got that test code down small enough that we can get it all on one screen at a slightly larger font. Let's see what we can do about some of this fixture setup code; zoom in on that a little bit. So we've got all these numbers and things here. Which of these things are relevant to the outcome of the test? And which of these things are just here because the object constructors we're using happen to require them as parameters? If we go look in the other part of the test and see which of these things are actually referenced, it turns out that a whole bunch of this hard-coded test data is really quite irrelevant. This data is obscuring the intent of the test from the reader, because the reader doesn't know what's important and what's not. 
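A guard assertion of the kind described here might be sketched like this; the method name and the use of `String` as a stand-in for a line item are my own choices, not the slide's.

```java
import java.util.List;

public class GuardAssertionSketch {
    // A guard assertion replaces the if-statement: instead of the test
    // silently taking a different path, it fails here with a clear message.
    static <T> T assertExactlyOneItem(List<T> items) {
        if (items.size() != 1) {
            throw new AssertionError("expected exactly 1 line item, got " + items.size());
        }
        return items.get(0); // safe to index once the guard has passed
    }

    public static void main(String[] args) {
        String item = assertExactlyOneItem(List.of("Widget x5"));
        if (!item.equals("Widget x5")) throw new AssertionError("wrong item");
    }
}
```

Returning the single item from the guard is a small convenience so the rest of the test can keep asserting on it without repeating the lookup.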
So this is introducing accidental complexity in our tests. What can we do to get rid of this? Another side effect is that, depending on the test you're writing, these hard-coded values can actually make your test unrepeatable. When you run the test over and over again, the test collides with data left over from previous runs of itself, so to speak, especially when you're dealing with databases and that kind of stuff. So that's something to watch out for as well. We can make this more obvious to the reader by saying, we don't care about all this data. We need an address, but we don't care what it is. So just create one for me. And I use this convention, create an anonymous address, as a way of saying that. Don't bother looking at this address; it's not relevant. Now, we can go a little bit farther here. We can look at these things and say, which of these do we really care about? We can see down here, there's no reference to a lot of these things, like addresses and so on. So why do we care about them? So let's highlight the things that we don't care about. They seem to be just used in one place, which is as parameters to the create customer. So let's get rid of that. It's irrelevant information, so let's just strike them out and get rid of them. So now we've made our anonymous customer creation method a little bit more powerful. It just deals with addresses for us. If I cared about the address, then I might want to pass some information into this customer creation method, only the information I care about. If the state, for example, affects what tax I calculate, I would pass the state as a parameter and have it construct the appropriate address inside. I want to make it really clear to the reader what's important. Here's what you need to pay attention to. When I have too much stuff lying around, I don't know what to pay attention to. So let's keep looking at this. Where else is customer referenced? Nowhere else? 
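The anonymous-creation convention might be sketched like this; the `Address` and `Customer` field lists are invented, since the talk's real classes aren't shown.

```java
public class AnonymousFixtureSketch {
    static class Address {
        final String state;
        Address(String state) { this.state = state; }
    }
    static class Customer {
        final Address address;
        Customer(Address address) { this.address = address; }
    }

    // "I need an address but don't care what it is": the placeholder
    // value is never asserted on, and the name tells the reader so.
    static Address createAnonymousAddress() {
        return new Address("XX");
    }

    // Variant for when one detail does matter, e.g. tax varies by state.
    static Address createAddress(String state) {
        return new Address(state);
    }

    static Customer createAnonymousCustomer() {
        // The address is built behind the scenes so it never clutters the test.
        return new Customer(createAnonymousAddress());
    }

    public static void main(String[] args) {
        Customer customer = createAnonymousCustomer();
        if (customer.address == null) throw new AssertionError("address should exist");
    }
}
```

The point of the naming is that a reader scanning a test can skip anything "anonymous" entirely and look only at the parameters that were passed in explicitly.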
Can we get rid of it? Sure, why not? It seems to be irrelevant. So let's boil this down. What I'm basically trying to do here is make sure that only things that are referenced here are created inline at the front part of the test. I may need to create other things behind the scenes just because those objects need to exist. But as a reader of a test, I do not care about them. They are not relevant. So let's get them out of the way so they're not obscuring our view, so to speak. So let's keep looking at this. What else can we improve here? We have a bunch of statements down here at the bottom to construct the line item and get them and so on. What is this really saying? If you were to explain this to me in English, what would it be telling me to do? We want to elevate the language from just pure Java and whatever domain objects we happen to have lying around to something that expresses the intent more clearly. So what we're really saying down here is something more like this. There should be exactly one line item on this invoice. And the line item should reference this invoice, for this product, with this quantity, and so on. So we're slowly making this test easier and easier to understand. So that looks pretty good. We're down from 20 lines of code-ish to kind of five lines of code. Now, 10 years ago, I would have said that's good enough. But I think we can keep making this better, keep making it easier to understand. So one of the things I want to talk about here a little bit is terminology and how terminology influences the way we think. When I wrote my book, I always divided my tests up into setup, exercise, verify, and tear down. And we call this the four-phase test. One of the reasons we talk about the four-phase test is that we want to make it clear that it's a single test condition that we're testing. 
If we have six phases to the test, because we set up and we exercise and we verify and then we exercise some more and verify and exercise some more and verify, we actually have three different test conditions or checks that we're doing there. But they all happen to be strung together. And that makes it much harder to understand. Now you'll also see another set of terminology, which is Arrange, Act, Assert. And this is a wonderful little alliteration. Bill Wake came up with it. He's really good at coming up with things like that. But this suffers from the same problem as setup, exercise, and verify, which is that it's focused on mechanics. It's saying what you're doing; it's not saying why you're doing it. So one of the things that I've found myself doing more recently, changing the way I think about these things, is that I've adopted the BDD terminology. So what we're really saying here is: given we have a product and an invoice, when we add an item on this invoice for this product with this quantity, then we should end up with a single line item on the invoice that looks like this. So the question is, if I would explain this to a person using this terminology, why aren't I using that terminology when I write the automated check? So what can we do to make this clearer? Well, first of all, let's start by renaming the test. Now this uses the old convention, which is starting the name of the test with the word test. Modern frameworks allow us to use whatever terminology we want, because we use things like annotations or attributes or whatever to identify the test. So let's identify what's being tested, in what particular circumstance, and even what the expected outcome should be, in the name of our test. And now let's see if we can make that given/when/then more obvious. So let's look at maybe something like this. Let's use the same terminology in expressing our test as we were using to describe what the test should be doing. 
There should be exactly one line item on this invoice, and the line item should look like this. This is my expected line item. So now when I read this, I can see that this is expressing an expectation, and it's very clearly describing that expectation. So I know this is the then part of my test. Similarly up here, is this the code that I'm testing, or is this setting up a precondition? This is part of my given. If it's part of my given, let's use given terminology. Given any product, given an empty invoice. Notice I'm not just saying create me an invoice; I'm saying I want an invoice with no line items on it, and I might have other methods for creating invoices that do have line items on them. And sorry, a little premature there. So I've basically now moved the test code to express the domain concepts very directly. Another advantage of doing this is that it has now given me what amounts to a DSL, a domain-specific language, for describing the behavior of this code, this code around line items on invoices. And I'm gonna have a whole bunch of tests, and they're all gonna have very similar starting points or ending points. There'll be a few variations on these things. Once I've written one of these tests, it's gonna be very easy for me to write other tests using this terminology that I've built up. And I just wanted to throw out a kudos to Arlo; I don't know who here knows Arlo Belshee. He came up with this concept of naming as a process. We probably went through about four or five different names there for some of these things. And in each case, we weren't trying to come up with a perfect name. We were trying to come up with a slightly better name than the one we had before. And his process actually starts with coming up with a completely stupid name, because that'll force you to then come up with a better name later on. It's kind of a placeholder for thinking more about this name. But don't get hung up on the name, because perfection is the enemy of good enough. 
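Putting the given/when/then pieces together, the final shape of the test might look something like this sketch. The helper names, domain classes, and values are my reconstruction in the spirit of the talk, not the actual slide code.

```java
import java.util.ArrayList;
import java.util.List;

public class GivenWhenThenSketch {
    static class Product {
        final double price;
        Product(double price) { this.price = price; }
    }
    static class LineItem {
        final Product product;
        final int quantity;
        LineItem(Product product, int quantity) { this.product = product; this.quantity = quantity; }
        double totalPrice() { return product.price * quantity; }
    }
    static class Invoice {
        final List<LineItem> items = new ArrayList<>();
        void addItemQuantity(Product product, int quantity) {
            items.add(new LineItem(product, quantity));
        }
    }

    // --- given helpers: build preconditions, hide irrelevant detail ---
    static Product givenAnyProduct() { return new Product(10.0); }
    static Invoice givenEmptyInvoice() { return new Invoice(); }

    // --- then helper: a custom assertion that names the expectation ---
    static void thenInvoiceContainsExactlyOneItem(Invoice invoice, Product product, int quantity) {
        if (invoice.items.size() != 1)
            throw new AssertionError("expected 1 line item, got " + invoice.items.size());
        LineItem item = invoice.items.get(0);
        if (item.product != product || item.quantity != quantity)
            throw new AssertionError("wrong line item");
    }

    // addItemQuantity_oneProduct_shouldAddOneLineItem
    public static void main(String[] args) {
        Product product = givenAnyProduct();          // given
        Invoice invoice = givenEmptyInvoice();        // given
        invoice.addItemQuantity(product, 5);          // when
        thenInvoiceContainsExactlyOneItem(invoice, product, 5); // then
    }
}
```

Read aloud, the body of the test matches the English description almost word for word, which is the whole point of the exercise.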
Just come up with one that's a little bit better than what you had. So now let's look at another one of these tests. So this was the, oh, that's the old-style naming with test; I should probably rename all these tests to describe more clearly what each one is checking. And by doing this now, just by looking at the names of the test methods, I can see very clearly what all the different combinations I'm testing are. It's much easier to see what I'm missing, and for each one, what the expected results should be. So let's drill into another one of these. So here I basically just copied, cloned, cloned and twiddled, classic programmer reuse mechanism, right? I cloned and twiddled; I cloned my existing test method, and now I'm gonna change it a little bit. Now which case am I dealing with here? Add item, okay, so it's a duplicate product, and I expect that I should have a single item with the sum of the quantities of the two things. So now I'm gonna take this and modify the expectation described here, and I'm starting at the back for a very good reason. I'm gonna change this to express what the result should be. So now I expect there to be one line item with the one product, and the quantity is gonna be the sum of the two quantities that I'm gonna add, and the total price is gonna be the product price times the sum of the two quantities. Now that tells me, well, the compiler will very quickly tell me, the IDE will tell me, that I need to declare quantity two, and I need to declare a second, well, I'm using the same product, so that won't be a problem there, right? So that tells me I need to do this, and now I need to do that, and so in just a few seconds, 30 seconds probably, it took me to write another test for another test condition. And it pretty much reads the same way I would have described this as a narrator if I was leading someone through this code. So let's look at another example with a slightly different outcome. 
I'm gonna add a different product, and I'm gonna end up with two items on the invoice. So let's start again with the expectation. Instead of there being exactly one line item, there should now be two line items, and I'm gonna have two expected items. Now you'll notice I'm passing two distinct line items to this custom assertion, and there's a reason for that. Being a software engineer, I could be clever and pass an array of line items or a list of line items or something like that, but there are several reasons I won't do that. One is that it pushes complexity back into the test, which is what I'm trying to keep as simple as possible. I'd have to construct the list or construct the array, et cetera. The other reason is I really don't need a general solution. I haven't seen a need yet for more than two line items in a test, so why would I start building a way to do assertions for any number of things? To use Kent Beck's terminology, that's borrowing tomorrow's trouble. I may not need that, so I'm not gonna do it yet. And in fact, it keeps this code really simple, because the implementation of this custom assertion is much simpler if I only have to deal with two line items than if I have to deal with an arbitrary number. So it's all about reducing complexity, making it easy to understand the test, and also making it easier to write these utility methods. So now I've got my second line item. That's gonna tell me that I need to add the second item on there, and I need to construct a second product, and I need a quantity for it. Now you'll notice, because I'm working from the back, if I was in an IDE, I wouldn't be able to get this demo done in the 40 minutes, but if I was working in an IDE, every one of those things would have been pointed out to me by the IDE, in the form of a compiler error or a red underline on the thing, et cetera. So once I've made the decision to change this part here, it's pulling into existence all the other things I need to do. 
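The two-explicit-parameters idea might look like this sketch, with plain strings standing in for line items (the real assertion would take domain objects; these names are invented):

```java
import java.util.List;

public class TwoItemAssertionSketch {
    // Two explicit parameters rather than a list or array: the call site
    // stays flat, and the implementation stays trivial, because no test
    // so far needs more than two expected items.
    static void assertContainsExactlyTwoItems(List<String> actual,
                                              String expectedFirst,
                                              String expectedSecond) {
        if (actual.size() != 2)
            throw new AssertionError("expected exactly 2 line items, got " + actual.size());
        if (!actual.get(0).equals(expectedFirst))
            throw new AssertionError("first item: expected " + expectedFirst + " but was " + actual.get(0));
        if (!actual.get(1).equals(expectedSecond))
            throw new AssertionError("second item: expected " + expectedSecond + " but was " + actual.get(1));
    }

    public static void main(String[] args) {
        assertContainsExactlyTwoItems(List.of("Widget x5", "Gadget x2"), "Widget x5", "Gadget x2");
    }
}
```

If a third item ever shows up in a test, that is the moment to generalize, and not before.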
It's also helping me to name things, because I'm naming them and using them before I declare them. So I'm not inventing names that may or may not be evocative. In this case, I'm calling them product one and two. In other cases, it could be the first product or the last product. The way I use them in the expectation definition down here would make it clear to me what they would need to be called. Now, one of the things you've probably noticed here is I've been sprinkling these whens and thens throughout the test. I've renamed them from setup, exercise, and verify. One of the philosophies in the clean code movement is that inline comments in code are deodorant. They're telling you that the code isn't easy enough to understand. But now that we're using this convention, where we construct our preconditions using methods that start with the word given, and we express our expectations using methods with things like expected or should in their name, it's now very clear which part of the test we're dealing with. So the when and then comments are really quite superfluous, and I can just get rid of those. And this test happens to have a few more objects than others. Very often we're going to be dealing with only three or four lines of code, at which point having two or three extra comments in there just makes things longer and doesn't actually add to the understanding. So the benefit of adopting this style of writing tests is that it's a lot quicker to write tests, because you've got this language that you're writing the tests in. You're not writing the tests in your programming language; you're writing them in what amounts to an internal DSL that you've implemented in a just-in-time fashion. And it makes the tests a lot easier to understand, because there's a lot less code to read. 
And because the writer has focused on highlighting to the reader exactly what's important, these tests are written for the reader, not for the compiler. We're focusing on our future audience, which may be ourselves six months from now when the build breaks, or it may be the next person that we hire who has to go in and understand this stuff. We can look at two tests side by side and very easily see what's different between them, because we're using the same language throughout. And we can see, oh, yeah, we're adding an extra product here. Or we're expecting two items instead of one item. The deltas between things are very easy to see. So we can see what the differences in expected behavior are. And because most of the interaction, especially the incidental interaction with the production code, is encapsulated behind our given methods and our expected and should methods, when tests break, we typically only have to change one or two places in the code. If you add an extra argument to a constructor, you're going to have to change a given or an expected. You're not going to have to go and visit tens or hundreds of tests. So it actually saves us a lot of trouble. It makes our tests much less fragile. Now, the question you might be asking is, is refactoring the only way to get to clean tests like this? Should I start by writing big, long, ugly tests and then refactor things out of them? And the answer is, you can do it either way. If you happen to have a bunch of ugly code, use your IDE's refactoring capabilities to help you get to the cleaner code. And if you're writing brand new tests, then you might start thinking as you're typing. So here we're writing a test, and we're writing a bunch of code. This is just the last part, the when part of the test. And as we're typing away here, we start saying to ourselves, or maybe if we're pair programming, our pair says to us, this is getting kind of messy here. 
Are we describing intent, or are we describing the mechanics of how we express the expectations of this test? Maybe we should elevate the language of this test a little bit higher. So let's back up out of that and just type in what we're really saying here. This is what we really expect, in plain language. Now that we've expressed it in plain language, we can go off and the compiler will help us fill in these things. Yeah, we're going to need to write this should method. We're going to need to write this assertInvoiceHeaderIs method. And we can even use TDD to implement those. We can write unit tests for those custom assertions. So it's all about avoiding getting down into the weeds. Instead of saying, yeah, we need to call the constructor, and it takes these four objects, and that means I need to go and construct all these objects, just say what you really mean in the test, and then go back and fill in the details in the utility methods. All right, so going back to our original question: what does it take to be successful at having sustainable automated unit tests? We need our programming experience, but this is relatively simple programming once we've figured out how to do it. We need an understanding of the xUnit frameworks; they're relatively simple, and we're in fact encapsulating a fair bit of their complexity away from ourselves as well. Coming up with the test conditions is always interesting, and there's a bunch of techniques around how to do that. We also need to really focus on naming and on avoiding complexity. We need to refactor regularly. Anytime we're starting to see a lot of duplication in tests, especially across tests, we have to ask ourselves, is that actually detail we want duplicated, or is that too much detail in the tests? And there's a bunch of other things we don't have time to talk about today. But the key thing really is this fanatical attention to keeping our tests maintainable and easy to understand. 
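Writing unit tests for a custom assertion, as suggested here, might be sketched like this; the assertion and helper names are invented for illustration:

```java
import java.util.List;

public class AssertionSelfTestSketch {
    // The custom assertion under test.
    static void assertExactlyOneItem(List<?> items) {
        if (items.size() != 1)
            throw new AssertionError("expected exactly 1 item, got " + items.size());
    }

    // A tiny test helper: does the assertion fail on this input?
    static boolean failsOn(List<?> items) {
        try {
            assertExactlyOneItem(items);
            return false;
        } catch (AssertionError expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        // The assertion should pass for one item and fail for zero or two.
        if (failsOn(List.of("only"))) throw new AssertionError("should pass for one item");
        if (!failsOn(List.of())) throw new AssertionError("should fail for zero items");
        if (!failsOn(List.of("a", "b"))) throw new AssertionError("should fail for two items");
    }
}
```

This avoids the "who tests the tests" regress in practice: the custom assertions are small and directly checked, so the tests that use them can stay simple.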
And these things together will help get us towards robust automated tests. So just in closing, some questions to ask yourself. Are your automated checks helping you deliver value continuously? Because that's what agile is all about, right? Do they help us understand what we need to deliver for doing TDD? Do they help us understand what the code already does when we're looking at existing tests? Are the checks helping us make safety a prerequisite? Are they making it easier for us to work with the code, reducing the likelihood of breaking things? Are they helping us experiment and learn continuously? Are they giving us fast feedback on the changes to the code? If we change something in the code, do we quickly get a list of areas that are impacted by that change? Because we have failing tests. And are we making people awesome by automating the checks? Is the life of our developers better because we have tests? Or is it worse? Are they spending a lot of time dealing with broken tests because they're not cleanly written? And is it helping us deliver quality software that makes our users happy? So with that, I'd like to turn the floor over to whoever has any questions. Oh, I'm even five minutes early. Yay. So we have lots of time for questions. So we have a microphone coming around if anyone has a question to ask. Just put up your hand, please. Yeah, I noticed that you were extracting a lot of methods. But doesn't it just introduce more complexity, like the assertions? But normally, the readers of unit tests are developers as well. So is it easier to understand the code itself? So is extracting a bunch of code into a method increasing complexity or reducing complexity? That really is the question. 
And if that piece of code that you're extracting, 9 times out of 10, no, 99 times out of 100, when you do that extract method, if you're using a capable IDE, it'll identify at least a couple more copies of that same code that it's going to replace with the method call to that thing. So it's actually reducing complexity. It's reducing duplication. And if you follow the process of naming and coming up with good names and constantly improving the names, you're only ever going to have to go into that method once as a reader, a new reader of this code to check to see what does it say inside there. But it's going to save you an awful lot of time doing pattern matching, looking at a whole bunch of different tests, saying, is this test actually expressing the same expected outcome as this other one? Because there's five lines of code here checking the outcome. And there's five lines here. And the names are slightly different of all the variables. And it's using different values. But that's just because we use different values in the given part of the test. It turns out they're actually expressing the exact same thing. But you can't tell that without spending time understanding it. So by doing this, we're actually reducing the complexity of the test. We're not increasing it. It's really just good software engineering applied to our tests, the same level of discipline that we would do if we were to spend the time on it for our production code. And the nice thing about this is even if you don't think that you have the liberty to do this in the production code, because it's, oh, it's too real-time critical. We can't have that many method calls or whatever it is. That's the reason for having 100 line methods in your production code. You can actually learn a fair bit about how to use your tools more effectively and gain confidence in your refactoring skills by applying this to your tests first. 
And then later on you can start to move those practices into your production code as well.

During the arrange phase, sometimes you need to do a lot of steps to construct the desired initial state for the test. Those actions are probably already tested in some of the other test cases you defined earlier, so you end up having to repeat a lot of steps to construct that state. And sometimes, conveniently, we just piggyback on the existing test cases, because they have already done all those things. And now we have test case 31, and 32, and 33. So what do you think of this kind of scenario? Because constructing that initial state may take a lot of effort.

Right, OK. So just to make sure I understand what you're asking: if it takes a lot of code to set up the preconditions of the test, the given for any particular test, and we already have another test that produces that state as its outcome, I'm reading between the lines, but is that what you mean? Should we use that other test to set up the preconditions of this test?

Yeah, I know it's not a good practice, but in this case, what can be done to simplify the setup? Maybe that's the right question.

So the question to ask yourself is, which parts of all that complexity of setting up the precondition, the givens for this test, are actually important to understanding the expected outcome? Do your extract method on all the code that does that setup, and only pass in as parameters the things that are important, the things the reader should pay attention to, and which would vary from one test condition to another. This process I was describing, of extracting and extracting and extracting until we boil it down to just the essence of what's important to understand, you can do that regardless of how complex the code was to start with.
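One way to sketch that setup-extraction advice in a few lines of Python: a creation-method helper hides all the steps needed to build a valid starting state, and the test passes in only the one detail it actually cares about. All the names here (`make_order`, `order_discount`, the "GOLD" status) are hypothetical, invented for illustration.

```python
def make_order(customer_status="REGULAR", item_count=1):
    """Creation method (hypothetical): hides the many setup steps needed
    to build a valid order, exposing only what a test cares about."""
    customer = {"name": "any customer", "status": customer_status}
    items = [{"sku": f"SKU-{i}", "qty": 1} for i in range(item_count)]
    return {"customer": customer, "items": items}


def order_discount(order):
    """Trivial stand-in for the production logic under test."""
    return 0.10 if order["customer"]["status"] == "GOLD" else 0.0


def test_gold_customer_gets_discount():
    # Only the customer status matters to this test, so it is the only
    # argument passed; everything else is defaulted inside make_order.
    order = make_order(customer_status="GOLD")
    assert order_discount(order) == 0.10
```

Because the defaults live in one place, a change to what a "valid order" looks like is absorbed by `make_order` instead of rippling through every test.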
Now you may end up with a bunch of different utility methods, which then have below them a common set of more flexible utility methods that take various parameters we don't always want to pass. But that's fine. What's important is that the set of utility methods you've left for the test to call directly are simple, and only take as parameters the things you want the reader to pay attention to. If it's important for the reader to understand, pass it as a parameter; if it's not important for understanding this particular test, and this is going to vary from one test to the next, don't pass it as a parameter. Does that help? Good. So we have one over here and another question in the back there. He had his hand up first, so he's got the microphone. Let him go ahead.

So I think your point here is very valuable for testing, but one thing that's very important is you can only test if you write testable code. And the other thing I think brings a lot of value to unit testing is mocking techniques. So what do you think about those two things: writing testable code, and how mocking helps with writing unit tests?

Yeah, so designing for testability is critical to doing a good job of unit testing. It's one of those other things that I said I didn't have time to talk about. The best way to make code testable is to write the tests before you write the code, because then you're writing the testability spec into the new unit test that you're going to test-drive the construction of your code with. It's always much harder to retrofit tests onto existing code, because it just won't have the affordances you need to put things into the right state or to inspect the state afterwards. So this is why TDD is so powerful.
If you go back to the economics slide, that savings part at the bottom becomes much larger when you're doing TDD, because you spend less time messing with the code and dealing with issues around making it testable. And because you've got the tests running all the time, you spend so much less time in a debugger. So the part above the line gets smaller, because it's easier to write the tests, and the savings below the line get larger, because you're spending less time doing manual debugging. Writing your code, manually debugging it, and then writing the tests afterwards is almost always going to be more effort than test-driving your code. So that's question number one. And question number two: mock objects are wonderful when you need them. But if you find yourself using them a lot, the odds are your design is wrong, because what you're forcing yourself to do is express in your tests the implementation details of the code that you're testing. So if more than 20% of your tests, and I'm just making up a number there, need to use mock objects, look at your design and ask, why do I need to mock so much in here? So I think I've just been given the signal that it's time for lunch. Thank you all for your questions and for being here. Enjoy the rest of the conference.
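As a footnote to that last answer, here is a minimal sketch of the kind of situation where a mock genuinely earns its keep: verifying an outgoing interaction rather than a returned value. It uses Python's standard unittest.mock; the function and field names (`notify_on_overdue`, `send_reminder`) are hypothetical, for illustration only.

```python
from unittest.mock import Mock


def notify_on_overdue(invoice, mailer):
    """Hypothetical production function: sends a reminder only when the
    invoice is overdue. The mailer is injected, so the outgoing call is
    the behaviour worth observing with a mock."""
    if invoice["days_overdue"] > 0:
        mailer.send_reminder(invoice["id"])


# The test cares about the interaction, not about any return value.
mailer = Mock()
notify_on_overdue({"id": 42, "days_overdue": 3}, mailer)
mailer.send_reminder.assert_called_once_with(42)
```

If most of your tests look like this, interaction-checking everywhere, that is the design smell mentioned above: the tests are pinning down implementation details instead of outcomes.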