All right, good morning. My name is Jeff Casimir. I started a company called Jumpstart Lab. And this is a talk called There Are No Tests. It's about testing and rescue projects. Yesterday, I tried to buy some of your favor with cupcakes. I hope it worked. I wanted to inject a little happiness, because this talk is a little bit of a downer. So it's like my happiness offset. We went happy; now we can bring you back to normal. So it's going to be a little bit serious here. If you write software, you have written crappy software. Ryan, when was the last time you wrote crappy software? Thank you. If you don't write crappy software, or you don't write software that you think is crap, the bad news is you will. You must be writing it today. Your bad software becomes someone else's problem. Maybe that's after you get fired and somebody else comes in your place. Maybe it's future you, six months from now, who looks back and says, what was I thinking? Why did I do it that way? And if you haven't done this, you will. I do it frequently. I look at stuff that I've used to teach people. I look at stuff that I've written. And it's just like, oh, god. No. Projects go wrong, but that doesn't mean they can't be turned around. In writing software, we have a unique privilege that most professions would love to have. If you were a doctor, imagine what it would be like to ensure that your cure would fix your patient. Medicine is mostly about prayer. They have these ideas, things that have worked, oh, six out of 10 times. Try it. Maybe. Maybe we understand the systems in the body. Maybe we can fix you. But there's a decent shot it's not going to happen. In software, you have the privilege of ensuring that your work succeeds. Yet many of us throw that privilege away. This is a quote from my friend Joe O'Brien — who I heard planted his sister here as a spy somewhere, I don't know — from last week.
And I think it links back to what Michael mentioned yesterday about the software architecture reflecting the people architecture in the company that develops it. It's the same thing. I think projects that go wrong rarely go wrong for technical reasons. It's usually people reasons, and the technical problems become a symptom. So technical solutions can't fix these problems. We can't fix people by writing tests. We can't fix people by flipping them up to Rails 3.1 and, like, hooray. But they can be a guide. They can point us in the positive direction and set us along the right processes. So let's talk about these rescue projects. There is a perception, I think — am I going in and out a little bit? No? I'm OK. All right. There's a perception that rescue projects are for people who can't get better projects. Like, oh, no funded startup wants to hire you to build their app? Well, here, take this previously funded startup and try to make it stop crashing. The truth is that I think rescue projects are the most noble projects. I think they are the hardest. When you go into a fresh application, you have the chance of being successful day one. When you go into a rescue project, day one is going to be ugly. No one has brought you in because they're like, oh, everything is cool. We wouldn't call it a rescue if everything were cool. Day one is going to be ugly. Day 10 is going to be ugly. Hopefully month 10 gets to be better. So you go in knowing that it sucks. And that's OK. Life is not about doing easy work. I think rarely will you look back and say, man, that job I did, it was easy, and that was the best work of my life. We'll look instead at the work that was hard, the work that was challenging, the work that made us question what we do and how we do it. And those are the ones we'll be proud of. When I started my professional career, I started with Teach for America.
I wanted to get into teaching and decided that, all right, I could go and teach at the private high school where I went to school. They would have taken me, unwisely. But I would prefer to go into a place that other people didn't want to go, where other people were unwilling to go, and try to fight the problems, fight the challenges, fight the tide of failure. And so I said, all right, let me go do the hard job. With rescue projects, you're making the same choice. Like, I'm going to go into the fire. I'm going to push back. I'm going to be stronger than those that came before me. So seek challenge. Because life is not greenfield. You can't just burn it down. Step one of a rescue project is not rm -rf *. You have to get in there and understand why it went wrong. Fixing systems — fixing complex systems — is much harder than starting them. If you want to be a hero, if you want to be a programming hero and superstar, you do rescues. First thing you need is expertise. A novice cannot fix a project on the wrong path. You need the ability to break down complexity. You need the ability to fix architecture. You need the ability to understand how code and business interact. And if you've been programming for six months, you can't do those things. You've been at it three years, maybe five years? OK, now I might hire you. You need passion. You have to love what you do. But more than that, I think you have to love people. And that's not something we necessarily excel at in software development. Sorry. You have to understand that you're dealing with a people problem — the software is the symptom of a people problem. And you have to go into the project saying, I care about you as people. I care about your business. I will help you succeed. It's kind of like believing in the change that you can make in the world. And then lastly, determination. In Teach for America, we talked about the relentless pursuit. That was always the quote: relentless pursuit.
When you get down — I remember my second day teaching third grade, I sat in the classroom and cried. I was 22. And I just sat there like, what have I done with my life? And I came back to that relentless pursuit. And I said, all right, what can break me? Nothing can break me. I can go. I can go. I'm not going to let some third graders be the ones that topple me over and make me quit. And so in a rescue project, you are going to deal with frustration. You're going to deal with frustration from the code you wrote, the code someone else wrote, the people that have been scarred by those old developers, the person who came and fucked everything up before you. And if you don't have that ethos of a relentless pursuit, you'll give up, too. So next, goals. You've decided you're going to do it. Now you need to set goals. An expert without goals, without a plan, is dead. If you don't have measurable goals, success, by definition, is impossible. You would never know when you got there. You would just stop at some point and be like, oh yeah, that seems good. So the only way to succeed is to set measurable goals and then measure them. You are entering a jungle. You are walking into a project that you know is fucked up. If you don't have the direction, the guiding light, of measurable goals, you will just walk — or code — in circles. In the big picture, there are three possible outcomes. First — there we go — is failure. If a rescue project fails, there's a temptation to say, eh, it sucked before I got it, it didn't work out, it probably wasn't my fault. It's probably the guy who had it before me, or the lady who had it before me. But there's more to it than that. A lot of these projects, a lot of clients we're working with — some of them are these massive companies with billions of dollars, and it doesn't matter.
But a lot of these projects, I think, get into rescue status because the client scraped together every dollar they had to put into that project. They couldn't afford to hire EdgeCase. They couldn't afford to hire Pivotal. So they went with one dude in his basement and said, come on, please, this is my life's dream. And now it's going down the tubes. When a project fails, what gets lost? Whose life savings, whose job, whose confidence, whose family is damaged? It's more than just software. The second outcome, and I think one of the most common, is survival. Survival is stressful. It's uncertainty. It's mistrust. It's like, oh, yeah, it kind of works. It's staying up. Nine out of 10 people can check out from our e-commerce store. OK? But it's no way to live. It's tense for everyone involved. A survival project is one where you're getting those text messages at 11:30, like, oh my god, everything is down. You can keep it going. You can keep putting foil on the ball. But you're not making real progress. The third possibility is to thrive. Success, profit, happiness, trust — what new things will come from a truly thriving project? It's got money, it's making money. Everybody's happy. Everybody loves each other. And they're like, oh, let's start more. Can we hire you to do more things? What do you want to build? Could you imagine that? What if a client came to you and said, you did such an amazing job, can we fund you to build something? It's possible. Why not? There was a sad quote I heard about a year ago at a business conference. They were talking about customer feedback. It's really valuable for your sales site to get quotes from your customers, right? Oh, working with Jeff is the best experience of my life. And the asterisk on it was: get it early in the project, because that's usually when they're happiest. Damn. Is that really the standard we're going to set for ourselves? And isn't it true? Can I look back at projects?
And you think of those first days and everyone was holding hands, like, woo-hoo, we're doing a project! And then by the end, you're like, oh, god, we're doing a project. How many of those are there? So, progress. You have goals, but goals don't mean anything without measurement. Being agile doesn't mean having no plan. It means constantly correcting course, heading towards that guiding light of your goals. You can't correct course without new information. If I'm trying to get someplace, I look it up on my map and then I just start walking — there's a 50-50 shot I might get there. But it's much more likely if I'm watching that phone, getting my little blue dot walking along the way with me, getting constant information to correct course. So how do you measure? The first benchmark — Michael mentioned this yesterday — is coverage. Coverage is definitely a flawed benchmark. It's kind of artificial. You can pump up code coverage. But I think it is rare that a project has good coverage and has terrible testing. Maybe people have gamed the system and so forth, but coverage never hurts. I never say, oh, this code is too covered, right? Where it gets more interesting — and I don't think a lot of people pay much attention to this — is velocity, both of new features and of fixes. How long does it take for a new feature to be added? When we do those estimates and you give it a point — you say, this feature is going to take three points — what percentage of the time is the estimate on the mark? Is everything we estimate at three points taking five points of time? Is everything we're estimating at three points taking three points? Then we know we're making progress. As a software project deteriorates, that velocity slows down. It rarely, rarely speeds up. But in a rescue project, you have the opportunity to speed it up, because it's probably already very low. Complexity — we're fortunate to have amazing tools.
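Before getting to the tools themselves, here's a hypothetical sketch of how you might act on those complexity scores. The method names and numbers are entirely invented; in practice a tool like flog produces the real figures:

```ruby
# Hypothetical sketch: given flog-style complexity scores per method
# (these numbers are invented), flag the outliers worth investigating.
def complexity_outliers(scores, factor: 2.0)
  average = scores.values.sum / scores.size.to_f
  scores.select { |_method, score| score > average * factor }
end

scores = {
  "Order#total"       => 15.0,
  "Order#tax"         => 10.0,
  "Checkout#process!" => 36.0,  # the "hmm, why so complex?" method
  "Cart#add_item"     => 9.0,
}

complexity_outliers(scores)  # => {"Checkout#process!"=>36.0}
```

The absolute numbers don't matter; the comparison against the rest of the codebase is what points you at the areas of complexity.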
I credit Seattle.rb for some of their work in this department. I said last week, if you want somebody to screw up your software in an automated way, talk to Seattle.rb. They have built some great tools — metric_fu and flog and flay and all that — that can give you these numbers. And it's like, oh, this method is a 15. What does that mean? I don't know. And it doesn't matter. But if you have 10 methods that are a 15 and one method that's a 36, you're like, hmm, area of complexity. That's interesting. What's happening there? Why so complex? Then you have faults. What percentage of your users encounter a fault? What percentage of transactions encounter a fault sometime during their life cycle? It's one that's really commonly overlooked, I think. Your response time is easy to measure. Response quality is a lot harder. If you're Google, response quality might mean: when someone searches, how often do they search again? When someone uses this feature on our site, does it give them what they were expecting? Is the response what they were looking for? And then lastly, most important, value — whatever your business domain is: signups, sales, et cetera. So if you measure all those things, you can see progress over time. But none of it is possible without breaking the cycle. Projects don't tend towards order. They tend towards chaos. They tend towards complexity. They tend towards that graph Michael showed, like, 80 billion god-awful nodes all interconnected. Just changing the developer isn't enough. Coming in as the hero is not going to change the project. You have to have hard conversations, people-to-people conversations, because people created these problems. Just changing the developer isn't enough of the mix to really guarantee success. The client has to be willing to make change. There's an interesting book called The Clean Coder. Anybody read Clean Coder? That's just a few. You might check it out. I liked it. In it, Bob Martin talks a lot about being a professional.
And one of the things he does a good job of, I think, is analyzing language — especially the word try. Try. What is the word try? It's bullshit, right? It's non-commitment. It's like, oh, I'll try and meet you guys for lunch — I'm not meeting you for lunch. I'll try and deliver this feature Monday — I'm thinking it's going to be Friday. And that behavior breaks confidence, right? It starts to separate you from the stakeholders and starts to break down trust. When you're dealing with a rescue project, you're dealing with damaged people. It's like someone who just got out of a long-term relationship that broke their heart. If you normally email clients once a week, you should probably email your rescue client once a day. You should make videos for them. I think this is something great that Envy Labs does down in Florida. Every week, their developers make screencasts of the features they built that week and send them to the client. How reassuring would that be as a client? You've had this terrible experience, someone has come in promising to save you, and now you get this constant feedback: oh, we fixed this. Oh, we built this. Oh, we did that. And you say, oh, shit — maybe we can actually do this. It's all about telling them that you are there for them. All I care about is you. All I care about is your success. I am here working for your success. Discipline and trust. Trust comes from expertise consistently applied. This is something you'll see, I think, if you read the books about, like, Zappos and so forth: customer fanaticism isn't about delivering great customer service often. It's about delivering great customer service always. It's the consistency. Imagine what it would be like if expertise were inconsistent. I go to the doctor — you might have seen me, I have this jacked-up wrist. I go to him and I say, oh, my wrist is messed up. He does an MRI and he's like, oh, you have a torn tendon in there.
And I say, no, no, I don't. And he's like, yeah, you're right, you don't. What? How much trust would I have in that doctor? Your expertise should stand for something. And if the client trusts you, they'll trust that expertise. If you give them that constant feedback, they will believe what you say. When you say, hey, we need to switch from this cloud provider to this one because we're gonna get better IO or whatever, they'll say okay, instead of saying, oh, $10 a month, I'm not sure. With trust, clients can turn into evangelists. And that is an extremely powerful thing for a business. That is what Zappos has captured. That is what a lot of these model companies have captured. So, process. Let's talk about process for a minute. Tools matter, but they don't matter that much. For me, when I work on a project, I have a few kind of non-negotiables, which are GitHub, Heroku, and RSpec. If I'm really gonna be comfortable in a project, those are the three things that I need. Most other parts I can be flexible on. If somebody really pressed me into Test::Unit, okay, I could do it. I won't feel as happy. But it's okay to switch the tools in a project as long as it is for demonstrable gains. It's not okay to say, oh, you used Test::Unit? Yeah, I don't really know Test::Unit, so I'm gonna flip everything over to RSpec. That's not enough. Now, if there are no tests — hey, use whatever framework you want. So it has to be about measurable gains: hardware, services, spending money. I think that's one place clients get themselves in trouble — they get preoccupied with monthly bills. Like, oh, this service is $150 a month and this other one is $200 a month. Well, I'm $150 an hour, so that's 20 minutes; with this conversation about it, you just wasted $50. So just switch and stop asking questions — I'm the expert here. One of the first keys, or a commonality I see in projects that are struggling, is deployment.
If a project can't be deployed in one command, that should be, I think, the first priority — before any hotfix, before anything. Deployment is not just about shipping features and saying, oh yeah. It's not just about getting new stuff out there. The moment you deploy is when software changes from a monologue on my hard drive to a dialogue with customers, with the stakeholders, with everyone. The more you deploy, the faster the feedback loop. The faster the feedback loop, the faster the correction of course, and the more likely you are to succeed. To abuse my friend Brian's phrase, I would say: maybe deploy all the fucking time. It teaches the client that you can do this, that you're making progress. If you sit in a room and you're like, oh yeah, we'll deploy this in like three months, the next version is gonna be amazing — it's never gonna work. Once you deploy, then it's time to monitor. You can monitor runtime. You can monitor that value metric, monitor coverage, and monitor complexity — that's what we talked about. And monitor them constantly. Especially with complexity and coverage, I see people monitoring those only occasionally. When they first get a project, they'll run all the metric_fu stuff and be like, oh yeah, the previous person wrote such shit, I'm gonna come in here and clean this all up. And then they don't run it again for, like, six months. Why not run it as part of your deploy hooks? Why not run it on your CI? Why not run it every commit and see how your trends are developing? That graph Michael showed us, with all the complexity information growing over time — if you can't generate graphs like that about your project, you're not collecting data. Tools, strategy, client — now it's time. You gotta just do the work. Without tests — without automated tests — long-run success is unlikely. I won't say it's impossible, but it's unlikely. Professionals do professional work. Tests are not about validation. I wanna strangle people when they say, why should I write a test?
Because I can click on it and see it work. It's not about it working, stupid. Yeah, it worked — buggy software worked at some point. It's very unlikely that the person shipped it without ever trying it. The problem is, when you add feature B tomorrow, will you break today's feature? And are you gonna go through and click all 10,000 features? No. Will it work tomorrow? There's this rule of thumb that projects spend about 30% of their time in construction and 70% in maintenance. Does testing slow down construction? I think it does. I think unless you're extremely proficient at testing and really on top of your game, you probably develop software more slowly than someone with similar experience who's not writing tests. So 30% of the project is gonna be slower, and 70% of the project is gonna be dramatically faster. I don't know if you guys saw — there was a link floating around in the last couple days to some 2009 research from Microsoft showing that even at Microsoft this holds true: the initial development was slower, the long-run maintenance was dramatically faster. Like 15% slower in development, 60% faster in maintenance. Building software isn't about building software; it's about reducing the maintenance, right? The long-term lifetime. Most of the shitty software we write is shitty because we intentionally made it shitty, right? My worst programs are the ones where I'm like, I'm just writing this, we're gonna use it one time and then that's it, we're gonna trash it. And then two years later, when we're still using it: why did I write such crap? There are no miracles when it comes to writing automated tests. There's not going to be the magical day when the client says, hey, you know what? You're gonna spend the next month just, like, just write some tests, man. Not gonna happen, sorry. It would be beautiful if it did, but it's not. The only progress you can make is one small step at a time.
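On that note of small steps — and the earlier point about running your metrics on every commit — here's a hypothetical sketch of recording a metrics trend line, so graphs like the ones described above become possible. The class, field names, and numbers are all invented; in practice the figures would come from your coverage and flog reports, run from a CI step or post-deploy hook:

```ruby
require "json"
require "time"

# Hypothetical sketch: append one metrics snapshot per commit or deploy,
# so you can graph trends over time instead of running metric_fu once
# and forgetting about it for six months.
class MetricLog
  def initialize(io)
    @io = io
  end

  # Writes one JSON line per snapshot -- easy to append, easy to graph later.
  def record(commit:, coverage:, flog_total:)
    @io.puts JSON.generate(
      "time"       => Time.now.utc.iso8601,
      "commit"     => commit,
      "coverage"   => coverage,
      "flog_total" => flog_total
    )
  end
end

# Usage: in real life the io would be File.open("metrics.jsonl", "a").
MetricLog.new($stdout).record(commit: "abc123", coverage: 41.7, flog_total: 982.4)
```

One line per commit is enough; the point is that the data exists at all, so the trend is visible.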
Projects are built and destroyed one commit at a time, one commit after the next. So the fact that your project doesn't have any tests — okay, well, you get to write the first test. You've now increased the percentage of tested code by infinity. Hooray! Tomorrow your progress won't be quite so awesome. But if you focus on being the developer that you want to work with, you'll say, all right, there is not going to be a light that shines down and fixes all problems. I will just fix this one tiny little problem now, test it, and have confidence in it tomorrow. So, places to start — I don't want to be all theoretical with you. If you're talking about a Rails app — and if you're working on a rescue project, it's probably a Rails app — some easy places, some low-hanging fruit, are relationships. A lot of people don't test relationships because they're like, oh, I'd be testing ActiveRecord. Actually, you're not. You're testing the fact that this ActiveRecord object is hooked up to that other object. And that's a little bit different from testing the internals of ActiveRecord. Validations are one where I write tests for every validation, and some people look at them like, what's the point of that? Well, actually, I would say three out of four, maybe four out of five bugs I encounter are due to validations earlier in the data's life cycle. Almost every UI problem I see is because of a value in the database that shouldn't have been allowed there in the first place. So: testing validations, testing calculations — anything you see in a model is obviously easy to test. Drop in those unit tests around the business logic. And then the last one — I have this love-hate relationship with helpers, but they are also very, very easy to test. Refactoring for understanding is an idea I stole — my next two ideas are just stolen from Jim Weirich. So thanks, Jim.
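Stepping back to those validations for a second, here's a minimal, framework-free sketch of the idea. Product is a hand-rolled stand-in, not an ActiveRecord model, and the rules are invented; in a real Rails app the same assertions would live in RSpec or Test::Unit against the real model:

```ruby
# A stand-in for an ActiveRecord model, to show the shape of the tests.
class Product
  attr_accessor :name, :price_cents

  def initialize(name: nil, price_cents: nil)
    @name = name
    @price_cents = price_cents
  end

  # Stands in for: validates :name, presence: true
  #                validates :price_cents, numericality: { greater_than: 0 }
  def valid?
    !name.to_s.empty? && price_cents.is_a?(Integer) && price_cents > 0
  end
end

# One tiny test per validation -- cheap insurance that keeps bad values
# out of the database, where they become tomorrow's UI bugs.
raise "name should be required"  if Product.new(price_cents: 100).valid?
raise "price should be required" if Product.new(name: "Mug").valid?
raise "happy path should pass"   unless Product.new(name: "Mug", price_cents: 100).valid?
```

Each test is trivial on its own; together they guard the data's whole life cycle.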
Refactoring for understanding is this idea: instead of reading code — people talk about, oh, I read code. I don't read code. I can't read code, really. A lot of times when people throw code up on the screen, I'm like, I don't get it. I have to use code. I don't really understand it until I start pulling it apart and seeing how all the strings are connected. And that's what refactoring for understanding is about. It's not necessarily refactoring to improve quality. It's: let me just refactor this and see how the different parts are affected by change. You can combine manual testing with extractions. Let me show you how that goes. You find an area of complexity — maybe you look at your metrics and find a method that is abnormally complex. First, test it manually. Click it, as they say. Extract the smallest component you can out of there. What's one control structure, one if-else, that I could extract out to a method? Write a test for that method, re-implement the code, and then validate that the test passes. Use the tested code in the original spot and click it to make sure it still works. Do that, and you can extract pieces out and start to lay the foundation that will be your test suite. The second one is comment-driven development. This is the process of commenting out code and making sure the system breaks. I have seen before — not infrequently — code that I look at and I'm like, I don't understand what this does. Comment it out. Turns out the system wasn't using it in the first place. Oh, we've had code here that is completely unused. Fantastic. Delete. So you comment it out, you find the breakage, then write a test that captures that breakage. And the crucial step, to me, is: don't uncomment the code. Still follow your normal TDD process and build the smallest thing that could possibly work.
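Here's a hypothetical sketch of that comment-driven loop, with an invented Invoice example — the confusing original, its commented-out corpse, and the minimal parallel rebuild:

```ruby
# Comment-driven development, sketched. The legacy method is invented for
# illustration. Step 1: comment out the confusing code and see what breaks.
# Step 2: write a test for the breakage. Step 3: don't uncomment -- rebuild
# the smallest thing that passes, right next to the original.
class Invoice
  def initialize(subtotal_cents)
    @subtotal_cents = subtotal_cents
  end

  # The original, commented out -- kept nearby so its weird guard clauses
  # become visible edge cases instead of mysteries:
  #
  # def total_cents
  #   t = @subtotal_cents
  #   t = 0 if t.nil? || t < 0     # why would subtotal ever be negative?
  #   (t * 1.08).round             # hard-coded tax rate? another question to ask
  # end

  # The parallel rebuild: the smallest thing that passes the new test.
  def total_cents
    (@subtotal_cents * 1.08).round
  end
end

# The test written against the observed breakage:
raise unless Invoice.new(1_000).total_cents == 1_080
```

The commented original stays put until its questions are answered, which is exactly the point of the next step.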
And what you'll see is, as you build a parallel implementation right next to the original comment, you'll hit these things like, why did they have this little check there? Those are your edge cases. Those are the parts you don't know yet, because you don't have that depth of experience with this code. So I like to do them in parallel, because you can see: what were the weird things they were concerned about? Should I also be concerned about those? So, rescue projects are often about putting out fires, right? They don't come to you because things are going all right. They say, things are fucked. It's crashing every two hours. We need you to fix this. Your process has to balance firefighting with long-term investment. If you just constantly fight fires, you will be fighting fires until you give up. If you're not investing along the way, you'll just repeat the mistakes of the same amateur that put you in this spot — especially if that amateur was you. Code with problems is probably misunderstood code, right? Nobody goes in deciding to write complicated code. Code always starts simple, and then you add on another protection, another guard clause, another if condition, and all of a sudden you have this morass of crazy stuff. It's probably a foil ball, and that's where your bugs are most likely to occur. Investing in those areas pays off greatly in the future. Another technique I like to use is called pending to excess — or, really, the way I think of it in my head is pending my ass off. If I don't have a lot of time, I'll go in and write a test for the critical functionality that I have right now — maybe the happy path or whatever — and then I will write a bunch of pending tests. It's like, all right, I can't implement all of these right now, but I'm gonna make notes so that when I do have time, or when I do come back to this for another bug, I can see what I was thinking last time I was here.
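Sketched in Minitest, since skip is a close cousin of RSpec's pending — the coupon rules and names here are invented:

```ruby
require "minitest/autorun"

# An invented method under test, plus the "pending my ass off" pattern:
# one real test for the happy path, and skipped stubs documenting what
# still needs coverage for whoever lands here next.
def apply_coupon(total_cents, percent_off)
  total_cents - (total_cents * percent_off / 100)
end

class CouponTest < Minitest::Test
  def test_percentage_discount_happy_path
    assert_equal 9_000, apply_coupon(10_000, 10)
  end

  def test_expired_coupons_are_rejected
    skip "pending: what does the legacy code consider 'expired'?"
  end

  def test_stacking_two_coupons
    skip "pending: saw a guard clause for this in checkout -- edge case?"
  end
end
```

The skipped tests cost nothing to run, but they carry your notes forward to the next visit.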
So if tests are great documentation, pending tests are like stub documentation. They give you some context for future debugging. So, I've been talking a lot. Renee, let me ask you a question, since I saw your face out there. When a client has a bug — the stakeholder has a bug — how many emails does it take before you implement the code to fix the bug? Would you say, like, on average? You can make something up. Lots? Like four, six, right? Usually you get the email that's like, it's broken. Oh, sweet, thank you. This is of no help. Can you tell me what you were doing? I was using it. Oh, fantastic, right? We have a language communication problem, right? This comes back to a people problem. What if we taught standards of communication? What if we worked on bug reports like this? What if we taught people to write bug reports in a mad-lib style? As ___, when ___, then ___, but ___ — something went wrong, right? And then you give them, for when they're writing these bug reports, user types — here are eight examples of a user type: unauthenticated user, authenticated admin, admin that's doing something wrong, user named this, et cetera. If they had a structure like this, they would actually give you better bug reports, right? Clients don't want to waste their time any more than they want to waste your time. But they don't know how to do this. They don't know how to just write you an email like, oh, by the way, as an unauthenticated user, when I clicked this thing, I expected to see the username and password — oh, that part's left off, there we go, right? If you had a bug report like this, how easy would it be to fix? Pretty damn easy, right? You're talking two emails now: the one they sent you and the one you send them back with the fix. Done, right? How hard would it be to massage this into a Cucumber test? Pretty damn easy. Bug reports are integration tests.
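That mad-lib maps almost one-to-one onto a Cucumber scenario. A hypothetical sketch — the feature name and steps are invented to match the login example above:

```gherkin
Feature: Login
  # As an unauthenticated user,
  # when I clicked "Sign in",
  # then I expected to see the username and password fields,
  # but the form didn't appear.
  Scenario: Unauthenticated user sees the login form
    Given I am not logged in
    When I click "Sign in"
    Then I should see "Username"
    And I should see "Password"
```

The structured report is already given/when/then in disguise; turning it into a failing scenario is mostly transcription.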
You should treat them as integration tests — or maybe flip it around: integration tests are kind of like bug reports, especially ugly-path integration tests, okay? So that's enough from me. This is about rescues. It starts off: be brave, solve hard problems, right? Make goals — you can't succeed without them. Oh, come on, too much clicking. Don't repeat the mistakes of the people before you. Work in small steps; work in small steps. Fires are points of weakness — they should be your areas of focus. Save projects, people, jobs, money, the world. It's inefficient for the world, for the economy, for projects to fail, right? So if you save a project, you are saving the world. That's true. My name is Jeff Casimir. My company is Jumpstart Lab. I teach the best Ruby and Rails training classes on Earth. And that's my Twitter — you can't have it, it's only two letters. So do I have any time? I don't, I'm already over. Break? Who needs a break? You've got to stretch, right. Questions, objections, hate? I know you probably put the hate on Twitter. You know what I'm saying, it's true. [Audience comment about corporate clients where reporting a single bug takes an enormous email thread — partially inaudible.] Yep. 130 emails to report and fix the bug, right? I wish — I wish that were crazy. No other questions? That means I talked too much, right? Yeah. Yeah, this is beautiful. I've had a little trouble with this, like with Jing — I've had them send me, like, .swf files, Shockwave files, and I'm like, I don't even know how you got this out of there. But yeah, maybe send them a screencast on how to make screencasts as bug reports, because they are fantastic. If they're not going to use a structured format like that, I say, hey, just click this button, make a little video, send it to me. I think bug reports as screencasts are also excellent. Yeah. [Audience question, partially inaudible.]
Right. Well, that's the easy thing to do, right? It's easy to continue the trend — like a moving ball, you can just keep pushing it. It's much harder to stop it. So it's not sexy to take over a project and say, I'm going to deliver no features; I'm going to make your project better, but you can't see it. And that's partly where the metrics stuff can come in, to say: all right, maybe we're not going to focus on the features — your thing is working — but let's try to improve response quality. Let's reduce complexity. Let's set you up to benefit from a distributed-systems approach, that kind of thing. And it sounds like it's a client who kind of knows and cares what's going on under the hood, right? Otherwise — if it were working — they wouldn't be calling you in. And so I think they could probably be receptive to arguments like that. Like, hey, your thing is surviving, it's cool — let's make sure it's badass 10 years from now. I do, I do — so the question was, is it too ambitious to get clients to write Selenium tests? I think clients can have the feeling of, hey, I'm doing your job, and that can be a little bit tough. I want the lowest possible barrier to entry for clients. I want to get just the information I have to get and then be like, okay, now go do your thing, I'm gonna do mine. And that's why I like the text approach — everybody likes email, and they can just email from their phone or whatever. So if you're working with a highly technical client, one that clearly cares about what's going on under the hood, then I would say, yeah, maybe doing something like that would be cool. There's a cool project — I forget the name of it.
It's out of a shop in Australia, where they have, like, Cucumber for the web — you can write Cucumber-like scenarios and it'll go and exercise your live production site. And I could see clients doing something like that. Sorry guys, I forgot to stop.