All right, so let's talk about unit testing. Automated testing is typically fun for about a day. What happens is that it grows into a large code base. You start with a few tests, then it's a few hundred lines, then it's thousands, then it becomes spaghetti, like every other code base. And then you just feel like you've added more work to your plate, and you don't see the point. You're just writing more code, basically. And deadlines are coming up, and a lot of people give up because of that. So my name is Anna. I'm a project rescue expert. I develop, I train, I speak. I own a company. I also organize two conferences, one in Montreal and another in Vancouver, Canada. So I said testing is hard. I want to show you that it doesn't have to be like that; it can actually be quite enjoyable. The other thing is that I see a lot of test suites that just test a whole lot of stuff without thinking about whether any of it is useful. So I want to show you how to create tests that are useful and actually ensure that you are eliminating bugs. Also, a lot of developers I speak to say that they grow gray hair on every release date, because it's 10 PM and suddenly they discover there's a bug, and this bug is making the whole application not work, and they have no idea how to solve it. The deadline is coming, they have to finish, they're tired. It really sucks and nobody wants that. We can avoid that stress, and I'll show you how. So raise your hand if you've ever stayed past 11 PM on a release night because of a bug. I'd say about a third. The others are just being dishonest. All right, or maybe you're that awesome. Okay, good for you. All right, let's fix this pre-release stress. One important thing to remember about testing in general is that you don't become a testing expert overnight.
You don't just read a blog post or a manual and then know how to apply the theory so that it works from the start. You have to start somewhere, and if you haven't had any success with testing, I recommend you start by writing tests after your code is already written. I can see some stones flying from the TDD fans back there. Don't worry, it's just a way to learn until you know how to apply the theory and tackle the next stage. It's a process. You have to practice. You cannot just go on stage and perform; you have to practice beforehand. So I'll show you the different steps to accomplish that. The other thing, which is pretty controversial: you don't start with 100% coverage. I need to clarify: you do want to aim for 100% coverage eventually, but you don't need to aim for it when you start testing, because that's just an unnecessary goal at that point. It doesn't really help you learn. So you have to start small. You have to practice, and then once you understand how the tests work, how they interact with the code, how to keep them lean, and how to avoid going back and rewriting your tests because they no longer work with the new code (that is, how to reduce the maintenance on them), once you get all that, you can move on to the next stage and get all the way to TDD, because you do have to acquire the skill, and you do that by practicing. So in case you still need some convincing, what does it look like when you test without automation? When you do things manually, you would probably open your web browser, click around, maybe submit a form, and then you check your database and see whether things have saved correctly, whether things got deleted inadvertently. Maybe you output some variables with a var_dump. And also, you don't always retest everything.
So once you finish building a feature, you test that feature, but you don't necessarily go back and retest the entire application every time. You might do one last sweep before a release, but really what happens is that you only test things once or twice, and that's it, because you cannot afford to retest everything every single time. So you might have inadvertently affected something else in the application and you wouldn't know, because you haven't tested for it. Basically, you end up releasing code when you deem it good enough, and that is a very subjective metric. The problem is that this leads to last-minute bugs. You're on the release night and you did that final sweep, but maybe you haven't tested everything, you haven't tested certain scenarios. Now you have this bug. It's not too bad when you can still fix it before you release, but often what happens is that you release and then the customers discover the bugs. And when the customers discover the bugs, it's a bit more embarrassing, because you should have picked them up. And when you go and fix a bug, you apply a patch, because you're afraid to change things too much, right? Now that the application has been stabilized and all the known bugs have been eliminated, you need to add a little change, and you try to change as little code as possible because you're afraid to break everything. You cannot go and refactor anything, because the code controls you and not the other way around. It's like walking on thin ice: you're afraid that if you move too much, if you put too much pressure here, all of it is going to break, you're going to fall through, and you're going to die. That's how it feels when the code controls you. So unit tests give the control back to the developer. With automation, as I said, you have control of the code and not the other way around.
You can also solve some of the more obscure bugs, or even discover bugs that you did not know existed in the first place. Scenarios you never conceived of; but once you start unit testing, you stumble upon those scenarios just through the methodology of testing. So yeah, you find bugs that you weren't even looking for. And then you become more confident, and you can refactor larger chunks of code, because you know that you have the safety net of the unit tests. If you break something, they're going to signal right away that you have broken something. In the end, you write more features, you write fewer bugs, and it's a lot more fun. So unit tests, although they seem like a big pain in the beginning, do tend to make working a lot more fun later on. So I'll show you all those steps I talked about. Basically, it's a four-step process. There are different opportunities you will spot in your development where you can apply unit testing, and you can learn through those gradual steps. Let me give you an example with a shopping cart. Let's say somebody goes into your shopping cart and they can enter any quantity into a plain text box. And then somebody enters 0.1. What will happen? I know, because I made that mistake once, and I gave somebody a very big discount on a conference ticket. They only paid one tenth of the price. That was a fun bug. I discovered it right away because the numbers did not add up. It's a great way, or I say great, it's really a funny way, to lose all your profit margins. So the first opportunity to test is when you discover one of those bugs. You encounter a bug, and that's your first opportunity. Here's what you should do: you go into your code and find the exact location where the problem occurs, and then you write a test case. I'll show you an example. Here's the test case you would write: you start with a new cart item, Overwatch, 30 pounds, and set the quantity to 0.1.
Those are basically the steps to reproduce the bug. And then you assert that the quantity equals one. This is the desired result: even if the quantity is set to 0.1, you still want it to translate to a quantity of one. But the thing is, as long as this bug is present, you know that getQuantity will return 0.1, so this test will fail. Basically, you write the test you want to pass, but it will necessarily fail at first, because you haven't fixed the bug yet. Right, so you write the test first, you run it, and you make sure it fails. If it doesn't fail, it means your test is wrong, because it's not proving the existence of the bug, and you have to go back and change the test. Then you go into the code and make the change. For example, in setQuantity, you just round up to the next integer. That way, if somebody enters zero, you get zero, and if they enter 0.1, you get one, and so forth. So you fix that function, and then you run the same test again. You fix the code, the test stays, you rerun it, and now it should pass. So it starts out failing, and then: fail, fix, pass. That sequence is what you're looking for. That's how you write unit tests. You prove the bug's existence, then you go and fix it, and then you prove that you have actually fixed it, because now the test passes. Basically, you are expressing the intent of your application through those tests. It does sound very simple. Why would you test for something that you could have just, you know, var_dumped in a second? Well, the thing about those tests is that once you fix setQuantity, you might break something else in the application, and you would not know that. So this is a way to give yourself safety nets so that you don't fall all the way down. You want to prevent future breaks, or inadvertent side effects on the rest of your code base.
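The slide code was PHP with PHPUnit; here is a sketch of the same fail-fix-pass example, translated to Python's unittest for illustration. The CartItem class and its method names are assumptions, not the speaker's exact code.

```python
import math
import unittest

class CartItem:
    """Hypothetical cart item; class and method names are assumptions."""
    def __init__(self, name, price):
        self.name = name
        self.price = price
        self.quantity = 1

    def set_quantity(self, quantity):
        # The fix: round fractional quantities up to the next integer,
        # so 0.1 becomes 1 while 0 stays 0.
        self.quantity = math.ceil(quantity)

    def get_quantity(self):
        return self.quantity

class CartItemTest(unittest.TestCase):
    def test_fractional_quantity_rounds_up_to_one(self):
        # Steps to reproduce the bug, then an assertion of the desired result.
        item = CartItem("Overwatch", 30)
        item.set_quantity(0.1)
        self.assertEqual(1, item.get_quantity())
```

Before the rounding fix, set_quantity would have stored 0.1 as-is and this test would fail, which is exactly the fail-fix-pass sequence being described.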
Future breaks means that if you break that functionality in the future because you changed that method, your test will start failing again and you'll know: oh, I should not have changed that, it was there for a reason. And that's my other point: you don't always know why code is written a certain way. You stumble upon a code base and there's some weird condition somewhere, maybe "if order ID is greater than 12,561, then use this SQL, otherwise use that SQL", which is kind of weird. I find that in legacy code all the time. And you don't know why it was written this way. Has the schema changed, or is the data now stored in the same schema, but differently? What's up? You don't know, because there are no comments, there's no documentation. It was three developers ago, right? You just come in and nobody knows why it's like that. When you write tests, they act as a form of documentation. They tell you what the code is supposed to do, and then you gain insight into why the code was written that way. So it's documentation for future developers, but also for yourself in the future. Maybe one week from now you won't remember why you put a certain condition in your code, for example. You can go back to your tests, read them, and remember: oh right, I was trying to prevent this bug. By reading the test I showed you earlier, you would know that there used to be a bug with decimal quantities, and that is why it rounds things up. So it makes sense, and you would leave it there, you would not touch it. And you might think: okay, well, I can always just write a comment block on top of that method and explain why it's implemented that way.
But the thing is, the effort of explaining the bug in a comment and the effort of writing a test are about equivalent, yet the test is automated and offers a whole lot of other advantages. So you might as well skip the comments altogether, write code that is readable, and write tests that explain the more hidden things, the why, which is not always clearly expressed by the code. Even if it reads like English, you know what it says but not why it says it. So you might as well just write the test and save yourself the effort of testing manually later as well. And if you write a lot of these smaller tests and anybody later breaks your stuff, you'll know right away, because you run the suite and suddenly it explodes, and they realize: oh, I changed someone else's code that I didn't know was supposed to prevent some bug. So basically you get two for the price of one. What we wrote here is a regression test. Regression testing is making sure that whatever you fix today does not break tomorrow. It prevents regressions so that you progress instead of regress. You can only move forward, you cannot move backwards, because these tests prevent you from moving backwards. So that was the easiest way to write a test. There's another opportunity when you are improving existing code. For example, we want to give free shipping for orders that are over 40 pounds. If you buy over 40, you should get zero shipping. So what we do is put two items that cost 30 each in the cart. That brings the total to 60, higher than 40, so you should receive free shipping. But the code doesn't do that yet; the code always charges shipping fees. So when you run the test, it will fail, and then you go and change the code.
You change the code by adding a condition: if the subtotal is greater than or equal to 40, return zero, otherwise return 15. Once you make that change, instead of just returning 15, the test will pass. But you have to test both cases, because if you make a mistake somewhere and inadvertently give free shipping to everyone, that's not fun either. So if you have a condition there, you have to test both cases. The second case is the same product, but with a quantity of one, and you assert that you get 15 shipping. So now you know how to write tests after the code is already written, or while you're improving code. But there's also another possibility, where you are writing completely new code. Let's say getTotal is a completely new method; you want to add taxes on top. What most developers do is write a few lines of code and then var_dump the response. For example, you get the taxes and then you var_dump to check that you actually have an array and that it contains all the stuff you need. After that you write the rest, and then you var_dump again. That's how a lot of developers code. So here's what the var_dump would look like. These are Quebec taxes; you have two of them: the GST, which is the federal tax, and the QST, which is the provincial tax, and you just add both. So I var_dump this and I see my array with the name and the percentage. Here's the disadvantage of manually inspecting your var_dump: you can make a mistake just by reading it. For example, if instead of 0.05 it says 0.5, you might actually miss that when you read it, because we're humans, right? We get this sort of blindness sometimes when we read numbers. So you can make a mistake yourself, and then you continue, and your code doesn't work as expected, but you think it does.
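Here is a sketch of that free-shipping change with both branches tested, again translated from the talk's PHP into Python's unittest. The Cart class, its item representation, and method names are assumptions; the 40 threshold and flat 15 fee come from the talk.

```python
import unittest

class Cart:
    """Minimal cart sketch; the structure here is an assumption."""
    def __init__(self):
        self.items = []  # (price, quantity) pairs

    def add_item(self, price, quantity=1):
        self.items.append((price, quantity))

    def get_subtotal(self):
        return sum(price * qty for price, qty in self.items)

    def get_shipping(self):
        # The change under test: subtotals of 40 or more ship free.
        if self.get_subtotal() >= 40:
            return 0
        return 15

class ShippingTest(unittest.TestCase):
    def test_orders_of_40_or_more_ship_free(self):
        cart = Cart()
        cart.add_item(30, quantity=2)  # subtotal 60, above the threshold
        self.assertEqual(0, cart.get_shipping())

    def test_smaller_orders_pay_flat_shipping(self):
        # The second case: same product, quantity of one.
        cart = Cart()
        cart.add_item(30, quantity=1)  # subtotal 30, below the threshold
        self.assertEqual(15, cart.get_shipping())
```

With only the first test, a bug like `return 0` on both paths would slip through; the second test is what catches the everyone-ships-free mistake.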
So you can't really trust yourself that much. The other thing is that you will inevitably erase that var_dump, and you will never see it again, and you will not know when you've broken something. If the getApplicableTaxes method changes and starts returning something else, you might never know, and you will get a total that is the same as the subtotal because you have zero taxes, or maybe something just explodes. And you wouldn't really know unless it explodes, and then you have to go back, var_dump things again, and inspect everything manually once more. It's a lot of work, a lot of back and forth. Once you've checked something, you want to make sure it stays that way and doesn't change; otherwise, the rest of your application gets impacted. So what you do when you write new code is instantiate the cart, call getApplicableTaxes, and then check that array. You check that it's actually an array, that it has two elements; you grab the first one and make sure that it has a percent key and that it's 0.05, right? This test pretty much inspects what you would check by eye just by reading the var_dump, and writing it is pretty straightforward. It's just a few lines of code. What I'm trying to show you here is an easy way to start creating a lot of tests, if you want to start increasing your coverage: spot the opportunities where you are doing a var_dump. Every time you're tempted to var_dump something, think: maybe I should write a test case for that, because I'm testing for something, right? By doing the var_dump, I'm going to inspect the array. So why not just write the test? Then you prevent regressions and get all of these advantages. It's a very neat way to start writing more tests more quickly.
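Sketched in Python, the assertions that replace eyeballing the var_dump could look like the following. The talk only gives the GST rate (0.05); the QST rate and the class and method names here are assumptions for illustration.

```python
import unittest

class Cart:
    """Sketch of the taxes example; the QST rate below is an assumed value,
    since the talk only states that GST is 0.05."""
    def get_applicable_taxes(self):
        return [
            {"name": "GST", "percent": 0.05},     # federal tax
            {"name": "QST", "percent": 0.09975},  # provincial tax (assumed rate)
        ]

class TaxesTest(unittest.TestCase):
    def test_taxes_structure_and_rates(self):
        # Everything you would have eyeballed in the var_dump, as assertions:
        # it is a list, it has two entries, and GST is 0.05 (not 0.5).
        taxes = Cart().get_applicable_taxes()
        self.assertIsInstance(taxes, list)
        self.assertEqual(2, len(taxes))
        self.assertEqual("GST", taxes[0]["name"])
        self.assertEqual(0.05, taxes[0]["percent"])
```

Unlike the var_dump, these checks stay in the suite, so a 0.5-instead-of-0.05 typo fails loudly instead of slipping past a tired reader.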
And it will make sure that you never fall too far. I always like to compare this to rock climbing. You climb a bit, then you place an anchor and run your rope through it, and if you lose your grip a bit higher up, you only fall as far as the last anchor. The more frequently you place those anchors, the less far you will fall. So you will not lose all your progress, and you also won't die. In the case of unit tests, it's a bit less fatal, but who knows? People die when applications fail. You dispatch two ambulances to the same location and none to another location because somebody didn't write a unit test, and yeah, somebody dies. I don't know if that makes you feel better or worse. Maybe I should avoid such drastic examples. All right, so it's just like rock climbing: you never fall too far. Now you know how to write tests after your code is written, or while you're writing new code. But what about writing tests before you write your code? How does that work? How can you test for something that's not even there? Well, it's actually not that complicated. Here's an example. Remember, we're writing an e-commerce application, right? We have a shopping cart, we have a catalog of products, and those products maybe arrive as CSV files. Somebody makes an Excel spreadsheet, exports it to CSV, and sends it to you, and now you have to import it. So you have to write this new import tool. You'll probably need to parse the data and store everything in the database, maybe do some validation to make sure the data is sound. At that point, you know what you want to do, but you haven't decided on the implementation details. The implementation details can be things like: do I use fopen or file_get_contents to grab the file from the file system? You don't know; maybe it's not even coming from a file, maybe it's coming from a database or something.
It doesn't really matter. All you know is that it's supposed to come from somewhere. So you don't have to make any decisions in the beginning, because you can postpone all of that until implementation. You only need to write the test first, and then, when you write the actual method that does these things, you can think about all those implementation details. So here's the kind of test you would write. We create a new class called CatalogImport and we parse something from CSV. We check that we get an array back, and that we have the two products that came from the CSV. Obviously, the CSV would be made specifically for the test so that you don't constantly have to change your tests; make sure you always use fixture data created specifically for the tests. And then you check that each product has a name and a price; you can also assert other things. At this point, what do we know? We know that we're getting an array out. We know that we're supposed to have a file somewhere called catalog.csv. We don't know exactly where it is or how we're reading it. But you can already write this test, because you know what you're trying to do. You represent your expectations: this is what you want your software to do. You want it to accept a string with a file name, and you want it to spit out an array. That's all you need to care about. Now you're going to write just enough code to make this test pass. And when this test passes, you have finished writing the feature you wanted to write. That's it. It's a new mindset that will make you write less code. Let me just go back to the test and explain. At this point, there is no class: the class CatalogImport does not exist, the method parseFromCsv does not exist. You have to go and create them. So you write this test, you run it, it fails; obviously, none of this exists yet. And then you start creating the class and adding the method.
You know that the method will take one argument and return an array, and you make those implementation decisions inside; it will probably be five lines of code. In the end, your code will probably be shorter than your tests, because the tests kind of force you into writing just the minimum necessary to make them pass. It's very useful. You will write less code because the instructions are so clear about what you need to do: just read the file, extract the name and price for each product, and return them. That's all it's supposed to do. It helps you stay focused on the objective, and the objective is to make the test pass. And because you're focused, you don't start writing extra methods that do a whole bunch of unnecessary stuff. Not only will you write less code, you'll also end up writing more elegant code. It's going to be clean, because you're so focused it's going to be straightforward, and the code is going to be very readable. As a bonus, it's going to work right away, because once the tests pass, well, you don't have a bug, right? I mean, you can think of other scenarios later on, but at least you know that this code passes, so you don't have to spend so much time debugging it later. Some things to consider: you're not supposed to just test how the code is supposed to work; also make sure you test how it's not supposed to work. Think about all the exceptions. Let's say you have a shopping cart that stores data in a session, okay? People come in, they add an item, and it goes into the session with the quantity. From that session, you can calculate the total and display it to the user. But what happens if the price changes in the meantime? The price of the item has changed, so it's no longer sold at the price it was before the checkout, but it's already in the session. What will happen?
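A sketch of that test-first workflow, translated to Python. The talk's class is CatalogImport with a parseFromCsv method; the snake_case naming, the dict-per-product shape, and reading via open() are assumptions, and the fixture CSV is created by the test itself, as the talk advises.

```python
import csv
import os
import tempfile
import unittest

class CatalogImport:
    """Just enough code to make the test pass. Reading with open() is one
    implementation choice, deferred until this point."""
    def parse_from_csv(self, path):
        with open(path, newline="") as handle:
            return [
                {"name": row["name"], "price": float(row["price"])}
                for row in csv.DictReader(handle)
            ]

class CatalogImportTest(unittest.TestCase):
    def setUp(self):
        # Fixture data made specifically for this test.
        self.path = os.path.join(tempfile.mkdtemp(), "catalog.csv")
        with open(self.path, "w", newline="") as handle:
            handle.write("name,price\nOverwatch,30\nWidget,12.50\n")

    def test_parse_returns_one_entry_per_product(self):
        products = CatalogImport().parse_from_csv(self.path)
        self.assertIsInstance(products, list)
        self.assertEqual(2, len(products))
        self.assertEqual("Overwatch", products[0]["name"])
        self.assertEqual(30.0, products[0]["price"])
```

Written before CatalogImport exists, this test fails on the missing class; the implementation above is then the minimum needed to turn it green.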
Will you just stealthily charge the person the new price, which might be higher, and risk losing the customer in the future? Because if it's 30 and then suddenly it's 40, they might just abandon the checkout. So you have to think about those scenarios and plan for all these exceptions. You could, for example, say: we can no longer process this checkout because the price has changed. Maybe you would have a process where you ask: do you accept this new price? Does that work for you? Which is what most online travel agencies don't do; if the price has changed, they make you redo the search. But I mean, I still want the same flights. I don't care. Okay, the price increased $100, I still want that flight. But no, they make you refresh everything. So it's a business decision you can make. But at least you should plan for those things and not allow people to check out if something suddenly becomes unavailable, because that would make no sense. Let's say you deactivate a product, so now the database says is_active equals zero. You're not supposed to be able to check out anything that is no longer available, or maybe it's out of stock, or any other exception you can think of. You can write tests for all these things and prevent those scenarios from occurring. So you have to plan ahead a bit. But at the same time, you don't want to spend too much time thinking about what could happen, because some scenarios are not realistic, and you have to use your judgment to see what is realistic and what is not; you could spend your whole life writing these tests and still not account for every single thing. I'm a pragmatic person, and what I do is this: I would expect, for example, a database to become unavailable. I would expect prices to change, things like that. But I do not expect to set the total variable here and have it no longer be available on the next line. I mean, that just doesn't happen.
Of course, that's a simplistic example, just for illustration purposes. But certain scenarios are just not going to happen, or are very unlikely to happen. I mean, there would obviously have to be some reason for a variable to become unassigned, like a RAM failure on your server. These things can still happen, but they're highly unlikely; you shouldn't have to test for all of them. So yeah, use your judgment, see what's likely and what's not. Also, certain projects, or certain clients I work with, can stomach a lot more risk. Certain things I don't need to worry about, because they say: well, if this fails, it's not really a problem anyway, because I can just go and do it manually, it takes me five seconds, and I really don't mind. And that can save me a whole lot of trouble, because sometimes setting up certain types of tests can be tedious. So see what risk level is acceptable and just go from there. I would also say one of the things you shouldn't test is a plain getter, like a getShipping that just returns a property of the object, with no conditions in there. You probably don't need to write a test for that, because nothing can go wrong in there; it's so straightforward. So that's another unrealistic thing to test, in my opinion. So now you think: okay, how many test cases should I write? Should I write more or fewer for each method? There's actually a scientific method for that, which is cool. Just for fun, raise your hand if you get that reference. All right, so more people get this reference than people who stress before releases, awesome. Or maybe people are just digesting lunch and not paying enough attention. So cyclomatic complexity is an interesting measurement. It basically means: how many paths can your code take when it executes? How many execution paths do you have?
That's what it means. And you can calculate it; you can just eyeball it or use the formula. So here we have two execution paths. Here's how it will execute: if the subtotal is greater than or equal to 40, you will execute the line that returns zero, but not the last line. And vice versa: if it is less than 40, you skip the body of the condition, you don't execute that line, and you execute the last one instead. Obviously, this is also a simple example, just for illustration purposes. But if you have multiple lines of code there, you want to test both of these possibilities. This is useful because if you test with, let's say, just a 100-pound subtotal, then you're not testing the last line. And if there's a bug there, you might accidentally give free shipping to everyone, or the shipping is more expensive, or whatever the problem in the rest of the code is, and you wouldn't know unless you test both paths. The way you do that is by calculating how many decision branches you have and writing test cases for each one. Decision branches are execution paths. Basically, when you see an if, you make a decision and branch into that chunk of code, or you branch into the other chunk of code. That's how code flows. Conditions are obviously decision branches, but loops are also decision branches, because a loop has an implicit if statement in it: you need to know whether you are executing the body or not, or whether you loop again, basically. And it's important, because whether or not the loop's body has been executed can affect the code that follows it. So I usually add two tests for every loop: one where it executes zero times and another where it executes multiple times. And I'll explain why with this example.
In this example, we iterate over the products, we add to the total, and then we do something with the total. See, we haven't initialized the total, and that can cause problems later on: the total will not exist. If we have zero products, the body of the loop will be skipped, you get to the next line, total is undefined, and you get an error. Hopefully an error and not just a notice, right? So you are opening yourself up to the possibility that having no products breaks your execution. You need to write a test for having zero products to catch this bug, and to fix it, you just initialize the total at the top, right? Okay, so now you wrote a test case and you fixed the underlying bug. But here's the thing. If you execute this loop zero times, it works. If you execute it one time, it works. But multiple times, you're not going to get what you expect, because, for those who might have noticed, and if I can point here, it says equals, not plus-equals. If you execute zero times, it works. One time, it's fine, because you assign and you get the right amount in there and you test for it, great. But execute it twice and the assignment overwrites the total, and now you don't have the right amount and your code is not executing properly. So now you have a bug. To be safe, I recommend always testing with zero products, and setting up another test case with maybe two products, just to be on the safe side. You only have to worry about those two permutations, zero and two, and you should be fine. A test that only executes the loop once serves no purpose on its own, and running tests for zero, one, and two is redundant, because the test with two products already covers everything that can go wrong with just one product. So always test zero and multiple.
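The loop example above can be sketched like this in Python, with both fixes already applied and the two recommended test cases, zero products and two products. The function name and the (price, quantity) representation are assumptions.

```python
import unittest

def get_total(products):
    # products is a list of (price, quantity) pairs.
    # Two fixes baked in: total is initialised before the loop
    # (the zero-products case) and accumulated with +=, not =
    # (the multiple-products case).
    total = 0
    for price, quantity in products:
        total += price * quantity
    return total

class TotalTest(unittest.TestCase):
    def test_zero_products(self):
        # Catches the uninitialised-total bug: with no init,
        # this would blow up on an undefined variable.
        self.assertEqual(0, get_total([]))

    def test_multiple_products(self):
        # Catches the = vs += bug: with plain =, only the last
        # item would survive and this would return 10, not 70.
        self.assertEqual(70, get_total([(30, 2), (10, 1)]))
```

A one-product test would pass with either bug present, which is why the zero-and-multiple pair is the useful combination.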
Another type of test that I really like to write, and this is not a unit test anymore, but I still use PHPUnit for it. It's quite useful, and I always get questions about it at the end, so I made it into a slide. When I write APIs, I use something like Guzzle, which is just a wrapper for curl. I use Guzzle to make HTTP requests to my API. I call an endpoint, maybe pass some headers, grab the response, and then I do an assertJsonStringEqualsJsonString and compare it to what I expect from that API. The reason I use assertJsonStringEqualsJsonString instead of just assertEquals is that it shows you exactly where the problem is. It's going to say: oh, I expected an array here, you gave me an object, or this thing is missing. It shows you pretty much a diff of what you're supposed to have. Whereas assertEquals is just going to say "not equal", sorry. That can be a problem with large strings, because then you have to go hunting for the difference, which is very tedious. The other thing is that because it parses the JSON internally, the spacing doesn't matter in the test. It doesn't have to be the exact same whitespace; you can expand it as much as you want, even if the API returns something very compressed. You don't have to worry about tabbing and all that; it handles that for you. So it's very handy. I use Guzzle, I make the call, I grab and compare the response. And the really cool thing is that it establishes a contract between the API and whoever is consuming the API. I use that on certain projects where I would have, say, an Angular developer writing the front end, or just JavaScript, and I was writing the back end. We would agree upon endpoints and upon the format of the data output by the API. We would make this agreement; we would basically write this string out. We would write those tests together.
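A sketch of such a contract test, translated to Python. The talk uses PHPUnit with Guzzle; here the same idea is approximated by comparing parsed JSON, which is the mechanism behind assertJsonStringEqualsJsonString. The endpoint, the payload, and the fetch_products stand-in are all made up for illustration.

```python
import json
import unittest

class ApiContractTest(unittest.TestCase):
    def fetch_products(self):
        # Stand-in for the Guzzle HTTP call; a real test would hit the
        # agreed-upon endpoint and return the response body as a string.
        return '{"products":[{"name":"Overwatch","price":30}]}'

    def test_products_endpoint_matches_contract(self):
        # The agreed-upon contract, formatted for readability. Comparing
        # parsed JSON ignores whitespace, so the expectation can be
        # expanded even if the API returns compressed output, and a
        # mismatch reports the differing structure rather than just
        # "not equal".
        expected = """
        {
            "products": [
                {"name": "Overwatch", "price": 30}
            ]
        }
        """
        self.assertEqual(json.loads(expected),
                         json.loads(self.fetch_products()))
```

The expected string here is the artifact both sides agree on: the back end makes this test pass, and the front end can build against the same JSON saved to a static file.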
And that serves as a contract between the back end, the API, and the consumer of the API. I can start working on making those tests pass, because this is TDD: I write the tests first, then I start writing my code. I write my endpoints and all of that, I return the proper string, and when it returns the right string, I know I've finished my work, because I've achieved the objective of making my tests pass. And for the front-end developer, it's really cool: they don't have to wait for me to finish writing the API, because they already know what to expect from which endpoint. What those developers usually do is grab that string and save it into a file. You can even keep the expected JSON in files and have the tests read the files, rather than writing the JSON inline. They point to those static JSON files and start building their interfaces on top of them. And once they're done and I'm done, instead of pointing to a static file, they point to the real endpoint and it just works. And I know it works because the tests are there: they're building against the exact same string that I'm making the API output. So the match is immediate; there's no extra time needed to put all the pieces together, which is pretty nice. So, a quick recap of what I said. Testing takes practice, so don't expect to be an expert overnight. Don't necessarily aim for 100% coverage at the beginning. Don't worry about TDD. Start with something small, something that makes sense to you right now. What will happen is, as you ease into this whole unit testing business, you'll start to understand how it works, because it's quite abstract at first. The idea will grow in your mind and you'll become comfortable with it, and then you can tackle the second, third and fourth steps, and TDD will become natural to you.
But if you go straight to TDD and skip all of that, it's like wanting to play Vivaldi when you don't even know how to play, I don't know, Twinkle, Twinkle, Little Star on the violin. You cannot just skip the whole process, and I find it sad that not many people teach that. So, the four steps you can take: first, write tests when you see a bug; then, write tests when you improve your code, like what we did with the free shipping; then, test as you write new code, like when we did the whole taxes thing; then, write tests before you code, and there are many ways to do that, like the API contract, but also parsing the CSV file. And always remember to test the unexpected scenarios. Use cyclomatic complexity to make sure you write tests for all the branches in your code. This way you ensure that all of your code is covered: every single line that can execute has been executed through the unit tests. One more thing before we finish: I would like you to see testing as preemptive debugging. And that is the reason it's so hard to sell tests. I mean, you learn all of this, you go to the office, and you try to convince your boss to allow you to write unit tests. But you shouldn't, because that is none of their business. You don't have to convince anyone, because testing is preemptive debugging. It is not something separate from the code. Can you write code and say, well, I'm going to charge you this much to write the code, but if you want me to spend time debugging it, that's extra? Nobody really does that, right? You have to debug the code; it's part of writing the code. Well, if testing is preemptive debugging, then testing is part of the creation process.
It's not something you can just remove, because instead of debugging later, after you discover the bugs, you write tests before you discover them. And it's always less effort to debug now than to debug later. If you write those tests, you're going to spend, I don't know, maybe five minutes writing a test; but if you have to chase that bug a few months later, it might take you a day. So you can see it's a really big difference. You'd rather debug as early as possible, by writing tests. Once you start seeing testing as an integral part of your development process, because it really is just debugging, it becomes much easier to convince anyone. You don't even need to: you just write your tests, and in the end you spend less time on the entire project by writing tests than by not writing tests and debugging later. It will shorten your development time. Although it seems like you're adding more work, you're actually shortening the second part, which is writing the code and debugging it. At first it's overwhelming and it takes time, but as you get enough practice, it will save you a lot of time on the project. In my case, I save about half the time just by writing tests. Something that would otherwise take six weeks now takes three weeks. I don't even estimate projects in months anymore; it's weeks, sometimes days. I once rescued a project in five hours, just by writing tests. It's counterintuitive that you would save time by adding tests, but you have to give testing a fair chance to work. Don't give up; start small; follow the steps. If you get to TDD and it suddenly stops making sense, go back, do the previous step, and when you're ready, try the last step, TDD, again and see how it works for you. So yeah, that's pretty much it. That's all I had to say. I will tweet those slides.
I think this conference is on joind.in, so go and write some comments. I blog about a bunch of stuff, and I'm ready for questions. Hi. Hi. What approach do you take when you want to introduce tests to code that maybe has lots of dependencies and isn't testable? Right. I should really make a slide about that one day; it's a very common question. I'll actually pull a slide from another presentation, my refactoring talk, which I also use to explain this. It doesn't matter what the code does, really. What I'm showing here is a very long method, right? There are no tests, and there's a bug in there. It accepts a whole bunch of arguments; it's just unwieldy. And what you can do is go in and find exactly where the bug is. You don't just start writing tests for the sake of writing them, necessarily. As soon as you encounter a bug, you go in there and you say: okay, the second block there, and I don't know if you can see it with the contrast, but the second block gets coordinates for a location. It uses the Google Geocoding API: you give it the brewery and it gives you the address and everything, latitude, longitude, all the good stuff. So you have a bug in there. You grab that block, put it into a separate method, and call that method from the original one, right? Now you have a smaller method and you can write tests for it. You write the test first, you fix the bug, and then the test passes. That's how you can start: instead of worrying about unit testing the whole method, you split it into smaller chunks, refactoring your code and making it more testable at the same time. This example is actually pretty simple, but code that I cannot show, due to a contract I had with a client, had a method with 2,000 lines of code.
And I found a bug in there that was just a small chunk. I put it outside into a separate method, called it, unit tested it, and that's a bug that will never happen again, I know for sure. Does that help? Yes. I mean, there are also tools, with reflection and everything, that allow you to test, I think, even private methods. Yeah. So what's the best way to test API data from third parties? From third parties. Like if I want to make sure the data always comes in a certain structure, but it could obviously return different things. So, what I do, let's use this example of the geocoding API. When I was building this initially, and it's actually paraphrased from something I've built, I'm sending it something like the location, and the key, and everything. I take that URL, I curl it, I get some data back, and then I cache it. Then I know that this URL correlates to this output; it's hopefully consistently outputting this thing, right? I can save that response to a file and start writing tests against that. And when I build on top of it, hopefully it doesn't change. That's one way to do it. Does that help? Okay. Hi. You mentioned not to go to your boss and tell them you're going to start unit testing. But in order to understand a problem, you normally need to understand the business requirements for it. You can't just look at code and think, I'm going to fix this; you need to know why it's wrong, and that requires some context around it. So... Are you talking about getting into existing code? Yeah, existing code, yeah. All right. I suppose my question is: you said don't go and tell your boss. Is that really good advice? Oh, then don't tell your boss that you're writing tests. Yeah. Because what I mean is you need to get a better understanding of the problem before you can write tests to fix it. Yeah. Be communicative about it. Yeah, I understand.
So what I'm saying is: if you need to gather requirements, go ahead and gather the requirements. You just don't have to justify it by saying you're writing a test. You're debugging. I'm not advocating lying to people or hiding things; I'm just saying you don't necessarily have to announce every single thing you do, any more than you'd say, oh, I'm going to debug this with a var_dump. Once you start seeing it like that, don't separate testing out in your estimates. Don't say, I'm going to spend this much time writing code and that much time writing tests; that doesn't make any sense. Just include it in the entire estimate and present it as an indivisible unit. I do have one more question, if that's all right. Sure. Yeah, there's just one in the back. Obviously, if you're writing your own tests, you're validating your own assumptions. If you're working as part of a team rather than individually, would it be a good step to get someone else to write the tests, and then you implement those requirements? Yeah, okay, so we're talking about separating the QA team from the development team, and that's perfectly fair. It doesn't have to be that way, but it can be, and it's fine too: one person takes care of gathering all the requirements and encoding them into tests, and the other person is responsible for reading those tests and making them pass. Yeah, you can separate that. Okay, there was one on the balcony there; it's so hard to see. Hi, so on mocking libraries: would you use Mockery, Phake, or PHPUnit's native way of mocking objects? I personally use PHPUnit for pretty much everything. It has integration with Selenium and all that, so I don't go too far afield. Okay. Sorry, I can't really compare them side by side. Anyone else? I just wondered if you use any particular tools for measuring things like cyclomatic complexity? Sorry, could you slow down?
Sorry, I just wondered if you use any particular tool for measuring the cyclomatic complexity? Yes, you can see the cyclomatic complexity when you generate the, what's it called, the HTML coverage report in PHPUnit; all those numbers are in there. Okay, thanks. Yes, coverage, thank you. There's another question over there. Hi. Obviously, when we're talking about unit testing, to isolate systems we often mock dependencies so that we're only testing the system we're concerned with. The issue I've experienced with multiple dependencies and mocking is that your test becomes very concerned with the implementation of the system you're testing, because it needs to know what methods are called on those mocked dependencies. So inevitably, when you refactor, even though the expected behavior hasn't changed, and the input and output haven't necessarily changed, the implementation has, and because your test is so concerned with that implementation, you end up having to maintain several tests; every time you refactor even a small bit of code, you have to change multiple tests. Even for trivial things: maybe you previously called a method on a mocked dependency twice, but now you've refactored a small bit of code, you're storing the result in a variable, and you're only calling it once. Now you need to update your test to say this method should only be called once. It seems counterproductive when you're refactoring, if you've got large test coverage and you've separated out loads of concerns so that everything is injected as a dependency, as we all feel it should be.
I just wanted to know your approach to handling that kind of system, where you're no longer maintaining your tests for the sake of finding bugs, but simply having to make them pass because you've changed some implementation detail, even though you know for a fact you're not introducing breaking changes. Yeah, of course, and it's a big problem when you start mocking a lot. With mocking, you basically create dependencies inside your tests, and they almost become integration tests, because you rely on those other methods. So if you have a lot of mocks in your test case, that's a warning sign. One way to reduce that is to split your code as much as possible, so that instead of accepting a dependency, a method can accept something more basic, like a string. For example, with this geocoding: once I've decoded everything, I could pass around a very complex object, or I could just provide a string with the latitude and longitude, or just two numbers. Try to use primitives as much as possible when calling methods, and that avoids a lot of those mocks. And in general, have methods that aren't so concerned with dependencies. So in your object, instead of a method calling some dependency's method itself, maybe that method can receive the raw data from the dependency, passed in from a different method, and then act upon it. I don't know if you see what I mean. Let's say you're writing a controller and you have the request object. Instead of grabbing the request object and getting something out of it, say you're concerned with a certain header from the request: you grab that header from the request in one place, and then you pass it to this other method that does the actual logic.
Then you can unit test that method much more easily, without being concerned with dependencies. It's just a way of refactoring things so that your methods aren't so concerned with the state of the object. Instead you have one method that puts everything together, which you don't necessarily need to test, and it dispatches to the smaller methods that do the actual logic, because the logic is really all you care about testing. Then you can have integration tests for the rest, and those are much higher maintenance. So yeah, try to keep your unit tests as isolated as possible, even from their own object. That's my approach. Does that help? Yeah, I guess. Okay. There was a question up there. Raise your hand. No, that person's on the phone now, so, all good. Okay. Maybe ask the Twitterverse. All right, well, thank you very much. Thanks for joining. Thank you.
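A rough sketch of the controller refactoring described in that last answer. The class, method, and header names here are invented for illustration; the point is moving the logic onto a primitive so it needs no mocked request:

```php
<?php
// After the refactoring: a thin wrapper extracts the primitive from
// the request, and the actual logic lives in a method that takes just
// a string. Only the wrapper ever touches the dependency.
class LocaleController
{
    // Glue method: grabs the header and delegates. Covered by an
    // integration test, if at all.
    public function detectLocale(object $request): string
    {
        return $this->localeFromHeader($request->getHeader('Accept-Language'));
    }

    // Pure logic on a primitive: trivially unit-testable, no mocks.
    public function localeFromHeader(string $header): string
    {
        return substr($header, 0, 2) ?: 'en';
    }
}

// The unit test needs no mocked Request object at all:
$controller = new LocaleController();
assert($controller->localeFromHeader('fr-CA,fr;q=0.9') === 'fr');
assert($controller->localeFromHeader('') === 'en');
```

Before the split, testing `detectLocale` would have meant building a mock request and asserting how many times `getHeader` was called; after it, refactoring the glue no longer breaks the logic tests.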