is the "driven". That's the important bit, because that's what actually changes when you work this way. TDD is a software development process based on a very short cycle that you repeat very often. You start by writing a test, you run it, and it fails. This is called the red phase. Then you write the minimum amount of code to make the test pass, run the test again, and it passes. This is called the green phase. And then, because you have written the minimum amount of code, maybe even rough code, you refactor. You can refactor both the code and the tests, though I suggest not at the same time, because that's not wise.

When you start out, it's really useful to take very small steps. You write a little bit of test code, then a little bit of code to make the test pass. You refactor, you go back, you add another little bit, you make that pass, you refactor, and so on and so forth. The more you do this, the more your brain learns this back and forth between tests and code, which means that at some point you're able to take longer steps. Then you go: I'm taking longer steps. And you take them really long, and at some point you're in trouble. So you go back to short steps, fix whatever you have to fix, and then try to increase again until you find a step size that's comfortable for you to work with.

But where do we start? It all starts with a business requirement. That's very important: you have to understand the business requirement. Once you are clear on it, you enter the TDD cycle. You write a test, you make it pass, you refactor. Test, pass, refactor, and so on until that business requirement is fulfilled. Then you move on to the next business requirement. The frog at this point thought the training was completed. But the masters disagreed.
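To make the cycle concrete, here is a minimal sketch of one red-green-refactor pass. The business requirement, the function name and the test names are all made up for illustration: the tests are written first, so the very first run fails (red), and then the smallest implementation makes them pass (green).

```python
# Hypothetical requirement: "an order over 100 gets a 10% discount".

# Red: these tests are written before discounted_total exists,
# so the first run fails with a NameError.
def test_small_order_pays_full_price():
    assert discounted_total(50) == 50

def test_large_order_gets_ten_percent_off():
    assert discounted_total(200) == 180

# Green: the minimum amount of code that makes both tests pass.
# Any architectural polish waits for the refactor phase.
def discounted_total(amount):
    return amount * 0.9 if amount > 100 else amount

if __name__ == "__main__":
    test_small_order_pays_full_price()
    test_large_order_gets_ten_percent_off()
    print("green")
```

In practice you would run the tests with a test runner such as pytest rather than by hand; the point is only the order of the steps.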
And they kept giving examples. They said: this is just the mechanics. If you know how to use a steering wheel, the gear stick, the accelerator and the brake, but you've never driven a car, would you call yourself a driver? Of course not. And I've talked to some coders who say: I know all about TDD, I read an article. But have you ever tried it? No, it's not really for me. Well, if you don't try it, you don't know. So you need to try.

What changes in your brain when you use TDD? Without TDD, you go from the business requirement straight to the code. And when you think about the code, you have to take care of two things: what the code needs to do (we need to deliver this functionality) and how it needs to do it (we need to loop with a for loop, we need to call this function). These are two very different things, the what and the how, and you take care of both of them at the same time, so your brain is split between two concerns. With TDD, on the other hand, you mostly take care of the what while you're writing your tests, because the tests describe what should happen in the code, and you take care of the how while you're writing the code itself. Your brain concentrates on each of the two at different times, which is like having two brains. You become much, much more powerful, and your code changes immediately.

There are some common aspects to working with TDD. The key principle is KISS: keep it simple, stupid. By forcing yourself to have a test that represents something your code needs to do, and then forcing yourself to write the minimum amount of code to deliver that, the code stays simple. You cannot wander off into grand architectural concerns; the code automatically stays simple, and simple is really, really important. And then there's the YAGNI principle: you ain't gonna need it.
If you're focusing on understanding the business requirement, writing a test and making it pass, you're not likely to over-engineer your code. Without TDD, nothing tells you what the code should do; the goal is just in your mind, and you go: oh, I'll also add this, because maybe tomorrow I'll need it. With TDD, you don't do that.

Three strikes and refactor. I've taken this from Test-Driven Development with Python by Harry Percival, a very nice book I read a few months ago. It basically says: when you're in the refactor phase and you find one functionality and then basically the same functionality again, wait for the third occurrence of something really similar. If you group those two and factor out a mixin or whatever too soon, when the third occurrence comes along you may realize it's not so easy to make it work with the mixin you've just written, and you'd have to do a lot of refactoring work again. On the other hand, waiting for four or five occurrences isn't good either. Three strikes and refactor is a nice balance to aim for. You can do all the architecture and design work when you refactor the code, and the beauty of it is that, because you have the tests, you can refactor with confidence.

And then you do triangulation. Triangulation was puzzling for the frog, so let's see an example. Say you're writing a test that just makes sure your square function, given minus two, produces four. At this point, all the code base has to do is fulfill that requirement, which basically says: I just want your function to return four. So you can cheat. It's actually suggested by TDD authors, and it's called "fake it till you make it". But of course we don't want to keep that code in our code base, because it's not correct.
So we do triangulation, which means we pin down the same function from two different angles, which would force us to fake it in two incompatible ways. At that point, we have to write the real logic. So you write the actual logic once you have triangulation like that. And this is very nice, not just as a theoretical exercise: you get something done, you already have a test in place, and when you triangulate you add another one, so it builds up your test base.

The main benefits. You can refactor with confidence, because you have a set of tests: when you touch the code and change something, like in the first example where the boundary was jiggling a bit, you have a test that fails. Readability: code that was designed tests-first is much easier to read, because you take care of the design part while you're writing your tests. Unless you're writing integration tests, when you're unit testing you have to test a unit of code, so you have to think: how do I structure this code? You can't just write anything; you have to give it a thought, which means you're thinking about the code twice, once when you test it and once when you actually write it, and it comes out much better, much more readable. It's more loosely coupled, and it's easier to test and maintain: easier to test, of course, because it grew out of the tests, and easier to maintain because it's well structured. And when you test first, you also get a better understanding of the business requirement, because in order to start writing your tests you have to have very, very clear in your mind the business requirement that will drive the design of those tests.
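The square example can be sketched end to end like this (the test names are made up for illustration). The fake implementation satisfies the first test; the second test approaches the same function from another angle, and since no constant can satisfy both, we're forced into the real logic.

```python
# Step 1, "fake it till you make it": with only the first test,
# this cheat is enough to go green:
#
#     def square(n):
#         return 4
#
# Step 2, triangulation: a second test from a different angle makes
# the fake impossible, so we write the real logic.
def square(n):
    return n * n

def test_square_of_negative_two():
    assert square(-2) == 4

def test_square_of_three():  # the triangulating test
    assert square(3) == 9

if __name__ == "__main__":
    test_square_of_negative_two()
    test_square_of_three()
    print("green")
```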
If you're not clear, you will find yourself blocked at the test level, which will prompt you to go back and understand the business requirements better. And by testing in small units, it will be much easier to debug. You also get the perk of the tests acting as documentation: small tests are easy to read, so you go through them very quickly and say, oh, okay, my code does this. Sometimes that's even easier than reading an English sentence describing the functionality, because English can be misleading, especially if I write it. A test is Python code, and you know what it does, so it's very useful. And higher speed: it takes less time to write tests and code than to write code and then have to debug it.

I can tell you this from personal experience. I was working for a company called TBG. We were competing with other companies to get preview access to Twitter's advertising API, and we had to deliver a proof of concept in about six weeks. We succeeded. It was a monolithic Django application, and the order from above was: no tests, we don't have time for them. So no tests. We did the coding, we did overtime, we went in on Saturdays. And the last two weeks were spent just fixing two bugs that drove us crazy. It was just one small Django website, but it grew so complicated, touched in such a short time by six or seven people, that we were going crazy debugging. We'd change something here and it would break something there; you go fix that, you break something else, and you get a ripple effect flowing through your code. Had we done tests first, we wouldn't have needed all that debugging time.

The main shortcoming of this technique is that the whole company needs to believe in it.
Otherwise you're going to fight all the time with your boss, which is something that I and some of my colleagues know very well. We need to write the tests. There is no time for tests. But then we'll have to debug; it will be a problem later. Okay, so you go and write just the code, and then: oh, it doesn't work, it's your fault. So you really, really have to convince everybody that TDD is the way to go, and that's hard, because it's hard to see what happens in the long term. All of us, myself included, most of the time just see what happens tomorrow. We need to deliver tomorrow, we need to make the client happy tomorrow, and we tend to forget about the long term.

Blind spots: if you misread the business requirements, or they're incomplete, that will reflect in the tests that you write, and it will also reflect in the code. Take a look at the tests: perfect. Take a look at the code: it makes the tests pass. We're done. But the code and the tests share the same blind spot, the same thing you missed from the business requirement, so it can be harder to spot. Pairing, for example, helps with this, because you have to discuss what you're doing, which brings up a discussion about the business requirement. You realize: oh, I understand it this way, my colleague understands it that way. You go and ask whoever you have to ask for clarification, and then the blind spot doesn't make it into the code.

Badly written tests are also hard to maintain. Tests with a lot of mocks, for example, are very hard to maintain, because when you change a mock you're changing an object that's basically a puppet: you can make it do whatever you want.
And you're not really sure you haven't broken something, because you've just changed a mock, made it do something else, and made your code pass. But if that mock is supposed to represent a real object, and that object is no longer in sync with the mock, then you have bugs. So when you use mocks, you have to take extra care to keep your tests and your code in sync. Those are the shortcomings.

Now I'll give you a few real-life examples, because one thing I'm told many times is: yeah, it's all good and nice from a theoretical point of view, but what happens in real life? For example: you get hired by a company, you tell them you really want to write tests, they say good, they don't say you're the first, and they have legacy code that isn't tested, so you have to cope with that. You have to change something: what do you do? You read the code, you understand how it works, and you write tests for it. And this is wrong, because if you read the code, understand how it works, and write tests for it, you're inverting the cycle: you're going from the code to the tests. What we want is to go from the business requirement to the tests to the code. So a better way of approaching this is: read the code, try to reverse-engineer the business requirements behind it, and then write tests for those, concentrating on the what, not on the how. If you write tests straight from the code, it's very likely they will concentrate on the how.

Changing a horribly long view: say we have a Django view and we need to insert pagination, filtering and sorting into it. It does a bunch of things, then it gets the data (the data comes from a search, so it could be empty, ten things, or a million things), then it does another bunch of things, and then it renders a template with some context.
So we need to add pagination, filtering and sorting, and it's not possible to do it at the API level, because that's too complicated and not tested. What I did, after discussing it with my colleagues, was: let's write filter_data, sort_data and paginate_data, and insert them into the view, without changing anything that happens before we get the data or after. And those three functions we write with TDD. So yes, we're changing the code, but that bit of code will be rock solid. It's a very good way to go about it, because the function didn't care before what data came back from the search, so it won't care now that you've paginated, sorted or filtered it.

Introducing new functionality into existing code that isn't tested. That's a nasty piece of code that does a lot of things, like this function; Uncle Bob would cry if he saw it. So how do we change it? Because of course you won't have the time to write tests for every if clause and to check whether that for loop is correct. One possible solution is to come up with one test for the new functionality you're inserting, and then change the function. The function wasn't tested before and still isn't, but at least the new functionality is covered. The next time you come back to this function to refactor, you have one test, you add another, and so on; at some point you'll have a good set of tests behind this function, which means either the function was well written and you just keep adding tests, or at some point you discover it has a bug, because you're starting to have tests for it.

So the frog was in Zen mode after all this. He went back to the princess and passed the exam. They married, and when the minister said "you may kiss the bride", nothing changed.
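A minimal sketch of that extraction, with plain Python standing in for the Django view. The names (filter_data, sort_data, paginate_data) and the data shape are illustrative; the point is that the new functions are written test-first and are pure, so they're trivial to test, while the untested legacy code around them stays untouched.

```python
# Hypothetical helpers, written test-first, dropped into the legacy view
# between "get the data" and "render the template".

def filter_data(items, min_price=None):
    # Keep only items at or above min_price (no-op when not given).
    if min_price is None:
        return list(items)
    return [i for i in items if i["price"] >= min_price]

def sort_data(items, key="price"):
    # Return a new list sorted by the given field.
    return sorted(items, key=lambda i: i[key])

def paginate_data(items, page, per_page):
    # Return the slice for a 1-based page number.
    start = (page - 1) * per_page
    return items[start:start + per_page]

# Because the functions are pure, the tests that drove them are simple:
data = [{"price": 30}, {"price": 10}, {"price": 20}]
assert filter_data(data, min_price=15) == [{"price": 30}, {"price": 20}]
assert sort_data(data) == [{"price": 10}, {"price": 20}, {"price": 30}]
assert paginate_data(sort_data(data), page=2, per_page=2) == [{"price": 30}]
```

The view then just calls these three in sequence on whatever the search returned, which is exactly why nothing before or after needs to change.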
It was just a talking frog after all. So what's the moral of this story? The princess should have tested first, and so she did. Thank you very much.

Great, we have a few minutes for questions. Do we have any?

Q: First of all, thanks for the talk, it was really enjoyable. My question is: how do you organize your tests in the directory structure? I have tests of all kinds, but I have trouble checking whether the code I intend to write already has some. How do you lay them out?

A: For example, when you start a Django project, you've got a tests module in the application folder. The first thing I do is remove that, and I reproduce the code structure in a tests folder, prefixing everything with test_. This has two main advantages. The tests are easier to find, because if you know the tree of your files, you just go to the same tree under tests. And when you deploy, you can just delete that folder, because sometimes, when you deploy and someone accidentally runs the tests on production, it hits your production database, and that's not nice. So I just reproduce the tree of my code. There are probably other approaches too, but this works well; the tests are out of the way. Then for functional or integration tests, you may want a separate folder, or maybe even a separate repository. If you're doing integration tests, it's likely they won't be testing what's in just one repository; maybe it's more repositories, more services, more applications. So you may want a whole project dedicated to those tests.

Q: Hi, great talk. As humans, I believe there can sometimes be mistakes in the test data, either typos or bad hand calculations. In your experience, how often do those mistakes happen, and how do you cope with them?

A: They happen more when you have to deliver by yesterday.
When you have the luxury of delivering by tomorrow, they happen less. What you want to do is take good care of your fixtures, especially when you change the tests or have migrations, because those are the things you test against; they need some love. Me, for example, I tend to write unit tests in more of an interface style: I have some inputs and I check the outputs, rather than mocking everything out. I mock as little as possible, because mocking is dangerous. And what was the second part of your question?

Q: I mean, in your experience, how often does it happen that one of the numbers in the examples you showed earlier is wrong because of a typo, and you don't notice it while writing the test or the assertion? How do you detect that?

A: The way we detect it is pull requests. We use a branching system: we have story branches, then a staging branch, and then the master branch. So there are at least two pull requests where people other than the author of the code take a look at it and read it. Most of the time these things are caught in one of those two passes.

Q: That's very helpful, thank you.

A: My pleasure.

Q: Hi. I was recently involved in discussions about TDD, and some people argued that if you focus too much on tests, using TDD and writing the tests first and so on, you may leave for later the thinking about the architecture of the code, the design, how the different parts are coupled, the performance, and so on. What would you say to them?

A: I would say that TDD is not a methodology that fixes everything and solves every problem. You still have to take care of the architecture and the design, and you can do it in the refactor phase. People who do this and find themselves neglecting the rest of the code probably believe TDD can do too much.
TDD gives you a very good way of writing your code, and a solid test code base that shields you a bit more when you refactor, but it's not that you can refactor recklessly just because you've got tests. You still have to give it your best shot and take good care of the code you write. With TDD you get this extra guardrail that keeps you on the right path. It's something extra that you have, but it will not do all the other things for you.

Great. Do we have one last question? No? All right, then we're done. Thank you.