Hey everyone, I'm Martin. I want to introduce myself first. I'm from Bulgaria, which is this small country over there. If you haven't looked it up, my home town is called Burgas, and it's on the coast of the Black Sea, so if you don't have any plans for the summer, you're welcome. I work at HackSoft with these awesome guys. We are an outsourcing company and we mainly write Django and React. We also organize HackConf, a conference for programmers held yearly at the end of September. If you want any more information, come meet me outside. Also, I just graduated as a software engineer two weeks ago, so make some noise. Yeah, thanks.

Okay, let's dive in. Today we are going to talk about testing in Python and Django. The talk is mainly about unit testing, since it's a beginner's talk. We're going to talk a little about Django project structure, and at the end we'll finish with some other testing methodologies. Before we start, I just want to get to know you a little better. How many of you have written Django? Raise your hands. Wow, a fair amount. And how many of you have written a unit test in Django? Cool, I can go home now.

Okay, let's start with unit testing. What is unit testing? It comes with some principles and advantages. The first principle, of course, is that one test should test only one unit. All our unit tests should be easy to read, fast to execute, and run in isolation. The main advantage of unit testing is that it lets us modify our business logic with confidence, and the tests serve as documentation from developers to developers. Of course, it has some trade-offs. The first is that one test should test only one unit, which is sometimes impossible, and I'll tell you why in a few more slides. Also, as our project gets bigger, our test code grows with it and can become hard to maintain.
I'm sure all of us have had a randomly failing test in CI, which is why you can't push to production. And mocking, which is a central part of unit testing, is to me a disadvantage, because it may lead to regressions and it may cover up some nasty bugs. The other problem with unit testing is us, the developers: we are lazy, because we are developers, and we tend to break its principles, which is bad and leads to non-exhaustive tests.

Okay, talking about unit testing in Django's context, there is one main thing: proper unit testing in Django is bound up with proper Django structure. If our Django structure is bad, we can't actually unit test properly. Let me illustrate my point with an example. Here is a simple invoice create view. When we GET the view, we just render a form, and when we submit a POST request, we create an invoice. Very simple, right? Here is what the unit test for this view looks like. We have a setUp, then we test if we can access the view, and then we test if we actually create a new invoice and redirect after it. Okay, cool. But the main question here is: is this properly unit tested? Let's raise hands again. Who thinks this is properly unit tested? Okay, my bad. Is this properly unit tested? Yeah. Okay, cool. The main problem here is that we have two asserts related to the HTTP request/response cycle, where we test the status code of the response, right? And then we have an assert that tests if an object is created in the database. Is there anything in common between these two things, except the CRUD, I guess? [Audience:] Your database is an external thing, so you're no longer testing just that unit. Yeah, that's the main thing. And the problem is that the view does too many things, right? We have a view that handles the request/response cycle and also calls the ORM, and these have nothing in common.
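To make the problem concrete, here is a minimal plain-Python sketch of such a coupled view. The real code would be a Django `CreateView`; the function, the dict responses, and the list-based "database" are hypothetical stand-ins used only to show the mixed concerns:

```python
# A view-like function that mixes two unrelated concerns:
# the HTTP request/response cycle AND persistence (the "ORM" call).
DATABASE = []  # stands in for the Invoice table


def invoice_create_view(method, data=None):
    if method == "GET":
        # HTTP concern: render the form.
        return {"status": 200, "template": "invoice_form.html"}
    if method == "POST":
        # Persistence concern, mixed straight into the view:
        DATABASE.append({"amount": data["amount"]})
        return {"status": 302, "location": "/invoices/"}
```

Any test of the POST branch is now forced to assert both on the status code and on the database write, which means one test covers two different units at once.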
And in the real world this example is actually much more complicated, because when you create an invoice in 2018, you don't just create an object in your database. You send emails, do some validations, call third parties, and so on and so on. And the problem is: where should this logic live? Here comes the services concept. A colleague of mine, Rado, gave a talk about it a few hours ago. It was pretty awesome; I'll put up a link to it if you haven't been there. So what's a service? A service is a function or object where you put all your heavy-lifting business logic. And why should you do it? Because it gives you reusability: you can use your services in APIs, in views, in other services, and so on. And the next really good thing is that it gives you a unit distinction. Let me show you the next graphic. Now we have a view which only deals with HTTP: the client calls our view, the view in turn calls a service, and the service calls the Django ORM, calls tasks, calls other services, does some validations, and so on. It may look like this service unit does a lot of things, but if you look at it more abstractly, it's just our business logic. Now we can draw the line between them and define a view unit and a service unit. The view unit is responsible for the request/response cycle: taking the request, rendering the template, and calling the service. The service unit is responsible for the actual heavy-lifting logic. Here is what our service looks like. This is a pretty dumb example, but it makes the point: we validate the amount, create an object, and call a task, for example. And this is our view now. It's pretty much the same, for the sake of the example, but in form_valid, if you see, we now just call the service.
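As a rough sketch of that split, with hypothetical names (`create_invoice` as the service, a plain function standing in for the task, and a view reduced to delegation), it might look like this:

```python
# services.py (sketch): all heavy lifting lives in the service unit.
class ValidationError(Exception):
    pass


def send_invoice_email(invoice):
    # Stand-in for an async task; real code might call a Celery task's .delay().
    pass


def create_invoice(amount):
    # Validate, build the object, trigger side effects: the service unit.
    if amount <= 0:
        raise ValidationError("Amount must be positive.")
    invoice = {"amount": amount}
    send_invoice_email(invoice)
    return invoice


# views.py (sketch): the view unit only handles the HTTP cycle and delegates.
def invoice_create_view(data):
    create_invoice(data["amount"])
    return {"status": 302, "location": "/invoices/"}
```

The point of the shape, not the details: the view no longer knows how an invoice gets created, so each unit can be tested behind its own boundary.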
Now I can create an API view really, really easily: again, just calling the service and returning a serializer response. This is what actually allows us to unit test properly now, because we can distinguish the units. Here is the invoice create view test; now it tests only the request/response cycle. Here is the API view test; it's pretty much the same, but I use the client from the test_plus TestCase. And here are the tests for the invoice service. The main point is that we have distinguished the units: we test that we create a new invoice; we test that we call the task, by mocking it; we test that a validation error is raised when the validation is not correct, and that in that case we don't create an invoice, right? We test that too.

Stepping away from the example, we should ask ourselves: what did we achieve? We defined the units in our system, which gives us straightforward boundaries between those units. That allows us to properly unit test them, and that in turn gives us maintainability and modifiability in the system. That's the main point of having services. Since we have distinguished these units, we now have unit testing groups: tests for views, APIs, services, models, utility functions, and maybe others. This is how the tests directory in an app looks.

Now I want to point out some common mistakes that I make as a developer and that I've recently seen in my colleagues' work. The main problem with unit testing is that tests are sometimes non-exhaustive, and this is the biggest problem. Let's take a look at this example. We have a service that is used in a lot of places in our system. One day, we decide to change its implementation. We check if it has unit tests. Yes, it does.
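A self-contained sketch of what those service tests can look like with `unittest.mock`, with the service inlined so the example runs on its own (all names are hypothetical, not the talk's actual code):

```python
import unittest
from unittest import mock


class ValidationError(Exception):
    pass


def send_invoice_email(invoice):
    # Side effect the unit tests must isolate; would blow up if not mocked.
    raise RuntimeError("should be mocked in unit tests")


def create_invoice(amount):
    if amount <= 0:
        raise ValidationError("Amount must be positive.")
    invoice = {"amount": amount}
    send_invoice_email(invoice)  # collaborator we replace with a mock
    return invoice


class CreateInvoiceTests(unittest.TestCase):
    @mock.patch(__name__ + ".send_invoice_email")
    def test_creates_invoice_and_calls_task(self, mocked_task):
        invoice = create_invoice(100)
        self.assertEqual(invoice["amount"], 100)
        mocked_task.assert_called_once_with(invoice)

    @mock.patch(__name__ + ".send_invoice_email")
    def test_invalid_amount_raises_and_sends_nothing(self, mocked_task):
        with self.assertRaises(ValidationError):
            create_invoice(-1)
        mocked_task.assert_not_called()
```

Each test covers exactly one behaviour of the service, and the mocked task keeps the test inside the service's boundary.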
Are they correct? They are. We run them and everything is okay. We change the implementation, run the tests again, and everything is okay again. The CI tells us that everything is working, so of course we push to production. And this is where the nastiest bugs come from, because we have a non-exhaustive unit test that didn't catch the bug we introduced. And the problem is that these bugs are usually caught by our users.

The next common mistake is to over-test, or put in too many asserts. For example, if we have the invoice create service tests with 10 different test methods, we shouldn't put the "invoice is created" assert in every one of them; that actually breaks the principles. We also tend to hard-code values because it's just simple; sometimes it's just simpler to hard-code a 1 if you need an integer, or some random string. But it's bad. As we get better at Django, and at Python specifically, we tend to add overly complicated logic to the tests. In my opinion, we'd better have two for loops in the test method than some abstraction over them, because it's just easier to read: when I go to the unit test, I see, okay, I have two for loops here. The next bad thing is misleading test names: testing something the test name doesn't actually tell you about. And mixing up units, or not mocking. As you may have noticed, I mocked the validation service every time I tested the service that calls it, and that's how you should write your unit tests. If you don't mock the services or utils that are called inside the service you're testing, you break the isolation principle.

So the question is how to deal with these problems. The first thing is to keep to the principles of testing; this is how our project, and our work, should go.
The next thing: when you code review a colleague's work, don't only review the code they've written; also review their unit tests. That's where a lot of bugs actually hide. And again, unit tests should be simple; they should read like documentation, in my opinion. So keep them simple.

Now, let's talk a little about the tools we use. The first thing I want to mention is the test modules, starting with Python's unittest. It's an awesome module; it gives you pretty much everything you need to write your unit tests. One main part of it is the mock module, which gives you the patch decorators, MagicMock instances, and a lot more. The second main part, in my opinion, is the TestCase class, which you inherit from; it gives you the setUp method, the tearDown method, and all the asserts you need. Another nice feature is the subTest context manager, which I really like; I'll show you an example in a couple of slides. The next test module we use is django.test. It doesn't add a lot, but it's a nice wrapper around unittest's TestCase, and the TestCase from django.test is the one you should use if you want to talk to the database. Another nice thing is that it gives you some shortcuts in the asserts, so you can easily assert on your responses, for example. And the next thing we tend to use is django-test-plus. It's a third-party package, but it has a TestCase wrapper around Django's TestCase, and we really like it because it has such a nice API.

Now, let me show you an example of the subTest context manager. Here is how two tests from the earlier slide look. With the subTest context manager we can combine them, which lowers the line count. Here is how it looks; it's pretty much the same.
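A minimal runnable sketch of `subTest` (the `is_valid_amount` function is made up here, standing in for the slide's example):

```python
import unittest


def is_valid_amount(amount):
    return amount > 0


class AmountValidationTests(unittest.TestCase):
    def test_amounts(self):
        # One test method, several labelled cases; a failing case is reported
        # with its subTest parameters instead of aborting the whole loop.
        for amount, expected in [(100, True), (0, False), (-5, False)]:
            with self.subTest(amount=amount):
                self.assertEqual(is_valid_amount(amount), expected)
```

If the `0` case failed, the report would name `(amount=0)` specifically, and the `-5` case would still run.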
You just post and check that it redirects and, with a mock, that it calls the service. Another nice thing is that you can pass strings to the context manager as arguments, which can serve as documentation, as I said. One thing to note here: don't overdo it with subtests, because that can lead to overly long and hard-to-read tests. If you, for example, open a subtest, with a subtest inside it, and another subtest inside that, you get three levels of indentation, and to me that's just hard to read.

For factories, of course, we use factory_boy; I think most of us do. It has a really nice integration with Django models through DjangoModelFactory. One thing to mention here is LazyAttribute, which makes the factory generate a different value for each object within the same test run. For example, if you want to create different invoices in one test case and you don't use LazyAttribute, both invoices will have the same amount. Another cool thing is that it gives us a lot of customization, because if you want, for example, to use the invoice create service here, you can just redefine the _create method of the factory. For fake values, we use Faker. Here is how you can customize its methods: you just register another provider, the so-called providers. For example, I want all my pyints to be positive, so I just replace Faker's pyint with my own pyint provider.

For a test runner, we use pytest. We don't have enough time to cover all the features pytest gives you; go to the documentation and check it out. But here is what most of our pytest runs look like, because of these arguments. --create-db creates a new database and runs all your migrations again. If you use --reuse-db, the migrations won't run again, which is really fast during development. --lf stands for "last failed", and it reruns the tests that failed last time.
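factory_boy itself isn't shown here, but the effect of its Sequence/LazyAttribute declarations can be sketched in plain Python: each generated object gets a distinct value, so two invoices created in one test never silently share the same amount (the names are hypothetical):

```python
import itertools

# A module-level counter mimicking factory_boy's Sequence/LazyAttribute:
# every call to the factory yields a fresh, distinct value.
_amount_seq = itertools.count(start=1)


def invoice_factory():
    return {"amount": next(_amount_seq) * 100}
```

Without this, a test that creates two invoices and filters by amount could pass or fail for the wrong reason, because both objects would collide on the same hard-coded value.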
Another nice argument I've used is --durations, where you give an integer and it shows you, for example, the 10 slowest tests you have; that's nice for monitoring. You can also use pytest markers, which I won't go into now, but you just put a marker on a test with a decorator. They're easy to use and well documented: mark a test as slow and don't run it every time in the CI.

Now, I have a couple more minutes, so I want to go a bit further and talk about some other testing methodologies. Okay, unit testing is awesome and we use it a lot, but it has its limitations. When you hit the boundaries of unit testing, you should take a look at some other testing methodologies. As a developer, I'm not really experienced with them; I'm not a QA guy. But recently we needed such tests in our project, because it had grown quite big and we wanted to be sure that some crucial parts of the system were working. Well, those crucial parts were actually bound up with third parties, so we used a kind of mixture of end-to-end tests and validation testing. For those who don't know what validation testing is, it's a testing methodology that verifies, or validates, the calls from your system to the third parties. Let me tell you about the approach we took. As I said, we decided to use end-to-end tests with validation testing. One thing here: if this approach works for you, that's awesome, but if it doesn't and you decide to try something else, the first question you should answer is: in what state is my project? In our case, the front end was under heavy development, so it didn't really make sense to do full end-to-end tests or create visual regression tests, because we would end up deleting them when the design changed a month later, for example.
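The flags mentioned above can also live in configuration instead of being typed every time. A hypothetical pytest.ini, assuming pytest with the pytest-django plugin and a made-up settings path, might look like:

```ini
[pytest]
# Settings module for the pytest-django plugin (hypothetical path):
DJANGO_SETTINGS_MODULE = config.settings
# Reuse the test database and always report the 10 slowest tests:
addopts = --reuse-db --durations=10
markers =
    slow: long-running tests, deselected in CI with -m "not slow"
```

With this in place, a plain `pytest` run picks up the flags, and CI can run `pytest -m "not slow"` to skip the marked tests.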
What we did was create a new project, which we called end-to-end, because it just calls our APIs; its job is, as I said, to call our APIs. Those APIs call some third parties, and the client asserts that the responses from the third parties are correct and everything is in sync. That's pretty much everything it does. What we still need to do is put it in the CI, because it runs fully locally and is easy to use. And I think that's pretty much everything I had for today. So thank you. We have some time for questions, so if you have any questions, I'll ask you to come forward.

[Question:] I'd just like to come back to the different testing frameworks, the different libraries. So are you really using all of them, like django.test plus django-test-plus plus pytest, inside one project? Or can you explain briefly which one is better for what? Which one is better for working with the database and the Django ORM, and which one is better for...

[Answer:] Django's TestCase, for example, actually inherits from SimpleTestCase and TransactionTestCase, and the TransactionTestCase class is the one that handles all the ORM stuff. And I'm not 100% sure, but I think the test_plus TestCase doesn't add any other abstraction over this, so it's actually the same.

[Question:] Thanks for that great talk. From what I understood, in order to have good unit tests, you need to have a really good project structure. Is that right?

[Answer:] 100%, in my opinion.

[Question:] Yeah, so what if you don't have a good structure? How can you write good unit tests?

[Answer:] You can't, and that was actually the point of the talk.

[Question:] Okay, thanks. Maybe a little follow-up question. You were talking about writing unit tests each time for a single unit. Supposing this structure of a Django project, where you go from the views on one side through APIs, services, and models to the database, can you give some examples of the components that you're testing?
[Answer:] Yeah. Actually, what you saw there with the view tests is that, if you have this structure, one nice thing is that all of your view tests will look something like this. We usually put our permissions and authentication in a mixin, so you can test things like "as an admin, I can access this view", but that's pretty much everything. And once you have mocked the service call, that's pretty much all your tests for the views should look like. The heavy-lifting business logic is tested only in the service, and that's the main part. Again, here comes the caveat that you mock every communication with the outside of the function, but that's how you should unit test; for anything else, integration tests. I hope that answers your question.

[Moderator:] Okay. We have time for two more questions, if anyone has a question. If there are no further questions, then thank you, Martin, for your talk.