Do I need to adjust it? Hello, do you hear me? Great. My name is Eugene, and I work as a team lead at Scrapinghub. Over the past few years I was lucky to work in different areas of IT, like games for social networks, or a collective blog, like Reddit, but just for Russia. A major part of my job was reviewing code, and I'd say that the readability of tests, as we write them now, is not really good enough. But tests are actually very important to our work. If we look at the top 100 most-starred projects on GitHub, we can see that 23% of their code is located in test folders. That means that if your job were just to read through the code of these projects, you would spend two hours a day reading tests alone. And tests keep becoming more important: 34 of those 100 projects have revision history at least five years deep, so we can take a quick look at it. The percentage of test code relative to the code it tests is steadily growing. And if we take an absolute metric, like the number of lines of code, the growth is even more significant: from 143,000 lines to nearly half a million lines of tests alone.

So let's look at a typical test. Sorry, yes, this is how most tests are written, at least the ones that I see. But we can do a better job of describing our intent, so that when other people read this code, they don't have questions like: what does this code do? Does this test actually test what it's supposed to? And if the test run fails, is it because there is an error in the code, or because the test is written wrong? So what actually is a test? How can we understand it? All tests consist of three main things. The first one is the environment, which we have before we test something. It could be no special environment at all. It could be a fixed date and time. Or, if you're testing some chat service, it could be a VIP profile of a user.
Another thing is what we actually test. We can test that 2 multiplied by 2 equals 4. We can calculate yesterday's date. We can make this fictional VIP user try to swear in a public channel. And after each action we have some expectations: we expect that 2 multiplied by 2 equals 4, that yesterday's date is calculated correctly, and that the swearing user gets banned regardless of his status.

Nearly a decade ago, the father of behavior-driven development, Dan North, published an article where he proposed the given-when-then template to describe such behavior. Let's take a look at the Wikipedia example. Here, the first line describes our environment: we have a customer who has bought a sweater from us, and we have a stock with their sweater. We test that when the sweater is returned, then the number of sweaters in our stock increases by one. And here we see very clearly what we have, what we do, and what result we want. In behavior-driven development these tests are written as text, so that managers without any programming experience, or even users, could write them. But that requires an additional layer transforming the text into actual code. To benefit from this template, we don't actually need that layer. What we can do is simply extract methods: move the code that generates the environment into methods whose names start with the word "given", and do the same with "when" and "then".

Let's have an example. I showed you this test before. What we test here is that when we register a logger for Sentry, it isn't registered twice, so the same error wouldn't appear twice in Sentry itself. What is the environment here? Creating the logger. So we move it to a separate method. Same for the action. We can see that there are two actions here, which is a little bit alarming; we'll talk about that later. And what are the results? Now, when we transform the code this way, it's much easier to understand what it does and what we expect of it.
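As a sketch of that extraction (the class and method names here are invented for illustration, not the actual slide code), the refactored test might look like this:

```python
import logging
import unittest


class SentryRegistry:
    """Hypothetical stand-in for the Sentry integration from the talk."""

    def __init__(self):
        self.handlers = []

    def register_logger(self, logger):
        # the guard under test: the same logger must not be added twice
        if logger not in self.handlers:
            self.handlers.append(logger)


class RegisterLoggerTest(unittest.TestCase):
    def test_same_logger_is_not_registered_twice(self):
        self.given_a_fresh_registry()
        self.when_the_same_logger_is_registered_twice()
        self.then_it_is_registered_only_once()

    def given_a_fresh_registry(self):
        self.registry = SentryRegistry()
        self.logger = logging.getLogger("app")

    def when_the_same_logger_is_registered_twice(self):
        self.registry.register_logger(self.logger)
        self.registry.register_logger(self.logger)

    def then_it_is_registered_only_once(self):
        self.assertEqual(self.registry.handlers.count(self.logger), 1)
```

The test body now reads almost like the Wikipedia sweater example: environment, action, expectation, each behind a name that states its intent.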
Actually, having two actions is a good alarm signal, because we almost never want two actions in a test. We want to check that, progressing from one state of the environment with an action, we get to another state. So for the second action, we regenerate that previous state as a given, and the first action moves to a separate test. I forgot to highlight it, but you can see it in the second test method itself, given_handler_registered: this is where we describe what we had at the beginning.

Also, when we put it in literal words, we can see when something is wrong with what we wanted to test. Here, the last expectation of the first test is then_number_of_sentry_handlers_registered. What we actually want instead is to check that every Sentry handler registered is unique. So we move it to another method; of course we also change its behavior, I just put it here. Another thing: when users and developers first approach this template, they tend to put everything, and I mean everything, into the test itself, so it's really verbose. Again, it's common sense: we don't need this, we have setUp. So move the common code there. Don't be afraid: you still check setUp when you read the test.

So this is a good way to structure a single test. But a single test is not our only problem: we also want to organize multiple tests better. Let's take a look at another example. Every aquarium has some equipment, and it has an environment in it, like salt water or fresh water. This is a Dutch aquarium, where the majority of space is taken by plants. These are kind of like classes. So here is an example of these aquariums, and we see that they inherit from each other. And we want to check a single method. There could be a lot of methods, but this one is defined at the top of the class hierarchy and inherited by every one of them.
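A sketch of that split, with an invented Handlers class standing in for the slide's code: the first test's action reappears as the second test's given, and the environment shared by both tests moves to setUp:

```python
import unittest


class Handlers:
    """Hypothetical registry used only for this sketch."""

    def __init__(self):
        self.registered = []

    def register(self, handler):
        if handler not in self.registered:
            self.registered.append(handler)


class RegisterHandlerTest(unittest.TestCase):
    def setUp(self):
        # environment common to every test lives here,
        # not repeated inside each given_* method
        self.handlers = Handlers()

    def test_registering_a_handler_adds_it(self):
        self.when_handler_is_registered("sentry")
        self.then_registered_handlers_are(["sentry"])

    def test_registering_a_handler_again_does_not_duplicate_it(self):
        # the first test's action, restated here as environment
        self.given_handler_registered("sentry")
        self.when_handler_is_registered("sentry")
        self.then_registered_handlers_are(["sentry"])

    def given_handler_registered(self, handler):
        self.handlers.register(handler)

    def when_handler_is_registered(self, handler):
        self.handlers.register(handler)

    def then_registered_handlers_are(self, expected):
        self.assertEqual(self.handlers.registered, expected)
```

Each test now performs exactly one action, and a failure points at exactly one behavior.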
So of course, in these aquariums we have a lot of fish. And usually we end up with a test like this: a lot of repetition, and I mean a lot. There could be hundreds of cases. For example, if you're testing a parser for removing malicious code, there could literally be a hundred cases. We don't want such long tests, so usually this is transformed into a loop: we have an aquarium, and for each piece of data we loop through it, do the action, and check. Same thing, just a little bit better.

But I can't stress enough how bad loops are for testing. In this particular case, if there is an error with the guppy, you don't know whether the error is just with the guppy, or with every other fish, or with some particular fish. And this information is very important, because you can spend half a day looking in a place where you shouldn't look for this particular problem, because the problem is somewhere else. You would see this immediately if you had per-fish results showing which tests pass and which don't. Here, for example, we see that the tests pass for the guppy and the goldfish, and fail for the rasbora and the labeo. Your train of thought would be: how are these connected? And thinking about it, you would be able to find the exact spot that unites those particular failures.

So how do we turn data into separate tests? I very much like nose_parameterized, which looks like this. And by the way, if you don't know, nose_parameterized was recently renamed to parameterized, and it now applies not just to nose but to pytest and unittest as well. You transform your tests so that the test methods now have parameters: you can see the fish parameter after self, and you write your data in a list decorating this method. Unfortunately, nose_parameterized in particular has a problem with inheritance. Say we inherit from the freshwater aquarium test case, and notice the different data.
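The loop unrolled into explicit per-fish tests might look like this (the FreshwaterAquarium class and its accepts method are invented for the sketch):

```python
import unittest


class FreshwaterAquarium:
    """Hypothetical model; 'supported' is invented for this sketch."""

    supported = {"guppy", "goldfish", "rasbora"}

    def accepts(self, fish):
        return fish in self.supported


class FreshwaterAquariumTest(unittest.TestCase):
    def setUp(self):
        self.aquarium = FreshwaterAquarium()

    # one test per fish: a failing run names the exact fish that broke
    def test_accepts_guppy(self):
        self.assertTrue(self.aquarium.accepts("guppy"))

    def test_accepts_goldfish(self):
        self.assertTrue(self.aquarium.accepts("goldfish"))

    def test_accepts_rasbora(self):
        self.assertTrue(self.aquarium.accepts("rasbora"))
```

With a loop, the first failing fish aborts the whole test; here every fish reports independently, which is exactly the information you need when debugging. The rest of the talk is about generating these methods instead of writing them by hand.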
So we changed the rasbora for a gourami, and we don't test for the labeo. Our expectation could be either that the tests in the Dutch aquarium case run just for these three particular data, or that the data sets are united, with the rasbora and the labeo added to those three. Unfortunately, what actually happens is this: we have four tests for each aquarium. And here is what happened, so you understand: we have four tests from the freshwater aquarium case; we replace three of them, the ones we have data for, and the last test is just a leftover inherited from the previous class. So it fails, and that's not something that should happen.

So how can we apply inheritance to our tests? Our goal would be to have tests parameterized the way nose_parameterized does it, which is good enough, but able to deal with inheritance, that is, with inherited test data. Also, we probably don't want to repeat each test for every inherited test case; we want it defined in the parent class and seamlessly used in the child cases. And we don't always want to apply all the data from the parent cases to the child cases, so we need a way to control this.

These goals can be rewritten as the following requirements. For parameterization, we need to apply a single test method to as much different data as we want. For inherited test data and inheritance of the tests themselves, all we need is access to the parent class, and we can extract the data from it. And for controlling execution, what we want is a way to exclude data. So let's take a look at which Python tools and approaches could help us reach these goals. The decorator is the most important part of parameterization; it's used in every approach I will show. Unfortunately, it's usually reduced to just creating a new function which runs some code before or after the original function. But being a function over a function, a decorator can actually transform the original function into almost anything.
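A minimal sketch of such a transformation, in the spirit of unittest's skipIf but not its actual implementation (the real one marks the test for the runner rather than raising at call time):

```python
import unittest


def skip_if(condition, reason):
    """Simplified sketch: when the condition holds, the decorator
    does not wrap the original function, it replaces it entirely
    with a different function that raises SkipTest."""
    def decorator(fn):
        if not condition:
            return fn  # the original function, untouched

        def skipped(*args, **kwargs):
            raise unittest.SkipTest(reason)
        return skipped
    return decorator
```

Depending on the condition, the caller gets back either the original function or a completely different one, which is the point: decoration is transformation, not just wrapping.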
You can see this, for example, in unittest's skipIf decorator. It transforms the original function, depending on the condition, either into the original function or into a skip function that raises SkipTest. Decorators can also be applied to classes, not only functions, so that is a transformation of the original class into anything as well.

So how would we apply decorators to achieve our goals? nose_parameterized shows us a very good way to define data: it's clear and understandable, so we decorate each test method with the data it requires. But decoration alone doesn't create multiple tests, so we need a way to transform this function with assigned data into multiple test cases. Here we simply assign the data to the method in any way we want, and with a second, class-level decorator we take the class with all the methods defined in it, and for those methods which define parameterization, we create additional methods. With this approach we get test parameterization, but it's not very good for inheritance. It's applicable, because you can do this, but for each class which inherits from this test case, you need to reapply the class decorator again for the new tests. Because what we did here is simply create, instead of some test case, a test case with more methods; the behavior is not applied to its child cases.

OK. Another approach is a metaclass, which is a way to configure how classes are created. The typical approach, which we'll use here, is: we have a name; we have the bases, which are all the parent classes of this particular case; and we have a namespace, which is the dictionary of methods and parameters from which we create this class. So we can manipulate this dictionary before the class is created. We have some tests with data assigned to them, we iterate over all of them, and for those that have data, we add some keys and values to the namespace. It's seamless, and it works with inheritance: when you create child test cases, this behavior is carried over to them as well.
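The method-decorator plus class-decorator pair described above can be sketched like this (all names are invented; this is a toy version of the approach, not nose_parameterized itself):

```python
import unittest


def with_data(*data):
    """Attach parameter data to a test method."""
    def decorator(fn):
        fn.data = data
        return fn
    return decorator


def expand_data(cls):
    """Class decorator: replace each data-carrying test method
    with one generated test method per datum."""
    for name, attr in list(vars(cls).items()):
        data = getattr(attr, "data", None)
        if data is None:
            continue
        delattr(cls, name)
        for datum in data:
            # default args freeze fn and datum for each generated test
            def test(self, fn=attr, d=datum):
                fn(self, d)
            setattr(cls, "%s_%s" % (name, datum), test)
    return cls


@expand_data
class ParameterizedAquariumTest(unittest.TestCase):
    @with_data("guppy", "goldfish", "rasbora")
    def test_fish_fits(self, fish):
        self.assertIn(fish, {"guppy", "goldfish", "rasbora"})
```

As the talk notes, a subclass of ParameterizedAquariumTest would need @expand_data applied again: the expansion happens once, on this particular class object, so the behavior doesn't carry over to children.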
Unfortunately, with a metaclass there is also a trade-off. It requires that metaclasses be inherited in a direct chain, not some tree. That's not a problem within your single project, but when you start packaging your things, when there is one metaclass in one package and another metaclass in another package and they don't know about each other, your users may not be able to use both of them. So this approach is useful to some extent. But a metaclass can also return not a class, but anything we want.

Frames are another approach, and it's the one used by nose_parameterized. When you get a traceback, it is actually listing frames, which are kind of namespaces that the execution cursor passes through. nose_parameterized takes the frame from which its decorator was called, and it injects new methods there. So here the namespace would be the definition of the class TestFreshwaterAquarium, and it's transformed as if we had written it that way ourselves. I am a little bit short of time, so I'll move quickly to the last one, my favorite. Before that: here it is very understandable why we can't get the parent class with this approach. At that very moment, we don't have a class to get the parent from; we are just defining the namespace. So with frames, we can't target the parent class.

My favorite one is a custom unittest loader. Loaders are responsible for gathering tests from your code and creating suites from them. We don't actually need anything at this point except to mark some tests with data, and we don't have to think about inheritance. What we have here is something like this: we get the names, we iterate through all of them, and if a method has some data, we extend the suite with multiple tests; if not, we create the usual test case. It's actually worth mentioning that unittest uses the loader to create a separate instance for each test method a class has. So it's not one class instance with a lot of tests, but a lot of instances of this class, each testing a single method.
The source is very straightforward, so you can read through it and it's quite understandable. What we do here is decorate tests with data, and when we approach the actual test run, we create the additional tests. We don't change anything in the class; we just decide whether we want this test, whether we don't want it, or whether we want to create multiple tests from this one. I didn't mention, for the previous approaches, the execution control that skips inherited data. That's because there are two general approaches to it. One: for those approaches where we have access to parent classes, we can state explicitly what we want to do with the data from the parent classes. We can extend it, we can remove some particular data from it, or we can completely replace it. The other approach, which doesn't require creating different decorators, is just to insert something like skipIf into your test body. But I very much like the JUnit approach, where they have assume. We could have an assume here, which would go nicely with given, when, and then, and it would skip the test if it's not applicable in the child class.

Now I want to do a quick mental experiment. I'll read a few things out loud, and you try to register your feelings about them. It's not related to tests; it's just git and GitHub. First SVN, and before that, folders with different versions and changing code live on production. After that, I am very glad that we have git and GitHub now. And I am just as sure that, using the approaches I demonstrated today, however briefly, I'm sorry, you would be able to create a framework applied to your particular project that works best for you, so that for you and your team it would be very easy to create tests, with no frustration and low maintenance, and you can quickly navigate through them. Thank you. We have time for a couple of questions.

Thank you. The first part, with the new functions: it looks like BDD, doesn't it? Behavior-driven development. I'm sorry, I didn't catch that. What looks like it?
The first part, with the new functions, when you change part of the code into a function with a full descriptive name. You mean organizing it as given-when-then? Yes. It is actually very easy. It has some corner cases, for example when you want to use patching. But you can approach it simply: patch can be used not only as a decorator, it also has start and stop. So you can create a given function and start patching in it, and then stop all started patches in teardown. Yes. It looks like, do you know about Cucumber, maybe, something like this framework? It's Ruby, probably. It's behavior-driven development. But that is another layer, which we actually don't need: some projects are small, some are not. Cucumber is great, but we can get the benefit without using it, just by clarifying in our tests what we want to do. OK, thanks. Thank you for the question. Last question.

OK, thank you. Oh, hi. So how do you manage, if you have a lot of tests, because my previous project ended up having a lot of tests, how do you manage not to create a lot of functions that look like BDD but do something different underneath, and how do you maintain all the helper code behind the tests? Because from what you were showing on the slides, you move something out to a different function to have one responsibility. And then the test is easy, but maintaining that... You're worried that test cases would be too long to read and maintain, yes? No, that if you have a lot of test cases, you get a lot of copied code, because somebody wants to just add this one and that one, this helper code that you moved out. So you're worried not only about that, but also about the length of a single test, where you have a lot of then steps. From experience, there is actually no harm in combining those then functions into a single then function. So you would have then_user_is_banned, and in this method you would have then he's deactivated, then he can't do this, then he can't do that.
You don't put all of this in the test; you have one method that describes it. Generally, what you want to do is follow the organization of your code: you organize your code somehow, and it's better to follow the same structure in the tests. OK, OK, that kind of answers my question, but I'll probably have more. OK, feel free, of course, to talk to me afterwards. Thank you.
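Returning to the patching question from the Q&A, a minimal sketch of the start/stop style mentioned there (the patched target, os.getcwd, and all names are arbitrary choices for this example):

```python
import os
import unittest
from unittest import mock


class CwdTest(unittest.TestCase):
    """Sketch: patching inside a given_* step instead of
    decorating the whole test."""

    def tearDown(self):
        mock.patch.stopall()  # stops every patch begun via .start()

    def test_runs_in_the_patched_directory(self):
        self.given_cwd_is("/aquarium")
        self.assertEqual(os.getcwd(), "/aquarium")

    def given_cwd_is(self, path):
        # patch() is not only a decorator or context manager:
        # a started patch stays active until stopped, so it fits
        # naturally inside a given_* method
        mock.patch("os.getcwd", return_value=path).start()
```

Starting the patch in the given step keeps the environment description in one named place, and stopall in tearDown guarantees cleanup no matter how many patches a test starts.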