My talk is "Subtests Are the Best." If you were expecting something else, this is what you're going to get. My name is Dmitri Chukin. I work at the Caktus Group as a backend developer, and I have been working there for a couple of years. Previously, I worked as a math teacher, and I spent a couple of years learning Python, doing free online tutorials and working on some projects with friends. Now I'm at Caktus. Caktus, for those who don't know, is a web development company in Durham, North Carolina. We specialize in web apps, mostly projects using Django, as well as a few other projects not specific to Django. So that's what we do. Okay, testing is important. Testing is important because we want to make sure that the code works. We want to write tests that are actually there for a purpose, and we want to make sure that our code runs the way that we want it to. At Caktus, we write lots of tests. We want to make sure that when we upgrade to new libraries, we know when deprecation warnings tell us something is going to be deprecated. We want to use tests so that we know when something actually breaks, without having to rely on users finding it in production. Plus, if I can catch a bug in my own code, then I'm not using somebody else's QA time to find it, and I'm not relying on users, because then nobody's happy. So testing makes our code more efficient and reduces our technical debt. Great, so we all agree testing is important. We should all write tests, but this talk isn't actually about tests in general. I'm talking about subtests, and I'm going to get to that in a minute. One more thing about testing: the rest of this talk is mostly about how good tests are readable, thorough, and DRY. So, number one: readable.
We want to make sure that other developers who come to our tests actually understand them without having to spend a lot of time digging through them, checking each line and making sure they understand what each of the variables means. We want to make sure that they're thorough, so that we're testing our functions, our classes, our endpoints, and not just checking that they exist, but that they actually work the way they're supposed to. And we want to make sure that they're DRY, so that we don't write the same lines of code in multiple places. By the way, these principles are taken from the Zen of Python: readability counts; beautiful is better than ugly; explicit is better than implicit; and there should be one, and preferably only one, obvious way to do it. So for the rest of the talk, I'm going to talk about why subtests make our tests more readable, more thorough, and more DRY. Okay. Number one: readable tests. Here's an example. Let's say we have a site with users, they each have a profile, and the profiles can follow each other. So we might want to write a test that our model keeps correct statistics about followers. As you can see, let me point with my pointer, there we go: at the top here, I define some profiles, and I have a bunch of asserts that when profiles are created, no one is following anyone else. Okay, that's easy, that makes sense. We should probably add a section where people actually follow other people. So in our test, we might add another section. That's a lot of lines of code. This is our original section from the previous slide, then we've added some followers here, and we have a lot of asserts about those followers. By the way, I'm not expecting that we'll go through each of these lines right now and say, okay, this makes sense. The point of this section is readability, and we're going to work on making this test more readable.
Before we do that, we should also add a section about unfollowing, because we want those statistics to be correct as well. So okay, here's a section about unfollowing. Now we can see this test is getting really long, and let's say someone else comes to this test and needs to update something. They probably don't want to look at this and try to figure out what is happening. So to make it more readable, one thing we could do is split it up into different sections. Four sections: here's the setup, where I create the profiles; here are some asserts about followers at the beginning; then adding followers and their asserts; and then removing followers. That's a little more readable. To make it even better, I can add a comment at the top of each of these sections. So: creating three profiles; this one says no followers; several profiles follow other profiles; and removing followers. That's a little better. With subtests, it looks like this: each of those section comments becomes the subtest's message. And I think from a quick look at this test, it's a lot easier to say, okay, this is a test about followers. There's some setup at the top and three sections, and in each section there are messages about what's happening and what we're asserting. If one of them fails, the failure will include that message. So this is a great way that we can use subtests to make our tests more readable. Because, as I mentioned, if someone new comes to our test suite and wants to know what's happening, this is a much easier test to understand. Okay, so readability is important because we want to know what's happening, especially if I come back to this test in six months, twelve months, two years, or if a new developer comes to this code. I don't want to spend the time reading through it line by line trying to figure out what's happening.
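The structure described above might look something like the following sketch. The talk's slides use a Django model, which isn't reproduced here, so `Profile` is a simplified in-memory stand-in, and the profile names and statistics are assumptions:

```python
import unittest


class Profile:
    """Simplified in-memory stand-in for the Django Profile model."""

    def __init__(self, name):
        self.name = name
        self.following = set()  # profiles this profile follows
        self.followers = set()  # profiles following this profile

    def follow(self, other):
        self.following.add(other)
        other.followers.add(self)

    def unfollow(self, other):
        self.following.discard(other)
        other.followers.discard(self)


class FollowerStatsTests(unittest.TestCase):
    def test_follower_statistics(self):
        # Setup: create three profiles.
        alice, bob, carol = Profile("alice"), Profile("bob"), Profile("carol")

        with self.subTest("New profiles have no followers"):
            for profile in (alice, bob, carol):
                self.assertEqual(len(profile.followers), 0)
                self.assertEqual(len(profile.following), 0)

        with self.subTest("Several profiles follow other profiles"):
            alice.follow(bob)
            carol.follow(bob)
            self.assertEqual(len(bob.followers), 2)
            self.assertEqual(len(alice.following), 1)
            self.assertEqual(len(carol.following), 1)

        with self.subTest("Removing followers updates the statistics"):
            carol.unfollow(bob)
            self.assertEqual(len(bob.followers), 1)
            self.assertEqual(len(carol.following), 0)
```

Each `with self.subTest(...)` block plays the role of one commented section, and its message string is reported alongside any failure inside that block.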
I'd rather spend my time fixing it or updating it. For companies, this means employees are actually more interested in what they're doing, because they're writing code, not deciphering someone else's complicated tests, which means better projects and more efficiency, so more money. Okay, one question: why not just break that really long test up into three different tests? Yes, we could do that as well. However, one reason you might not want to is that we would then need the same setup for each test. In some cases, that's not a big deal. In other cases, if we have to create some instances of one model, some instances of another model, set up a many-to-many relation, and add some things there, that becomes complicated and ends up not being DRY if we have to repeat it over and over. [Audience question] Yes, my thought was that you could theoretically break it up into separate tests. Exactly like that? No, that would not work. Yes. Yeah, you could do it a few different ways. Okay: thorough tests. It's important to have thorough tests. Another example: let's say we have a function called is_user_error, and it returns True for any user error, meaning any 400-level status code. This is one way we could test it: we test the 400, 401, 402, 403, and 405 status codes, and then some other common ones, 200, 101, 500, 503. This is okay. If we run code coverage, it would tell us that our function has 100% coverage. But this is actually not complete, because there are other status codes that we're not testing here. One way to make it more thorough would be to write a line for every single status code between 400 and 500, and all the 200s, and so on. But that's a lot. No one wants to read a hundred lines of assertTrue.
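The function and the spot-check test described above might look like this; the exact implementation isn't shown in the talk, so this is a minimal sketch:

```python
import unittest


def is_user_error(status_code):
    """Return True for any 4xx (user error) status code."""
    return 400 <= status_code < 500


class IsUserErrorSpotCheckTests(unittest.TestCase):
    def test_common_status_codes(self):
        # A handful of 4xx codes should count as user errors...
        self.assertTrue(is_user_error(400))
        self.assertTrue(is_user_error(401))
        self.assertTrue(is_user_error(402))
        self.assertTrue(is_user_error(403))
        self.assertTrue(is_user_error(405))
        # ...while informational, success, and server-error codes should not.
        self.assertFalse(is_user_error(101))
        self.assertFalse(is_user_error(200))
        self.assertFalse(is_user_error(500))
        self.assertFalse(is_user_error(503))
```

A coverage tool would report 100% line coverage here, even though most of the 4xx range is never exercised, which is exactly the gap the talk points out.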
So we could do it in a for loop like this, where we loop through all the integers between 400 and 500 and assert that the function returns True, and for all the 200s and all the 500s assert that it's False. That's fine, that's thorough. But if something fails, we get something like this. It just says AssertionError: False is not true, and we're left wondering: what failed? I don't know, something. With subtests, we can change that for-loop test: it's the same for loop, and we just add with self.subTest(status_code=status_code). That parameter gets spit back out when something fails, and our failure looks something like this: AssertionError: False is not true, and it happened with status_code=405. Question: can't we just do that with custom assert messages? We could, but if we write custom assert messages, it's more maintenance, because we would have to write that same message for each assert statement, and if we ever want to change it, it ends up being more complicated than just passing one parameter. Another thing to note is that subtests run independently, so if we have multiple failures, we get all of those failures back at once, rather than just the first one like we would in a regular test. So if status codes 403 and 405 both failed, we would get something like this, and we see them both at once. This can be really helpful in diagnosing what the problem is in an application. For example, here's a different test. I haven't told you what it's about or what it does. But from this assertion error we can see that we are expecting "Jane" followed by open parenthesis, close parenthesis, and we're actually getting an empty space in there. So we're off by one space for some reason. So that's without subtests.
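The subtest version of that status-code loop might look like this; the ranges outside 4xx are assumptions about what the slide covered:

```python
import unittest


def is_user_error(status_code):
    """The 4xx check under test: True for any user-error status code."""
    return 400 <= status_code < 500


class IsUserErrorLoopTests(unittest.TestCase):
    def test_every_status_code(self):
        # Every code in the 4xx range is a user error.
        for status_code in range(400, 500):
            # status_code is echoed back in any failure report,
            # e.g. "SubTest ... (status_code=405)".
            with self.subTest(status_code=status_code):
                self.assertTrue(is_user_error(status_code))
        # Nothing outside the 4xx range is a user error.
        for status_code in list(range(100, 400)) + list(range(500, 600)):
            with self.subTest(status_code=status_code):
                self.assertFalse(is_user_error(status_code))
```

Because each iteration runs as an independent subtest, a bug that breaks several codes produces one reported failure per failing code, all in a single run.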
If I had put the parameters into subtests, we might see something like this: first_name is "Jane", last_name is the empty string, and that fails; first_name is the empty string, last_name is "Smith", and that also fails. So, in conclusion, our code does not handle empty strings. It's a lot easier to see that when we have all the failures right next to each other, rather than trying to figure it out from just the first failure. Okay, so it's important to test all the parts of a function, not just one part of it, and subtests allow us to do that, as well as giving us useful error messages for each failure within our tests. Okay, section three: DRY tests. Another example: let's say we have an API endpoint, and we want to make sure that the right fields are required. We have three fields, and each of them is required at this point: first name, last name, and address. In this test, it looks like we are posting to this endpoint, and we're doing it once here, a second time here, and a third time here. Really, we're just doing the same thing three times. So we could ask ourselves: is it clear what's being tested? Is this DRY? Can we improve it? Yes, we can improve it. For example, we could put comments on each section; that does improve it, it makes it more readable. But still, we're doing the same thing three times. One way you could improve this with subtests is like this: I define a helper up here, get_minimum_required_data, and it returns first name, last name, and address. These are all the required fields, and so I could have a test that makes sure those required fields work. Down here I define my missing-field subtests, and this is a tuple of a field name and the subtest description. Each of the field names is here, first_name, last_name, and address, and we have a description for each of them. Sorry, which one? Oh, the big one, this would be in the test case.
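The empty-string scenario above could be sketched like this. The talk never shows the real function, so `full_name` here is a hypothetical stand-in (written to handle empty strings correctly); the point is how the subtest parameters identify each failing combination:

```python
import unittest


def full_name(first_name, last_name):
    """Hypothetical name formatter: join the parts, skipping empty strings."""
    return " ".join(part for part in (first_name, last_name) if part)


class FullNameTests(unittest.TestCase):
    def test_full_name(self):
        cases = [
            ("Jane", "Smith", "Jane Smith"),
            ("Jane", "", "Jane"),    # empty last name
            ("", "Smith", "Smith"),  # empty first name
        ]
        for first_name, last_name, expected in cases:
            # The keyword parameters are echoed back in each failure report,
            # and every failing combination is shown, not just the first.
            with self.subTest(first_name=first_name, last_name=last_name):
                self.assertEqual(full_name(first_name, last_name), expected)
```

If the formatter naively did `first_name + " " + last_name`, both empty-string cases would fail, and seeing their parameters side by side makes the shared cause obvious.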
So this is just a helper method in the test case, and then this is a test within the test case. Yes, it looks like the underscores for some reason got cut out here; there should be an underscore in each of these spaces. I apologize for that. We loop through each field name and subtest description, we get the data, we take out the field name, and then we post to the endpoint and make sure the status code is 400. So why is this better? It's better because if something changes about the required fields, for example if we add a new one, all we have to do is add one more line here, and this test continues to work. We don't have to look through several different sections to find where these things are defined in our test suite. If one of these fields becomes optional, we can just take it out, and if the endpoint ends up having fifteen different required fields, we can just add to our tuple of tuples. Also, since things are defined neatly, it's more readable: if someone else wants to look through all the fields that are required, they're right here. So having DRY testing code is important. Less code: it's less code for me to read through, and less code for other developers to read through. As I mentioned before, if a new developer comes to this test suite and tries to update tests, I would rather have them read through one test than have to look at the same thing defined multiple times, or look through different posts scattered across different tests in a test suite. It's easier to maintain, because we define things in one place, and subtests give us a great way to write code that is DRY. So, in summary: readability matters, since someone is going to have to read through a test. It could be me, it could be somebody else, and subtests allow us to have readable tests. Subtests allow us to be more thorough, and to have messages when things fail that are actually useful for us.
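The required-fields pattern described above could be sketched like this. The real test would post through Django's test client to an actual endpoint; since that isn't available here, `post_contact` is a hypothetical stand-in that just returns an HTTP-style status code:

```python
import unittest

REQUIRED_FIELDS = ("first_name", "last_name", "address")


def post_contact(data):
    """Hypothetical stand-in for posting to the API endpoint:
    201 if all required fields are present, 400 otherwise."""
    if all(data.get(field) for field in REQUIRED_FIELDS):
        return 201
    return 400


class RequiredFieldTests(unittest.TestCase):
    def get_minimum_required_data(self):
        # Helper: the smallest payload the endpoint should accept.
        return {
            "first_name": "Jane",
            "last_name": "Smith",
            "address": "123 Main St",
        }

    def test_minimum_required_data_is_accepted(self):
        status_code = post_contact(self.get_minimum_required_data())
        self.assertEqual(status_code, 201)

    def test_missing_required_fields_are_rejected(self):
        # One (field_name, description) pair per required field;
        # a new required field is just one more line here.
        missing_field_subtests = (
            ("first_name", "Missing first name"),
            ("last_name", "Missing last name"),
            ("address", "Missing address"),
        )
        for field_name, description in missing_field_subtests:
            with self.subTest(description):
                data = self.get_minimum_required_data()
                del data[field_name]
                self.assertEqual(post_contact(data), 400)
```

The loop replaces three nearly identical post-and-assert blocks, and the tuple of tuples doubles as a readable list of exactly which fields are required.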
So we can have code that's thorough, as well as really useful failure output when things break. Subtests allow us to be DRY, because they're another tool for us to have DRY tests. Where I work, at Caktus, we strive to have good code, good testing, and good documentation, and subtests have been really helpful for us in doing that. So did they solve all of our testing problems? No, they did not. There are still lots of ways that people can write bad code. You can have bad comments or no comments. You can have confusing variable names. You can have huge amounts of setup, as well as other things. But subtests are a way for us to have good tests. They are another tool that we can use to have tests that are readable, thorough, and DRY. Can you do this with other testing libraries? Absolutely. If you like using nose or pytest, sure, you can use those. But you don't have to, because subtests are now part of the Python standard library. As of Python 3.4, they're just there. So if you don't want to go through and learn pytest, you don't have to. You can just write subtests, as long as you have Python 3.4. So whether you use them or not, they are there for you for writing readable, thorough, and DRY tests. And this is my contact information.