I have the pleasure to introduce Raphael Pierzina. He's a pytest maintainer, a Cookiecutter maintainer, and a Python developer from Edinburgh, UK, and he's presenting to us what's new in pytest 3.0. Give him a warm welcome, please.

Thank you. Thanks a lot for your interest in my presentation today. As a very, very short disclaimer: I talk a lot about Python 3, and with this release, pytest 3, I often get the two confused. So if I seem to swap pytest and Python, it just happens to me, so please don't get confused as well.

You can find me on the internet under the handle hackebrot. I'm on GitHub and Twitter mostly. I tweet mostly about open source things or pytest, and on GitHub you will find mostly Python stuff and a bit of Go sometimes; those are the things I'm involved in. Pytest is definitely the biggest topic for me today, but I will also briefly talk about Cookiecutter, a project that I work on in my spare time.

I work for a company called FanDuel, and we are based in Edinburgh. We work in the daily fantasy sports industry, we have offices in the UK and in the States, and we have about 400 employees across the globe. I'm actually German; I relocated to the UK half a year ago, and if you ever have the chance to relocate to a foreign country, I highly recommend at least considering it. I definitely don't regret my decision. The UK is pretty cool, Edinburgh in particular; that's where I go for a run on the weekends. If you're into that kind of stuff, you should definitely check it out. One of our main initiatives this year is inclusion. At FanDuel we work to make sure that everyone, regardless of any factor that makes up your essence as a human being, be it race or gender or whatever, gets the same opportunities. So if you're interested, please check out our website.

So, I mentioned Cookiecutter. It's a command-line utility, and I gave a presentation about it here last year. What it does is help you create projects based on templates that others may provide for you or that you may write yourself. It's agnostic to the programming language, so you can use it for whatever you want, but it runs on Python, we support all major operating systems, and we support Python 2.7 to 3.5. This is how you use it: you invoke the executable and pass it a template. As you can see, it's downloading a template from GitHub at this point. You answer a bunch of questions; you can either stick to the sensible defaults or put in your own information, as highlighted here. We also support choice variables, so a template author could decide that there is just a limited number of viable options and you choose from them. It then creates directories and files for you and inserts your information; here it's a setuptools setup.py, and you can see it contains the information that I entered. You can find the project at github.com/audreyr/cookiecutter. A rough sketch of an invocation follows below.
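As a hedged illustration of the flow just described: the template is the real cookiecutter-pypackage project, but the prompt names, defaults, and answers shown here are illustrative rather than exact.

```
$ pip install cookiecutter
$ cookiecutter gh:audreyr/cookiecutter-pypackage
full_name [Audrey Roy Greenfeld]: Raphael Pierzina
project_name [Python Boilerplate]: awesome-tool
# ...more prompts; press enter to accept the sensible defaults...
```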
Just to give me an impression of how familiar you are with pytest: could you please raise your hands if you've used pytest before? Okay, that's everyone, that's amazing. And the next question: are you familiar with how pytest plugins work? Okay, that's about 50%, that's quite nice, cool. The reason I'm asking is that writing plugins requires some experience with the internals of pytest already, so I'll see if I can go into more detail later on.

So, just a brief introduction: pytest is a testing framework. You probably know all of this, so I'll just skip over it. It's released under the terms of the MIT license, so it's open source, and it's not only the core framework: there are also a lot, a lot of plugins, so the community is quite big, and it's ever growing. You can find it at github.com/pytest-dev/pytest, and under that organization you will find some plugins that we maintain as a community, and also a template.

This is how you install pytest, and this is how you run it. When I first went through this experience, I found it crazy confusing that you have to put a dot when you run py.test. That causes a lot of confusion for people who are new to pytest, and that was my reaction in the beginning: I was just mind-blown, because you import pytest without the dot, but the entry point has the dot. It's all coming from history: pytest originated from a library called py, which contained several tools, py.path for example, and one of those was py.test. For the community that maintains this project, backwards compatibility is one of the major concerns, so we kept it as it was to not break anything for anyone. But there is hope: with the new release, pytest 3, we support both, so you don't need to put the dot anymore if you don't want to. py.test will be kept for compatibility reasons again, so nothing will break for you, but if you decide to just use one name, you can do so. Thank you.

So, again, as everyone seems to be familiar with pytest, here's what we'll do. The examples in my presentation run on Python 3; there is some tuple unpacking which is only available on Python 3, so don't get confused by this. In our example we have a command-line application, and I use click for that. We want to support three commands and two flags, which I will show you now, and it's a smoke test, so we only check that the exit code is zero. That's all we do.

First, I want to show you how it's done with unittest. You all know how this looks; the part with the green outline is our test code itself. We have a command called config, we have flags, in this case verbose, and we use the click test runner to run our main entry point with a flag and a command, and then check with assertEqual that the exit code is zero. That's all we do. But as I said, we have three commands and two flags. So how does that look with unittest? Well, you copy-paste your test and just change your input parameters. That's what you do. It doesn't really scale well.

So how does it look with pytest? With pytest, you have fixtures, so you put your setup code and your teardown code somewhere else, because if you want to maintain a code base with tests, you really want to just look at the test implementation and not everything around it. And then you use a marker called parametrize, and you can stack those markers and pytest will automatically permutate all the combinations, but the testing code doesn't change at all. It's still the same.

So those are the fundamentals of pytest. Names matter: test discovery is based on a naming convention, the fixture system is based on names, and hooks need to fit a naming convention, so naming is really important in pytest. And, again, there is the parametrize marker, which simply injects the values into the test function, and you can do whatever you want with them. A sketch of the whole example follows below.
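Here is a minimal, hedged reconstruction of that example. The tiny click application and the command and flag names are assumptions, not the actual code from the slides:

```python
import click
import pytest
from click.testing import CliRunner


@click.group()
@click.option("--verbose", is_flag=True)
@click.option("--debug", is_flag=True)
def main(verbose, debug):
    """Hypothetical command-line application."""


@main.command()
def config():
    click.echo("config")


@main.command()
def build():
    click.echo("build")


@main.command()
def deploy():
    click.echo("deploy")


@pytest.fixture
def runner():
    # setup code lives in the fixture, not in every test
    return CliRunner()


# stacking two parametrize markers permutates all flag/command combinations
@pytest.mark.parametrize("flag", ["--verbose", "--debug"])
@pytest.mark.parametrize("command", ["config", "build", "deploy"])
def test_smoke(runner, flag, command):
    result = runner.invoke(main, [flag, command])
    assert result.exit_code == 0
```

With the unittest approach you would need six copy-pasted test methods for the same coverage; here the test body stays identical and pytest generates the six combinations.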
When you run it in verbose mode, you see the parameters after the test function's name, in all their combinations, so if you see a failure, you know exactly which parameter combination failed. There is also the fixture decorator, which you use for composing setup code, but it also supports the params keyword argument, so you can do pretty much the same with it.

One of the main benefits of pytest is that it's extensible, so if you have something in your company or your open source project that is very specific, you are free to just write a plugin for it and extend pytest. It does this based on hooks, so you can inject your code into what pytest is doing, and this is how it would look. The cookies fixture here is from one of the plugins that I maintain, and it just feels as if it were in your own code base, so you can use it and do whatever you want with it. If you're interested, that's pytest-cookies, the pytest plugin to test Cookiecutter templates, which is kind of handy, because doing it by hand is very, very manual. And on the other hand, since I like to combine things that I like, there is a template to create pytest plugins, and a lot of people use it; even Dropbox created their plugins based on that template, which is good.

Hooks, again. This is just an example of what a hook could do: you can, for example, run only tests which use a certain fixture, which you can't do out of the box, but you can write a hook that does it. It's called pytest_collection_modifyitems, and there is a huge number of hooks that you can use to fit your needs; I'm just skipping over that for now. A sketch of such a hook follows below. There is also a GitHub Pages site called pytest-tricks; it's currently me and Bruno, a core contributor from Brazil, and we try to provide posts about things that are probably not worth a separate blog post, like best practices and tricks, so you can find advanced things there which don't really fit into the pytest documentation itself.

So, I wanted to talk about new features in this talk. One of them is approx. I think a lot of people in the scientific community are kind of annoyed that it's really hard to compare floating-point numbers. So there's a new helper called pytest.approx, and what it does is assert that your values match up to a given precision, which is kind of handy (see the sketch below).

There are changes regarding yield_fixture. You may know there is the decorator called fixture, but there's also yield_fixture. It allows you to run setup code, then yield into the test item, and then run the teardown code. What changes now is that yield_fixture is still in the library, but you can just use fixture, and pytest will look into the function: if there is a yield, it's automatically a yield fixture; if there is a return, it's a regular fixture. So that's just convenience; an example follows below.

And there is a new thing called doctest_namespace. It's a fixture, and you can add abbreviations to it by just filling a dictionary. If you run doctests via --doctest-modules, and this example is apparently from the NumPy code base, where there seems to be a convention of using np as an abbreviation, you can use doctest_namespace and say that np now points to the numpy module. If you then have a doctest, it will work out of the box when you use this fixture with autouse.
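Here is the hedged sketch of the pytest_collection_modifyitems idea mentioned above. The fixture name tmpdir is an arbitrary choice, and a real version would probably be guarded behind a command-line option:

```python
# conftest.py -- keep only tests that use a given fixture during collection
def pytest_collection_modifyitems(config, items):
    items[:] = [
        item
        for item in items
        if "tmpdir" in getattr(item, "fixturenames", ())
    ]
```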
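And a minimal example of pytest.approx; the numbers are the classic floating-point pitfall:

```python
import pytest


def test_sum():
    # 0.1 + 0.2 is 0.30000000000000004 in binary floating point,
    # so a plain == comparison against 0.3 would fail
    assert 0.1 + 0.2 == pytest.approx(0.3)
```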
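A small sketch of the unified fixture decorator; the dictionary stands in for a real resource such as a database connection:

```python
import pytest


@pytest.fixture
def resource():
    data = {"ready": True}  # setup code runs before the test
    yield data              # pytest 3.0 sees the yield and treats this
                            # as a yield fixture automatically
    data.clear()            # teardown code runs after the test


def test_resource(resource):
    assert resource["ready"]
```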
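Finally, a hedged sketch of the NumPy-style doctest_namespace setup described above, assuming NumPy is installed; run it with pytest --doctest-modules:

```python
# conftest.py
import numpy
import pytest


@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    # every collected doctest can now refer to numpy as "np"
    doctest_namespace["np"] = numpy
```

```python
# some_module.py -- the doctest works without importing numpy itself
def arange_doc():
    """
    >>> np.arange(3)
    array([0, 1, 2])
    """
```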
Named fixtures. Again, as I said, pytest is based on names. But if you use tools like pylint, they will complain about your pytest tests, and you can see the reason here in the example: it tells us that the positional argument of our test item has the same name as a function defined in the same module. The first fixture that we use is template, and we defined it in the same module, and pylint complains about this for a good reason, because we are overriding something that is in the outer scope. There is a new keyword argument called name, and it allows you to change the fixture name, essentially. So pytest doesn't care about the function's name anymore, but will look at this keyword argument and match based on that (see the sketch below).

And this is, I think, one of my favorite features: a new hook called pytest_make_parametrize_id. This is an example of a test, and I'll try not to skip over it too fast. We define a list of class instances; then we have a fixture which is parametrized over them; then there is a test called test_become_a_programmer, which receives the person fixture, parametrized via the fixture on top; and there is another test called test_is_open_source, which just uses the same fixture. If you run this, you will see in verbose mode that pytest just combines the fixture name with a numbered suffix, and that is how you identify which test is actually being run. This is nice, but it's not really awesome, I would say.

So what you can do is provide a keyword argument called ids. You can use a callable that will be applied to the parameters to get a string representation, or you can do it explicitly and pass in a string yourself for each of the individual parameters. If you run that, you can see it's now a bit more like what we want; we can understand what's actually happening. And the new hook allows you to move this logic into your conftest.py or your plugins or wherever you want. If you have your own classes and you don't want to repeat this logic in every single parametrize marker, you can use this hook: it receives a config object and a value, where the value is the parameter, and then you can do whatever you want. In this case, I decided to use emoji, because emoji are cool. So if I run my tests again, this is how it looks. As a disclaimer, I don't think this works in the regular Windows terminal, but in iTerm it does. Sketches of the ids keyword and the hook follow below.
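A minimal sketch of the name keyword; the fixture value is made up:

```python
import pytest


@pytest.fixture(name="template")
def template_fixture():
    # requested as "template", but the function has a different name,
    # so it no longer shadows the test argument and pylint stays quiet
    return "cookiecutter-pytest-plugin"


def test_template(template):
    assert template.startswith("cookiecutter")
```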
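A hedged sketch of the ids keyword; the Person class and the values are stand-ins for the example from the slides:

```python
import pytest


class Person:
    def __init__(self, name):
        self.name = name


# a list of explicit strings; alternatively, ids can be a callable
# that is applied to each parameter to produce the string
@pytest.mark.parametrize(
    "person",
    [Person("audreyr"), Person("hackebrot")],
    ids=["audreyr", "hackebrot"],
)
def test_become_a_programmer(person):
    assert person.name
```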
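And a sketch of moving that logic into a conftest.py via the new hook; again, the Person class, its module, and the id format are assumptions:

```python
# conftest.py
from mymodule import Person  # hypothetical module defining Person


def pytest_make_parametrize_id(config, val):
    # called for every parameter value; returning None falls back
    # to pytest's default id generation
    if isinstance(val, Person):
        return "person-{}".format(val.name)
    return None
```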
There is a new feature that I've actually implemented. It's called --fixtures-per-test, and this is where I'll be doing a live demo, and I really hope it works. So if you run the tests... oops. I'm using Cookiecutter for this demo. So it's doing its thing, everything is cool. Now, there's a command-line flag called --fixtures. What it does is analyze the code base, that is the tests, without actually running them, and it shows me where a fixture is defined: the test module and the name of the fixture. In case there is no docstring, it tells me that I should definitely add one; if there is one, it will print it here. This is handy, but if you use the same name for a fixture over and over, and you define it once in a plugin, once in a conftest.py, maybe create it dynamically in a parametrize marker or in the test module itself, this gets quite confusing. So there is a new flag called --fixtures-per-test.

Now it's collecting all the test items, and I should probably run that in verbose mode so you can see more information. So this is a test function; it's defined in the version control tests, in test_prompt_delete, at this line. It uses a fixture called mocker, and mocker is defined in pytest-mock at this line, and there's the docstring for it. You can see it's pretty much the same for all of them, and it doesn't matter whether the fixture is defined in your own package or in a plugin. I find this quite nice: if you then have a fixture in your code base somewhere, like template here, you can just jump to the definition and make your changes, and you know what you're actually doing. So that's that.

I also want to address some of the backwards-incompatible changes. Again, as I said, backwards compatibility is quite important for our project, but there are some changes that we thought we should probably make, because with a major version, now is the best time to do it. One of them concerns a command-line option called --assert, which allows you to choose a strategy for how assertion errors are handled. It accepts plain, which means pytest pretty much doesn't do anything extra; there is rewrite, which does its extra magic using the AST to put more information, the introspection, into your assertion error message; and there used to be reinterp. That was the most magic part of the code base: it runs in plain mode, and if it encounters a failure, it reruns the same thing again, but with introspection. This has some very, very ugly side effects: since it runs the same assertion statement again, if you do some heavy logic in it, that logic will be executed twice, which is not what you want. So we removed that, and it will be just plain and rewrite (see the sketch below). There are some other command-line options that we've removed, like --no-assert and --nomagic; what they did is pretty much achievable by just using the --assert option.

Then a new change: the pytest warning summary will now be displayed by default. So if you upgrade to the new pytest version, you may see the warning summary at the end of your test run. We do that because we want to communicate with you when we actually deprecate things. So if we happen to deprecate some other logic in the near future, we want you to see this and react to it, and if you encounter any problems, you should let us know and not just ignore the warnings. But if you decide this is just too annoying and you don't really want to bother, you can use the --disable-pytest-warnings flag; the status line will still show that you have warnings, but pytest won't print the warning summary.

Deprecations. There used to be this thing called pytest_funcarg__, a prefix that allowed you to define fixtures. We deprecated this, and it will be removed in the next major release. And there's a method called getfuncargvalue that you can use on the special request fixture. We renamed it to getfixturevalue, because that tells you more about what it actually does: it takes a fixture by name and gives you the value it returns (see the sketch below). The old name is still in the code base, so it keeps working for now.

Improvements. It used to be the case that you could only have assertion rewriting in your tests, but now you can also have it in plugins and in conftest.py files. So you can have an assert statement in there and we will apply our magic to it, so you get more verbose assertion errors. That allowed us to remove a whole bunch from the code base, the assertion reinterpretation. Floris did that, and as you can see, a lot of lines of code have been removed, and it's amazing because it was a very nasty part of the code base.
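As a quick, hedged reference for the --assert option described above:

```
pytest --assert=rewrite tests/   # the default: AST rewriting adds introspection
pytest --assert=plain tests/     # plain assert statements, no extra magic
# --assert=reinterp is removed in pytest 3.0
```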
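A minimal sketch of the renamed method; the greeting fixture is made up:

```python
import pytest


@pytest.fixture
def greeting():
    return "hello"


def test_dynamic_lookup(request):
    # pytest 3.0: request.getfuncargvalue() is renamed to getfixturevalue()
    assert request.getfixturevalue("greeting") == "hello"
```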
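And a hedged sketch of what assertion rewriting in a conftest.py buys you; the helper function and its argument are hypothetical:

```python
# conftest.py -- in pytest 3.0 the assert below is rewritten too, so a
# failure shows the introspected values instead of a bare AssertionError
def check_response(response):
    assert response["status"] == "ok"
```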
Talking about documentation. I went to the Write the Docs conference last year, and I can highly recommend doing that, because I've noticed that every time I go to a conference, when people are new to a framework, they don't look at the code base to learn about things; they look at the documentation. But we as developers sometimes just don't pay enough attention to it, I would say, and I think that applies to pytest as well. The documentation is a bit cluttered: the information is present, but it's hard to find. So we sat down at the pytest sprint that happened last June and came up with a new idea: treating it more like an agile project and talking about personas.

So, if you're new to pytest, and don't be scared, there will be a wall of text on the next slide: that's the beginner user. We want to focus on clear instructions for how you install pytest, how you write a basic test, how you run it; it's more like a tutorial style of documentation. We want you to understand the core concepts and be familiar with assert statements and fixtures and so on. Then we want to have a section for the advanced user, which would be more like a lengthy blog post that goes really in depth and explains more advanced topics, but it could also be cookbook-style recipes, pretty similar to what we do in the pytest-tricks repository. So you get nice things that you can do if you want to look into the advanced features.

Then there are plugin authors. What they want to know is: how do you actually submit a package, a plugin, to PyPI? There is this template, and we mention it in the documentation, but not everyone may know about it, so we want to cover that. Also, we have the pytest-dev organization, which is a place where we share responsibility over the projects that belong to this namespace, and we invite you to submit your plugin to this organization if you are willing to also support other projects. It's really this shared-responsibility idea. There is a guide for it, and we want to make it more obvious how you can actually do that. And you can also be a contributor, and everyone is really welcome to help out. We accept pull requests for typos and for all sorts of things, and every contribution is greatly appreciated, but sometimes it's not really easy to figure out how to get started, so there should be a dedicated section for contributors as well.

That's what came out of our pytest sprint regarding the documentation. We figured out who our personas are, then we looked at what the documentation has right now, put each item on a post-it, and tried to sort it into the section for a persona, where it belongs and how we should handle it. At the top, you see things that will probably go into the sidebar, things that are applicable to everyone. There is also a new, better way to get to the documentation, and it's simply docs.pytest.org.

And this is just something that I wanted to say because I have the chance to speak here today. Funding open source, or open source work in general, is not very easy sometimes. It requires a lot of time, and we heard yesterday in the core developers panel that even they are not funded.
As I said, we did this pytest sprint in June, and it was funded through an Indiegogo campaign. My employer actually contributed, donated 500 US dollars to it, but they didn't do that just because they wanted to; it was because I, as a developer, understand how important pytest is for our community. So I approached my line manager, and he said this was a good idea and that I should definitely talk to our CTO and vice president of engineering, and they were really happy to talk about it. So I encourage you, if you understand how important open source work is, to approach your managers, because they don't really know how this works. They have the money, but they don't really know how to get it to the projects.

On community: I actually started last year, here at EuroPython, engaging for the first time with the pytest core contributors. It was really nice, and this is how it looked last month. There are a lot of people, and people came from all over the place: from Brazil, from Australia, from Germany, the UK, from everywhere, even from China. Someone was in Europe for the very first time just to spend a week with the project. This is just amazing. So funding open source is important. It's not just about writing code on the issue tracker; people need to get together, brainstorm, and come up with new features like the ones presented earlier.

We also have a blog now, at blog.pytest.org, where you will find updates from the core team itself. And there's also something called Planet pytest, which is a collection of RSS feeds from blogs that we know talk a lot about pytest. The slides for the presentation will be on Speaker Deck; that's my username over there. They're not uploaded yet, but I'll definitely do so in the course of the day. And I really want to thank you for your attention and for the chance to speak here today.

Thank you very much, Raphael. We have time for questions.

Hey, thanks for the talk. Is there a place in the docs where there are dos and don'ts? There are some pages called Examples and Usage/Invocations, and they kind of cover best practices, but it's not really dos and don'ts. We've tried to talk about this on pytest-tricks. A lot of projects do something like having a conftest.py but then importing from other modules to get fixtures available in your code base, and this actually has very, very unwanted side effects, because it duplicates fixtures, the scoping is messed up, and the whole caching doesn't work as expected. There is another way: it's called, I think, pytest_plugins, and it accepts a list of strings and imports from there, but respects caching and scoping (a hedged sketch follows below). But this is really hard to find. So we know that, and we want to work on it; it's just a matter of time, I think, until we get there. But if there's something you're missing from the documentation, please just talk to us. We really appreciate improvements to the documentation.
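A minimal sketch of the pytest_plugins mechanism mentioned in that answer; the module paths are hypothetical:

```python
# conftest.py -- instead of importing fixtures from other modules directly,
# list the modules here and pytest loads them with correct scoping and caching
pytest_plugins = [
    "tests.fixtures.database",
    "tests.fixtures.webserver",
]
```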
Hi there. First I wanted to thank you for how the community is going, because I gave a talk on pytest a couple of days ago, and every single feature you are now putting into pytest 3 would have improved my talk, so obviously I'm not using something right. And two questions: first, when will pytest 3 be ready? And second, did you leave something behind that you wanted to include and that will have to wait until 4?

So, we initially planned to release the new version just one week before EuroPython, but as we discovered, there are a lot of things that are not in the major release just yet; they haven't fully gone through the code review process. So we decided it's best not to release an unfinished major version just to make a big announcement here at the conference. We hope to release it in the next weeks. I think it's just a bunch of pull requests that need to be merged, then some testing, and then it should be good to go. And sorry, the second question was? Whether we left something behind. We did have a strategy: we sat down as a whole group and talked about the features that we want to have. Something that we left behind? I don't really think so. As for the list of features, and this is just an excerpt: if you go to the GitHub page and choose the branch, we use something called a features branch, and there is a file called CHANGELOG.rst, and it doesn't even fit on one screen. You really need to scroll; it's crazy, a lot of features, a lot of improvements, and I skipped most of them just because I wanted to keep the timing right. But there may be something that I just forgot or that is still sitting on the issue tracker.

Okay, we have more time for questions. Regarding the py package, the py namespace that originally contained py.test: is there anything in that namespace that's still maintained and that you'd recommend people look at, or is it sort of a bygone era? To be honest, I don't know. I can definitely forward that question and answer it on Twitter or something, but that's just beyond my knowledge right now.

Yeah, cool. Has there been work on performance as well? I'm writing a plugin that does random stuff with running tests, and it can easily produce something like 100,000 tests; the tests themselves are quite fast, but building up what to run was quite slow with pytest 2. Yeah, so there is this plugin called pytest-xdist, which allows you to distribute your tests to different workers; I don't know if it's threads, I don't really know. I think Holger Krekel started that plugin, and there has been work done on it at the pytest sprint, because the problem is that it distributes the tests but doesn't take parametrization into account. So it may send one test with 100 parameters to one worker and another test with just one parameter to another worker, and we effectively wait for the slow one. I think there has been work done during the sprint that takes parametrization into account to make the distribution more even, to also split the parametrized tests. Other than that, there have been people working on performance, but I don't really know what the outcome was and whether anything has been merged yet.

We still have a few minutes left for questions. Any more? So, there's one. With the fixtures-per-test mechanism, you showed it for the entire test suite. Can you use the normal selection mechanism, like -k and specific test files, to just look at the fixtures for one specific test? Yeah, that's what I did here. I have tests that are marked with europython, so I just run those, and you can see this is just one test, and this is another one. So out of 200 tests, I only selected two of them. It works with the usual collection mechanism, and it allows you to use markers or to pass in a file, so that works as well.

Okay, last chance before beer. Questions? No? Then thank you very much. Have a great day.