Hey! So when I heard that the EuroPython organizers were looking for some more advanced talks, I thought this might be a good opportunity for me to speak a bit about developing plugins and to explain how you can get started developing plugins for pytest. That's what this talk is about. I hope that by the end of it you will feel you know enough to get started. I realize the pytest documentation is tricky to navigate and doesn't explain much about how to actually implement plugins, so what I'm trying to do with this talk is help with that. I would encourage you, if you have any questions during the talk, to let me know, and I'll try to explain whatever I'm showing on screen. Because I want to show a lot of code, I won't have any code examples on the slides themselves; I'll switch to my editor instead. If I'm going too fast for any reason, please let me know and I'll try to be aware of it and take the time for everyone to catch up. Yes, the code is already on GitHub, and I created git commits for every individual step, so you should be able to follow along later on.

I would like to start by giving a big hand to all of the amazing organizers and volunteers. And then, before we talk about me, I would like to take a selfie with every one of you, because I need some proof for my manager that we had a full house today. Cool, thanks.

So, who am I? Mostly I work on the cookiecutter projects, and I'm a member of the pytest team. If you see someone tweeting from the pytest Twitter handle, maybe it's Brianna, maybe it's me. I try to post some pytest tricks every once in a while on my personal Twitter, so if you're interested, you can follow me; I'll have my handle on the next slide. My day job is senior test engineer at Mozilla, where I work on Firefox telemetry, but more about that later. You can find the contents of this talk on my blog, raphael.codes. I'm hackebrot on GitHub and Twitter; please don't ask me where this handle comes from. As I mentioned, I work for Mozilla, so we are the people behind the Firefox browser.

I want to start with an important update on the pytest project, because we recently released the first version of the 5.0 series, and that's the first release that no longer supports Python 2. If you're using a modern version of pip and setuptools, you'll be just fine, because they are intelligent tools and will get you the version that's compatible with your Python installation. We will continue to maintain Python 2 support in our 4.6 series, so there will be maintenance releases going out, but 5.0 only supports Python 3.5 and newer. If you want to find out more about this specific topic, you can take a picture of this link. We have a maintenance schedule; I think we're expecting to release bugfix releases until next year.
More on this in a moment. So, getting to this talk: plugin hacks to make you more productive. The reason I chose this title is that a lot of the time, especially as a test engineer, there's always some convincing work I need to do with teams shipping products or product enhancements to get them to also work on tests. From my experience, having worked in this field for quite some time, as a test engineer you can accomplish a lot by making your test suite more intelligent, so that you don't compromise on speed, for instance. If your tests are fast, your developers will run them as part of their development workflow, so they have an incentive to actually maintain the tests as well. I think pytest is a framework where you can do a lot of things to help with that.

This is the blog post that I mentioned before. It's based on the talk that I gave at PyBerlin, which was more or less an introduction to pytest. Today I just want to go over this blog post very briefly to give you some context. Please let me know if you can read it okay. Cool.

For the purpose of this talk, I created a dummy example project called earth. I tried to be funny: I was watching a nature documentary, and they give all of these facts about animals that are not so much funny as genuinely interesting, so I came up with the idea of creating this example project and using it to demonstrate the power that pytest provides. You can check out the repository on GitHub; it's hackebrot/earth. As I mentioned, there are three different branches: master is where we start in the blog post, increased-test-coverage is where we finish in the blog post, and the europython branch is where we will be finishing today, after this talk.

The idea is that a product person comes to us and says: hey, this earth project is super important for delivering value to our users. Unfortunately, the sole maintainer of this project left the company, and we have no idea what it does. It's super important, so please have a look, see if there are any blind spots, anything we need to be worried about, and whether we need to do any bug fixes. When you first check out the project, you will see that there are already a bunch of tests, but judging by their names they all seem somewhat generic. And when you run them, you will see that the code actually uses Python 2 syntax, even though the README specifies Python 3.7.

In the blog post I describe, and I'm sure you're all aware since this is an advanced pytest talk, that you can skip tests with pytest using the marker decorators. When you run the tests again, you will see that there is some test coverage: we are at 53%. There is a lot of testing going on that doesn't actually contribute to the code coverage, because it's really just scaffolding, more or less.
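For reference, a minimal sketch of the skip markers just mentioned, including the module-level variant that comes up next; the test name and reasons are made up for illustration:

```python
import pytest

# Skipping an entire module: a module-level "pytestmark" applies the
# marker to every test in this file.
pytestmark = pytest.mark.skip(reason="module still uses Python 2 syntax")

# Skipping a single test with a decorator:
@pytest.mark.skip(reason="scaffolding only, no real coverage")
def test_placeholder():
    assert True
```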
I go on to explain that you can also skip entire modules. Then we run the tests again, and you will see that there's also an example file; let me show you what it looks like. Okay, so there's this example, and as you can see, it prints a whole bunch of stuff: hello adventurers, then there are a bunch of folks who accept an invite, apparently, then there are folks packing, and they're traveling. I'm disappointed that Hynek is not here, because, yeah, you're always flying, so that's not too surprising. And there is something going on with PyCon US in North America. Fair enough; going back to our tutorial.

What I encourage folks attending this talk to take away is that it's important to have different levels of testing. Maybe you have heard of unit testing, maybe you have heard of integration testing. What I've found most important as a test engineer is that the code that users will actually be running is covered by meaningful tests. Sometimes this is referred to as happy path testing. If you have an example in your README, that example should work under all circumstances, because if a user checks out your project, copy-pastes your example code into a local file, runs that example, and it raises an exception, they will probably just go away and find an alternative. So I encourage people to write a test for this. What we do here is copy-paste the script into a pytest test, and we don't even have any assertions in it, because we just want to see whether it raises an exception. When we run it, we see that it's actually passing. And when we go on to check whether the code coverage is any different now, we see that coverage is missing for three functions.

Since we are using pytest, we can use fixtures to run the same test over different test scenarios. We started with the small group of adventurers; now we add a large group. This one is more comprehensive, and it includes the functions that we saw were missed in the coverage report.

When we run this again... oh yeah, I included a bug here, because it's worthwhile for you to know about this. Since pytest matches fixture functions with the names of positional arguments, we've had some issues on the pytest project where people had tests silently passing or failing because they forgot to update the names of fixtures inside the test bodies. Since we're using Python, everything is an object, and if you define the fixture in the same module as your tests, Python will happily resolve that name to the function object; if you check it against None, or treat it as a truthy or falsy value, your test might pass because you're testing against the function definition rather than the fixture value. What I explain in the tutorial is that there is a keyword argument you can pass to the fixture decorator, and this will prevent you from running into this issue. A lot of test suites out there don't do this and might have silently passing tests even though there are bugs.
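A minimal sketch of that keyword argument, the `name` parameter of `pytest.fixture`; the fixture and sample data are hypothetical:

```python
import pytest

# Because of name=, the module-level function is called "group_fixture",
# so a test body that accidentally references "group" without declaring
# it as an argument raises a NameError instead of silently evaluating a
# truthy function object.
@pytest.fixture(name="group")
def group_fixture():
    return ["ann", "bob"]  # made-up sample data

def test_group(group):
    assert group  # "group" here is the fixture value, not a function
```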
What we see when we run the tests again is that something takes a long time. So... Po is eating. As I mentioned, this is based on a nature documentary, and I learned that pandas are very particular about their diet. They only eat bamboo, and since the nutritional value of bamboo is almost nothing, they eat for up to 14 hours a day. So if you are inviting pandas to your conference, you should be aware that there might be delays.

There was also an issue here: it turns out that Dave didn't make it to our conference, and we might find out why in just a second. I explain here that we can add markers with arbitrary names, document them in your pytest config file, and use them later to write a plugin. So we write another test, but this time around we don't have any pandas in our group, so that the test gets the maximum percentage of coverage without compromising on the speed of the test run. We have three different scenarios now. This is how you can run a more complex expression on markers: you can combine different markers, and pytest is intelligent enough to filter the tests accordingly.

Then there was this issue where Dave couldn't attend, the reason being that there is some randomness in the earth project. The way you can detect this with pytest is by installing the pytest-repeat plugin, running your tests a number of times, and seeing whether they maybe pass when you run them more than once. You can mark those tests as xfail and then check the code coverage again. We're at 98%, which is much better than the 53% we had before.

That brings us to the conclusion of the blog post, and that's where we're starting today. We have some test coverage, we already have a test suite based on pytest, and we have three different scenarios: a comprehensive group, for which the test is super slow; a fast one, which covers the example from the README; and a middle ground, a fast test which covers most of the functions that we have.

Cool. So if we have a look into the tests now, test_earth, that's where we are right now. We have three different fixtures, as I mentioned, and then there are three different test implementations. If you have experience with pytest, this maybe seems a bit redundant: why do I have the test implementation copied multiple times? So what we do now is use something called pytest.mark.parametrize to combine all of these tests into one. I'll give you a second to read this.

It's the same test scenarios: the small group, the large group, the no-pandas group. What you can also see in this example is that we can apply pytest markers to individual parameters of the parametrize decorator, so we have the same logic as before. And then there's the indirect keyword, which is an important one and super useful. What it does is pass this string value here to the fixture called group; rather than just overwriting the fixture value, we inject the string into the fixture, and we have access to it through pytest's built-in request fixture. If we read request.param, that will be the string, and in this example we just match it against the actual fixture value.

I will say, though, this is not the smartest solution, because here we depend on all of the fixtures. If you have some slow test setup, maybe you're connecting to a database or doing something like that, you don't want to do this for every single test, because the fixtures will run for every test even though you might not need them at all. Cool.
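A minimal sketch of what that parametrized version can look like, with a marker on an individual parameter and indirect fixture injection; the scenario names and data are made up, and the "slow" marker is assumed to be registered in the config file as described above:

```python
import pytest

@pytest.fixture
def group(request):
    # With indirect=True, the parameter arrives on request.param and we
    # map it to the actual test data here.
    groups = {
        "small": ["ann"],
        "large": ["ann", "bob", "dave", "po"],
        "no-pandas": ["ann", "bob", "dave"],
    }
    return groups[request.param]

@pytest.mark.parametrize(
    "group",
    [
        "small",
        pytest.param("large", marks=pytest.mark.slow),
        "no-pandas",
    ],
    indirect=True,
)
def test_travel(group):
    assert group
```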
So what we do next in this example is start our first pytest plugin. When we don't want to run all of the slow tests, we can implement a pytest hook which by default deselects all of the tests that are marked as slow. This is how you would do it: we add an extra command line option using the pytest_addoption hook, and then we use the pytest_collection_modifyitems hook, checking for that command line option. We check for the slow marker on each test item; if we find it and the option is not specified, we deselect the test, otherwise we select it. This means that by default, if we don't specify the --slow command line option, we skip all of the slow tests.

Does everyone know what pytest hooks are? Can you maybe raise your hands? Okay, that's fewer people than I had anticipated. So the way pytest works is that you can customize pytest itself by developing plugins, and in the pytest world everything is a plugin, meaning that even pytest itself is built on top of a lot of internal plugins. What these plugins do is implement these hooks. They match based on their names, and they are called at different steps of the test session. For instance, pytest_collection_modifyitems is called when pytest has collected all of the tests; pytest then calls this hook in all of the different plugins and allows them to modify the test selection. You can implement those hooks yourself if you're writing a plugin, and that's what we will be doing in this talk.

What's also quite important is that there are a bunch of existing plugins which already do a whole lot of things. In this example, we were also looking at using external data for our tests. There is a plugin called pytest-variables, and it allows you to keep some data somewhere else. Here we have a JSON file; this could be something that you download from an API to retrieve some information. We can access it from inside pytest with the pytest-variables plugin and construct our test fixtures based on this information. So rather than testing only against the PyCon US event, we simply set up a fixture which iterates over all of these values and runs all of our tests against all of these different events. The TL;DR so far: you can do interesting things with pytest if you use fixtures and parametrize them rather than copy-pasting; that's the pytest way.
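Going back to the --slow plugin described at the start of this section, here is a minimal conftest.py sketch of the approach; the option name follows the talk, the helper structure is my own:

```python
# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--slow", action="store_true", default=False,
        help="include tests marked as slow",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--slow"):
        return  # --slow given: keep everything
    selected, deselected = [], []
    for item in items:
        if item.get_closest_marker("slow"):
            deselected.append(item)
        else:
            selected.append(item)
    if deselected:
        # Report the deselection so pytest's summary line stays accurate.
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```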
Sorry, let's maybe go back here. The first plugin that I wanted to show you today is a plugin that caches test durations. What I mean by this is: right now we explicitly mark a slow test as being slow, but it would be nice if we could somehow infer this information from a previous test run. If we run our tests once and measure how long each test takes, maybe we can keep track of that information and use it for the next test run. I want to show you how you would approach this when you're getting started, and sadly this will typically require you to know a bit about pytest itself.

For the example of durations, what you would typically do is check the pytest help, and you will see that there is an option called --durations, which gives you a list of your slowest tests. With this in mind, we can search for its implementation in the pytest project itself. So this is from the pytest codebase, and you will find that the durations option sets a value which is called durations. If you then search for where that option is retrieved... as you can tell, there are many ways to retrieve options; it's a bit messy. Anyway, what you will find is a hook implementation which uses the option, and if you scroll down, you will see that it iterates over something called reports. When you run your tests, every test item, every single test, has a report created for it, and this report carries extra information about your test. If you search for the reports, you will see that every report has a duration, so we can retrieve this from a hook that has access to all of the reports and use that information.

But how do we actually keep track of this information across test runs? There is another built-in feature in pytest, --lf, which lets you rerun the tests that failed during the last test run. So there seems to be some capability inside pytest for keeping track of this kind of information. If we search for lf, we find the cache provider plugin, and looking at it, we see that there is an NF plugin, for new tests, but there is also this LF plugin.
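From a plugin's perspective, that cache API is exposed on config.cache; a tiny sketch of reading and writing it across runs (the key is made up, values persist in the .pytest_cache directory):

```python
def pytest_sessionstart(session):
    # Read a value stored by the previous run (None on the first run).
    value = session.config.cache.get("example/last-run", None)

def pytest_sessionfinish(session):
    # Store an arbitrary JSON-serializable value for the next run.
    session.config.cache.set("example/last-run", "done")
```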
That's the last-failed plugin, and inside it you will see that we have access to some sort of caching mechanism built into pytest. So with this in mind, if we combine those two pieces, we can take the durations from the reports and write them to the pytest cache, so that the next time we run the tests, we have access to this information and can select tests based on how long they took last time.

Going back to our earth project: this is what we end up with if we implement a plugin that uses those two capabilities together, the caching mechanism and the capability to deselect tests. The important piece here is that in the pytest_collection_modifyitems hook implementation, when we iterate over the individual tests, we get access to the durations from the previous run using the cache, and if the duration for a test is longer than some value that we hard-coded in this example, we automatically add a turtle marker to the test item.

So then if we run the tests... let's see. We can run pytest and say that we don't want to run tests with the turtle marker, and you will see that they're quite fast. Let's include all of the tests, and you can see that the run is slow if we don't deselect them. Just to reiterate: if we run the test suite with --slow, it includes all of the tests, but with this turtle plugin we can deselect the slow ones automatically based on the previous test run, and the run is fast. You can check the data inside the pytest cache using --cache-show. Our plugin keeps track of the node ID, which identifies the individual test item, and stores the duration for the individual phases: the setup, which is everything that happens before you get into the test, meaning code that's executed in fixtures, for instance; then the actual test execution; and then the teardown. This caching mechanism is one of the features inside pytest that will make your life much easier.
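Here is a rough sketch of such a turtle plugin; the cache key and threshold are illustrative, not the exact values from the talk, and the "turtle" marker is assumed to be registered in the config file:

```python
import pytest

CACHE_KEY = "turtle/durations"
THRESHOLD = 5.0  # seconds; a made-up cutoff

def pytest_collection_modifyitems(config, items):
    cached = config.cache.get(CACHE_KEY, {})
    for item in items:
        # Sum the setup, call and teardown phases from the previous run.
        if sum(cached.get(item.nodeid, {}).values()) > THRESHOLD:
            item.add_marker(pytest.mark.turtle)

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    durations = {}
    for reports in terminalreporter.stats.values():
        for report in reports:
            # Each test produces one report per phase (setup/call/teardown).
            if hasattr(report, "duration"):
                durations.setdefault(report.nodeid, {})[report.when] = report.duration
    config.cache.set(CACHE_KEY, durations)
```

With this in place, a run like `pytest -m "not turtle"` skips whatever was slow last time, and `pytest --cache-show` displays the stored per-phase durations.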
In the next step: we already saw that we had this variables fixture that we can use, but what happens if we actually add a new item here? Imagine you have a large code base, and you're changing this kind of external data and updating your tests. Wouldn't it be cool if we had a plugin that allows you to run only the tests that use a specific fixture? Rather than running all of the tests all of the time, if we're just changing one fixture, why not select only those tests?

There is a command line flag which gives you information about which tests use which fixtures, and it also prints out the docstrings and the location of each fixture definition. That's super useful if you use the same fixture name multiple times throughout your test suite; it's a way to debug where your test gets its fixture from. Taking this to the next level, if we look for a way to run only the tests that use a specific fixture, we have this owl plugin, which I'll sketch below. Again, we only want it to act if we actually pass a command line option, which we implement in the pytest_addoption hook. It allows us to pass the name of a fixture on the command line and then deselects tests based on that. We use the same hook as before, and it's pretty much the same code, but instead of checking for markers on the test items, we check whether the name provided on the command line is in the list of names of fixtures used by the test.

Do you have any questions so far?

There is also a hook implementation that allows you to create custom reports. By default you see the command line output, but if you're using a CI system like Jenkins, you might be required to pass this information along or generate the report in a specific format; it could be some XML. There is a hook which allows you to collect all of the individual reports and then create a custom report based on them. You could, in theory, write a plugin that generates a report in JSON, with the information about how many tests passed, failed, or errored. And if you want to think a bit outside the box: you could write a plugin so that if your tests fail for the very first time, you go as far as creating a GitHub issue, taking the test report generated by a different plugin and adding it to the body of that issue, for instance.

Let's demonstrate that. If we run the tests, they're still fast, but there's no failure to report, so let's add a test failure as well. Let's go to the travel function, for instance; we just want to create a failure for the purpose of the demonstration. We run the tests again and specify the elephant command line option that we implemented. We see that there was an error and there was a failure, and in the terminal summary we see that, for one, it generated this report in Markdown format for us, but it also created a GitHub issue. If you open it, you can see that our custom plugin created an issue against the earth project, and it includes the custom report that we just generated with the markdown plugin.

This might seem like a terrible idea, and it probably is; you don't want to perform any HTTP requests in your test run. But the reason I'm showing this is that it's super flexible, and it brings us back to what I wanted to say at the very beginning: you can write all sorts of customizations which help your teams be more productive when running the tests. This is just one example.
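Here is the promised sketch of the owl plugin; the option name is my own invention, but item.fixturenames is the real attribute that lists the fixtures a test uses:

```python
def pytest_addoption(parser):
    parser.addoption(
        "--fixture-name", action="store", default=None,
        help="only run tests that use the given fixture",
    )

def pytest_collection_modifyitems(config, items):
    name = config.getoption("--fixture-name")
    if not name:
        return
    selected = [item for item in items if name in getattr(item, "fixturenames", [])]
    deselected = [item for item in items if item not in selected]
    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = selected
```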
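And a sketch of the custom-report idea along the lines of the elephant plugin: summarize the results and write a Markdown file. Posting that file to a GitHub issue would be an HTTP call on top of this, which, as noted above, is a questionable thing to do in a test run; the outcome keys and file name here are assumptions:

```python
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    counts = {
        outcome: len(terminalreporter.stats.get(outcome, []))
        for outcome in ("passed", "failed", "error")
    }
    lines = ["# Test report", ""]
    lines.extend(f"- {outcome}: {count}" for outcome, count in counts.items())
    with open("report.md", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
```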
What you could do instead, for instance: there are other plugins which send notifications to Slack or your IRC channel or whatever, when your tests fail for the first time.

We will finish with this. Maybe the question now is: what do you do once you've implemented all of these plugins in your individual test suite? Wouldn't it be nice to create packages from them, so that other people can also install and use your plugins, rather than you just implementing them locally in your own projects? This brings us to the cookiecutter project, which makes distributing code much easier, because it scaffolds an entire Python package distribution and generates all of the necessary files and code for you to implement pytest plugins.

Cookiecutter is a command line utility; this is where you can find it. It recently moved to a GitHub organization, so it used to be audreyr/cookiecutter; the organization is new. This is how you install it with pip, and then there are template projects, mostly on GitHub. Under the pytest-dev organization on GitHub you will find the plugin template, and it's probably the fastest way for you to write your own pytest plugins, because it generates the setuptools entry points and all of that stuff for you, so you can focus on implementing your plugin rather than building the entire scaffolding around it.

What cookiecutter does is ask you a bunch of questions, depending on what the template authors require you to provide. Typically that's a name, your GitHub username, a name for the project, and maybe it even prompts for the kind of license you want to use. Then it generates the project directory for you, with all of the files. As I mentioned, there are different templates out there, and I want to point you to the pytest-dev plugin template, which is, as I said, the best way to get started writing plugins.

Let's see how we can use that to take our own plugins and create distributions from them. We create a plugins directory and then call cookiecutter; here I'm using a different template, one that I use for my own personal plugins, so that's just me. The important piece here is the plugin name, because that's how your plugin will be referred to; it's also going to be the name of your plugin on PyPI. So if we want to publish, say, the turtle plugin, we set the plugin name to turtle, the description to "a pytest plugin that automatically marks slow tests", and select a license. If we check out the result, this generated a pytest plugin for us. So rather than keeping all of our pytest customizations in just our local test suite, we can now migrate them over and publish the plugin code. Once you've generated the code, you copy your stuff over, delete it from your test suite, and add the plugin to your requirements.
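For reference, the whole cookiecutter step boils down to something like this; the gh: shorthand expands to the GitHub repository, and the prompts you get depend on the template:

```console
$ pip install cookiecutter
$ cookiecutter gh:pytest-dev/cookiecutter-pytest-plugin
```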
Cool. I think that's the conclusion of the talk so far. I hope this helped you learn more about writing pytest plugins. The difficulty is mostly knowing which hook is called in which phase of the test run, and what I wanted to demonstrate is that it typically comes down to reading the code of pytest itself. Yeah, it's horrible, I know. Back in 2016 we started changing the documentation of the entire project, which is a Herculean effort, so we never actually got to finishing it. So my call to you: if you are interested in writing pytest plugins, please come talk to me, and maybe we can find more people who would be willing to help us change the documentation in a way that better facilitates pytest plugin development.

One more thing that I wanted to mention: we recently also started accepting financial support for the pytest project. We're on Open Collective and Tidelift now. So if you use pytest, maybe even for your job, I would encourage you to check out this page in our documentation, and please talk to your managers. This will allow us to not only print more stickers, but also send people from other regions to our development sprints. We did one in 2016, and we had people coming over from Australia and Brazil, all the way to Germany. So if you use pytest, please support the project so we can keep on improving it. Thank you so much.

[Moderator] We have some more time, so we can take at least three questions, I guess. Any questions, user feedback, feature requests? Anyone?

[Audience] Hi, can you name the best plugin you've written, or the most helpful one, the feature that you find the most useful?

[Raphael] So you're asking about plugins that I personally find super useful? Okay. Typically I use different plugins for different scenarios. For instance, when I'm investigating intermittent failures, tests that don't fail every time but only sometimes, I use the pytest-repeat plugin; that's the one for running the tests multiple times over and over again. If I'm checking the general status of my code coverage, there's the pytest-cov plugin. And I personally like the markdown plugin, but I'm also the author, so I'm biased. The reason is that sometimes I just want to copy my results from a test run into a GitHub issue or a gist or something, and I found it super tedious to copy-paste the command line output and strip the directories from it. So that's one of the other plugins I find useful.

[Audience] Okay, thank you very much. I had a question on parametrized tests. It's quite often useful, when you're running parametrized tests, not to redo all of the fixture setup. Is there any option for a scope, so that the setup doesn't run for every single test but only once within a certain session, for example? Is there any chance of that feature, or an alternative way to scope a given parametrized set of tests?

[Raphael] I think the parametrize decorator doesn't allow for this, because it's evaluated at a different time than the fixtures, so it doesn't tie into the fixture scope mechanism. The related question that always comes up is how you can combine fixtures and parametrization. There has been some work on seeing how to combine them, and the conclusion was that the internals of pytest don't really allow for it at this point, but there's work going on that will hopefully make it possible. I guess at that point we will also be able to specify the scope for the parametrized parameters.

[Audience] Okay, thank you, very helpful. Hi, and thank you for your talk. I have a question regarding profiling.
Do you recommend any plugin that integrates profiling results and maybe triggers failures if complexity is too high, or something like that?

[Raphael] I think Ionel wrote a plugin for this; I would have to look it up, I don't remember the name off the top of my head. What's cool about the new PyPI is that you can also search, and there is a pytest classifier. If you search for the pytest framework classifier on PyPI, you get a list of all of the different pytest plugins, and if you refine the search with something like "profile", you will probably find something. But I'll do some research, and maybe we can talk later.

[Audience] So just google "pytest classifiers"?

[Raphael] Not Google, but the search on PyPI. That's probably your best chance.

[Audience] Okay, thank you. Hi. A question: usually when you need to improve your coverage, you also need to deal with exceptions. Is there a plugin that triggers exceptions in pytest, or how would you deal with that in pytest?

[Raphael] I'm not sure I follow; triggers exceptions?

[Audience] Yes, so usually you want to test an exception as well. How would you do that?

[Raphael] Yeah, so there is a context manager for this in pytest. It looks something like this: with pytest.raises(ValueError), and then you get access to the exception info object. You call some code that triggers the exception, and afterwards you can get access to, I think it's the message. So you can use this context manager to catch the exception and then assert that it's actually the right one and contains the information that you care about.

[Audience] Perfect, thank you. At the moment you support unittest, but are you planning better support for subtests?

[Raphael] For subtests? Yeah, for unittest, I would have to check the changelog, but I think there was some change going in at some point to better allow for subtests. At the moment they are not displayed as separate test cases. I remember the conversation on the mailing list, so I'm not sure, but yeah.

[Audience] Hi, I wanted to ask: what would be, for you, a good use case for the xfail decorator? Meaning, if you have a flaky test, I think the design philosophy should be to fix that flaky test rather than encourage it to pass. The pytest team built it with some philosophy in mind, so what's the use case that would best fit this feature?

[Raphael] I can't speak for the pytest team, because I wasn't around when the xfail feature was integrated. I typically use it for intermittent failures, flaky tests and that kind of stuff. I think that's a very good question, but I don't think there is a general answer to it. I certainly use xfail for that. And you know, when the test passes even though you marked it as xfail, it shows up as XPASS: you expected the test to fail, but it passed. There is a mode in pytest which creates errors from this, so if you want xfail to mean the test really must fail, you can specify this extra option, and it will generate failures for you.

[Audience] Yeah, thanks.

[Raphael] Sure.
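A minimal sketch of both features from those two answers, pytest.raises and strict xfail; the function under test and reasons are made up:

```python
import pytest

def trigger():
    raise ValueError("invalid event name")  # hypothetical code under test

def test_raises():
    with pytest.raises(ValueError) as excinfo:
        trigger()
    # The exception instance is available on excinfo.value.
    assert "invalid event name" in str(excinfo.value)

# Strict xfail: an unexpected pass (XPASS) is reported as a failure.
@pytest.mark.xfail(strict=True, reason="flaky due to randomness")
def test_flaky():
    ...
```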
[Audience] Hello. Hey, first of all, thanks for your work on pytest; it's amazing. Even the data scientists on our team use it. One question about assertions, to you and the whole audience: where do you prefer to put the expected value, on the left or on the right? Do you use Yoda assertions, or what's the best way, given that the output differs?

[Raphael] I think for this case, where you have your cursor, it's better to have the expected value on the left.

[Audience] Yeah, but you know, sometimes... let's do a raise of hands: everyone, left-hand side for the expected value?

[Raphael] Sorry, was that a question? I didn't quite get that.

[Audience] So when you're writing it, it makes sense to write it like you have done, where the expected value is on the right. But given that you know that foo is correct, when the assertion fails, it reads the wrong way around, I think. It basically ends up looking like "foo was not the value that was expected", but foo is what was expected, if that makes any sense.

[Raphael] Hmm, I'm not sure I follow. So this is what you said, right?

[Audience] Yeah. Now it's going to do the right thing and I'm going to get egg on my face, but that's what I thought.

[Raphael] Let's get rid of this thing. This now raises an error because we created the plugin earlier, and it contains a local plugin, so pytest warns us about it. Good. So: "world" does not equal "hello", but that's exactly as we specified here, isn't it?

[Audience] Yeah, you're right. But when you have bigger cases, when you have a dictionary and pytest tries to tell you what's wrong with the dictionary, that's where it gets confusing.

[Raphael] Okay, maybe that's good feedback. I'll investigate and see if there's maybe something we can improve in the display of the error messages.

[Audience] This isn't really relevant, but often it would be really useful just to print out what was given on the left, rather than trying to do the comparison, which doesn't always work quite right.

[Raphael] Yeah, that makes sense. What I personally like about Python is that you can read it as text. When I'm reading an assertion, it's almost "what I have is what I would expect", kind of like the English language. So I personally always put what I get on the left-hand side and what I expect on the right-hand side, but that's just me.

[Audience] Hello, I have a question. So I have this big test suite which is often used by people who are not programmers, and we often have comparisons between lists of complex objects. I'd like to know if there is a way of customizing the output so that it can be more easily understood by people who actually know what the values mean, without customizing the whole output.

[Raphael] So I think there is a hook which allows you to customize the representation that the assertion failure generates. I think that's what you're asking for, right? A way to improve this information.

[Audience] Okay, yeah.
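The hook in question is pytest_assertrepr_compare; a minimal conftest.py sketch, where the Event class is made up for illustration:

```python
# conftest.py
class Event:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name

def pytest_assertrepr_compare(config, op, left, right):
    # Return a list of lines to replace pytest's default explanation;
    # returning None falls back to the default for other comparisons.
    if isinstance(left, Event) and isinstance(right, Event) and op == "==":
        return [
            "Comparing Event instances:",
            f"   names: {left.name!r} != {right.name!r}",
        ]
```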
[Audience] Hi, so: do you run your tests from the source directory, or do you install the package first? I mean, especially when you're using an editor and trying to debug and so on. The advice I find online is that you should always install the package and test the installed version, but then how does this work with your IDE or editor?

[Raphael] So I typically don't run the tests from inside my editor itself; I just run them from the terminal. And I typically don't run pytest directly, but use tox. Tox will create a virtual environment and then run my tests against the installed version of my package, if that makes sense. I think that's probably the safer way of testing, because you want to make sure that when people download your library, for instance from PyPI, it actually works for them, rather than testing against the checkout of the project.

[Moderator] Okay, any more questions? Okay, if you have no more questions: if you would like to give feedback to Raphael, you can use our conference app and give a rating; that would be interesting. So, thanks Raphael, with a really big hand again!

[Raphael] Thank you.
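For reference, the tox setup described in that last answer can be as small as this sketch; the envlist matches the Python 3.7 assumption from the earth project's README:

```ini
# tox.ini -- tox builds and installs the package into a fresh
# virtualenv, then runs pytest against the installed version
# rather than the source checkout.
[tox]
envlist = py37

[testenv]
deps = pytest
commands = pytest {posargs}
```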