Hello everybody, this is a presentation about asynctest: easier unit testing for asyncio code. Hi everybody, my name is Martin. I used to be a systems and network engineer working on an HTTP reverse proxy written with asyncio, and during that time I had a lot of trouble trying to write unit tests that worked correctly. So I decided to write a small library called asynctest, which provides a few features you might find useful if you ever happen to be in the same situation. During this presentation I will present the main features of asynctest. I will assume that you know what a unit test is, I hope at least. Unfortunately I won't have the time to explain asyncio itself, so if you really don't know what a coroutine is, what a task is, or how an event loop works, you might be a bit lost, and I'm sorry for that.

I will talk about what I made to enhance the unittest package, the standard package from Python. Most of what you will see is built on top of unittest, so it's like a drop-in replacement: you can use asynctest instead of unittest and it will work as expected. I will show you how I implemented transparent mocking of coroutine functions, which is probably the most useful feature of asynctest, and I will show you a few advanced features that might also be interesting.

So let's start with a presentation of the test case. As I said, asynctest's TestCase is built on top of unittest's TestCase, so it's just a class that extends the unittest version, and its main feature is that it handles the loop creation and termination for each of your tests. Every time a test case is instantiated and starts to run, a new event loop is created by asynctest, so you are sure that your test is running in an isolated environment, so to speak. Obviously, inside your code you can access the instance of the loop just by using self.loop.
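To make the loop-per-test idea concrete, here is a rough sketch of the isolation asynctest.TestCase automates, written with only the standard unittest and asyncio modules (the class name and test body are mine, not asynctest's API):

```python
import asyncio
import unittest

class LoopIsolationTestCase(unittest.TestCase):
    """Hand-rolled version of what asynctest.TestCase automates."""

    def setUp(self):
        # a fresh event loop for every test: no state leaks between tests
        self.loop = asyncio.new_event_loop()

    def tearDown(self):
        # the loop is terminated with the test
        self.loop.close()

    def test_sleep_and_return(self):
        async def coro():
            await asyncio.sleep(0)
            return 42

        self.assertEqual(self.loop.run_until_complete(coro()), 42)
```

With asynctest, this setUp/tearDown boilerplate disappears: the fresh loop is created for you and exposed as self.loop.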
So here is a simple example of a test case, and what we can see is that I can actually use a coroutine function as a setUp function, or it works with tearDown as well, or even your test itself can be a coroutine, so you don't have to manually declare your coroutine function and then schedule it on the loop yourself. It's all done for you, so you just have to focus on writing your actual test and not bother with the scheduling. It also works for cleanup functions; basically, everywhere you would expect to be able to use a coroutine in a test case, it will work.

Just a quick piece of advice regarding that: since it's fully compatible with unittest, setUp and tearDown can be plain functions, but I suggest that you always use coroutines, because it will be way easier and you won't have to remember whether you need to await your parent's setUp or things like that. So always stick to coroutines. Be aware also that the loop is created for the instance of the test case, so it won't work if you want to use a coroutine in setUpClass or setUpModule or those kinds of functions. Someone opened a ticket to add this feature, and I decided to reject it, because I feel that if you try to do that, you will probably break the single most important property of a test, which is keeping the isolation as good as possible.

However, if you really, really want to break this assumption, you can still do it. Here I am in a situation where I need to use a customized version of an event loop: I define it in setUpClass, and I can ask the test case to use the default loop. In that case, rather than creating a new loop before the test, we just call asyncio.get_event_loop() and use the event loop that was set right before.
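This idea later landed in the standard library as unittest.IsolatedAsyncioTestCase (Python 3.8+); here is a sketch in that style showing a coroutine used for setup, for the test itself, and for cleanup (ResourceTest and fetch_value are invented for the example):

```python
import unittest

async def fetch_value():
    # stand-in for some real asynchronous work
    return "ready"

class ResourceTest(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # awaited automatically before each test
        self.value = await fetch_value()

    async def test_value(self):
        # the test itself is a coroutine; no manual scheduling needed
        self.assertEqual(self.value, "ready")

    async def asyncTearDown(self):
        # awaited automatically after each test
        self.value = None
```

In asynctest the same pattern works with plain setUp, tearDown, and test methods declared as coroutines.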
One of the features of TestCase is that it has some failsafes for you, like this example, which was very common at first when people were using Python 3.4: without the async def syntax, you were using actual generators as coroutines, and in that case you have the feeling that your test is okay, and you won't see any failure in the result of your test, while actually it didn't run at all. What happens here is that the function is prefixed by test_, so the runner will call it, it will return a generator instance, and it will never run, so you won't see that your test is actually not running. And that's because you forgot to add the decorator asyncio.coroutine, which effectively marks this generator function as a coroutine. So what I did for that case is add a default check: if the loop did not run during the test, the test will fail, so you will know that you made a mistake. If in some case you actually don't use a coroutine and don't want to check that the loop ran, I've got this decorator called fail_on, which has a few parameters that allow you to enable some of the checks and disable others. We will see some of them later.

One nice feature that I added a few months ago is ClockedTestCase, because sometimes we want to schedule stuff that will run in a couple of seconds or later. Writing tests that have to wait until the clock moves forward takes a lot of time, just wasted time. ClockedTestCase helps you control the time of the loop. In this example, for instance, let's say that I've got a class which represents a resource and will refresh it every five seconds. What I do is set up my refresher and check that my callback has never been called; then I call self.advance, which moves the clock forward by five seconds, and as you can see I can also do it with ten seconds there. What's interesting is that I actually had two calls during that wait.
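There is no stdlib equivalent of ClockedTestCase, but the underlying trick can be sketched: give the loop a virtual clock that only moves when you say so. ClockedLoop and its advance method below are my illustration of the idea, not asynctest's implementation:

```python
import asyncio

class ClockedLoop(asyncio.SelectorEventLoop):
    """Toy loop whose clock only moves when advance() is called."""

    def __init__(self):
        super().__init__()
        self._virtual_time = 0.0

    def time(self):
        # all call_later/call_at scheduling is based on this value
        return self._virtual_time

    def advance(self, seconds):
        # move the clock, then let the loop run the callbacks now due
        self._virtual_time += seconds
        self.run_until_complete(asyncio.sleep(0))

loop = ClockedLoop()
calls = []
loop.call_later(5, calls.append, "refresh")   # due at virtual time 5
loop.advance(5)                               # no real waiting happens
loop.close()
```

The test finishes immediately even though the callback was scheduled five "seconds" out; asynctest's real implementation additionally steps through intermediate deadlines in order.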
What this means is that the clock will not just jump ten more seconds ahead; it will actually execute the sequence of callbacks and coroutines you expect, so everything works as if the clock were running normally, not stuttering or jumping ahead and breaking things. If we simply moved the time forward to ten seconds in one go, we would only have one call: the callback set for five seconds would run late, and the one scheduled for ten seconds would not run at all.

Related to that, if you schedule a callback for later, you may want to be sure that it has been executed before the end of your test. We can check that with an optional check called active_handles, which you can also enable using the fail_on decorator. In this example I use the decorator on the whole class, so it obviously applies to all tests. And the example at the bottom shows that I also handle the case where you cancel the handle of the callback.

Now, more interesting probably, is mocking. I will show you a quick example. This example basically gets the first URL from a page: I open a connection to the server, then I write an HTTP request, then read the headers, and read the payload according to the size I got in the header. It's a bit dumb, but it's for the example. Obviously, what I don't want to do in my test is open an actual connection to a server, so I want to mock that. What I do here is define a function called create_mocks: I mock my stream reader and stream writer, which are the two objects returned by the open_connection coroutine. As you can see, you can specify the spec as you would do with unittest, and in that case the mock object is smart enough to detect that one of the attributes you are accessing is actually a coroutine function and not a plain function, so it will work correctly. Here, for instance, I just specify what I expect to get as the result of the read and readuntil calls that were in my example.
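That spec-driven detection of coroutine attributes later made it into the stdlib as part of unittest.mock (Python 3.8+); here is a minimal sketch of the idea using asyncio.StreamReader as the spec:

```python
import asyncio
from unittest import mock

# Autospeccing inspects StreamReader: readline is a coroutine function,
# so the generated attribute is an async-aware mock, not a plain one.
reader = mock.create_autospec(asyncio.StreamReader, instance=True)
reader.readline.return_value = b"HTTP/1.1 200 OK\r\n"

async def read_status(r):
    # awaiting the mocked coroutine works transparently
    return await r.readline()

status = asyncio.run(read_status(reader))
```

In asynctest the equivalent async-aware mock class is called CoroutineMock.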
As I said, it uses the spec of the object you want to mock, so it detects coroutines and it's fine. Coroutine functions are actually mocked using a class called CoroutineMock, which behaves correctly if you check its result with the iscoroutine helpers from asyncio or the inspect module, so it will work almost as expected all the time. However, one drawback comes from a kind of anti-pattern I see: some people say they are providing a coroutine in their API while it's actually just a function returning a future. It works exactly the same way in your code, but the function is not a coroutine, and when you want to mock it you will expect it to work as in the example I showed, while actually it will not. One famous example is aiohttp, which does that a lot, or used to, and that made the mocking quite hard.

So now that I know how I can create my mocks, I want to use them, so I use the feature of unittest called patch. Patch temporarily replaces the symbol you target with a mock. Here we've got open_connection, which will be replaced by a mock returning the result of my previously defined create_mocks function. What you have seen with the decorator is roughly the same as this example using the with statement. What is important to see here is that the patch will still be active even if the coroutine yields to the scheduler, which means the patch stays enabled even when execution leaves the coroutine, and will affect other tasks running concurrently.
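A sketch of the whole pattern with the stdlib's mock.patch; get_banner and create_mocks are invented for the example, and in asynctest the patched replacement would be a CoroutineMock rather than AsyncMock:

```python
import asyncio
from unittest import mock

async def get_banner(host, port):
    # code under test: opens a connection and reads one line
    reader, writer = await asyncio.open_connection(host, port)
    line = await reader.readline()
    writer.close()
    return line

def create_mocks():
    reader = mock.AsyncMock(spec=asyncio.StreamReader)
    writer = mock.MagicMock(spec=asyncio.StreamWriter)
    reader.readline.return_value = b"220 fake-server\r\n"
    return reader, writer

async def run_test():
    # open_connection is itself a coroutine function, so its replacement
    # must be awaitable too: an async mock returning our mock pair
    patched = mock.AsyncMock(return_value=create_mocks())
    with mock.patch("asyncio.open_connection", patched):
        return await get_banner("example.com", 21)

banner = asyncio.run(run_test())
```

No real connection is opened, yet the code under test runs unchanged.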
In some cases you don't want that, so I added a feature: an argument to patch that only works when patch is used as a decorator, not with the with statement. Here I specify that the scope of my patch is limited to the coroutine: when the coroutine yields and stops running, the patch is disabled, and it is re-enabled right when the scheduler and the loop decide to resume the coroutine.

Okay, one last feature, which is selector mocking. This is probably a feature you will never use, because its target is the lowest level of asyncio. The idea is that in some cases you will want to handle low-level objects like sockets or file descriptors, and you will want to check that everything works as expected, like receiving an event from the selector through the event loop. Basically, it's something you would use if, for instance, you want to add a new protocol or a new transport to the asyncio library, or if you are trying to run Twisted on top of asyncio. In this example, which is admittedly a really dumb one, I don't want to open a real socket, so I use a socket mock instead, and I can schedule a callback for the event that something is ready to be read on the socket. asynctest provides a helper which allows me to trigger the actual event, so I can manually simulate a kernel event saying that I can read or write on my file descriptor. The socket mock works because I defined something called TestSelector, which is basically a wrapper around the original selector. It's pretty useful because it works with mock file descriptors but also with actual file descriptors, so you can use it as the main selector for your event loop without having to care about how your test will run.
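For contrast, here is the real readiness event that asynctest's helper simulates without any descriptor, demonstrated with only the stdlib and a socketpair as the event source (the function name is mine):

```python
import asyncio
import socket

async def wait_for_read_event():
    loop = asyncio.get_running_loop()
    rsock, wsock = socket.socketpair()
    ready = loop.create_future()

    def on_readable():
        # guard: the selector may report readiness more than once
        if not ready.done():
            ready.set_result(None)

    # register interest in "this fd is readable" with the loop's selector
    loop.add_reader(rsock.fileno(), on_readable)
    wsock.send(b"x")              # makes rsock readable, firing the callback
    await ready
    loop.remove_reader(rsock.fileno())
    data = rsock.recv(1)
    rsock.close()
    wsock.close()
    return data

received = asyncio.run(wait_for_read_event())
```

With asynctest's TestSelector and socket mocks, the `wsock.send` step is replaced by a helper call that marks the fake descriptor as readable, so no kernel object is involved at all.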
I provide a few different mocks: file mocks, SSL socket mocks, and so on. All of them are basically just mocks compatible with TestSelector, providing a fileno() function which returns a fake file descriptor, a fake integer, and they use the spec of the objects defined by Python. Since it works correctly whether you use mock files or actual file descriptors, I decided to enable it by default, so it's already available if you use the TestCase class I presented before.

One last trick: if you want to be sure that you have one call to remove_reader after you set a callback with add_reader, or conversely with add_writer and remove_writer, you can enable this check. It will ensure that everything is cleaned up when your test finishes. This is very important, because you might get a lot of bizarre side effects if you happen to close a file without removing the reader and writer callbacks, since they will still be registered even if a new file descriptor reuses the same value as a previously closed file.

Just to wrap up: in the future I would like to add support for asynchronous iterators and context managers. This is a feature that is asked for more and more, because some libraries like aiohttp use asynchronous context managers a lot. I would like to add features to my file mocks, for instance being able to specify a given buffer and let the user choose how this buffer will be consumed, to simulate what you would expect when you mock events coming from the network. And well, I won't do it myself, but if someone wants to work on proactor support, which targets Windows, feel free to open a pull request. And for people who are currently using pytest with pytest-asyncio, the plugin for pytest: you can use asynctest with it, especially for the mocks, which will be really useful and save you a lot of time.
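A hand-rolled sketch of that balance check, wrapping a loop's add_reader/remove_reader with counting mocks (asynctest enables the real check through fail_on; this is only an illustration of the idea):

```python
import asyncio
import socket
from unittest import mock

loop = asyncio.SelectorEventLoop()
# wrap the real methods so every call is counted but still takes effect
with mock.patch.object(loop, "add_reader", wraps=loop.add_reader) as add, \
     mock.patch.object(loop, "remove_reader", wraps=loop.remove_reader) as rem:
    rsock, wsock = socket.socketpair()
    loop.add_reader(rsock.fileno(), lambda: None)
    loop.remove_reader(rsock.fileno())   # forgetting this is the bug to catch
    rsock.close()
    wsock.close()
balanced = (add.call_count == rem.call_count)
loop.close()
```

If the code under test forgot the remove_reader call, the counts would differ and the test could fail with a clear message instead of producing the mysterious fd-reuse bugs described above.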
And to finish, asynctest is actually used by people in production, well, to test their production code. Some of them are working at Cisco and Mozilla, and obviously my previous company, where I started this project, still uses it a lot for all its tests. So thank you very much for your attention. You can read the docs and check the code on GitHub, and feel free to ask me questions, suggest features, or report bugs on asynctest. I would be glad to try to help you. Thank you.

Okay, so I've got time for questions if anyone wants to ask. The question was: what's the actual difference in terms of speed when running with asynctest rather than unittest? Obviously asynctest is really slower than unittest, because for each test you start, you will create a new loop and thus open a number of low-level kernel objects, like the selector. So it's quite a bit slower than unittest. If what you want to do does not involve running coroutines, you probably don't need asynctest and should stick to unittest, yes. Oh, and also, you can mix unittest.TestCase and asynctest.TestCase anytime you want, because both will be detected by the unittest runner, so that works.

Then, related to that: can you actually reuse an event loop from another test when you know there is not going to be any side effect? So the question was about reusing the same loop between several tests, is that correct? Yeah. So you can improve speed by doing that, that's right. But the issue is that you may have some events that were never executed, or the loop will be left in a modified state when several tests run using the same loop. For instance, if you use ClockedTestCase with the same loop for each test, you will most of the time have a non-predictable start time at the beginning of your test.
With ClockedTestCase, for instance, in the first test that runs, if you call self.loop.time() you will get a time of zero, because it's the beginning of the test. If you use the same loop across several tests, the next test will start with whatever value the previous test left behind. So you can speed things up by using the same loop, but most of the time when you write tests, speed is not your main concern: you're more concerned with consistency and whether your tests actually verify your code. Still, when you know it's not going to cause side effects, you can do it, because there is a feature that allows it. Any more questions? Okay, so I think I'm done. Thanks.