Hi everyone. Today I'd like to talk to you about Trio and how it can maybe save your soul when you are working with asynchronous programming. As you may know, asynchronous programming is really not a new thing in Python. It's been there basically forever, thanks to Twisted, but in recent years it has become something much bigger with the new asyncio library and especially the introduction of the async/await keywords. The thing is, how bad was it just before async/await was a thing? Let's see. In the beginning it looked like this. This is something you no longer write in Python, so I had to find another language that still uses it sometimes. It's really horrible, because you have to handle errors by yourself; you cannot use exceptions. If you use a debugger on this, you're basically on your own: with a callback, you never know how it was created or by whom. After some time, people came up with a better idea. It's called a promise in JavaScript, a Deferred in Twisted, or a Future in asyncio, but it's always the same idea. It's a bit better, because you can combine promises together to do synchronization between callbacks. It's kind of callbacks on steroids, but you still cannot use exceptions, right? And your debugger is still totally useless, because it's still callbacks. Then came the real revolution, which is the async/await keywords. Now we have this concept of an async function. It's much better, because you have regular functions and a new color of function, the async one, and you can use the await keyword to stop the execution of a function for some time and give another coroutine a chance to run a bit. So yeah, it's much better. I think we can just improve it a bit more, like this, and it will be perfect, right? We're using this new asyncio library.
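To make the async/await style concrete, here is a toy sketch (not from the talk's slides; the names are mine). An async function awaits a stand-in for I/O, suspending itself so other coroutines can run:

```python
import asyncio

async def fetch_value():
    # await suspends this coroutine and lets other coroutines run
    # until the sleep (standing in for real I/O) completes.
    await asyncio.sleep(0.01)
    return 42

async def main():
    # Only another async function can await fetch_value(): this is
    # the new "color" of function that the async keyword introduces.
    return await fetch_value()

print(asyncio.run(main()))  # prints 42
```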
We're using the async/await keywords. Everything's fine. So this is the end, right? Well, maybe not. If you were at Yury's talk this morning, you know there is still some trouble with asyncio. For instance, look at this code. It's really simple: there is this scheduler function which acts as a little server. It's long-running, and from time to time it schedules a new coroutine which does some job. The thing is, this job is broken: it will raise an exception. So if we run this, let's see what happens. I've got the code right there. Can you see? Big enough? If you're used to asyncio programming, maybe this doesn't feel weird, but if you're not, it seems really wrong, because there is an exception occurring here, it's really obvious, and we never try to catch it: in the code base there is no `except` anything. But the code kept running anyway. We got an exception, it just got printed on the standard output, and we went on as if nothing had happened. So yeah, it feels wrong. The other thing is that if we look at the stack trace, it's not complete. We see the function that raised the exception, then the function which called it, and we cannot go further up. Here we created a new coroutine, and in asyncio when you create a coroutine it's fire-and-forget: you just create the new coroutine and then there is no connection between the creator and the creation. That's really bad, because it means that when a coroutine raises an exception, it bubbles up and up until it reaches the event loop, and the event loop doesn't know what to do with it. It cannot give it to anyone. So it does the least worst thing it can do, which is to print it on the standard output and carry on, fingers crossed, saying: OK, maybe this thing was not too important and maybe we will be able to go on.
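A minimal reconstruction of that kind of scheduler (the names are mine, not the slide's) shows the problem: the task's exception never reaches the code that spawned it, and only shows up via the event loop's default "Task exception was never retrieved" report:

```python
import asyncio

async def job():
    # The broken job: it always raises.
    raise RuntimeError("the job is broken")

async def scheduler():
    # Fire-and-forget: no link remains between creator and creation.
    asyncio.ensure_future(job())
    await asyncio.sleep(0.1)  # the scheduler keeps serving...
    return "still running"    # ...and never sees the exception

# scheduler() returns normally even though job() blew up; the
# exception is only printed by the event loop's default handler.
print(asyncio.run(scheduler()))
```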
Let's see another example. This one is a bit more complicated; I call it the recursive Russian roulette. We are basically playing Russian roulette, but with a program. This morning we were killing threads; now we're just killing coroutines, so it's OK. The idea is: we have this function, we try our luck, and if we get lucky the coroutine doesn't blow up, and what it does is just create two new coroutines that continue playing the same function. And now we use asyncio.gather. If you don't know what asyncio.gather does, it basically just waits for the coroutines to finish. So if we run this code, what happens? I've got it here. Let me tweak it a bit so it stops a bit faster. So we can see: we try our luck once, then we create new coroutines that try theirs. Eventually we run out of luck, here, and here is the exception, which is handled — we can see it here. And then it gets crazy, because even though we are out of our recursive Russian roulette function, new coroutines are still being created, executing code, and creating more coroutines. It's getting crazy. So what's happening there? Think about the coroutines: we have our main coroutine first, and this coroutine calls the recursive Russian roulette function. Say it gets lucky, so it just creates two new coroutines that themselves get lucky and so create two new coroutines each. We end up like this. If we consider the asyncio.gather calls, it looks something like this: the main coroutine is waiting on coroutine one and coroutine two, coroutine one is waiting on its children, and coroutine two is waiting on its children. Now say this coroutine blows up. What happens? I thought I knew what happened, until I saw Yury's talk, and now I'm not sure of anything — which means asyncio is really hard, even when you want to give a talk partly about it.
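For the record, asyncio's documented default is that gather() propagates the first exception to its awaiter without cancelling the sibling tasks — which is exactly how coroutines can survive the crash and keep running. A small demonstration (names are mine, not the talk's):

```python
import asyncio

async def failer():
    raise RuntimeError("out of luck")

async def survivor(flag):
    await asyncio.sleep(0.05)
    flag["alive"] = True  # still runs after gather() has raised

async def main():
    flag = {"alive": False}
    try:
        await asyncio.gather(failer(), survivor(flag))
    except RuntimeError:
        pass  # gather re-raised failer's exception immediately...
    await asyncio.sleep(0.1)
    return flag["alive"]  # ...but survivor kept running: True

print(asyncio.run(main()))  # prints True
```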
What I saw is that when a coroutine blows up and it is watched by asyncio.gather, gather will cancel the other coroutines it is tasked to watch and make the exception bubble up. So it bubbles up to coroutine two, and coroutine two is also watched by an asyncio.gather — the one in, sorry, the main coroutine. So coroutine one gets cancelled too, and we end up like this. Now this seems a bit wrong, because the gather call inside coroutine one never got a chance to kick in: it was just killed, so it couldn't cancel the sub-coroutines it was tasked to watch. We end up like this, and given that we just catch the exception in main and sit around for some time, the surviving coroutines can spawn new coroutines and go crazy. So what's the problem here? I see three things that could be improved. First, it would be much better to have complete stack traces: if we take one coroutine, we should be able to read its whole history and go up until we see where this coroutine was created, and if we go up far enough, we should reach the root of our program. Second, which is related: if we get an exception from a coroutine, it should bubble up and never get silenced by something like "we'll just print it on stdout and that will be enough". If nobody catches the exception, it should go up until it blows up the whole program. That's how it works in synchronous programs, right? It should be the same in asynchronous ones. And finally, something slightly more abstract: we should have an easy way to connect coroutines together in order to express the lifetime of one coroutine relative to another. For example, we say: we have a parent, we have a child; if the parent dies, we want the child to die too. So yeah, it's time to talk to you about Trio.
You may know this guy; his name is Nathaniel Smith. And he had this great idea. He said: OK, there are new features in Python, async and await. What if we just drop all the other deprecated ways of doing asynchronous programming, like promises and callbacks, and focus only on this async/await thing? And what if we go a bit further: what if we invent new abstractions, new building blocks for asynchronous programming, and see how far we can go with this? And so we end up with Trio. So what is Trio about? There are three main concepts. The first is the async/await keywords; we already talked about them and we already know why they are so great. The two more exotic things are the nursery and the cancel scope. First, the nursery. Here is a slightly modified version of the recursive Russian roulette, this time written for Trio. Basically the only thing that changes is the red rectangle. The idea in Trio is that if you want to spawn a new coroutine, you cannot do it fire-and-forget style like you would with asyncio: you must use a nursery object. Putting it another way, every coroutine in your program will be connected to a nursery. And the good thing about nurseries is that they are asynchronous context managers, so you use them with `async with`. When you enter the block, it does nothing; but when you want to leave the block, it blocks until all the coroutines connected to this nursery have finished. So how does this solve our problem? Take the coroutines we had before. Now they are bundled inside nurseries, right? So it looks something like this. In fact, there is a tool for Trio called the monitor, which lets you plug into a Trio application and watch the coroutines in real time. If you use it on our program, you will see something like this.
So now what we have is a tree of coroutines — really, a graph. And it's much simpler, I think, to visualize how Trio works when you see this. If we take our example back, with the exception raised here, it's really easy to see that every time the exception bubbles up one coroutine, Trio knows exactly which coroutines it should close. It goes like this, and it just works. It's really simple: you have a tree, and everything under the node you are working on should be destroyed, because that node is going to be destroyed too. So you end up like this — you end up clean. The other thing about Trio is the cancel scope. The idea is: if we are using an asynchronous framework, it means we are doing I/O, right? The trouble with I/O is that you are basically waiting on someone else, and this someone else can crash — a router at the other end of the internet can die, that kind of thing. So you always have to deal with timeouts. It would be really great if we could very easily say: I want this part of the program to have this timeout. And as we saw, we already have this tree of coroutines, this graph. So it's really easy in Trio to say: I want this part of the graph to run in, say, at most 0.3 seconds. To do that, you just use a context manager: you wrap the block in a cancel scope and say, OK, I want this block of code to run for at most this much time. If it takes longer, we just leave the block. And every coroutine that has been created under ours — remember, we are in a tree — gets destroyed. So you have the guarantee that, no matter what, if the timeout occurs, you won't leak any coroutine. Everything will be cleaned up. Oh, sorry, I went too far.
So yeah, one good thing about our recursive version now is that we've got this timeout, so maybe the game is a bit more fair: we won't kill any coroutine. I've written a Trio version of the recursive Russian roulette. And... we ran out of luck this time. Yeah, we're really unlucky. This time we didn't kill anybody, so that's nice. Maybe you're not sold yet on this concept, but just think about how you would have implemented this timeout feature in the previous example with plain asyncio. OK, so that's it. And that's one of the features of Trio, actually: there are really, really few concepts. It's an extremely simple asynchronous library, and that's something really unusual for an asynchronous library — you know Twisted, you know asyncio, they all come with lots of documentation, lots of concepts, lots of things, and so it all seems really complicated. With Trio, you just read a bit and in half a day you're ready to work. You have these really small and simple building blocks, but you can put them together to create really complex things the easy way — I mean, it's easy to get things right. I think that's a really great feature. OK, so maybe you're wondering: it looks great on paper, but what about real life? Here is a use case. Say you want to connect to debian.org. So you have this domain name and you want to resolve it. And, you know, the internet is a big thing with a lot of complexity, so it's never simple: when you resolve your domain name, you end up with multiple IP addresses. So now you have a new problem: which IP address should I connect to? The first idea you could have is: well, just try to connect to the first one, and after some time, if it fails, try the next one, and so on and so forth. The trouble is, this is really slow. So maybe there is a better way.
The other way to do it is to go a bit more violently: we just try everything concurrently and take whichever is fastest. But that takes a lot more resources, right? So there may be a middle ground, and one attempt at this middle ground is this thing called Happy Eyeballs. The idea is: you start by connecting to the first IP address you have. If after some time you are still waiting for this connection to succeed, you try another address. And if that attempt fails faster than the timeout, you try the next one right away. Eventually one of them succeeds, and then you can cancel all the other coroutines, because you now have your connection. So how complicated is this to implement? There is no implementation of this in asyncio, but there is one in Twisted. This is the code. I didn't read it myself, but according to Nathaniel it's quite complicated: there are a lot of nested functions, so it's hard to read, hard to understand, hard to maintain. The Twisted guys are aware of this, so they came up with a new version, which is much better, much simpler. They say it's less crazy and easier to work with, but they are still not happy about it. The important thing here is that the people working on this are top guys. The author of the second implementation is the creator of Twisted himself — basically the person with the most experience in asynchronous programming in the whole Python community. So the problem is not the people; the problem is the language. I mean, we don't speak the right words. It was really easy, in human language, to say how Happy Eyeballs works, but when you want to write it in Python, it gets really hard. And this is not what Python is about: Python is about being able to write complex things easily. So maybe it would be better in Trio.
Who knows? This is the skeleton of our function. We call it open_tcp_socket. It takes our host name and the maximum time we want to wait between two connection attempts. The first part is really simple: we just do the DNS resolution, and then we get multiple targets we could try to connect to. Here we use the trio.socket module, which is just the same thing as the regular socket module — the standard one — except it's an asynchronous version of it. After that, we define winning_socket: this is the variable which will eventually hold whichever socket won, the socket that is ready to be used. And if there is none, well, we just raise an exception, right? Now, what do we want to do? We want to do multiple things at a time: start a connection, then another one, that kind of thing. To do this in Trio there are not two ways; there is only one, which is to create a nursery. So that's what we do: we create a nursery and use it as an asynchronous context manager. And we create this attempt function. Every time we try a new attempt against a new IP address, we will call this attempt function, so we start by calling it for the first attempt. So what are we going to write inside attempt? What we can see here is that, except for the first attempt, we always start by waiting. We are waiting for two things, in fact: first, for the previous attempt to take longer than the timeout; and second, for the previous attempt to fail fast. To do this in Trio, we can create events. Each attempt has a "failed" event: every time an attempt fails, it sets this event. So now we can wait on this event, and we use a cancel scope to say: OK, I want to wait on this event, but for no longer than this amount of time.
If we reach the timeout, we just leave the context manager and continue. Now we are almost ready to do the actual job, but just before that, we have to spawn the next attempt — because before doing our own attempt, we have to spawn the next one, since it will watch whether we are taking too long. To do that, we just use the nursery: we ask the nursery to create a new coroutine and to execute this same attempt function on the next IP address. And now we're all set: we can do our socket connection with trio.socket. Again, it's just like the regular socket module. Now there are only two possible outcomes. First, the connection fails: in that case we just set our "failed" event — OK, we couldn't do anything, now it's up to the next attempt to succeed. The other outcome is that we got the winning socket: we just update winning_socket, and now we can cancel the nursery. Cancelling the nursery means cancelling all the coroutines connected to it, and given that all those coroutines get cancelled, the nursery is now empty, so the `async with` block we opened will exit automatically and the code continues. So yeah, now we're done. Maybe you're not sure it works, so I have the code here — this one. It's just the same as what I showed you; I only added this main function that tries debian.org. Obviously it would be much more impressive if I had done live coding, but no, that's not my style, so let's just pretend it works. Anyway, thank you — and credit where it's due, because it's not my code, and not really my talk either: I stole most of it from Nathaniel's. So, what does Trio offer you? Basically everything you can expect from an asynchronous library, all the standard stuff. You have asynchronous file system access.
You have networking. There are all the synchronization tools, like the events we used; you have queues, you have locks, et cetera. There are really good testing helpers: if you love pytest, there is a great pytest plugin; if you love Hypothesis, there is a great Hypothesis plugin; and if you don't love Hypothesis yet, you should try it, and then you will love it. There is Ctrl-C handling that works. It may sound like: what is this, I press Ctrl-C and it works? But no — Ctrl-C is really hard to get right, and nobody realizes it until they have read the article Nathaniel wrote about it on his blog. You should definitely go read the blog; everything on it is really interesting. And finally, one of my favorite features: the compatibility layer. asyncio is really great because it makes the entire asynchronous world in Python compatible with each other: with asyncio you have compatibility with Twisted, you have compatibility with Tornado, everything is compatible. Now, with Trio, you can run an asyncio event loop that is written on top of Trio, and you get compatibility with the entire rest of the asynchronous ecosystem, just like that. It's really great, because it means that for your own code base — the code you want to write fast and get right — you can use Trio, so you get the safety. And for the third-party libraries — say you want to connect to PostgreSQL, so you want to use asyncpg because it's really great — that code is already well tested, so you know there won't be any trouble, and you can really easily plug this asyncio library into your code written in Trio. So yeah, it's really great. And I'm not the only one who thinks so, as there are plenty of famous people who agree — even rock stars. Anyway, I guess that's about it.
One more thing: as I said, maybe half of my talk comes from a talk Nathaniel gave at the last PyCon, and the link is up here. You should definitely check it out, because there are a lot of pointers if you're interested in this topic: watch this talk, read this blog post, and this one, and this one. So if you want to get better at this and you don't already know this guy, Nathaniel, you should definitely follow this link. And that's it.

Great talk — if anyone has any questions, just line up at the mic right there on the side. Hi, that was excellent. Could you say a little bit more about Hypothesis, and whether unittest also works with this? Yeah — what do you want to know about Hypothesis? Do you already know it, or not? OK, so it's basically the greatest thing if you want to test code — any code, anything. Normally, what you do is create use cases: you say, I want to test this function, I will feed it this input and I want to get this output. The thing is, most of the time when you do this, you forget things. For instance, say you have a function that works on strings. You will try a simple case, like ASCII strings, but you will forget that there are Unicode strings, and Unicode strings with code points that are not printable, that kind of thing. It gets really tricky, really complicated. And so with Hypothesis, what you can say is... Hypothesis? Yeah, sorry, it's my accent — I'm French, sorry. So now you can turn around and tell people how great Hypothesis is, and pronounce it right. Hypothesis, yeah. Property-based testing — that's it. So, anyone who wants to talk about Trio, and not about Hypothesis or my French accent? Yeah, you have to come up here.
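For readers who haven't seen property-based testing: instead of hand-picking examples, you state a property and Hypothesis generates the inputs, including the nasty Unicode cases the hand-written tests would miss. A generic sketch (my own, not tied to Trio):

```python
from hypothesis import given, strategies as st

@given(st.text())  # arbitrary strings: ASCII, emoji, unprintables...
def test_utf8_roundtrip(s):
    # Property: encoding then decoding any string gives it back.
    assert s.encode("utf-8").decode("utf-8") == s

test_utf8_roundtrip()  # Hypothesis runs many generated examples
```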
Have you encountered any bugs with the trio-asyncio compatibility layer? No, I haven't encountered any bugs yet. There are little quirks sometimes, because obviously you're using one asynchronous library inside another one, so you have to be careful. But if you're careful enough, it's really straightforward. And by the way, the Trio code base is crazy good. When you read it, there are about twice as many comments as code, and every time it's as if the guy has written up the whole state of the art about just this line of code: the thing we are doing here is this, this and this, and if you want to know more about it, you should go there, there and there. So even if you don't want to use Trio, at least read the code. With asyncio, they all implement their own asyncio event loop, so you can interact with Trio but not necessarily with Trio and all the other async libraries — is there a way to stop the madness and have everyone use the asyncio event loop instead? No; you should see it a bit like this: you're writing some code with asyncio and you say, OK, I want to go faster, so I won't use the normal asyncio implementation, I'll use uvloop. It's still asyncio, just another implementation of it, right? It's the same thing with Trio: you get an implementation of the asyncio event loop made with Trio. You could run pure asyncio code on it, but that wouldn't be really interesting. The good thing is that you can have one part of your code in Trio and the rest in asyncio, and use this implementation of the asyncio event loop to make both able to talk to each other. And Twisted and Tornado — I'm not 100% sure about this because I never tried it, but from what they say, Twisted is now compatible with the asyncio event loop, so you have no trouble there now.
Yeah, sorry — no, you cannot have compatibility with PHP too. You're right.