Hello everyone, super cool to see you today. Yeah, I'm not the Postgres guy, sorry. I used it, I love it, but today I'll be talking about a different thing. We will talk about async, and async does not mean asyncio. Python is a rich language with a lot of nice frameworks and tools, and we'll briefly look at many of them, with a little live code, a little demo, and a little bit of theory as well, just to get an overview: what is out there, what's available for us right now, which tool we can use under which circumstances, and so on.

The way we're going to do it today: we all come from different backgrounds. Some of us already have experience with async tools and network development in Python in general, some don't, so you can interrupt me and ask questions right during the talk, because I really want everyone to keep track of it. Please do not wait until the end; raise your hand or just ask, I'll repeat the question, and I'll do my best to keep everyone in a good mood and understanding what's happening here. Let's go.

All right, the first thing I have to say is that I know it was a difficult choice to come to this room, with seven tracks at EuroPython and the other sessions happening in parallel. I know what you've been thinking: it's this versus that, and burning your eyes with code, especially with my code, is of course a tough decision. So thank you for making it; we will do our best. As I said, my talk includes live examples, and we can also have discussions right during the talk, and we are in Italy, so I hope coffee helps as well.

Who am I, very shortly: who knows this logo on my t-shirt? Yes, it's PyConWeb. I haven't written any awesome Python framework yet, unfortunately; I hope I will in the future. For now the only thing I can say about myself: no frameworks. All right, thanks. For now I can say that I'm the one organizing PyConWeb.
So that's probably the best description of myself, and I'm a freelancer. The plan for today's talk: first we'll have a bit of a theory recap, what async computing is in general, what we have available in Python, which tools we can already use now. Then we'll go into Twisted, Tornado, and asyncio in a bit more detail, including examples of how each works. Then we'll have Q&A, and I hope we'll have some time left to briefly discuss Django Channels, because it's a trendy topic and it's also kind of async-ish.

All right, so what's async, a quick recap: what is this buzzword in general? What we see in this picture is a task. You can see my laser pointer? Great. So we have a task that is processed synchronously at first: request number one comes in here, we process it until here, then the result goes out. Then request number two comes in, we process it, response; then request number three comes in, processed, response.
That's the classic model they show us in school, but it obviously never actually works that way, because one request will never wait for another request to be completed. In reality we get one request, then right away another, then right away a third, and so on. Requests never wait for responses; they just arrive, and we have to hope we can handle them somehow. So you see that request number one comes in and the main thread of the program processes it; then request number two comes in, but we have no resources for it, so it just waits; then request number three comes in and also has to wait. Only when request one is done can we move on to request number two, and only then to request number three. That's the reality.

What we can also notice in this picture is that request number one has some spots here which I marked as waiting. When we are processing a request, it does not mean we are heavily computing something the whole time. Most of the time, especially in web development, we are waiting for something else: the database, file I/O, maybe some slow NFS share, whatever. So we are often just waiting on other resources. And the problem with this approach is that even though we are merely waiting, since we have a single main thread and it is tied up with this one request, the other requests have to wait too. That can be optimized, and this is where asynchronous computing comes in. The idea is that instead of waiting we can do other work, so we can serve other requests too. That is what's shown on this chart: request one comes in and is processed; then, let's say, at this point request number two arrives while request number one is waiting for the database.
So smart software says: okay, we have to wait here for a response anyway; meanwhile we'll do another thing, and that other thing is request number two, which we start processing. Easy, good.

Next, if we map these very simple charts onto web servers and web development in general, this is how it looks. Obviously one task is one client, or one client request from a browser, or a call to some API. Let's start our overview with a threaded web server. It's a very simple idea for handling many requests concurrently: we just spin up a new thread for every new request. This totally makes sense. It is still a blocking approach, but we can scale, because the more requests we get, the more threads we start, and each new request is handled in a new thread. Task one, meaning user one, requests some web page, and we handle it here. Then user number two joins, and we handle them here. Every next user starts another thread on our back end.

The problem, obviously, is that this doesn't scale very well. Who has used Apache? Maybe old school. Okay, good, I used Apache too. This is Apache, in my opinion, when it has too many threads and too much load: it just falls over. And the biggest problem is that when Apache crashes like this, not only do your new clients get nothing, but so do the clients already in progress, the ones that were almost done. If Apache crashes, that's it, nobody gets anything. That's the problem with the threaded server.

So there was a smart idea: why don't we have a pool? We can say our threaded server is capable of, I don't know, 16 threads, and it runs those efficiently. So we have a fixed pool of 16 threads; whenever there are more requests, they just have to wait, but at least we know these 16 are covered without any problem.
Yeah, that's a good idea. In this example we have just two threads in the pool: thread one doing something, thread two, then request number one, and request number two, which just waits. That works. The thread pool works until, obviously, we reach the point where there are not enough threads in the pool to handle all the load. The good thing is that clients then simply have to wait; it will not crash as hard as a plain threaded server. Still, you could get more out of your server.

And now comes the asynchronous web server. Following the same theory I already showed you, we have just one thread, and we use this thread to switch between different tasks at different points in time. Whenever we have free slots, free resources, we take on another task. When the response arrives, or when, say, a network socket has data for a previous request, we switch back to it and finish it. Don't worry, this is a very brief overview; I'll get into details and it will become clearer later. The main thing to point out right now is that in this scenario we have just one thread, and we hope this one thread will juggle as many tasks as it can, switching on some smart schedule, so that whenever one task hits a blocking operation, the thread works on another. That's the whole idea. I didn't find a better GIF, so this is the thread: this is an asynchronous server in my vision. Good.

Of course, that was all just talk, so let's look at real charts. Here you have Apache, lighttpd and NGINX, Apache being a threaded server and NGINX an asynchronous one, and this is a requests-per-second chart. You see that NGINX goes the highest in this comparison; it can handle around 12,000 requests per second.
Obviously it depends a lot on your concrete server, hardware and so on, but that's the approximate picture you see in practice. Then there is another good chart: memory usage. As I said, threads are expensive, so every new thread Apache starts costs you. The more clients you have, the more threads you run and the more memory is used. In contrast, NGINX and lighttpd are quite sparing with your operating system's resources. Good.

Now let's get more practical. Async Python is not only asyncio; there have been tools around for years. Tornado is eight years old, and then there is Twisted, which has been around for, let's say, quite a while. Other languages have async as well. So the question is: why are blocking servers still popular? Why is Django popular, when it's not async and async is so cool? Because it's easy. I think everyone would agree that sequential code is just much easier to write. You don't want callback hell. Usually you just write your nice, straightforward code in Django, and if you don't have very specific tasks that Django cannot handle, you're happy with it.

The reason I'm giving this talk is to show you that there are tools in Python that let you write asynchronous code as easily as synchronous code, or at least comparably easily. The goal is to make it as easy as Django or Ruby on Rails; then, I'd say, we can seriously consider using asynchronous Python tools on a daily basis. As I said, we prefer whatever covers our needs in the simplest way possible. The good news is that we already have many choices in Python, and we can try different things, which is exactly what we're going to do today, and see what suits us best.

Let's start with Twisted. Twisted is like a Python dinosaur. Who has used Twisted in this room? Oh, cool, many people.
That's great, I love Twisted. Twisted is a framework, and it's huge. It includes powerful high-level components like a web server and user authentication systems, as well as mail servers, instant messaging, SSH clients, DNS clients and servers, anything. Basically, whatever network task you have, most likely it's already solved in Twisted; there's an XMPP server too, anything really. So if you are looking for an implementation of some network protocol in Python, Twisted is your best bet; most likely it's already done there.

As they say: you don't call Twisted, Twisted calls you. That's Twisted's motto, and it actually applies to all async frameworks in Python, but Twisted was first, so let's give it the credit. The base of Twisted is an event loop, as always in async things, obviously. Twisted is built around the reactor pattern: an event loop responsible for handling network and system events. It basically says: when event A happens, react with function (or callback) B. Once started, the reactor loops over and over doing this kind of work: it polls for I/O and triggers the appropriate callbacks depending on which events arrive. Loops over and over.

Then, let's have an example, finally. As I said, Twisted is a really low-level framework, so for our first example I'll just print something to a network socket, as low-level as possible. Here we go. I have the solution right here; I'll run it later, but first let's go through the code, just so you feel comfortable with it. What we have here is a really simplistic, minimal case of writing to a socket in Twisted. First you have two pieces, of course: the protocol here and the server factory here.
You define these two classes, and that's what you always have in Twisted: a factory for everything and a callback for everything. It makes things kind of complicated, but if you really want to implement some difficult protocol nice and clean, it's a good way to do it, because you cannot accidentally miss anything. The main part here is the greeting protocol: we inherit from protocol.Protocol and define our first and only callback, connectionMade, meaning that whenever there is a network connection on the socket, we do self.transport.write() with "Ciao EuroPython", and that's it, we close the connection. That's the only thing we do here.

Let's see whether I lied to you or not by running this in the terminal. [Some fiddling with display mirroring and terminal font size.] So, I have all the examples here, and I've started the example twisted_1.py; it's the same code you've just seen on the slide. To check it I cannot use a browser, because it's not HTTP, just a raw socket on this machine, so I'll use telnet. Okay, so we have "Ciao EuroPython" printed here. It worked. Good. Jumping back to the presentation; this jumping back and forth is not perfectly optimized, I'm sorry, but we'll do our best. Okay, we're back. Then, of course, just a socket connection is kind of boring. We are web developers, most of us.
I think, in this room. So let's look at a more web-related case: a Twisted app that returns "Ciao EuroPython" to a GET request. What we have to do here is inherit from Resource, a class in Twisted that implements the HTTP plumbing. Then we define the render_GET method, you see it here, and it's as easy as returning "Ciao EuroPython"; this is Python 3, so we .encode("utf-8") it. I won't do the display mirroring again, so you can just trust me that it works; if we have time at the end I'll show this example in the shell, otherwise it gets really boring.

Next: Twisted is cool, but I'm not going to show you much more of it, because the goal was just to give you the idea: if you implement some protocol, you inherit from the protocol class, you define factories of factories of factories, and eventually it works. By that I want to say that Twisted is an excellent choice for a vast number of cases, especially if you need to integrate multiple pieces of functionality for some network protocol, particularly a complicated one. First of all, Twisted is a concurrency framework: it lets you juggle multiple tasks in one application without using threads, which was the goal of asynchronous development. The code tends to be a bit complicated; actually, I'd say it's not complicated, it's just complex. You will have a lot of code. But the good thing is that you can reuse Twisted's components, and they have components for pretty much anything you like, so it's easy to hook into an existing protocol and customize it to your needs. It's also thoroughly tested.
So if you were going to write, I don't know, an XMPP bot or something, I'd say go with Twisted. But there are other tools that make async development in Python even easier. Just a few words about the evolution of async in Python. Since Python 2.5 we have had the yield keyword; at first it was used just for generators, and then after some time the Tornado web framework used it for asynchronous development of web applications. I'll show you examples later. Then in Python 3.3 we got the generator delegation expression, yield from, which is a very similar concept, just a bit better; I'll show you later. And finally, in 3.5, we got await, which in 99% of cases will be exactly the same as yield from for you, but it makes more sense as a word, so it's easier to understand what it's actually doing. These little things do matter: they help you write asynchronous code in a similar way to how you write synchronous code.

Now let's get to more examples. You have had no excuse not to write asynchronous code in Python since 2.5, because that's when we got yield and generators. But I'd recommend Python 3, especially 3.5, because it makes things even better. And again, you do not need to write callbacks every time like you do in Twisted. Why are callbacks bad? Who here is also a JavaScript developer, or at least writes some JavaScript?
Okay, so I don't have to tell you: it's difficult to write nice code with callbacks. I have a favorite little example that travels with me to different conferences. Here it is. This is real code from mongo.js; it's a test, as far as I can see, but still. I know it's not something you need to write every day, but if you even have to write this for your tests, something is wrong. And I recommend callbackhell.com if you're not convinced; they have more good examples. So let's just agree that sequential code looks nicer, and from this point on we will try to avoid callbacks.

asyncio. I know it's the buzzword, and I don't have to dwell on it for long, because there are awesome talks about it at this conference too, but I'll obviously show you the basics. We have some new concepts that can help us escape callback hell, in asyncio as well as in Tornado, and for that we need a bit of theory. Who knows what a future is? Maybe I can skip it, a future object, or... okay, let's have a little conversation about it then.

A future is just a placeholder object. It is designed to receive and store the result of some operation that is not available right now, at this point in time. We use a future as a placeholder to pass around, and it holds a reference to the result of our operation. In the example from the beginning, if we are waiting for a database result, we don't want to block until the result is available. We can use a future and say: okay, this future doesn't have a result yet, it has a reference to the result, and it will be ready at some later point. The future object can be returned to the caller right away, and the caller can access the actual result later, when it's available. And this is the simplest way to use a future. It's obviously pretty dumb, because it's still synchronous, but it shows you how it works.
So you need a library that works with futures, one that returns futures, for example some async HTTP library. You call fetch("europython.eu"); let's pretend it's slow, and we don't want to wait and block. Instead, the library returns you a future, and you save it in the future variable. Then we tell the IOLoop: please run until this future is ready. And when it's ready, we just call future.result() to get the actual result of the operation. As easy as that.

Next, the coroutine. A coroutine is a special, generator-like function that can return some value and suspend, so when we call it again, like a generator, it continues from the place where it left off. Unlike a plain function, it can be resumed, pausing and continuing multiple times. Here is a minimal example of a coroutine. I deliberately didn't use the 3.5 syntax, because coroutines are something you have had in Python since 2.6, or even 2.5, so there's really no excuse not to use them. You define it just as a generator: you say for chunk in data.read(), where data.read() is some really heavy operation, and then, as a generator, you just yield chunks back to the caller. That's a minimal example.

Now, what does this have to do with asynchronous development? Coroutines play really well with futures, because a future can hold a reference to the result, as I said, and a coroutine can yield it. A coroutine yields a future, pauses until the future is ready, and then continues from the place where it stopped. This is an example: a coroutine, and we're already familiar with async_http.fetch, a library call that does not block but instead returns a future, a reference to the result. Then we yield the future; this is the new concept here. By yielding the future, we are giving it to the caller.
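The future-as-placeholder flow from the start of this section can be sketched with plain asyncio; the fetch coroutine here is a stand-in for a real async HTTP library:

```python
import asyncio

async def fetch(url):
    # stand-in for a slow, non-blocking HTTP fetch
    await asyncio.sleep(0.01)
    return "<html>%s</html>" % url

loop = asyncio.new_event_loop()
future = loop.create_task(fetch("europython.eu"))  # placeholder: no result yet
loop.run_until_complete(future)  # "loop, please run until this future is ready"
result = future.result()         # now the actual result is available
loop.close()
```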
The caller in this case is the IOLoop, and we're saying: okay, please don't wait, do other stuff in the meanwhile, like handling other requests, and whenever the result of this future is ready, jump back to this place and continue as if the yield was never here. That's the bit of magic in how it works. And then we just print it.

I mentioned that the event loop is the caller here, so what is the event loop? It's the reactor pattern we know from computer science courses: it waits for something to happen and acts upon the events. It's responsible for handling things like I/O and system events. It actually has several loop implementations available, and the module defaults to the one most likely to be most efficient on your operating system: on Windows it still uses the select loop, while on Linux it's epoll and on macOS it's kqueue. So it guesses which loop is best for your particular operating system, delegating to the operating system's mechanism and acting as a wrapper. However, you can always choose the underlying loop explicitly if you like. In a few words, if you don't want details: the event loop is just something that says "when event A happens, call function B" and keeps that mapping. That's what it does, in a nutshell.

Let's see what an asyncio event loop does. Here I've already used the 3.5 syntax, async def. It's just a fancy way to define a coroutine; it shows you right away that, hey, this is not just a function and not just a generator, it's a coroutine. So this is Anton's talk: what the coroutine does is just do the talk, and it prints "questions?" after it's finished. We don't have any yields here. This is just a minimal example of how you run the event loop: no yields, no futures. Now, a bit more complicated.
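Before the more complicated version, here is that minimal no-yield example in runnable form (the event strings are illustrative, and a list replaces print so the order is easy to check):

```python
import asyncio

events = []

async def antons_talk():
    # no awaits, no futures: the simplest possible coroutine
    events.append("doing the talk")
    events.append("questions?")

asyncio.run(antons_talk())  # create a loop, run the coroutine on it, close the loop
```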
Let's have multiple coroutines. Here is Anton's talk, and here is a coroutine, grab_coffee. What I want to show you is how easily asyncio's event loop switches between the two. First we run Anton's talk: we print "Welcome". Then we await (await is the same as yield here), so we hit the blocking operation, do_talk. And at the point where you use yield or await, the library is smart enough to say: okay, we're waiting on some blocking resource, let's do something else. Something else, in this case, is the coroutine grab_coffee. So the next line printed will be "sip", drinking coffee. Then we're waiting again, so the loop says: aha, we have to wait here as well, let's jump to some other task; it jumps back to Anton's talk, and this statement is printed: "Thanks for coming". After that coroutine finishes, it continues with number two again. So this is just a minimal example of how the event loop switches between tasks for you.

Could you please repeat? Yes: it says "Anton's workshop" here and "Anton's talk" there. Thank you for noticing, that was my bad; I planned this as a workshop first and then packed it into a talk, so as written it won't run, but the idea is right. I won't run it, but thanks for pointing it out.

So let's compare the approaches; I'm still trying to explain the general idea of coroutines and futures. This is also very simplistic pseudocode. This is the way you would usually write sequential code in any web framework. Let's say this is a GET request handler: we make a huge database query, a synchronous one, save it in result, and then print the result. Easy, everyone has done that. We will block here: we just wait at this point. That's how it works in most cases.
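Going back to the talk-and-coffee demo for a moment, the switching behavior can be reproduced with stdlib asyncio; the sleeps below stand in for the blocking parts:

```python
import asyncio

order = []

async def antons_talk():
    order.append("Welcome")
    await asyncio.sleep(0.02)   # blocking part: the loop switches away here
    order.append("Thanks for coming")

async def grab_coffee():
    order.append("sip")
    await asyncio.sleep(0.05)   # waiting again: the loop goes back to the talk
    order.append("coffee done")

async def main():
    await asyncio.gather(antons_talk(), grab_coffee())

asyncio.run(main())
# order: ["Welcome", "sip", "Thanks for coming", "coffee done"]
```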
That's how it works in Django. Then, how would you do it in JavaScript, or in Twisted? You would use a callback: make the huge database query, and when it's ready, call the on_result function. This is a bit messy, because it pushes you towards spaghetti code, but it's at least very clear, since you're not using any coroutine or future magic; it's completely explicit: when this is done, call this function. That's the callback way of doing it.

And this is the asyncio and Tornado way of doing the same task. You define a coroutine with async def; then, assuming your huge database query returns a future, meaning it supports this kind of operation, you put await in front of it, and you're saying basically the same thing as in the example above, but without a callback. huge_database_query returns a future; you yield or await that future, giving it back to the event loop, saying "when this future is ready, please give us the result", and you save it to the result variable. So this and this should be the same for you, and they actually look the same. That's the whole point of this talk: to show you that async does not mean callbacks. Good.

Now let's have a real example: aiohttp. aiohttp is an asynchronous HTTP framework, or library, built on asyncio, and its author Andrew is actually at this conference, so I really recommend going to his talk or his training to get into the details. I'll just show you a minimal example. A web app that returns "Hello Rimini" is as short as this. Again, async def defines your coroutine; then you return web.Response with "Hello Rimini". No yielding, no awaiting. And then you run it: you create the web application, you define your first and only route, you say that whenever a request hits this route, the coroutine hello is executed, and then you run it. This is the simplest example. You're not waiting for a database here, of course.
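Based on that description, the minimal aiohttp app would be roughly this (the route and port are assumptions on my part):

```python
from aiohttp import web

async def hello(request):
    # a coroutine handler, but nothing to await for a static reply
    return web.Response(text="Hello Rimini")

app = web.Application()
app.router.add_get("/", hello)  # the first and only route

# web.run_app(app, port=8080)   # start serving; blocks until stopped
```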
That's why it looks so easy; but that's the minimal thing you need to run an aiohttp server. Of course we want to get trickier, with some real async in there to be more realistic. So this is the very same example, but now we request a URL, and the URL is httpbin, a nice service that gives you a fake delay; it's very handy for testing your software. This one gives us a delay of one second; if we run this code at the end, you'll see it.

What we're doing here: we define a handler again, we request the URL at the top with the GET method, and the result is a future (or coroutine) that we await, meaning we give it back to the event loop, and whenever the result is ready, we save it to response. And now the surprise: why do we await a second time? Who knows? Any ideas? Okay, so there are actually two blocking operations in this simple example of fetching a URL. The first is when you fetch the first byte of the response; that's the first point where you have to wait. But there is a second one: if the response doesn't arrive all at once, if it's a streaming response, you have to wait a second time for the body. Let's say the first byte arrives in one second, but the last byte arrives in one minute: you'd be blocking for the difference, right? That's why you do it twice in this example. And then you print the result again. This doesn't look exactly like Django, of course, but I'd say it's still decent and easy, and you're not using callbacks.

Then: what if we need to request multiple URLs in parallel? I'll jump over this quickly, because as I said there is a dedicated talk about asyncio and aiohttp at this conference. I'll just say that if you want to request multiple URLs and do it in parallel, it's also very easy.
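The two waits and the parallel fetching can be simulated with the stdlib alone; FakeResponse and the sleeps below are stand-ins for aiohttp's client, not its real API:

```python
import asyncio

class FakeResponse:
    """Mimics a streaming HTTP response: headers first, body trickling in later."""
    def __init__(self, url):
        self.url = url

    async def read(self):
        await asyncio.sleep(0.02)  # second wait: the body is still streaming in
        return "body of %s" % self.url

async def get(url):
    await asyncio.sleep(0.01)      # first wait: until the first byte arrives
    return FakeResponse(url)

async def fetch(url):
    response = await get(url)      # first await
    return await response.read()   # second await

async def main():
    urls = ["https://httpbin.org/delay/1", "https://europython.eu"]
    # gather runs the fetches concurrently and keeps results in input order
    return await asyncio.gather(*(fetch(u) for u in urls))

results = asyncio.run(main())
```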
You just define multiple tasks, make a tuple of them, and yield the tuple; that works. The second way to do the same thing, if you're requesting multiple URLs, is to use the asyncio.wait function, which lets you pack multiple futures into one. I'm not dwelling on this on purpose, because I want to show you Tornado as well.

And now, Tornado. A big difference between Tornado and asyncio: Tornado has been around since Python 2.5. Even before the fancy asyncio syntax of await and async def existed, you could already use Tornado and write asynchronous Python with it, using its generators, and I'll show you how right now. Tornado runs on the same idea of an IOLoop, futures and coroutines; it just achieves it in a somewhat hackish way, because in 2.5 there was no native support for this in Python. So it has different mechanics but similar syntax. And the main thing: it is well tested, it is stable, and it's used a lot, I can tell you. So it's totally production-ready. If you have a web application that needs to deal with WebSockets, think about Tornado; it's a really good option for that.

How does it play together with asyncio? This is the idea of a stack that could work: on top, the application level, Tornado or Twisted or whatever; below it, the I/O framework, asyncio; and at the operating system level, as I said, the event mechanisms most efficient for that system, like kqueue, epoll, or select on Windows. It doesn't work like that right now; Tornado and Twisted both come with their own event loops. But this is the idea, and maybe at some point it will work like this. Tornado's event loop is very similar to asyncio's. You don't have to memorize the syntax; I'm just showing you that it's as easy as IOLoop.current(), a singleton: you get the IOLoop and you call start() on it.
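The IOLoop-as-singleton idea in runnable form; an appended list replaces print, and stop() hands control back (a sketch, not the slide code):

```python
from tornado import ioloop

events = []
loop = ioloop.IOLoop.current()  # IOLoop.current() returns a per-thread singleton

def on_event():
    events.append("event A happened, running callback B")
    loop.stop()                  # hand control back after loop.start()

loop.add_callback(on_event)      # "when you get the chance, run this"
loop.start()                     # blocks, dispatching events, until stop()
```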
There is no big magic behind it. And if you want to run the asyncio event loop instead, that's also possible: Tornado ships adapters to use asyncio's futures as well as its event loop. This is how you do it: you import the asyncio main loop and start it, and that works too. Futures, likewise, are compatible. Tornado prefers its own futures, but you can use asyncio's futures as well, and this is really handy, because some libraries hand you asyncio-style futures, and you'd otherwise say: hey, why did I write this whole application in Tornado if it's now incompatible? It is compatible, which is a really good thing: you can convert between Tornado and asyncio futures back and forth easily.

Finally, my minimal Tornado web app. It's very similar to the asyncio example: you have a handler class with a get method that handles the request, and you call self.write() to return the response to the caller. In the same way, you define the routing, you say listen on this port, and you start the loop. That's it.

Now let's fetch a URL in Tornado. It's quite similar to asyncio: Tornado has its own AsyncHTTPClient, here it is, which does the fetching for you. First you create an instance of AsyncHTTPClient. Then you fetch the URL; this returns a future that you yield to the caller. Tornado supports the await syntax as well; I'm just showing here that you can also use this with Python 2, which is of course not something to aspire to, but you can do it. So you yield the future to the IOLoop.
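The yield-a-future flow being described can be sketched without the network: fake_fetch below stands in for AsyncHTTPClient().fetch(url), and fetch_all shows the parallel form discussed next (newer Tornado spells it gen.multi; older versions also accepted yielding a bare list, as in the talk):

```python
from tornado import gen, ioloop

@gen.coroutine
def fake_fetch(url):
    # stands in for AsyncHTTPClient().fetch(url): yields a Future to the
    # IOLoop, which resumes this coroutine once the future resolves
    yield gen.sleep(0.01)
    return "response from %s" % url

@gen.coroutine
def fetch_all(urls):
    # a collection of futures: Tornado resolves them all in parallel
    responses = yield gen.multi([fake_fetch(u) for u in urls])
    return responses

results = ioloop.IOLoop.current().run_sync(
    lambda: fetch_all(["europython.eu", "httpbin.org"]))
```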
It does the processing, and whenever the result is ready, it saves it to the response variable, and then you can print it to the client.

Good. Then, to fetch multiple URLs, you can do it like this. I need my mouse here. I did this on purpose: I didn't include multiple URLs, to show you how easy it is to fetch multiple URLs instead of one. If urls is now a list and not a single URL, all you need to do is wrap this as a list. This is everything you need to change to fetch multiple URLs instead of one. Tornado is smart enough to notice that when you yield not one future but a list of futures, or a dictionary of futures, it will automatically process all of them in parallel. So it's super handy. Yes. Yes, you can.

Good, now I'm running out of time, so just a little wrap-up. There is Twisted, which is super well tested; there are implementations of all the protocols that you can imagine, it's production ready, and it's time proven. Use that if you want to do something really complicated, like a difficult network protocol. There is asyncio, which is the future: if you want to make something with future support in mind, take that; it uses the most modern syntax available in Python. And then there is Tornado, which is kind of a compromise that sits in between. I personally use Tornado just for historical reasons, but I'm now switching slowly to asyncio.

I think that's it. You're great; thanks for not sleeping through this talk. Now let's go to questions, first one. Quick questions, because we only have two minutes.

My question is this: I see that most of the places where you apply asynchronous calls are IO. What are the benefits of asynchronous processing when most of your work is CPU bound? What would be the benefit in that case?

Great question.
So yes, it is in the first place designed to be used this way. The first advantage that you can take is the IO; that's why it's asyncio. I would say that 90 percent of the cases of waiting in a web application are IO; I cannot even think of another case. Well, maybe you have some difficult face recognition system running and it takes time. Then, well, you will not have any benefit. One thing that you do have, though: how else would you do it? You would start a thread, normally. Yeah, that's the typical way. You can also do it with asyncio: there are the ThreadPoolExecutor and ProcessPoolExecutor classes in the concurrent.futures library, which asyncio works with. Then the advantage for you would be that you would not need to write two different code bases; you would not need to use two different approaches. You could use the perks of asyncio to wait for the IO events, and for your fancy image recognition software you could start threads but still use the asyncio syntax. So it would probably make testing a bit easier, and it would make your code consistent, because it's all written the same way. So the structure is better. If you have only, let's say, some difficult machine learning computations, there is no advantage, as I see it. I could be wrong.

One more question? Anyone? No? Okay. Is it good or is it bad? It's good.

Okay, you covered it really well. One. Yes, great.

Yes, so. I know that these are probably very similar to each other, I mean those frameworks. I'm wondering whether you compared their performance, or maybe there is something that, you know, makes one of them less performant?

Great, great question. So, yes, I did compare the performance. Of course, performance is such a term that there are numerous ways to measure it: you could measure the memory, you could measure the speed, many things.
So what I did measure is requests per second. I measured Tornado, asyncio, Node.js, and Scala's Finagle, four libraries that do the same thing. Sorry guys, Scala did the best. But at least we are better than Node.js; that's also good. Actually...

Yeah, I think we need to head on out. Yes. So, good. Thank you all for coming. Yeah, thank you very much. It was great.