Hello, everyone. This is by far the most relaxing setup for a presenter that I've ever experienced. I feel like I should be standing, but I'll try to sit up straight and pretend. So I'll be talking about moving Twisted code to asyncio, which is something now available in Python 3 that allows asynchronous code execution. And I'll talk about what that means here in a bit. But first, to give you an overview of what I want to talk about: first is just sort of what I mean by async, then some Twisted examples, followed by some asyncio examples, then moving from Twisted to asyncio. And if I have time, I'll talk about the effective use of emoji in presentations when all you've ever used is LaTeX. I've never had emoji available to me. I'm using a tool called Marp that just came out, where you write Markdown and it shows up in real time in a little viewer, which is pretty cool. So I've never had emoji, so I might go a little crazy. Time for a quick poll. How many people have downloaded, I don't know, we'll call it stuff, from BitTorrent? Show of hands. Great. Honest people. How many people have purchased drugs on Silk Road? OK, one. OK, you're lying. I know there's got to be one. What about drugs on OpenBazaar? No? Nobody? You should try it. All right, so what do all these public services, we'll call them public services, have in common? Well, they all use DHTs. A distributed hash table, or DHT, is a system that provides a lookup service similar to a hash table: key and value pairs are stored in a distributed fashion across many nodes, and any participating node can efficiently retrieve the value associated with any given key. So if you have a key and a value that you want to store, you can pick a node and store it. Here's an image of a distributed hash... oh, sorry, that's the wrong image. Nope, that's the right one. There we go. So you have data that comes in.
So you have a key that is hashed. That hash then tells you which nodes to store the data on, and then you can retrieve that data based on the hash of the key in the future. The idea is that this is a case where nodes are going to be coming up and going down a lot, so you need lots of redundancy across the entire network. You also need the ability to republish values over time. This is all handled by various kinds of distributed hash table algorithms. And it's all used in the public services that I mentioned earlier, where you have lots of nodes coming up and down, so you can't have a centralized repository for keys and values. So why are we talking about DHTs? Well, first of all, network-heavy code can really benefit from being asynchronous, and in a little bit I'll talk about what that means and how you can use it. But especially because it relies on UDP. Most of the examples that I gave earlier rely on UDP for the distributed hash table, and that's just because it's a lot less expensive in terms of network overhead. So you can send a message, and then you may or may not get a response. The node may or may not be down. But you don't have to deal with an error case where you can't actually establish a TCP connection. So this is a stateless protocol, but that means there's a lot more actual connection management: you send a message, you wait a little bit, you see whether or not you get a response back. That can be fairly expensive, especially when you're trying to contact hundreds, potentially thousands, of different nodes on a network. There are more networking things in heaven and earth than are dreamt of in your WWW-centered philosophy. When we talk about networking at these types of conferences, it's typically the web: you're talking about HTTP, or you're talking about contacting the database. I think it's important to remember that there are lots of things that use different types of networking.
And DHTs are one example that use a very special case with UDP. And then finally, I wrote a DHT library named Kademlia that could be ported from Python 2 and Twisted to Python 3 and asyncio, so I figured it could be a good talk. I actually proposed the talk before I even knew anything about asyncio, which is dangerous. But it ended up working out in the end, because it was pretty easy. So let's talk about async. Here's an example problem. This is using the SimpleHTTPRequestHandler that exists in the stock http.server library that comes with Python 3. Say we have a path at /slow, so this is if somebody goes to your server's /slow in a web browser, and we've got some slow DB query behind it that's going to take a little while to run, and somebody else wants to just load the main page. So if that slow DB query takes a really, really, really long time to run, and you have one person hitting /slow, somebody else just wants to view the main page, so not /slow, and we're just going to send them "hi there". Does anybody know what will happen for the person who just wants the fast thing, like, give me the main page? It will be blocked by the slow query. So this is the problem here: if we have a thing that might take a little bit of time, and somebody else wants something that's very fast, we have to wait. We have to block until the slow thing finishes before we can handle the fast thing for everybody else. So what are our options? Well, there are a few. We could have synchronous responses, where we deal with one request at a time. This is easy for you, but it's not so great for your user. The person who just wants to see the home page, which would load very quickly and say hi there, is going to be waiting for somebody else who wants another page that's very slow. That's not a really great option. The second one is we could just fork a new process.
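The blocking behavior being described can be sketched like this; the `/slow` path and the sleep standing in for the DB query are illustrative, not the talk's exact code:

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slow":
            time.sleep(5)          # stand-in for the slow DB query
            body = b"finally done"
        else:
            body = b"hi there"     # the fast main page
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer handles one request at a time, so while /slow is sleeping,
# the request for the main page just waits in the kernel's listen queue:
# HTTPServer(("", 8000), Handler).serve_forever()
```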
So this one is really easy for you, the programmer, but it's expensive for the computer. It's spinning up a whole new process, which has a high cost for the kernel. Then the third option is that you could start a new thread for each request, say one socket per thread. That one is a lot harder for you, the programmer, because of the global interpreter lock. You have to make sure that memory isn't being accessed by more than one thread at a time, in terms of writes especially, but reads can also be tricky. And it's also expensive for the computer; not as expensive as multiprocessing, but when you have to spin up multiple threads, especially one per socket, that can be expensive for the kernel as well. So these are three not-so-great options. The fourth option is an event-driven approach. We could use an I/O multiplexing facility, like epoll, that solves the problem with single-threaded concurrent code. This is known as the asynchronous socket approach, or select-based multiplexing. The basic idea is this: while one socket is waiting for data, do something else, like read or write on another socket, for instance. And this is done by the kernel, essentially, either by sending signals or by telling us when there's data available on one of the sockets, so we can just poll all of the available sockets. And this is known as concurrency. It's all in one thread and one process. And this is one part that I think really trips folks up: they think it's either multi-threading or multiprocessing. It's neither. It is a single thread and a single process, which is concurrency, which is the composition of independently executing processes, so progress can be made on more than one task at a time. This is not parallelism, where things are being executed simultaneously; that's something different. So this is concurrency. Now, to give you an example, we can look at my hometown Waffle House.
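A minimal sketch of that idea using the stdlib `selectors` module, which uses epoll on Linux (kqueue on macOS) under the hood: register sockets, then ask the kernel which ones are ready instead of blocking on any single one.

```python
import selectors
import socket

sel = selectors.DefaultSelector()          # epoll/kqueue under the hood
left, right = socket.socketpair()          # a connected pair, just for the demo
left.setblocking(False)
right.setblocking(False)
sel.register(left, selectors.EVENT_READ, data="left")
sel.register(right, selectors.EVENT_READ, data="right")

right.send(b"order up")                    # makes `left` readable

ready = []
for key, _events in sel.select(timeout=1): # only sockets with data come back
    ready.append((key.data, key.fileobj.recv(1024)))

left.close()
right.close()
sel.close()
```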
That is Stephen Colbert, who is from my hometown of Charleston, South Carolina. And that is the Waffle House where we both hung out, not together, at different times. But single-threaded concurrency looks like this. You've got a line cook who's preparing lots of food. There are lots of plates in front of him. He's got some scattered, smothered, covered goodness on one plate, and something else on another. And he's making progress on each of those. But there's only one guy, and he's only able to work on one plate at any given time. Now, he might be able to cook some eggs over here, and then he can split those up and put them on two plates. But the idea is that he's working on each task concurrently, because there are lots of plates that have progress being made at the same time. But he's not executing more than one plate at a given time. So we're making progress on all of these as we can. As the hash browns are ready, as the eggs are ready, they can be placed on the appropriate plate. But we're not doing all the plates at the exact same time. That would require more than one person; that would be multi-threading, that would be multiprocessing. Does that make sense? So here's an example of what this looks like in terms of memory usage. We have Apache with its multi-processing module in blue, and then we have Nginx, which uses epoll, in this sort of baby-shit-brown color at the bottom. And we see that the memory usage for Apache goes up quite a bit as we start talking about concurrent connections. This is because it has to spin up a whole new process for each new connection. This is incredibly expensive, and this is why Nginx, or even Apache with a multi-threading module, is far superior in terms of memory usage. So let's talk about how to async. I think one of the best examples in Python, at least historically, has been Twisted. Twisted is an event-driven networking engine.
It makes it easy to implement custom concurrent network applications in Python. It lets you do lots of cool stuff: non-blocking network I/O, which is the concurrent code that we're talking about. Twisted implements many protocols: SSH, SMTP, IMAP, DNS, XMPP, FTP, Finger, AMP, GPS protocols, HTTP, IRC, Memcache, NNTP, Shoutcast, Telnet, TOC. Does anybody know what the TOC protocol is? This is the old-school AIM chat protocol. That's implemented in Twisted. So it lets you create projects that utilize many of these protocols at once, and it is ridiculously easy. If you want to, say, spin up a DNS server that then alerts you via AOL AIM about every request, and also sends an email, and that you can SSH into, where SSHing into your actual running application gives you a place to run code, then Twisted can do that, which is pretty cool. It's also some of the nicest-looking Python out there. So here's an example. This is a simple echo server where we define a protocol. This essentially handles: what do we do when we receive data? In this case, we just return it. We just write it back. And then we have a factory that will build this protocol for every single connection that we get. And then we can start a reactor. Telnet is the best way to play with this: you just telnet to the right port on localhost, type some stuff and hit Enter, and it'll print it back out at you. So it's not that much code at all, and this is something that will scale rather dramatically. So what if we add a slow thing? This is the same example, except that now we have a slow database query. Let's say that's something that'll take who knows how long, but we don't want it to slow down anybody else who wants to take advantage of our echo server and its awesome functionality. So what we can do is, this will create what's called a Deferred. It's also known as a Future in asyncio.
And we can give it a callback, in this case a send_response that will get whatever the result of our query was, and then we just write that out to the transport. So this is a case where it's probably not an echo anymore, because you're taking the input data, running some query, and then spitting something else out, and you're doing that based on this callback. So while the slow database query is running on one hand, you can be doing lots of other stuff. You can be accepting other connections, starting other database queries, which is pretty cool. But of course: eww, callbacks, what is this, Node? Actually, there's a Node talk over in the other open conference area, where they're probably looking at lots of callbacks right now. So the great thing in Twisted is that they added the ability to wrap a function using a decorator called inlineCallbacks that will allow you to yield a Deferred, the thing that might take a little while. And then we don't have to use callbacks, but it does add this requirement that we add a decorator. And there's one peculiarity here that doesn't exist in asyncio, which is that we can't actually return. So this function becomes a generator where we're yielding. There could be lots and lots of yields now, yielding as we go, but we can't actually return anything. So yeah, slow_db_query does still return a promise, or in this case a Deferred, for Twisted. But we can yield that, and the wrapper will take care of yielding control. So we're essentially turning this into a generator that can yield control. And then we can continue our reactor loop, where the reactor is looking at all the Deferreds that exist, and it's like, OK, progress can't be made here, let's move on to the next one, and then eventually it gets back to it. And this is pretty much copied in asyncio, except with some better syntax. This is exactly what asyncio does, but with cleaner syntax.
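The same Deferred-plus-callback shape exists on asyncio's Futures, so here's a runnable sketch of the pattern being described; `slow_db_query` and `send_response` are made-up names echoing the talk's example, not Twisted's API:

```python
import asyncio

results = []

async def slow_db_query(data):
    await asyncio.sleep(0.01)            # stand-in for the slow query
    return data.upper()

def send_response(future):               # the callback, like Deferred.addCallback
    results.append(future.result())

async def handle(data):
    task = asyncio.ensure_future(slow_db_query(data))
    task.add_done_callback(send_response)
    await task                           # the loop is free to run other tasks here

asyncio.run(handle("hello"))
```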
And I'll show you that, and maybe it'll make more sense. Any other questions on Twisted? All right, how to async with asyncio. So first, it's worth noting that the asyncio package has been included in the standard library on a provisional basis, which means that they could just remove it or completely change the API at will and nobody can complain. So you should know that. So, a little history. Python 3.3 added yield from, which allowed a generator to delegate part of its operations to another generator. It allowed you, within one generator, to basically say: delegate all the next calls to this other generator, with a nicer syntax, yield from, as opposed to having to actually loop through the other generator and re-yield its values. This is just some syntactic sugar. That also made asyncio possible, which at the time, and before that, was known as Tulip. Then Python 3.4 added asyncio to the standard library. And then with 3.5, we got some more syntactic sugar with await and async, and I'll show you what those do in a second. What if you still use Python 2? How many people are stuck on Python 2? All right, well, there's Trollius, but not really. They just said, stop using it. It just prints out a warning that says you shouldn't use it. So use it at your own risk. So, in the asyncio innards, we've got a pluggable event loop, just like Twisted with the reactor. We have transport and protocol abstractions, which were pretty much taken straight from Twisted. Actually, the definitions for those, they're essentially interfaces, these abstractions for transports and protocols; the actual definitions of the methods are almost exactly the same, except that Twisted used camel case, and asyncio uses underscores between the words. And that's pretty much the only difference.
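As a sketch of those transport and protocol abstractions, here is the echo idea from earlier expressed with asyncio's `Protocol` interface; note `data_received` where Twisted spells it `dataReceived` (port 0 just asks the OS for any free port):

```python
import asyncio

class Echo(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport       # keep the transport so we can write

    def data_received(self, data):       # Twisted spells this dataReceived
        self.transport.write(data)       # just write it straight back

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(Echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello")
    echoed = await reader.readexactly(5)

    writer.close()
    server.close()
    await server.wait_closed()
    return echoed

echoed = asyncio.run(main())
```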
So asyncio has support for TCP, UDP, and SSL, as well as subprocess pipes and delayed calls, but it does not include any actual protocol implementations within it, like Twisted does with SSH and SMTP and all the rest. It also provides an interface for passing work off to a thread pool, for times where you absolutely, positively have to use a library that makes blocking I/O calls. All those things are pretty much the same in Twisted, just with different actual interfaces. So the event loop runs in a single thread and executes all the callbacks and tasks in that same thread. While a task is running in the event loop, no other task is running in the same thread. When the task uses yield from, the task is suspended, and the event loop is able to move on to the next task and execute that. Any questions on that bit? All right, so this is done using coroutines. Coroutines are computer program components that generalize subroutines for non-preemptive multitasking, by allowing multiple entry points for suspending and resuming execution at certain locations. This is a really complicated way of describing something that we already do. To give you an example: in the olden times, if we needed a range, what we could do is take some value that we want to count up to, and then iterate; while some index that we keep track of is less than that value, we append it to an array that's getting bigger and bigger and bigger, and then return it. That's how we used to do ranges in the olden times. Nowadays, in the modern times, we can create a generator, and so we can actually yield these values. We don't need to keep all the past values on hand and then return them all at once. Instead, we can simply yield control back up to whoever is calling us and give them the value so far. This generator is a coroutine.
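The two range styles just described, side by side; `old_range` builds the whole list up front, while `gen_range` yields each value and hands control back to the caller:

```python
def old_range(n):
    # olden times: keep every value around, return them all at once
    result, i = [], 0
    while i < n:
        result.append(i)
        i += 1
    return result

def gen_range(n):
    # modern times: a generator yields control (and a value) to the caller
    i = 0
    while i < n:
        yield i
        i += 1

assert old_range(5) == list(gen_range(5)) == [0, 1, 2, 3, 4]
```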
So this is a great example of what a coroutine is: we can yield control back up to whoever's calling us, and when they get a value, they can continue execution. So here's an asyncio example. In this case, we are using async def as opposed to a regular def. This is available in Python 3.5; prior to that, there's a decorator that you can use, and the syntax is yield from instead of await, but that's the only difference. So when we put async def instead of a regular def, we're essentially saying this thing is a coroutine, which means this is a thing that will act as a generator, a thing that we will yield control from. And that's what happens on the second actual line of code inside the compute function, where we await asyncio.sleep. So what's happening here is, await is the same as a yield from if we were to use the decorator instead for a coroutine; the two are synonymous. We're saying: this generator will be yielding values for us, and we're a generator too, so we're essentially just yielding from this other generator. asyncio.sleep is very different from time.sleep. time.sleep actually suspends the execution of the current thread. asyncio.sleep just says, I'll yield a result when this amount of time has finished, and that is based on the internal reactor loop, which keeps its own internal clock. So what happens when you actually call one of these async defs, this generator coroutine, is that it immediately returns. This is something that you immediately get a result from. So on the second line from the bottom, where we say run_until_complete with print_sum, we immediately get a coroutine object back, and then the loop actually starts running the generator, iterating through what it gets back.
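The example being walked through is close to the compute/print_sum snippet from the asyncio docs; roughly this, shortening the sleep and returning the result so it can be inspected:

```python
import asyncio

async def compute(x, y):
    print("Compute %s + %s ..." % (x, y))
    await asyncio.sleep(0.1)     # yields control; the loop may run other tasks
    return x + y

async def print_sum(x, y):
    result = await compute(x, y) # suspend here until compute finishes
    print("%s + %s = %s" % (x, y, result))
    return result

loop = asyncio.new_event_loop()
result = loop.run_until_complete(print_sum(1, 2))
loop.close()
```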
So in this case, print_sum will await compute, and compute actually just sleeps for a bit and then returns the sum. This is an example taken from the asyncio docs, so forgive the kind of crappy sequence diagram here. But what's happening is that the generators are yielding from this other generator that's yielding, and it eventually gets down to the await sleep, and then the reactor loop that's running is like, OK, there's nothing I can do until that finally yields, so I'll go do something else. One second later, it actually gets a result back, and then it can raise the StopIteration through the generators all the way down, and then the whole loop is stopped. Any questions on that example? Conversion time. How many people are considering or planning to move Twisted code to asyncio? What are the rest of you doing in here? How many people are considering or planning to move synchronous code to asyncio? OK, great. So let's talk about reasons you'd want to do that. Why convert synchronous Python to asyncio? Well, if you have lots of I/O and you want to go fast. This includes anything like database reads and writes over a network or a socket, or any sort of networking calls, whether that's DNS lookups or HTTP, client or server side. This unfortunately does not include file system I/O, for some rather boring technical reasons; that's not something that can be done with the reactor loops and the libraries that are used for the multiplexing. Also, if you're cool, because async is fun. And there are some reasons you shouldn't do it: if the Python 2 to 3 conversion is too burdensome, in which case you should probably just check out Twisted, and understand that you'll have to rewrite your code if you ever do end up converting; or if you don't have any I/O, or scaling isn't really a goal. This includes, probably, the case of: I have tons of calculations that take forever. This is not going to make those go fast.
What this does is help you, especially with networking, or if you want to actually run synchronous code in a thread pool; this is a library that will help you do that, but it will not make all slow things go fast. It allows you to execute code while sockets are taking time, but it will not make slow things go fast. So why convert Twisted to asyncio, for the handful of folks in here who are thinking about that? Well, one is that you get to remove a dependency, and you gain core support. And then also, I think the code is prettier and smaller with async and await. I think that definitely adds quite a bit of help to the maintainability. Now, there are some reasons why you probably shouldn't do that: if you can't convert from Python 2 to 3, or if you use a transport or protocol that isn't supported by a library. So if you wanted to write an AOL AIM chat client, then you'll probably have to implement it yourself instead of using Twisted. And then also, you may end up adding many dependencies. If you're doing lots of different networking things, like SMTP and some DNS and something else, then you're going to have to add a separate third-party library for each one of those protocols. So there are libraries out there, but that means that instead of just Twisted, you're now going to have to use lots of libraries. Why did I convert Kademlia, the distributed hash table that I wrote? Well, I wanted to convert to Python 3 anyway. It only relied on Twisted and another library I wrote called rpcudp, which is a library that provides a stateful-ish connection for RPC, remote procedure calls. You can run a remote procedure on another computer using UDP as opposed to TCP.
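That thread-pool escape hatch mentioned above looks roughly like this; `blocking_io` is a made-up stand-in for a library call that blocks:

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.1)   # a blocking call the event loop must not run directly
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    # None means the loop's default ThreadPoolExecutor; the loop stays free
    # to service other tasks while the thread does the blocking work
    return await loop.run_in_executor(None, blocking_io)

result = asyncio.run(main())
```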
So the amount of network traffic is much, much lower, but your latency is much, much higher, which is great if you don't know whether another machine is up, and if you have to talk to lots of them at once, like in a distributed hash table. And I knew that I could migrate rpcudp to asyncio and remove all of its dependencies, so it would no longer require Twisted, and my actual Kademlia library only relied on that rpcudp to talk to all the other nodes. So, the process I took: I used the 2to3 conversion tool, file by file, and it turned out that everything actually was already Python 3 compatible, except for lots of fun with bytes when it came down to the actual UDP data exchange. I replaced some callbacks with awaits, and defs with async defs, then turned on some awesome logging options in asyncio, and then tested. So what happened was, the code looked like this; this is Twisted code. And maybe this is just my own crappy coding style, but I always end up with these defs inside of defs, and then I use those as callbacks, which is not pretty, and they could totally be methods. In this case, there's a bootstrap method: you can bootstrap a node onto the DHT by giving it the addresses of other nodes that you know about, and then it will contact all of them and ask them about their friends, and they'll tell you about their friends, and they'll tell you about their friends, and then you can sort of bootstrap your knowledge of the entire network. And in this case, I had to ping all of them, and then I get some results back, and then I can actually start crawling them. And so I had this internal def that I created. This looks ugly, and it's not great, and it could have been done with an inlineCallbacks and maybe another method, but defs inside of defs are just kind of gross looking. So this is the asyncio conversion, and you'll notice it's less code, it's prettier, there's no internal def.
And I can just say the whole thing is async, and then I can ping all the addresses I know about, and then I get back some results, and then I can crawl all of those afterwards. You'll notice that I'm using asyncio.gather in this case, which basically says: take all of these things that are going to take a while to do, and tell me when they're all done and give me all their results. So I think it looks prettier, less code. And that was what most of the conversion ended up looking like. So, some lessons. asyncio resulted in cleaner code and less code. Unfortunately, there's no unittest support for async or await, which means another dependency has to be added; there are libraries out there that you can use to help you test async and await code. There's no LoopingCall concept, which is something I used in a few places in Twisted. This is where you say: do this thing every 10 minutes, or do this thing every five minutes, and the reactor loop will take care of executing that thing every period of time. This was actually implemented and then entirely removed from asyncio; they have this long, long rambling explanation for why, but I found that to be a pain in the ass. asyncio's context managers, like async with, are awesome. This allows you to do things like transactions for databases, and those look really pretty compared to what previously had to be done. And then asyncio has fantastic error detection with event loop logging. As some examples: in the top one, there's this case where I've got a coroutine where I just print hello, and when you call ensure_future, what happens is it adds the coroutine to the scheduler. You have to schedule any coroutine that you create. It basically says: hey, reactor, run this thing whenever you get a chance. But I never actually start the loop.
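The shape of that converted bootstrap, with made-up node addresses and a fake `ping` standing in for the real rpcudp call:

```python
import asyncio

async def ping(addr):
    await asyncio.sleep(0.01)        # stand-in for an RPC over UDP
    return (addr, "pong")

async def bootstrap(addrs):
    # fire every ping off concurrently, then collect all results at once;
    # gather preserves the order of the awaitables you pass in
    return await asyncio.gather(*(ping(a) for a in addrs))

addrs = ["10.0.0.%d" % i for i in range(5)]
results = asyncio.run(bootstrap(addrs))
```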
So you have to say loop.run_forever, or loop.run_until_complete and give it a coroutine. In this case, I never even started the loop, and Python 3 detects that. It's like: hey, you never started the loop; there was this task you made, but it's still pending. Which is super useful. That doesn't exist in Twisted. And then there's also the case where I have hello_world. Now again, coroutines return immediately. So in this case I call hello_world, and it returns immediately and doesn't actually execute the code that's defined within, because async def makes it a coroutine. So it looks like I'm calling it, hello_world, I'm calling it; what's happening is it's returning immediately, not executing that code, because that's something that the scheduler will do. But the scheduler was never actually given that code to run, so that's a problem. We never actually awaited that thing, we never gave it to the scheduler, the loop never ran it. And asyncio tells you that, which is pretty cool. That didn't exist in Twisted; there's lots more debugging that I had to do when writing Twisted code. So that's cool. So, some final thoughts. asyncio is definitely not as mature as Twisted. There are undocumented unit test utils in the asyncio source, which I discovered today, that are replicated by a few libraries. So there's not a right way to do testing with asyncio yet. And then Twisted has many more implemented protocols. There are lots of libraries that implement, say, SMTP or HTTP differently for asyncio, and it's sort of hard to find the most mature one or the best one so far. And then, I think it's important to note that despite its provisional status, there are a lot of these libraries that already exist for asyncio. It's just that you're sort of in the early days of Node, where it's like: well, what's the best library for connecting to Postgres?
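That hello_world gotcha is easy to reproduce: calling a coroutine function only builds a coroutine object; the body runs when the loop actually runs it.

```python
import asyncio

ran = []

async def hello_world():
    ran.append(True)         # only happens once the loop runs us
    print("Hello World")

coro = hello_world()         # returns immediately: just a coroutine object
assert ran == []             # the body has not executed yet

asyncio.run(coro)            # now the scheduler actually runs it
assert ran == [True]
```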
There are 3,000 of them, and they're all crappy and not well maintained. It's not quite that bad, but there's not a single library like Twisted, where it's: this is how we're going to implement this thing, and everybody focus on this one thing. And then finally, I found that the asyncio code was cleaner and smaller, and I think that's a huge reason why it's going to be my de facto choice going forward. Any questions? These, by the way, are the repos, and under the Python 3.5 branch you'll find all the Python 3.5 code. Mm-hmm. Yeah, so when you try to run this code, Python will just immediately print out the exception. Yeah. Any other questions? Yes. Yeah, so, I think it was a year or two ago that Glyph from Twisted wrote a long blog post about how Twisted wasn't going anywhere and was still going to be around. You can basically take the Twisted event loop and use it under asyncio, or vice versa, which is pretty cool. The number of protocols that are implemented in Twisted is great. I think there's some talk about converting those to asyncio, but the syntactic sugar that's been added, I think, gives you a lot more power that you won't be able to get with Twisted unless they convert all that stuff. They're focused on Python 3.5 support just in general, across the entire code base, and to get to Python 3.5 support with the additional stuff would require lots of rewriting. Honestly, my thought is, unless one of the cases for why you should or should not convert from one to the other is satisfied, I think it's just better to use asyncio in most cases, unless you have to use Twisted. So yes, you'll get an error if you simply try to call one of those functions without eventually awaiting it. Now, you could call it and assign the result to a variable, and then later on await that variable, because it's just a generator.
So you can take that generator and you can pass it as a parameter to a function, or do something else with it. But if you never actually await it, either in another coroutine or by scheduling it via the loop itself, then you'll get an exception. So it'll be very obvious if you've done something wrong. Can you define async functions as function partials? So you can create sort of generalized functions, where you say: OK, I might need to take this function, make it async, and then use my wrapper around it in different ways. Yes, and you can do that using the coroutine decorator. And just by calling that decorator on whatever you put together, it will work exactly the same as this. Yes. Have I used asyncio with multiprocessing? I have not. And it would have to be a very special case, I think, because at that point I might as well just execute all the processes individually; there are certain constraints. I haven't used it, but I know that if you want to be able to get anything back, everything has to be serializable via pickling. So you have to be able to pickle all the arguments, and you have to be able to unpickle the result of running the thing. So, I don't know, it would have to be a very special case. Yes. No, and I have an aversion to threads in general, mostly because of memory management. None of this asyncio code is thread safe, and I don't like the restrictions, having to worry because you may have code that is run in a separate thread; the magic there worries me. Yes, so I know that there has been some work in the past to make the Futures that are within asyncio and the Deferreds in Twisted interoperable. And I think that project, well, the few projects I saw, were abandoned.
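On the partials question: `functools.partial` does work with coroutine functions, since calling the partial still returns an awaitable coroutine object. The names here are illustrative, not from the talk's code:

```python
import asyncio
from functools import partial

async def fetch(host, port):
    await asyncio.sleep(0)               # pretend to talk to the network
    return "%s:%d" % (host, port)

fetch_local = partial(fetch, "localhost")  # pre-bind the host argument

result = asyncio.run(fetch_local(8080))    # the partial returns a coroutine
```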
I don't know if there's been much effort in the last year or so on that type of thing, but they are conceptually roughly the same, just definitely not interoperable, because the loops are expecting different things. Any other questions? Oh my goodness.

There's asynctest, I think, which seems to be one of the preeminent ones, and I've played with it a little. I literally discovered two hours ago that there are test helpers in the asyncio module in Python, and I was going to play with those and see if there was enough of what I needed there. Do you have a favorite? Do you? I use those now, it's really interesting. Yeah, yeah, I saw that. Your time is second. Yeah, and that is something I will say is really strong in Twisted. They have their own testing utility called trial, and they simply extend the unittest TestCase for all the actual test classes, and you can do all kinds of fun stuff with that. They don't document it very well, but the code is clean enough that you can just read it and figure out how it works, and it is solid. I wish that existed in asyncio.

I don't think there's anything that can be done there. It's an actual constraint of the underlying libraries: the epoll library on Linux, for instance, or its counterpart on OS X, will not allow you to handle a regular file's descriptor the same way, so the signals that bubble up to say, oh, this thing is now ready, are not supported by epoll for regular files. There's actually a bunch of text inside the asyncio module explaining why they can't support asynchronous file IO, and it's because of the underlying library. That said, I have no idea how Node does it, because Node supposedly does have asynchronous file IO.
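Back on the testing question: even without asynctest or the asyncio test helpers, a coroutine can be driven from a plain `unittest.TestCase` by running it to completion on a private loop. A minimal sketch; the `double` coroutine is invented for the example:

```python
import asyncio
import unittest

async def double(x):
    # trivial coroutine under test
    await asyncio.sleep(0)
    return x * 2

class DoubleTest(unittest.TestCase):
    def setUp(self):
        # each test gets its own event loop
        self.loop = asyncio.new_event_loop()

    def tearDown(self):
        self.loop.close()

    def test_double(self):
        # run the coroutine to completion synchronously
        self.assertEqual(self.loop.run_until_complete(double(21)), 42)

if __name__ == "__main__":
    unittest.main()
```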
But I don't know how they're doing it. OK, that could be it then. Yep. Mm-hmm, yep.

So, you know, multiprocessing. Yeah, like I said, yep, or you can use multithreading. So there is the ability, in Twisted it's called deferToThread, and asyncio has a thread pool, where you can basically say, take this synchronous thing, go run it in some other thread, and when it finishes, give me the result. So you can use that, or the process pool side of asyncio, to run separate processes as well. If you have a case where the code you're trying to run is embarrassingly parallel, then that's easy enough. If what you actually need is multiple threads mutating the same memory locations, then that can get really tricky, and you should probably use a more fun language like Clojure.

I don't know for sure. I know that Django had some sort of module that allowed you to use the Twisted event loop to respond, and I wouldn't be surprised if they also had an asyncio method for running the WSGI stuff too. I don't know of any others; I don't know what Tornado's doing, and I haven't looked at that in a while. Any other questions? All righty, thank you all very much.
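As a coda to the deferToThread question above: asyncio's counterpart is `run_in_executor`, which hands a blocking callable to a thread (or process) pool and gives back an awaitable. A minimal sketch, where `slow_square` is a made-up stand-in for real blocking work (on the Python 3.5 of this talk you would use `asyncio.get_event_loop()` rather than `get_running_loop()`, which arrived in 3.7):

```python
import asyncio
import time

def slow_square(n):
    # a blocking, synchronous function (file IO, a C library call, ...)
    time.sleep(0.01)
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    # None selects the loop's default thread pool; a
    # ProcessPoolExecutor could be passed instead for CPU-bound work.
    return await loop.run_in_executor(None, slow_square, 7)

loop = asyncio.new_event_loop()
result = loop.run_until_complete(main())
loop.close()
print(result)  # 49
```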