So, hello. I am Amber Brown, commonly known as HawkOwl on the internet. Here is my Twitter and my website. I have quality Twitter posts; those of you that follow me know that I'm lying. I live in Perth, Western Australia. If you're wondering where that is in the world, it's right there. I've come about 13,000 kilometres, so hopefully it's been worth it so far. My main open source work is on the Twisted project. I'm a core developer and a release manager, and I've single-handedly ported the most code to Python 3: I personally had a hand in porting about 40,000 lines, which is about 20 to 25% of Twisted's code base, plus some auxiliary things and some things that use Twisted. So this is pretty much how it is when I'm working with it. And I'm here today because of my work, which is Crossbar.io. We do WebSocket routers and WebSocket RPC for your browsers and all of that. I do the same sort of release management there as well, binary release management and more porting to Python 3, plus web API and REST integration into Crossbar. The original idea for this talk comes from two people: Russell Keith-Magee and Glyph Lefkowitz. Russ asked me the question: why is Twisted relevant when there's asyncio? Now, he knows why it's relevant, but he likes to play devil's advocate. I think it's worth answering for those who aren't as ingrained in it as I am, or as he is from me ranting about it endlessly whenever I see him. Glyph published a blog post about it that covers a lot of the same ground I'm going to cover, and it's good; I recommend checking it out. He talks about it from a longer perspective, since he's been part of the project for a very long time and I've been on it for three or four years, so the timescales are different. Now, one of the core problems you have when writing almost any software is that you want to do some form of I/O. I mainly do web stuff, so: web frameworks. They're all pretty good, except that the problem, per the hilarious joke on the slide, is that the more conventional ones like Django, Pyramid, and Flask really only serve one request at any one time. The way you get around that is to deploy using runners, which make multiple copies of your application and put those copies in threads and processes. Each copy is still processing one request at a time, but you're handling requests in parallel. So it gets around it in an okay sort of way. In Python specifically, though, threads or processes won't really help you with what's called C10K, which is 10,000 concurrent connections. When you try, it ends up looking a bit like this. That's mainly because in Python, and in programming languages in general, threads are very hard to use safely. You end up with race conditions, and it's really hard to reason about your code just by reading it, because the threads decide when to switch between each other, not you. You don't get control over that. You can try to get control using locks, but that doesn't always work. Threads are also a bit hard to scale with in Python specifically, because if you have one thread per connection, you pay per-thread memory overhead for the thread's stack.
By default that's 8 megabytes of virtual memory per thread. Even if a thread doesn't use all of it, that stacks up pretty quickly: at just 128 kilobytes per thread for the stack and various other things, 10,000 threads give you about 1.3 gigabytes of overhead without doing any processing at all. None of your business logic, none of your fancy application stuff, just threading. And until the Gilectomy happens, or until software transactional memory in PyPy becomes something you can use without downsides, you don't get parallelism either, because the global interpreter lock means only one of those threads may be running Python at any one time. You can get around that with C extensions, doing the heavy lifting in those, but if you're writing Python, you're probably going to have everything in Python, at least in the early stages. You can't afford to put everything in C extensions. Cython makes this a bit easier, but it's still something you have to special-case, which splits the easiest code to maintain from the fastest production code. That's not good, because you want those to be one and the same. You also won't do threads properly. Pretty much no one in this room can do threads properly. Even if you've written threaded code, there's probably some subtle thing going wrong, and you won't know until it's really, really bad. A fun thing is when people say you can do threading properly, especially in C. Sure, it works; there are applications that use it. But look in the CVE database and search for "race condition": there's been untold damage from threads not being handled properly, from one small race condition. Microthreads, as in gevent and Eventlet, aren't really better. They still have some similar problems, and Glyph talks about them much better than I ever could, so I recommend checking that out. All these slides will be up online, so you don't have to worry about copying down the URL. Now, what Twisted uses, as do Tornado, asyncio, and all those frameworks, is non-threaded asynchronous I/O. We all use it; it's the common approach, compared with Eventlet and gevent, which use green threads. Twisted was one of the first. It's been around for all of known history, at least since 2001; there were bits in CVS before that, but all of that is lost to time, so let's just pretend it started in 2001. We recently moved to Git. We're catching up on the 21st century, it's amazing. asyncio is a bit newer: 2012 saw some of its first commits. At the very core, they both use identical system calls, the selector functions: select, poll, epoll. You give them a list of file descriptors (sockets, open files, Unix pipes, really anything that has a file descriptor) and they tell you which ones are ready to have operations done on them. The most common operations are reading and writing, because you won't be able to read anything if the client hasn't sent you anything, and you won't be able to write anything if the send buffer is completely full.
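None of this is from the slides, but to make the readiness idea concrete, here's a minimal sketch using Python's standard selectors module, which wraps select, poll, epoll, and kqueue for you (the port and names are mine):

```python
import selectors
import socket

# A minimal readiness-based echo server: the selector tells us which
# sockets can have an operation performed on them without blocking.
sel = selectors.DefaultSelector()  # picks epoll/kqueue/select per platform

listener = socket.socket()
listener.bind(("localhost", 8000))
listener.listen(100)
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    for key, events in sel.select():  # blocks until something is ready
        sock = key.fileobj
        if sock is listener:
            conn, _ = sock.accept()   # won't block: the listener is ready
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(4096)    # won't block: the selector said so
            if data:
                sock.send(data)       # echo it back
            else:
                sel.unregister(sock)  # client hung up
                sock.close()
```

A real loop would also watch EVENT_WRITE and queue outgoing data until the send buffer has room, which is exactly the queueing that Twisted and asyncio do for you.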
These selector functions tell you when you can do these things without blocking, so you can perform the operation right then and it won't take an indeterminate amount of time. Selector loops can quite easily handle thousands and thousands of open sockets and events per second. For example, here, just on my Mac: it can support C10K simply by raising ulimit so it can accept 10,000 connections, and it works fine without much CPU load. It's something you can do on commodity hardware, on a standard laptop. You might want a beefier machine if you're actually serving 10,000 concurrent real people all doing real work, but handling that many connections is not a problem. With these selector loops and selector frameworks like asyncio and Twisted, you can just do it; you don't need to drop into C to handle that many connections. Generally what happens is that data is channelled through a transport (for example a TCP connection, a UDP datagram, or a Unix socket) to a protocol implementation. A protocol implementation is the thing that actually takes the bytes and transforms them into something useful, for example HTTP: the wire gives you a series of raw bytes with whatever data and content might be there, and the HTTP protocol parses that into something you can interact with. In these frameworks, sending data is queued until the network is ready, because if you're trying to send a one-megabyte file and you've only got a 512 kbit uplink, it's going to take quite a few cycles to push all of that data onto the network. And nothing blocks, because everything waits until it can be done. The framework just says: while I'm waiting on this, you can serve all these other connections, I don't care. The thing that drives these selector functions is called an I/O loop, or in Twisted parlance a reactor, named after the reactor pattern: data coming in is events, and you react to them. The great thing about it is that you end up with much higher density per core. That C10K demo screenshot was using only one core, no multiple threads, no multiple cores, just a single CPU. You also don't need threads at all, so it works on platforms that don't support threads, and it means you don't pay the thread overhead. You still get no parallelism, since you're on one CPU and one thread, but you do get concurrency: you can handle multiple requests at once, because when you can't continue serving one request, you yield and let the loop handle the next one. The best case for it is the sort of application a lot of us are writing today, ones that do a lot of I/O, like sending things down the network. On something like Twitter, you don't do a lot of CPU-intensive work: you send some pictures and some text, and you mainly wait for the database to come back, or for the client to send you information. Because you're not using much CPU per connection, you can hold 10,000 or 20,000 connections, as many as you want, really; as many as you have RAM for, as many as your I/O loop can handle in one second. It also works really well with high-latency clients, because clients might take an indeterminate amount of time to respond.
If you have 10 threads and each one is serving a client that's uploading a picture, and a client suddenly decides not to send you any data for 500 milliseconds, that thread is occupied for 500 milliseconds doing nothing. It's still blocked, still waiting for data to come. And nowadays you're mostly waiting either on the client to send you information or on the database to give you information. Most web applications today are thin layers on top of databases and task-management systems like Celery, which do the hard processing on specialised farms or other boxes, not your web servers. These implementations come with some nice abstractions so you don't have to handle all of this directly. The most common one is an object that stands in for a result that will arrive in the future, plus a way of being told when that result has arrived. The Future in asyncio is one of these; Twisted uses Deferreds. They are very much the same. They have slightly different ways of operating, but they share the core concept: a thing you can pass around when you don't actually have a result yet. With a Deferred, for example: you have a Deferred, which is an empty thing with no result yet; you tell it to print when it gets called back with a result; and then you call it back with a result. Now the Deferred has a value, and it works through the callback chain. Futures work very much the same: you have a Future, which is empty, you add a callback to it, and then you set a result on it. The spelling differs, callback here and set_result there, but they are pretty much identical, apart from one little thing: Deferreds run callbacks as soon as they are able to, synchronously, without yielding to the I/O loop, whereas Futures schedule the callback to happen on the next I/O loop iteration, which is slightly fairer scheduling. That is pretty much the core difference. (I'll show a sketch of both in a moment.) So if we've got Twisted, why do we need a new solution? Why do we need asyncio? Well, 2012 was a bit of a mess as far as python-dev was concerned. gevent wasn't ported to Python 3 yet; that only happened this year, I believe, so it's been a long time coming. Not much of Twisted was ported, so you couldn't really build real applications on Python 3. Most of Tornado had been ported, so you could write for that, but Tornado is somewhat less used than gevent and Twisted, so one of the major frameworks was ported, but it didn't cover everyone. Elsewhere, Node.js was completely exploding in popularity. Everyone was using it; people were saying, yeah, let's port everything to it. Everything was happening very fast over there. And async/await landed in .NET 4.5, a nicer way of writing this sort of asynchronous code. Node.js is quite similar to asyncio and Twisted: it has the same kind of event loop at its core, using libuv, which is a layer on top of all of those selector functions, and it all works very much the same. And it gave credence to the idea that this is a workable solution for pretty much everyone, that it was no longer niche, that you can't just put things in threads anymore, that there was a real use case for this.
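Back to the Deferred/Future comparison from a moment ago: this isn't the literal slide code, but a minimal sketch of both primitives side by side:

```python
from twisted.internet.defer import Deferred

# Twisted: make an empty Deferred, attach a callback, then fire it.
# The callback runs synchronously, the instant callback() is called.
d = Deferred()
d.addCallback(print)
d.callback("a result")  # prints "a result" immediately


import asyncio

# asyncio: the same shape, but the done-callback is scheduled onto
# the event loop and runs on a later loop iteration instead.
async def main():
    f = asyncio.Future()
    f.add_done_callback(lambda fut: print(fut.result()))
    f.set_result("a result")  # callback runs on the next loop pass
    await asyncio.sleep(0)    # yield so the scheduled callback can run

asyncio.get_event_loop().run_until_complete(main())
```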
Python 3 adoption was kind of, well, it was always going to take a while for Python 3 to get massive adoption. But there wasn't really anything that was Python 3's cool thing, nothing you could point at: there was no typing yet, there was no asyncio. There were nice cleanups, but that was about it. So why asyncio specifically? It was designed around coroutines. Coroutines in Python are a special kind of generator, and a generator is a function that can suspend: it might not have a value yet, so it suspends until it does. Kind of similar to a Future or a Deferred. Python 3.5 especially contains syntax that makes Futures act like coroutines and coroutines act like Futures, and various other things that make them work together. For example, this code here (reconstructed below) is just a loop that prints the time every second for five seconds. You'll see the special thing there. Do I have a pointer? Yeah. The special thing here is async def. That is what defines a coroutine. You also have this special keyword called await. Await is very much like yield in Python 2 and Python 3, except it doesn't just talk to the generator itself: it delegates to a subgenerator. It's a little strange how that works, but it means the implementation is a lot cleaner, and it works a lot more nicely. async def, async for, all these things introduced in 3.5 made working with coroutines and async code so much easier, because all you need to do is type await, and if you can await on the thing, it will. So asyncio.sleep(1) returns a future, and in this coroutine you can await that future, which waits for the result. Let me explain that a bit better: this line here suspends until the future returned by asyncio.sleep actually has a result. It doesn't just keep looping; it waits one second. And because it goes through the event loop, the coroutine doesn't sit there burning CPU for a second. All it does is tell the reactor: in one second, stop suspending this, give this future its result, and then the loop will continue. You don't have to worry about callbacks, you don't have to worry about structuring your code around them, because it just reads like your old Python code used to. It's very Pythonic. Another thing asyncio was really meant to do was repair the library API fragmentation, because Twisted, Tornado, gevent and so on all have a different way of doing things, and there shouldn't really be so many different ways of doing the same task. If you look at the Zen of Python, there should be one, and preferably only one, obvious way to do it. So the hope was: here's the one way to do it, all these frameworks can implement it, and you don't have to do things three or four different ways, just one. Of course, we all know the xkcd comic: there are 14 competing standards, we should unify them into one, and now there are 15 competing standards. But, you know, this time it's different. We hope.
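The slide example, roughly reconstructed (the exact code on the slide may have differed slightly, and the function name is mine):

```python
import asyncio
from datetime import datetime

async def print_the_time():
    # Print the time once a second for five seconds. Each await
    # suspends this coroutine; the event loop is free to run other
    # work until the future returned by sleep() gets its result.
    for _ in range(5):
        print(datetime.now())
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
loop.run_until_complete(print_the_time())
```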
It was also meant to reduce duplication, because asyncio would implement the thing every one of these selector frameworks carried internally: the selector loop. If asyncio brings its own, then all these other frameworks don't have to maintain what is essentially the same code. There can be one central, centrally maintained implementation, where all the bug fixes happen and all the knowledge gets poured in, instead of several implementations with several small bugs that each work slightly differently or have their own downsides. Just one, which can hopefully be the best one. So, does asyncio replace Twisted? Well, no. They both do the same sort of thing. They have cooperative, single-threaded multitasking. They have primitives for supporting asynchronous programming: Futures, Deferreds and coroutines, inlineCallbacks in Twisted, sort of. They use the same system APIs: select, poll, epoll, kqueue, IOCP on Windows. And asyncio took the protocols-and-transports abstraction from Twisted, which separates the thing that is the wire from the thing that processes the actual bytes off the wire, as two separate concepts. That's really handy when you have things like TCP, except it's actually TCP over some other protocol, which happens: TCP over SOCKS, for example, the old proxy sort of thing. It works a lot better if you separate them, so the individual protocols don't have to care what their transport is. So asyncio has the same sort of benefits as Twisted in that regard. It's also very architecturally similar internally: if you read the Twisted reactor source code and the asyncio event loop source code, you can see the same things being done in slightly different ways, but they're essentially doing the same thing. And it's a newer, standard API that's just there in Python 3.4. You don't have to pip install anything or worry about any of that; you just import asyncio and off you go. Now, where this falls down: Twisted is an async I/O thing, and asyncio is an async I/O thing, and they're the same kind of thing, and surely you only need one of these things, so just replace your Twisted usage with asyncio. Well, that's some work, because asyncio is like an apple and Twisted is a fruit salad. Twisted is, for a start, much bigger. If you look at the lines of code, it's a lot more. We also have a lot more comments, which I like. If you remove the tests and compare just the pure implementations, Twisted is about 10 times bigger. That's not because Twisted is 10 times bloated, or does the same thing in 10 times the code; it does a lot more. It's not only the core reactor, it's also protocols: HTTP, IMAP, POP3, DNS, SSH, all of these different things. And it does all of this in one package, because when Twisted was first made, nothing else really did it the same way, and there were a lot of things Twisted needed that Python didn't have yet, ordered dictionaries for example. We had our own; I think we only recently removed it when we dropped 2.6 support. We had to keep our own ordered dictionary around because it wasn't in Python until 2.7, or 2.6, I forget. And one big package was much easier to distribute in the early days of Python.
You didn't have a PyPI that worked as well as it does now, and even when you did have PyPI, it was down every 20 minutes. It was not a good time. If you just had one package, it was a lot easier to use and a lot easier to install, because it was just one thing, and it came with basically everything you needed, batteries included more or less. If we boil this down to what Twisted does, what asyncio does, and the equivalent Twisted code, they're very much the same: the cores are essentially equivalent. And that equivalent core is basically those primitives, the core asyncio utilities, a couple of Python utilities that Twisted has that are in Python 3 now, and a couple of quite basic protocols that use all of the above. Oh, sorry, I did the wrong slide. This slide is showing that early Python 3 code, Django 1.9 for example, is very much the same sort of size as Twisted. So Twisted is big, but Django is also big; lots of graphs in this one. As you can see, they're roughly the same size in lines of code. Twisted is a little bigger, but, you know. But if you actually look at what Twisted does internally, and what you'd need asyncio plus extras to do to get the equivalent, it's very much the same. You're going to end up with a lot of code either way. So some people say asyncio isn't bloated and Twisted is bloated, look how big it is, look at all of the code. Well, we just do stuff. We also have protocol implementations that aren't quite in the asyncio world yet, like HTTP/2; I don't think that's in aiohttp yet. There might be one or two around, but Twisted treats it as nearly first-class support, for example. But enough about Twisted, let's talk about Tornado. Who here has used Tornado? Tornado is another asynchronous framework for Python, specifically an asynchronous web framework. It was made by FriendFeed, which was bought by Facebook and then torn apart and dissolved, because that's what happens when you get bought by Facebook. It's similar in some ways. The transport is very similar, the IOStream, but their protocols are a little bit mixed in; they don't have to worry about the genera... generality (there we go, first stumble of the talk) that Twisted and asyncio have. It does implement its own selector loop, and it actually has Twisted and asyncio integration, so you can yield Deferreds or you can yield Futures. And they may actually remove their event loop and just replace it with asyncio. So, as you can see, they've gone a bit further into using the standard thing, and they're a really great example of interoperation. And is this the future of Twisted? Now, interoperation is hard. Anyone who's ever had to work with a system that's similar, but not quite the same, knows it has its difficulties. My focus has been the async/await keywords. These were introduced in, here we go, PEP 492, which I believe was mostly written by Yury Selivanov; I can't pronounce his name well. It landed in Python 3.5, which you can use right now. And it's pretty cool. I gave the code example earlier: it makes things a lot easier to read, and it looks a lot like your regular Python code.
You don't have to worry about callbacks, you don't have to worry about callback hell, you don't have to write lambdas just to add two values together and pass them down the callback chain, because you just await and then do the next thing on the next line. So await, as I explained, gets the result of a coroutine. You have a coroutine and you await on it; coroutines sort of act like Futures now in asyncio, they act like each other, and they're a special kind of generator. Similar to yield from, await delegates to a subgenerator, and it lets you write asynchronous code in synchronous style, which is the main draw. Twisted has had much the same thing since around 2006, called inlineCallbacks, where you use the old yield keyword, very similar to how gevent does it: you yield and write what reads like standard Python code without worrying about callbacks as much. But I am working on the interop, and coming soon is a little thing called ensureDeferred. What that does is take a coroutine and turn it into a Deferred, and that coroutine itself can await on Deferreds. So if you want to write Twisted code on Python 3.5, you can just await things. You don't have to worry about Deferreds or callbacks or anything like that: if something returns a Deferred, you just await on it. And because it's an async def, it is actually a... let's see, I've got the word... a coroutine. Yes, I said it 12 seconds ago and forgot. It's a coroutine. So down here, ensureDeferred is the function that takes the coroutine, and it returns a Deferred. That means you can write this code, and when some API says it accepts a Deferred, you can just write an async def function, and the other code never knows, because it gets a Deferred back, and you can await Deferreds inside it. Also coming is an asyncio reactor, which is a Twisted reactor on top of asyncio, replacing those Twisted internals with asyncio in the spirit of the original idea of what was supposed to happen. Because it's on top of asyncio, you can share things. So you can have a Twisted protocol next to an asyncio one. In this example, treq, which is a Twisted thing, runs alongside aiohttp, which is an asyncio thing, on uvloop, the high-performance asyncio event loop. We just get the reactor and tell it: yeah, we're running. And this here is an aiohttp coroutine, handling a web request for example, and we're doing some Twisted stuff, some treq stuff, inside it. We just call deferred-to-future, and the asyncio code simply believes it's asyncio code. It awaits, waiting for the Deferred to fire, and because they're running on the same loop underneath, neither blocks the other. So you'll be able to have some asyncio stuff and some Twisted stuff, and it won't really matter. So yeah, here's the core part of it: just deferred-to-future. Hopefully the next version of Twisted will ship with this. asyncio does need one or two little patches; I've been discussing it with them, and they've got a lot on, but it is there.
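A sketch of what the coroutine side of that interop looks like, using the ensureDeferred name from the talk (it was still landing at the time, so the shipped details may differ slightly):

```python
from twisted.internet import reactor
from twisted.internet.defer import ensureDeferred
from twisted.internet.task import deferLater

async def main():
    # Inside an async def coroutine, Deferreds can be awaited
    # directly; deferLater returns a Deferred that fires after
    # one second on the reactor.
    await deferLater(reactor, 1.0, lambda: None)
    return "done"

# ensureDeferred wraps the coroutine in a Deferred, so callers
# that expect a Deferred never know it was written as a coroutine.
d = ensureDeferred(main())
d.addCallback(print)
d.addCallback(lambda _: reactor.stop())
reactor.run()
```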
So we are very close to the point where you can have Tornado, Twisted, and asyncio all using the same event loop, bring your own abstractions from the different frameworks, and use whatever you're most comfortable with. But why is Twisted itself, apart from asyncio, still worth using? It's released often. We release three-plus times a year, and 2016 is set to have five releases, which is quite often for a project of Twisted's size. That means we can get features out a lot quicker than, for example, asyncio, which has to wait either for a new pip release, and I'm not sure when they do those, or for a new Python release. Ours are time-based releases cut from our trunk branch; we just say, yeah, we'll release here. So you don't end up with big features that are half-merged, because trunk always has to keep working, and you can get the cutting edge pretty safely. We have a lot of protocols in the box; here's just a small list of them, some of the random ones I've seen people use. Not Finger so much, but, you know, that's our tutorial. Some of them are ported to Python 3, some aren't. It mostly comes down to someone saying, we use that protocol and we want to be on Python 3, and I sit up at 3 a.m. porting it, and then it's ported, basically. And it's super easy to make your own protocol, so if you need to talk to some custom system, or you feel like running your own protocol for whatever reason, you can just do it in Twisted. Same with asyncio; it's quite easy there too. The example on the slide is something that just echoes back whatever you send it (there's a sketch of it below). We also have HTTP/2, which is really cool, because this is pure-Python HTTP/2. No nginx, no Apache, all of it pure Python. You can just pip install twisted[http2] and set up your TLS certificate, because that's how it's negotiated: your browser says it wants HTTP/2 in the TLS handshake, and then it lets you have HTTP/2. Pretty soon we're going to have all the server-push stuff and the client support, and it's kind of cool that you can just do this in Python. It also means that when we get Deferred-to-Future and Future-to-Deferred working, you'll be able to write asyncio code that uses this HTTP/2. We also have established library support. We've been around for a very long time, and we have a lot of handy little things. One of my favourite libraries is txacme, with txsni: a Python interface to Let's Encrypt that gives you automatic certificate renewal. If you go to my website, atleastfornow.net, it'll redirect you onto HTTPS. I don't actually have to provision the certificates or anything like that; I just turn on txacme, and it goes and automatically gets the certificate, does the challenge, handles all of that, and sets it up. I have a straight A on the Qualys SSL test without ever actually having to look at a certificate. There's Hendrix, which is a WSGI runner that uses Twisted. It lets you do WebSockets and TLS and run Twisted code inside your blocking Django or Flask or whatever code, so it's a pretty cool project.
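The echo example mentioned above plausibly looks something like this; a minimal Twisted sketch, not the literal slide code (the port is mine):

```python
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    # The transport handles the wire; the protocol only sees bytes.
    def dataReceived(self, data):
        self.transport.write(data)  # send back whatever arrived

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()
```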
There's Autobahn, which is one of the things I work on. Because Twisted and asyncio share the same protocol-and-transport abstractions, it's a WebSocket library with a single protocol implementation and shims for asyncio and Twisted, so you have the same dependable base of WebSockets and it works the same on asyncio and Twisted (there's a sketch below). It also works very well under PyPy, the optimising JIT compiler, which is also very good; the HTTP/2 stuff also runs very well under PyPy, and is actually probably one of the faster HTTP/2 implementations out there. We're also a very dependable base, because we try not to break your code. Now, as some people who have been on the receiving end of my releases may know, we don't always manage it, but Twisted is a very big project and we try, and that's at least half of not breaking your code. We have deprecation cycles, so we don't do the big 2.0 or 3.0 that breaks everything. We say we want to get rid of some usage, so we ship a new version that does things right, deprecate the old way, and in a year or more, sometimes it depends, we actually remove it. So when you upgrade to, for example, 16.3, something might be deprecated; you see the deprecation warning, you fix it, and then come 17.3 or so, it's gone. It means we're constantly getting new and updated stuff, and it's a lot more fluid than the big 2.0 release that breaks everything, which you then never port to. You just need to stay on the latest version of Twisted, which is pretty easy because the releases are every couple of months, so they're not huge changes, they're rather small. So you can upgrade with basically impunity. You watch for deprecation warnings, you run your tests against it, and you have tests, don't you? Yes, you run your tests, and then you can go, okay, everything is fine; and when something does break, you're fixing one little thing, not the entire lot. We also have code review. Code review is the thing Twisted did before everyone else started doing it, because it's great. We have lots of automated tests, thousands and thousands of them, so we try to make sure everything in our code base works because we have a test to prove it. We're at about 90% coverage, so your code will only break if you're using the other 10 percent, which is probably stuff like the MSN support, which I removed because it sucked. People don't really use MSN anymore; it's kind of terrible. We have PyPy support in the pipeline, because most of our tests pass on it. We're working on the last 10 or 15 tests, which all bake in CPython assumptions; they assume things about the garbage collector, like that when something goes out of scope it'll be immediately collected. Not the case on PyPy, so we need to alter our tests for that. We have a lot of people running Twisted on PyPy in production, we run it in production ourselves, and the speed benefit is absolutely amazing. You can handle twice as many TCP connections just by switching out your Python interpreter. You can serve a bajillion more DNS requests, and you can do so much more templating per second, because it's a just-in-time compiler: those core inner loops all get compiled to machine code, and then they go really fast.
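On the Autobahn point above, a sketch of the shared-protocol idea: the class body stays the same, and only the shim you import the base class from changes (server wiring shown for the Twisted side; the port is mine):

```python
# The same protocol body runs on either framework; only the base
# class import changes between the two shims:
from autobahn.twisted.websocket import (
    WebSocketServerProtocol,
    WebSocketServerFactory,
)
# ...or: from autobahn.asyncio.websocket import WebSocketServerProtocol
from twisted.internet import reactor

class EchoWebSocket(WebSocketServerProtocol):
    def onMessage(self, payload, isBinary):
        # Echo each WebSocket message straight back to the client.
        self.sendMessage(payload, isBinary)

factory = WebSocketServerFactory()
factory.protocol = EchoWebSocket
reactor.listenTCP(9000, factory)
reactor.run()
```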
We support a bunch of platforms, and that means the tests pass on that platform and we gate any merges on them passing. So we end up with a huge list, and pretty much anything you run it on, it'll work. There are even a couple of unofficially supported platforms that work pretty well; we have people running it on HP-UX, that old Unix thing, and it works, and I don't know how. We support Python 2.7 on all platforms, and Python 3.4 and 3.5 on Linux. Python 3.3 as well, but I don't think any current, non-end-of-life platforms ship it, so we don't test on it anymore. PyPy is close, a few tests remain, and PyPy 3, which targets Python 3.3, is being worked on, which means you'll be able to have Python 3 code that is also fast code, rather than choosing between clean Python 3 code and fast Python 2 code. And Windows support for Python 3.5 is coming soon; we're just cleaning up the last little things. Most of all, the reason I think Twisted and Tornado and all the other frameworks that aren't asyncio have real value is that competition is really good. We fit into this ecosystem, if only as competitors, because we can do some things well, and then asyncio can go and do things better, and then we have to go and do things better to compete. So we all keep moving forward together as a community, and as the interoperation gets better, we all benefit. Now, where to from here, for Twisted and for asyncio? Interoperation is the big thing, because it means you can use all of your old Twisted code and your new asyncio code and your new Twisted code, all together on Python 3. There's the async-sig mailing list, which I haven't actually subscribed to yet, but we're going to be talking a lot more in the coming weeks, months, and especially years about interoperation between all the frameworks and making everything work together. If you want to know more about protocols, I recommend Cory Benfield's PyCon US talk, Building Protocol Libraries the Right Way. He's one of the requests maintainers and the author of Twisted's HTTP/2 support. The talk is alternatively titled You're Doing It Wrong, and All of My Libraries Except One Do It Right; no, wait, I'm putting words in his mouth. And Thinking in Coroutines, by Łukasz Langa, also at PyCon US: a good talk about how coroutines themselves work, how they work in the context of asyncio specifically, and how they work internally, which is good to know if you ever run into issues with them. And questions. So ask questions. If you would like to yell at me about how I'm wrong, afterwards, you can yell at me about how I'm wrong. I love it. Yes? [Audience question about Scrapy.] Scrapy: I would say there's probably not much value in it, especially once the interoperation stuff comes into account. Scrapy is, I believe, a whole bunch of tools, and then you write asynchronous stuff for fetching the pages and processing them, right? In that case, because Scrapy is large, I don't think something like Scrapy could survive a transition to asyncio without a major rift. You can't completely break all the existing code without ramifications. Now, with the interoperation stuff, it won't really matter what Scrapy itself is written in, because it'll work with your asyncio database adapters, your asyncio everything else.
But as far as Scrapy and other big projects go, it's worth just staying on whatever you're on and waiting for everything to catch up and start talking to each other, because you just can't rewrite that amount of code without something going wrong. And that's the unfortunate bit: you can't just take the new thing and go, oh, this is better, because all the existing code won't work, and that code is really valuable to some people. [Audience question about the two awaits in the example.] Okay, so this await here and this await here. The reason is a particular thing about treq's interface: .get returns as soon as you have the headers, and then .content, called on the thing .get returned, returns once the entire body has been fetched. So it's particular to this API that you get an early response that isn't the entire body, and then you get the rest of the body, which might take minutes, hours, or days, depending on how big it is and what your connection is. (There's a sketch of the pattern below.) You can use as many await statements as you want, so it's really just limited by your imagination. But yeah, this is an interesting example showing the early return. As you can see, there's some really ugly, terrible stuff in here for converting between how aiohttp thinks headers look and how Twisted thinks they look, because we think ours are lists of... yeah, it's different. But, yes. Orange, yep. Yes? [Audience question about the asyncio reactor.] Yeah, that's actually almost up for review right now. That's what this Twisted-internals-on-asyncio reactor, which is nearly in review, is pretty much doing: you have the asyncio event loop, and then all this here just maps the Twisted function calls onto the asyncio function calls. Yes, asyncio manages all of that. And that works because the interface to the reactor is very similar, but we use camelCase and asyncio uses snake_case, and we use different names for things, so it's purely a mapping layer so that things keep working on top. This is not out yet; it's pretty much up for review. And there already exists something on PyPI called txtulip, which is essentially this; I've just improved it a little to make it work a bit better on epoll, I think. So it's been done before. Yes? [Audience question about txaio.] txaio, so... yes, that's one of the things for Crossbar and Autobahn. We actually work on that; we use it for Autobahn. The thing about txaio is that because Autobahn needs to work on Python 2.7, we can't really use the coroutine way of interop, which will sadly be Python 3.5-plus, because coroutines give us that little gap where we can do the compatibility. So if you need to support Python 2, txaio is good at that. But the better way... well, it's sort of two things. You don't want to use it too heavily; you want to do it the way of, where's that, Cory Benfield's thing, where he's essentially got his protocol, the meat and potatoes of the HTTP/2 support, and all of that is synchronous. It doesn't use Futures or Deferreds; it's a state machine, and then you have a wrapper around that that handles making the Futures, making the Deferreds.
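That two-stage treq pattern in sketch form, assuming the await-on-Deferred interop discussed earlier so that treq's Deferreds can be awaited directly (the function name is mine):

```python
import treq

async def fetch(url):
    # First await: treq.get's Deferred fires as soon as the
    # response headers arrive, possibly long before the body.
    response = await treq.get(url)
    # Second await: response.content() fires only once the whole
    # body has been downloaded, which can take arbitrarily long.
    body = await response.content()
    return response.code, body
```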
Now, txaio is that kind of wrapper, the layer that handles the Futures and Deferreds, for Autobahn, and it's useful for some other projects too. But I would say that going forward, for writing new code, the coroutine way of interop will be better, because it's a lot more Pythonic, while with txaio you're reduced to a common denominator, the lowest common denominator of what Futures and Deferreds both do. You don't end up using Deferreds the way you would use Deferreds, or Futures the way you would use Futures, because you sort of have to use them both at the same time. So it's good for current software, but there will be more optimal solutions in the coming years, and when we drop Python 2 support as a community, it'll get better. [Audience question about performance.] The only competitor that Twisted-on-PyPy has in the asyncio world is uvloop. So, where is uvloop... sorry, I'm jumping between these so much. uvloop, here. That is the only thing that comes anywhere close to Twisted on PyPy. Now, the thing about uvloop is that its core event loop is in C. But if you look at Yury's benchmarks, it doesn't actually make something like aiohttp much faster, because there you're restricted by the speed of Python interpretation. Whereas with PyPy, even if our reactor is slower, all of the actual protocol code is much, much faster. So I'd say that when PyPy 3 comes out and uvloop gets a port to CFFI, so it works better on PyPy and so on, then you're going to have a truly fast asyncio. And it's plenty fast already; uvloop makes asyncio really good for most things. But Twisted on PyPy is still top of the pack as far as I'm concerned, just because the JIT works on all your protocol code as well, and in real-world applications that's the bulk of the processing, not the reactor. [Audience question: Tornado has a multiprocess option, and a lot of advice is to just switch it on rather than design around it; what do you think?] Yeah, so, the thing about multiprocessing, as in running in multiple processes, is that it's great in contexts where you have CPU-bound workloads: lots of maths, lots of things like that. In that case, you're still going to need multiple processes. asyncio has a thing for this, in concurrent.futures, called ProcessPoolExecutor or something like that, where you run some code in a process and it returns a future; I've sketched the pattern below. That sort of thing is still going to be valuable going forward, simply because until we have truly free-threaded Python, and although Larry Hastings' Gilectomy is coming along, and I think going well, we've still got all these Python versions that won't have it, and it might never land in CPython because it might break the C API too much. So it's both: yes, don't do blocking calls, because blocking networking calls are the devil and should not be done; but we also need Twisted and asyncio to make it easier to run code in processes really easily, for the sort of work that isn't networking, where it's CPU workloads, processing images or doing natural language processing and all that sort of thing.
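A sketch of that pattern, handing CPU-bound work to a process pool from a coroutine. This is standard asyncio, not code from the talk, and the function names are mine:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    # CPU-bound work that would otherwise starve the event loop.
    return sum(i * i for i in range(n))

async def main(loop):
    with ProcessPoolExecutor() as pool:
        # run_in_executor hands the call to a worker process and
        # returns a future; awaiting it leaves the loop free to
        # keep serving I/O while the child process does the maths.
        result = await loop.run_in_executor(pool, crunch, 10000000)
        print(result)

if __name__ == "__main__":  # needed for process pools on some platforms
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main(loop))
```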
So it's sort of half and half: you probably shouldn't use multiprocessing for talking to the network, but it does still have a lot of value, and we need to get better at supporting it going forward, as in having one central way of doing it, which asyncio has but Twisted doesn't, I think. Yes. You first, and then you. Sorry? Yep. [Audience question: will Twisted's internals be merged into asyncio?] I would say not. The eventual future is probably that we ditch all of our reactors and Twisted becomes the protocols, and that's it, rather than the other way around. And it's more likely that we'll split out more and more of our Python utilities, for example Deferred, which is purely a Python utility, into separate Python packages so they can be more widely used. That's the future I optimistically see for Twisted: it essentially becomes just a bunch of protocols for asyncio. But that's not going to happen until at least, like, 2023, because we're giving... so, with the 2020 Python 2 drop, Twisted's not subscribing to that. We could only start porting at Python 3.3, the first version we could realistically port code to, and we want to give our users the full five years' notice. So once that's up, we drop 2.7 support, most likely in 2023, maybe longer, it depends; maybe everything is ported tomorrow, and then we can just drop it tomorrow. I would say the eventual best case is that we don't have the reactor, we don't have all these utilities, we are just protocols. Thank you. Sorry, did the person... yep, yep, a little louder. [Audience question about talking to databases.] So, talking to databases. A lot of the current drivers talk to a socket directly, so they wouldn't work as they stand. There is a library for Twisted called txpostgres, which wraps the native async parts of the Postgres C library, so it just uses that. But ultimately you'll need to write brand-new database drivers that are natively asynchronous, so yes, you're going to have to write a lot of code for that. The usual way today is to just wrap the driver in a thread pool and call it done, but that's not the optimal way. Native ones will always work better and be more efficient, but it will require a lot of code rewriting, which is fun. Anyone else? Oh, yes. [Audience question: in paradigms like this, is the GIL actually a true constraint?] The GIL is only a true constraint when you're dealing with C extensions. With C extensions. The thing PyPy has is this experimental thing called STM, software transactional memory. When that gets a bit further along and doesn't crash as much, because STM is a bit... it's a wonderful technology. Basically, rather than having one global interpreter lock, you have a lot of finer locks, and you lock specific bits of memory. Now, with something like Twisted or asyncio, where you have the core reactor and then all the handlers hanging off it, those individual handlers don't talk to each other. So in the GIL-free world, you just say all of these take a software-transactional-memory lock for their own sections, and then they all run in parallel. So the GIL...
If you're in a world where there aren't C extensions, or there are better C extensions, and you're writing Twisted or asyncio code, then the GIL is not required, essentially, because it's just a thing the C API needs, and it also prevents race conditions. That's the other thing: software transactional memory also prevents race conditions, through that finer locking. If one piece of code tries to touch memory another has locked, they run in sequence, not in parallel, so it works around it. So the GIL is not required, basically. But it's there because everything is horrible. Anyone else? No, no more questions? Come on. You look like you want to ask a question. All of you; I just pointed at everyone, so I'm being general. Am I? Oh, okay. Okay, how about this: who wants coffee? Okay. I love being in the last slot, because I can just run over time. Okay. Sorry?