I have a microphone. Hello, welcome to track one after the morning tea session. Please keep your arms and legs inside the vehicle at all times. Right, up next we have Corey, who really doesn't care about things. Enjoy. Hey, welcome to the most inflammatory title of the entire conference. It's gonna be really good. You can all feel really mad and you can send me lots of angry messages. It's gonna be fantastic. My name's Corey. If you would like to look for me, you can find me at these places. If you tweet at me during the talk, you will not get the satisfaction of seeing your tweets up on the screen. So don't bother. But they will come to my watch, so I'll still see them and feel sad if you would like to tweet abuse. Go ahead. A little bit about me. I'm currently employed at HP as a senior cloud engineer. My main role at HP is to work on open source Python tools and libraries, particularly tools around HTTP and HTTP/2, in that kind of ballpark world of things. I maintain and work on a number of projects. The most notable one, the thing you're most likely to have heard of, is requests. I'm a requests core contributor, and have been for coming up on four years now. It's been quite a long time. I'm also a core contributor on the closely related urllib3 project, which is probably the most important Python library you've never heard of. Separately, I am currently the maintainer and lead developer on hyper, which is Python's only HTTP/2 stack. That statement was true when I started work on it in January 2014 and it's still true now. So I'm kind of hoping that I'll be able to keep going and no one will ever do that work and it can always be me and I will ruin my life. Separately, I work on a whole ton of other things that kind of help out with Python HTTP in various different ways, and I throw patches around and generally open source my way around the world. All right, so this is the controversial title of my talk.
My assertion is that no one in this room cares about efficiency. That is clearly and obviously not true. Plenty of people in this room care about efficiency. Anyone in this room who writes software professionally presumably cares at least a little bit about software efficiency, although we all write Python, so clearly we don't care that much; we're somewhere in the middle ground. So why am I using this kind of really controversial and inflammatory title? Well, it's been my experience that a surprisingly large number of people, including people in this room, write their Python software in a way that betrays a certain lack of care about efficiency. This is not super conscious. People aren't sitting there in a kind of screw-you-efficiency sort of way, mic dropping and wandering out and saying, I don't care about efficiency, we've got all the CPU cycles, it's great. Most people do it either because they don't know better or because it's hard. And the major focus of this talk is on the don't-know-better part. I'm gonna try and stand in front of you and convince you, in the rest of the time in this slot, that you can be writing code more efficiently and with more care about its efficiency, and that it's not that hard. So while I used an alarmist title to get you into the room, I've got a slightly less alarmist, slightly more prediction-y title that might be better for this talk. And that title is that synchronous code is dying. It's my proposition that we are entering the end of the life cycle for synchronous code, at least in any moderately sized software project. And I'm gonna spend the rest of my slot convincing you that that's true. Before I start though, I should mention the inspiration for this talk; frankly, I stole the alarmist title for this talk directly from Amber Brown, who is hawkowl, who is the Twisted release maintainer and is a lot smarter than me.
So you should go and ask her things, but she mentioned in an IRC chat that she planned to give a lightning talk at PyCon AU about how blocking-only software and C extensions are pretty much things written by people who don't care about efficiency at all. And I stole that idea because I'm terrible. So you should all tweet her. She is @hawkowl on Twitter. You should tell her that someone is stealing her work right now. Cool, all right. So, problem statement. Having written lots of alarmist titles and encouraged people to come to this talk, I presumably have some kind of feeling for what the problem might actually be, right? So, problem statement. It's my belief that synchronous software, synchronous code, is hurting the software industry as a group, and it's hurting the Python community in particular. The tendency of this community to write software in a synchronous model, for that to be the tool we reach for every time we go to write new software, it's my belief that that's a toxic anti-pattern and we need to stop doing it. If we continue to do that, Python will begin to fall into a trap of irrelevance, or, if most of the community switches and you don't, you will fall into a trap of irrelevance, and that's bad. I think you should all keep your jobs, or get new jobs that are better if you don't currently like your job. So let's learn to do something new. Quick question: is there anyone in the room who doesn't know what I mean when I say synchronous code? Is there ambiguity? Let me flip the question. Okay, we've got at least one person, which is good. It's totally fine. I wasn't actually planning to explain it right now. So that's fine. I was just trying to get a feel for the room, but I'm getting the impression that either the whole room feels comfortable with it, or no one feels so uncomfortable that they're prepared to put their hand up, even if they're sat at the back and no one can see them. So that's fine. All right, cool.
By the way, how is this for like the third alarmist slide in this talk? I'm doing really well. Strong statements. It's fantastic. All right, graph. Graphs are great. Everyone needs graphs. I put this graph in yesterday because I felt like I didn't have enough. This graph I stole outright from a paper. You can see there's a citation at the bottom in tiny, tiny font. These slides will go online, so you don't need to try and read it right now if you want to go and find the actual paper. But this is basically a slide of Moore's Law. For people who are not familiar with Moore's Law, which I think is a pretty small proportion of people in the room, Moore's Law is one of those laws of nature that is actually observational. It was originally stated to say, roughly, that computing power, measured in the number of transistors on a CPU, has doubled every 18 months throughout the modern era of computing. This is increasingly less true, particularly over the last couple of years, but it's a good illustration. And the way you can phrase it in a kind of ad hoc, simple way is just to say that every 18 months, roughly speaking, computers get twice as powerful as they were before. If you restrict yourself to the quote-unquote modern era of computing, say starting with the Apple II, which was released in 1977, the sheer computing power available on a consumer desktop computer, or even a consumer laptop computer like the one in front of me, has increased by a factor of roughly one million. So the machine in front of me today is roughly a million times more capable than the Apple II was in 1977. A million times is a really tricky number, and it's worth trying to get a ballpark feel for it. So: one second is roughly as long as it takes me to make some kind of ridiculous claim that you're all gonna get really angry about. A million of those seconds is 11 and a half days.
And if you think about this talk going on for 11 and a half days, you get a feel for the scale that I'm talking about here. This is ridiculous. We have so much computing power. So with all of this computing power, why do people keep complaining that everything is slow? And you definitely hear this, right? You certainly hear it from your friends who are less into computers, but you also hear it from people who work with computers all the time. It is difficult to understand why so much software is slow. Except of course it's not slow, right? Most software isn't slow. What it is, is something that feels slow. Almost overwhelmingly, software that end users believe is slow is actually doing plenty of work. It's just not responding to them. It's not responding to the user. It's not responding to changing events on the system. It's sitting there doing its one thing, not paying any attention to anything else. User clicks a thing. Thing takes forever to happen. User goes on Twitter and complains. Someone from the customer service department tweets back something nasty. It all goes terribly wrong. This has nothing to do with computational power. Moore's Law does not help you in this case. The computer has plenty of computational power to spend. Again, every time Outlook freezes up on someone's work machine and they call IT services, it's not like the work machine is actually doing a whole ton of stuff. It's that Outlook has just decided that, who cares? Generally speaking, this is almost always because the software is waiting for something that is much, much slower than the CPU it is running on. And we can pretty much say that, to within a rounding error, all of the things being waited on here are IO: sending and receiving data, pulling data from somewhere that isn't directly on the chip to the chip, or sending data that is on the chip out of the chip.
It is pretty much objectively true that software feels faster when it is doing literally anything during these waits. It doesn't have to be anything important, but it does have to be something. As an example here, how many people in the room have iPhones? I hate software conferences, there's never enough hands. On iPhones, a little while back, probably iOS 5 or 6, it became mandatory when writing an iPhone application to provide a splash screen for when your application launches. And the splash screen is literally just a picture. It's a PNG, it does nothing. It's just got a company logo on it, but it comes up when the application launches and then disappears. And the reason this became important is because it takes a little while to load an application off the iPhone's crappy flash memory into its RAM, and that wait made users think the phone was slow. So instead, it loads a picture, and the picture is really fast and you can render it really quickly. So it comes up, zoom. The animation hides the loading time, by the way. It shows the picture for a couple of seconds and then goes to the application, and users feel like it's faster. That screen doesn't do anything. It doesn't respond to your inputs in any way, but it's better than doing nothing at all. This is what I mean when I say it feels faster if it does literally anything. The most important thing, if you can do it, is to be responsive to user input telling you to stop waiting. If you are waiting for some kind of IO and your user tries to cancel the IO, Outlook is my ongoing favorite example of this, you really need to be responsive to that request to stop. Users don't like it if you ignore them when they tell you to stop doing a thing. It's very bad. So to improve responsiveness, we need to stop waiting. Having your program wait for something to finish when it could in principle be doing something else represents a really inefficient allocation of computing resources.
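None of the slide code survives in this transcript, but the "do something during the wait" idea can be sketched in a few lines (my illustration, not Corey's). Two simulated IO waits run concurrently with asyncio, so the total wall time is roughly one wait, not two:

```python
import asyncio
import time

async def fake_io(name: str, seconds: float) -> str:
    # Stand-in for a network call or disk read: awaiting the sleep
    # hands control back to the event loop, which can run other work
    # (or respond to the user) during the wait.
    await asyncio.sleep(seconds)
    return name

async def main() -> float:
    start = time.perf_counter()
    # Both waits are in flight at once, so this takes roughly
    # max(0.2, 0.2) seconds, not 0.2 + 0.2.
    await asyncio.gather(fake_io("a", 0.2), fake_io("b", 0.2))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"elapsed: {asyncio.run(main()):.2f}s")
```

The names and timings here are invented for the demonstration; the point is only that overlapping waits is what "stop waiting" looks like in code.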
So let's put this problem in a stronger way, something you can fit into a tweet. Well, you can't quite fit it into a tweet, it's more than 140 characters, but close. If your program has any work to do that it could be doing right now, and it isn't maxing out at least one CPU core, your program is inefficient. There is no excuse for having work to do right now and not be doing it, unless you like wasting money on more powerful processors than you need. So how do you write software this inefficient, in case you were looking to get fired? The way to do it is to write it in a synchronous manner. Synchronous execution in this context means that you wait for each operation to finish before you move on to the next one. The operation can be something that's quite quick, like addition, or it can be something that's really slow, like sending a file over the internet. But whatever it is, in synchronous software you wait for any given thing to happen before you move on to doing the next thing. While you're waiting, you aren't doing anything else. You're not responding to user input, you're not writing stuff into the database, you're doing nothing at all. This wait is not a problem when you're doing something that's quick, like addition, but it is a problem for anything that takes a long, long time, like IO. So real programs, not the toy programs that you might write when you're trying to test out a new language or write a microbenchmark, real programs do IO, because real programs either generate some kind of useful output or operate on some kind of meaningful input, usually both. Programs that do not generate useful output are not enormously helpful. Programs that don't operate on any kind of input are also not enormously helpful. You need some way of persisting results and obtaining data.
Now, this IO might not be very heavy. It might be only writing to a terminal, for example, which is something that most computers can do reasonably quickly. But at some point you do have to get the data out of the program, and during that point, if you cannot be interrupted to go and do something else, or if you cannot do other work you might have pending, then your program is kind of definitionally inefficient. It could be doing better. So let's get some perspective on what I mean by inefficient, and how inefficient you can get. This is a table to go with my graph. It's actually unrelated to the graph, but all the best talks have tables and graphs. This is a table of system latencies. See what I did there, I read the title of the table to you. The left column is the specific type of event. The middle column is the approximate latency. Now, these numbers change all the time, so don't worry about them too much. It's just an approximate feel for how long a certain event takes to happen. And on the right is those numbers again, scaled up so that the fastest thing on this table takes a second, and again, a second is a super useful benchmark. So, one quick note: most programmers, probably everyone in this room, run on a quick optimization that says that accessing main memory, accessing out to RAM, is basically free, that it doesn't take very much time. Now, that is not true, and for certain kinds of software the not-trueness of that statement is extremely important. But for anyone writing Python: tough, you may as well assume that it's true. You're not going to get cache alignment, sorry. But it is a fair optimization, and the reason it's fair is that if you look up at that table, where you see main memory access, over on the right it says 46 seconds. And then you go to the next one down, the next fastest thing on this list, which is reading from an SSD, and that's not 46 seconds, that's six minutes.
So if you look at the smaller latencies, where you've got everything in the nanoseconds, you're looking at three orders of magnitude. This is a three-orders-of-magnitude jump in latency. This is crazy. The amount of reading and writing from main memory you can do while you're waiting for your apparently super fast SSD is just a little bit mind-boggling, and that doesn't even begin to touch the thing that most of our programs do, which is wait for the network. So on this table, SSD access takes the equivalent of 500,000 CPU cycles. Put another way, if you want to write one byte to your disk, then while that's going on you could add 500,000 numbers together, and it's pretty criminal if you can't even manage to add one during that time frame because you're waiting. Spinning hard disks, those ones that we've all tried to get rid of: 33 million compute cycles. The network: unless you happen to be on a LAN, and even on a LAN it's pretty bad, if you actually go over a real network, over the public internet, you are looking at at least 100 million compute cycles, and frankly, on an actual system, probably closer to one to 10 billion compute cycles. That is a lot of work that you aren't doing because you're waiting to download a webpage. I'm pretty confident that everyone in this room can agree that if your software stops to wait for billions of CPU cycles, the fact that we have doubled computational power every 18 months has not helped you. You're not using any of that, you're just sitting there twiddling your thumbs while you wait for the webpage to download. And the worst of it is, you paid for those CPU cycles, right? If it's on your laptop you probably didn't, maybe your employer did, maybe you stole it. But if it's a server, a remote server, you're renting that from AWS, right?
You're paying for your compute cycles; you pay more money to get bigger, more powerful servers, and then you sit there waiting, doing nothing. This is ridiculous. Let me show you an example of how you write code that is this wasteful. This is a perfect example of inefficient code, and I wanna take a moment here and point out, for anyone who is a novice speaker, that when I originally wrote this I did not use requests as an example. I used a different, very popular library that I don't work on, and that would have been a crappy thing to do. So I'm gonna throw myself under the bus and use my own library to demonstrate how you do this wrong. This is a perfect example of inefficient code. Basically, you can assume this is a web scraper, right? I haven't defined what do_stuff_with does, but it probably parses some HTML, does some stuff, tries to find a bit of data. But this code takes way longer than it needs to achieve its goal. Assume that each requests.get call takes 500 milliseconds, which frankly is actually pretty good for the modern web; if you're going over SSL, or if the web page is remotely large, it's probably taking a bit longer than that. If we assume 500 milliseconds, think about how much work you would have to do in do_stuff_with before the arithmetic you're doing in there takes anything like 500 milliseconds. The amount of work you'd have to do is almost mind-boggling. You would have to be deliberately doing it badly before you started to notice the difference. The runtime of this program is going to be utterly dominated by sitting around doing nothing, waiting for the network. This is what I mean when I say that synchronous code is inefficient. With not that many URLs, this program could easily take 10 seconds to execute, when it would only have needed to take a fraction of that. Code like this is real bad. It wastes your time and it wastes CPU time, and we should all feel really, really bad any time we write it.
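The slide code isn't captured in the transcript, so here is a hedged reconstruction of its shape: `fetch` stands in for requests.get (simulated with a sleep so the sketch runs offline), the URLs are invented, and `do_stuff_with` is the undefined parsing step Corey mentions.

```python
import time

def fetch(url: str) -> str:
    # Stand-in for requests.get(url).text: the real call blocks the
    # whole program for the full network round trip (~500 ms on the
    # modern web). We simulate 100 ms here so this runs offline.
    time.sleep(0.1)
    return f"<html>contents of {url}</html>"

def do_stuff_with(page: str) -> int:
    # Placeholder for the parsing/arithmetic on the slide; this work
    # is trivially cheap compared with the fetch.
    return len(page)

urls = [f"https://example.invalid/page/{n}" for n in range(5)]

start = time.perf_counter()
# Strictly one request at a time: each fetch must finish before the
# next one starts, so the waits stack up.
results = [do_stuff_with(fetch(url)) for url in urls]
elapsed = time.perf_counter() - start
print(f"{len(results)} pages in {elapsed:.2f}s")  # ~0.5s: 5 waits in a row
```

With the real 500 ms per request, the same five URLs would take about two and a half seconds, almost all of it spent waiting.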
So given how bad we're supposed to feel, why are we writing it? Why does almost everyone in this room, myself included, reach for that when we are solving these problems? It starts out because writing synchronous code is easy. Most of our systems, most of our software, is synchronous by default, because pretty much that's how we think about these problems. When we're trying to solve a problem, we generally think: first I do X, then I do Y, then I do Z. Rarely do we think about what we could be doing while we are doing Y, if Y requires no active work on our part. Or alternatively, what happens if the user asks us to do something while we are doing Y? So given that that's relatively easy, most developers start out writing code that works roughly like that. And therefore, when developers come to write a new library, they want it to work for as many people as possible. They want to seem simple and easy, so they target this kind of style. Even when they're written by developers who themselves do not particularly believe in this kind of style, like myself. I'm still going to write something that works like this, because most of you are much more likely to use something that works like this than something else. As a quick example, who in here has used requests? Quick show of hands. All right, that's lots of hands. Who in here has used treq? Cool, two people. Who in here knows what treq is? Three people. treq is a library that has a requests-like interface for Twisted. It is arguably what requests would be in a world where everyone cared about efficiency. But no one uses it, right? And that's because requests is easier. But now that we've done that, now that we've built the tool that is so much easier in the synchronous form, and it's considered to be one of the best tools in the ecosystem, we've just made writing synchronous code easier, which means more people are gonna do it.
And as more people do it, more libraries come along that also work in that paradigm. And we get this nice little self-reinforcing cycle where we keep making the thing we probably shouldn't do easier, by building better and better tools to do the thing we shouldn't be doing. So how do you break out of this? Suppose I've convinced you that synchronous code is terrible and you all feel suitably bad. In the fun world of Python, how are you gonna do async? For Python, this represents a problem. Python is not a language that has particularly fantastic built-in support for asynchronous programming. Fortunately, that statement's not as true as it used to be. I'll come back to it, but it's still fairly true. Fancy newer languages like Go tend to have better built-in async support. But up until very recently, Python 3.4 or thereabouts, Python didn't have a default asynchronicity model. The closest it had was threads, and threads are scary, so basically it didn't have one. However, this lack of a default solution has led to quite a few really great solutions. If you really want to write asynchronous code in Python, there are lots of really great ways you can do it, and you can try and find the one that suits you best. For example, off the top of my head, some libraries that are great for this: Twisted, asyncio, Tornado, gevent, Eventlet. These are all really, really great ways to write asynchronous code in Python. And they break down into lots of different paradigms, so you can pick the one that suits you best. They roughly break down into three. The first is the threaded models. These still look quite a lot like threading. Good examples: gevent, Eventlet, and threading itself, I guess. The next is the coroutine models, built on top of the coroutine paradigm. Coroutines in Python are normally expressed as generators. It's increasingly clear to me that a lot of people have never actually used generators as coroutines in Python.
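As a quick illustration of that distinction (mine, not from the slides): a generator used as a coroutine has values sent into it with send(), rather than only pulled out of it the way a lazy iterator would.

```python
def running_total():
    # A generator used as a coroutine: it suspends at each yield and
    # receives the next value via send(), instead of merely producing
    # values like a lazy iterator.
    total = 0
    while True:
        value = yield total
        total += value

coro = running_total()
print(next(coro))      # prime it: runs to the first yield, emits 0
print(coro.send(10))   # resumes with value=10, emits 10
print(coro.send(5))    # resumes with value=5, emits 15
```

Frameworks like asyncio build their pre-3.5 coroutines on exactly this suspend-and-resume mechanism.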
They really only use generators as lazy iterators. That's totally fine, but it is really worth trying to get a feel for what generators-as-coroutines looks like in Python. asyncio, most notably, is built on this model; asyncio is built entirely on the generators-as-coroutines model. That's what you can find in the Python standard library in recent 3.x releases. The other major model is the event-driven model. This was pioneered in Python by Twisted. Tornado also looks very event-driven. And what this means, with these different paradigms, is you can pick whatever style suits your program best. All these styles have upsides and downsides. Some of them work really well with synchronous libraries, some of them work less well. It's a little bit painful to use synchronous libraries from Twisted. You can do it, but you don't get the really great efficiency gains that you might like. Separately, gevent and friends play quite well with synchronous libraries, with the occasional problem of every now and then causing things to explode for no immediately apparent reason. Generally, if you wanna write asynchronous code in Python, there are lots of great ways to do it, and it's not even that hard. And so, as my big trick, I'm gonna convert my really bad, not-async code into slightly less bad, still slightly sync code. This is my web scraper again. This is the ultimate low-effort approach to getting asynchronous code, and what I wanna use this example for is to suggest that if you cannot achieve at least this, even in your little pet projects, then you have really gotta ask yourself how little you care about efficiency, if you can't bear to add the three extra lines of code I have added here. This is a really simple concurrency pattern: fanning stuff out into multiple threads and then coalescing them back in together. Threads are a perfect solution for this, by the way. Lots of people tell you threads in Python are bad.
They certainly can be bad, but in situations where you're mostly doing IO, threads are a thoroughly acceptable solution to the problem and you shouldn't feel bad about using them. This uses the concurrent.futures module, which was added in Python 3.2. 3.2 is ancient, so you can safely use it. Suck it, 2.7. But you should feel comfortable using this, at least in test projects, to get a feel for it. This is not perfect. For example, that map call down the bottom there does actually still block, so you can't respond to user input in this. So actually this whole thing also needs to run in a separate background thread. That's a little bit sad, but the upshot is that this code would run substantially faster than the code I showed you a couple of slides ago. So if it's that easy, why aren't we doing it? The answer is that it imposes engineering costs. Odds are good that many of your engineers will not have written async code in the paradigm that you have chosen for your project. Say you chose to use Twisted. How many people in the room have used Twisted? All right, so if I picked a team from you at random, probably half the team would know how to write Twisted. The other half would have to ramp up. That's a little bit annoying, especially with things like Twisted, where the idioms and the framework can be quite different from the models that people are used to. Additionally, each framework and library imposes a form of lock-in. It's quite tricky, if you've started writing code for gevent, to switch and get that code into Twisted without having to quite fundamentally rewrite lots of it. You need to plan this ahead. Lots of developers don't, because this is not a thing they've really had to do before. It's even trickier if you would like to simultaneously support multiple frameworks. They all have their own primitives and styles, even libraries that at the surface level seem really similar, like gevent and Eventlet. They call things different things.
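The slide itself isn't in the transcript, but the concurrent.futures version Corey describes, fanning the fetches out to a thread pool and coalescing them with map, looks roughly like this (same simulated fetch standing in for requests.get, invented URLs):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for requests.get: real network IO releases the GIL
    # while it waits, which is why threads help for this workload.
    time.sleep(0.1)
    return f"<html>contents of {url}</html>"

def do_stuff_with(page: str) -> int:
    # Placeholder for the undefined parsing step from the slide.
    return len(page)

urls = [f"https://example.invalid/page/{n}" for n in range(5)]

start = time.perf_counter()
# The few extra lines: fan the work out across a pool of worker
# threads, then coalesce the results. Note that map still blocks
# until everything is done, which is the caveat mentioned above.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(lambda u: do_stuff_with(fetch(u)), urls))
elapsed = time.perf_counter() - start
print(f"{len(results)} pages in {elapsed:.2f}s")  # ~0.1s: the waits overlap
```

The five simulated waits now overlap, so the run takes about one wait instead of five.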
They often don't have exact functional replacements for various tools in each other. And with really different ones, say if you want to work simultaneously in Eventlet and Twisted, this can get really tricky. In well-designed software you can get around it. You can push the async stuff to the edges of the code and write wrappers. But at this point we start doing real software design, and real software design is hard. It's not hacking, so we don't want to do it. This goes doubly if you want to support synchronous code as well. For example, if I wanted requests to have a single code base that worked in Twisted, Tornado, gevent, and Eventlet, very efficiently, and also still supported plain requests.get, that gets really, really hard. We do intend to look at this at some point, but I'm going to need some people who are smarter than me to help out with that, because I actually don't know how to build it. All right, I'm running out of time, so I'm going to sum up. Synchronous code is a gratuitous waste of resources, and if you write it, you should feel bad. This doesn't mean you can't write it. There are lots of things you should still feel comfortable doing, so long as you feel a little bit bad about them. If you're trying to maintain a healthy weight, you can still eat chocolate. You just shouldn't feel super great about it every time you do it. It's bad for you. You can still have it sometimes. It's fine. Asynchronous code is a beautiful land of happiness for your computer. Your computer will be happy. It'll work as efficiently as possible and get things done more quickly. That's all computers are for, so it'll feel real great. Joy will be had by all. We need to start thinking about asynchronous code as the new default way we approach programming problems. This is not strictly true if all you do is math.
If all you do is math, maybe you only want to write some parallel code and chuck some stuff out to threads, and you don't need to worry about the difference between async and parallel and so on. But for most of us, who do IO at various points, we really need to start thinking about things like this first. And increasingly, we're going to find people doing exactly that. The existence of asyncio in the standard library is making this easier, and we should assume it's going to happen more and more. There are lots of problems for us to solve along the way. Python 3.5 has made some of this a little bit easier, but right now the state of the ecosystem means that while asynchronous code is what we should be writing, it's a little tricky to do it. And I have a bonus prediction for you, one that you can all dangle over me in five years when I turn out to be wrong. My prediction is that we are reaching the peak of synchronous-only code in Python. I think, increasingly, there's going to be a demand for libraries to have good asynchronous versions, either in the same code base or as an extra wrapper around them. And people who write Python libraries are going to need to start thinking about this really, really carefully. Cool, all right, fantastic. I've got maybe two minutes for questions. Thank you very much. Thank you for that. I liked your graph. Yes, one or two questions. Make me run. Thanks for the talk. I noticed you didn't give us your opinion on whether we should, in Python, be writing async code that looks synchronous but is actually asynchronous under the hood, like, say, in Go or, I think, gevent, or explicitly asynchronous, like Node or traditional Twisted. So my question is, can you give us that opinion? My opinion in the general case is that you should do whatever is going to be easiest for you to achieve writing async code. I don't actually care what you do specifically in your project. For me personally, I prefer the explicit async style.
I particularly like the style that asyncio has, with yield from essentially acting as a keyword for async-happens-here. Twisted can do this too, with inlineCallbacks. I tend to do that with Twisted myself. But whatever is going to make you write async code, you should do that. So my, oops, is that coming through yet? So my fear is that Python is losing mind share rapidly to Node. And that's partly because Node is easy to get into and is also async by default. So how can we grab that mind share back, by making it, and communicating that it is, very easy and effective and powerful to write Python asynchronously from the start? Yeah, that's a great question. I think fundamentally it boils down to the tools we use to onboard people into the language. If I had to pick one thing, it is that thing. We write lots of tutorials and blog posts and various other resources that are used by people who haven't written Python before and want to write Python to solve their specific problem. Usually those things use synchronous Python, whereas the Node examples, because they can do nothing else, will write their code in an async style. We could try harder to make those examples asynchronous by default, rather than synchronous. There is a tension here. Asynchronous code can be really tricky for true beginners. If you're writing a tutorial that is truly for someone who has never written software before, you might want to be a little bit wary about throwing them in the deep end with asynchronous software. However, if you're writing a how-do-I-solve-problem-X-in-Python piece, I think it would be beneficial if you really strongly thought about writing it with an asynchronous set of code examples, or at least both a synchronous and an asynchronous set of code examples, in your framework of choice. This is very much a do-as-I-say-not-as-I-do moment.
I have never actually done this in any of my own work, but having had this question, I think I'm gonna pay a lot more attention to it now. I hope I'm gonna pay a lot more attention to it now. One more quick question, maybe? I'm sorry this isn't a Boolean question, but the biggest pushback I've had to doing async anywhere I've ever worked has not been the arguments that you have here. It's been maintainability, readability, being able to put someone other than the seniors on it, putting the juniors on it: they won't be able to understand what's going on. So can you address that? How do you get past that as the hurdle? Are there ways to make this readable, other than making it synchronous-looking code that happens to be async through some magic? Yeah, my personal opinion on this is that the argument made there, the one that says asynchronous code is harder to maintain than synchronous code, fundamentally ends up being a little bit of FUD. In asynchronous code it is often harder to understand the direct flow of execution, but it is also hard to understand the flow of execution in badly written synchronous code. My contention is that there is a different style to writing clean, maintainable asynchronous code, and most people, because they don't do it very often, aren't yet comfortable in that style. If I look back at software I wrote when I had just started out, synchronous or not, it's very, very difficult for me to understand what it was doing. Maintaining that is hard. You can write clean, maintainable asynchronous software, and my only evidence for that is that people do it all the time. It is a tractable problem. It is definitely harder for people who are not used to writing asynchronous code, and this is where the stuff with juniors comes up.
Here I argue it comes back to the same answer I gave Tom, which is that it doesn't help that most of the new programmers we produce aren't comfortable writing asynchronous code, and I think that's a problem we do need to grasp. And part of the answer might be pair programming: if you've got the ability to spend a little bit more resources, give the people who are more familiar with writing asynchronous code time to help those who aren't as they come into the project, and help them understand what is and isn't good style. That'll help a great deal. Pair programming is good. And with that, we've run out of time. Thanks everyone.