Welcome to the next talk. We're going to have Philip Jones talking about ASGI, the Asynchronous Server Gateway Interface. So please give him a nice round of applause. Thank you. Is this on? Can you hear me? So I'm going to introduce ASGI today, the asynchronous server gateway interface. But just to get an idea, how many people already know what ASGI is? Oh, there's a few. Good. So hopefully by the end of this talk you'll have a very good idea of what ASGI is, if it's a good talk. But before I get into that, just a little about me. If you want to follow along with the slides locally, you can find them on pgjones.dev. You can also find me as pgjones on GitHub and Medium. It's slightly more complicated on Twitter; there's an extra D, because somebody else had the name. But there we are. So I'm going to introduce ASGI, but to do that I'm going to step back and introduce WSGI. I suspect most of the people in this room know what WSGI is, but if you don't, and you've done any web stuff in Python, you've almost certainly used it. I'll explain what it is if you don't know. To go back to when WSGI was introduced in 2003, quite a time ago now, there was quite an ecosystem of Python web frameworks and servers already, such as Zope, Quixote, Webware, et cetera. Now, one of the issues at the time was that these frameworks differed in their API and their functionality, and they were all very distinct. You couldn't take parts of Twisted and use them with Zope, at least as I understood it at the time. So you had to make a choice: you had to use one or the other. And this feeds directly into the motivation for WSGI at the time, which I'll just read out because I figure it summarizes it very succinctly: "Python currently boasts a wide variety of web application frameworks, such as Zope, Quixote, Webware, SkunkWeb, PSO, and Twisted Web", if I've pronounced them all right.
"This wide variety of choices can be a problem for new Python users, because generally speaking, their choice of web framework will limit their choice of usable web servers, and vice versa." So this was seen in 2003, and PEP 333, if you want to read it, came about to introduce WSGI. What WSGI is, is a standard interface that allows you to separate the server code, which is mostly about parsing and understanding the HTTP being sent, from the application code, which is mostly about how you route and couple your business logic into your web server. And WSGI, which stands for the Web Server Gateway Interface, is quite simple in principle. What you have to do is define your application as a callable that takes two arguments. The first argument, environ, describes the request and the environment the server, or the framework, is running in. The second argument, start_response, is a callable which you call to start the response, in this case with the status and some headers. And then you return or yield the body. So it's quite simply defined. And that allowed the servers, which again are mostly about parsing and understanding the HTTP, to be separated from the frameworks, which are more about the API and how it's used. So I think it's fair to say that WSGI has been a really great success for Python. There are a lot of WSGI frameworks. They differ in terms of whether you want to go fully batteries-included, like Django, or something very minimal, like Flask or Bottle, and everything in between. So you've got real choice there in what kind of API you want to use. And just as a note here, because WSGI sounds a lot like whisky, a lot of the frameworks are named after receptacles, like Bottle and Flask, and Falcon, et cetera, though that's not a receptacle, I realized recently. On the other side are the WSGI servers. I think there are fewer of these, because the server side is more defined; it's more constrained by the protocols.
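As a concrete illustration of the interface just described, here is a minimal WSGI application; the function name and the hello-world body are illustrative, not taken from the slide:

```python
# A minimal WSGI application: a callable that takes the environ dict and
# the start_response callable, then returns an iterable of body bytes.
def app(environ, start_response):
    status = "200 OK"
    headers = [("Content-Type", "text/plain")]
    start_response(status, headers)  # start the response: status plus headers
    return [b"Hello, World!\n"]      # then return (or yield) the body
```

Any WSGI server, Gunicorn for example, can serve a callable of this shape directly.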
But there are three big ones I know of: Gunicorn, uWSGI, and Apache with mod_wsgi. I think the main feature you'd choose between them on is how they manage concurrency; that's what you're really going to care about when choosing. But again, it's given users in the Python ecosystem a lot of choice, which is the great thing about WSGI. So over the roughly 15 years that WSGI's been about, it's started to show its age, and I think this is mostly because of changes in the web systems around it, rather than WSGI itself. Of the limitations, the first one I'll mention is that it has no official way to deal with WebSockets. In fact, as you saw, that callable is a request-response cycle: it expects you to return a response and be done. Whereas, of course, with WebSockets that connection is going to stay open, and you're just going to send messages back and forth. Now, there is an unofficial workaround, which is almost a wrapper around the raw socket, but it depends, again, on what concurrency framework you're using, so it's not standardized. For the same reason, it doesn't really say much about HTTP/2 in terms of concurrency. You can use HTTP/2; it's just a request-response cycle. But you've got to decide on top of it how you're going to make it concurrent, because HTTP/2 is concurrent; you can't get away from that. Carrying on, you can't use the async and await keywords, which have only recently been introduced. I think this is more the fact that async code and sync code don't really work well together, but you can't really use them with WSGI servers, because the event loops, if you have them, fight each other. And finally, I haven't mentioned it here, but the request body: you may want to stream it into your server, so you might want to get it in chunks. There's some support for that in WSGI, but generally speaking, the WSGI standard is defined such that you get the whole body, and then it's passed to the callable in the environ.
So the server's got to have it all before it can call the application. These limitations, I think, are what started to motivate async web frameworks, of which there's been quite an explosion in the last few years. I think the first one that came about was aiohttp, which I've shown an example of here. You can see it meets a lot of these desires, these limitations. First, you can use the async and await keywords; you can await the DB call, you can see there. Secondly, although it's not shown here, it does do WebSockets as well. You can stream the request body. And although aiohttp doesn't say anything about HTTP/2, it may do in the future, and some of the frameworks and servers I'm going to talk about in a minute do. So looking at where we are today, I would say it's quite similar to where we were, where the community was, in 2003. There are quite a few async frameworks. These logos are aiohttp, BlackSheep, Sanic, Japronto and Vibora. Some of them, I don't think Japronto or Vibora are maintained any more. But all of these have made a name in the ecosystem, mostly because of just how much better performance they provide, which I'll come to later. That's what's made people excited, alongside these features. But especially for these frameworks, because you can't use them together, you can't mix and match, I think they make for the perfect motivation for ASGI. If I take the original motivation for WSGI and change it very slightly, and say that Python currently boasts a wide variety of async web frameworks, such as aiohttp, Sanic, BlackSheep, Japronto and Vibora, and this variety of choices can be a problem for new Python users, because generally speaking their choice of web framework will limit their choice of usable web servers and vice versa: I think this goes beyond just new Python users as well. I think this is a limitation for all of us.
I think this is good motivation to want a new standard to come about that works in the asynchronous world. And that standard, I hope to convince you, should be ASGI. ASGI stands for the Asynchronous Server Gateway Interface; I think it's so named to make the parallels with WSGI clear. As we go into this, I'll try and say what my involvement has been (it's been a small part) and where my biases come through. ASGI itself is again about defining your application as a callable, but in this case it's a coroutine function, so it's async by default. And this time the callable takes three arguments. The first is the scope, which is very similar to the environ; it tells you about the connection in this case, because it's not necessarily just an HTTP request. And there are two extra callables, receive and send. So you, as an application, have to receive messages from the server and then send messages back. And you can see, in this example, they're all asynchronous, so it all works in the concurrency framework. The link there, which you can find in the slides, goes to the specification as it's written out. So to give you an example of what it looks like: the WSGI example I showed earlier would simply respond with a 200 response, with a very short body that just says hello world, to whatever request it got, as would this piece of code here. The big differences are that you now have to check that it's an HTTP request, so that you're not responding to, say, a WebSocket with a response, because that wouldn't make any sense. And then, much like with WSGI, you start by sending the response information, the status and the headers; so you start the response. And then you stream, or send off, the body; in this case it's just hello world. So it's quite simple. This, I think, is roughly the same as the WSGI one you saw a few minutes ago. So I'm going to go into more detail now about what ASGI is and all the different parts of it.
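As a sketch of what that looks like in code, assuming the "http.response.start" and "http.response.body" message types from the specification, a minimal ASGI hello-world application might be:

```python
# A minimal ASGI application: a coroutine taking scope, receive and send.
async def app(scope, receive, send):
    # Only handle HTTP connections; a WebSocket scope would look different.
    assert scope["type"] == "http"
    # Start the response with the status and the headers...
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    # ...then send (or stream) the body.
    await send({
        "type": "http.response.body",
        "body": b"Hello, World!\n",
    })
```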
But just to give you an idea of the development of ASGI: it came about through, I think, Django Channels' development, the ultimate aim of which is to make Django async. The first version of ASGI separated the server and the application by process. It was quite complicated, and you can thankfully just ignore it; if you do manage to find anything that references ASGI 1, you can just forget about it. Then there was ASGI 2, and ASGI 3, which are very similar. They're roughly the same; it's just that ASGI 3 is a bit simpler and cleaned up, and I'm only going to talk about ASGI 3. But if you come across ASGI 2, basically anything that supports ASGI 3 will probably support ASGI 2 at this time. So here we are. Okay, so I'm going to go into a bit of detail so that you hopefully really understand how ASGI works. The first thing that the application callable gets is the scope. The scope tells you about the connection, and for an HTTP request it's going to tell you about the HTTP request. Every ASGI message and scope has a type, so this type is "http". It's going to have an HTTP version, and then it's going to define the request: that's the method, the scheme, the path, the query string and the headers. Then a little bit about the connection itself, the client address and the server address, and then a bit about the environment, which is the root path. So this is what you get in the scope, and this is pretty much it. It's very similar to the WSGI environ. So you as an application get sent this when a client connects to the server. Then you as an application are going to wait for messages from the server to know what you want to do. What you're likely to receive from the server to begin with is a message that says there's an HTTP request body coming through.
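To illustrate, the HTTP scope for a hypothetical request like GET /search?q=asgi might look roughly like this (the addresses and header values are made up for the example):

```python
# An illustrative HTTP scope, as an ASGI server might construct it.
scope = {
    "type": "http",                   # every ASGI scope and message has a type
    "http_version": "1.1",
    "method": "GET",                  # the request itself: method, scheme,
    "scheme": "https",                # path, query string and headers
    "path": "/search",
    "query_string": b"q=asgi",
    "headers": [(b"host", b"example.com"), (b"accept", b"*/*")],
    "client": ("192.0.2.1", 54321),   # the connection: client address...
    "server": ("198.51.100.1", 443),  # ...and server address
    "root_path": "",                  # the environment: where the app is mounted
}
```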
In this example, to demonstrate streaming, the first body message just says "hello", but it tells you there's more body expected. So you as an application are probably going to expect the next message from the server to be something similar, until you've run out of body. In this case you get the final part, "world", and it tells you there's no more body coming. So now you as an application can start sending your response. The first thing you're going to send is the basic information about the response, the status and the headers. Then you'll stream out the body; in this case, because I've run out of slide space, it's just one message, which is just "hi". So you send that off, and then the client's most likely going to disconnect from the server, and you receive a disconnect message from the server when that's happened. For HTTP, this is basically all you need to get right to have an ASGI application run. So, moving on to WebSockets. It's all asynchronous; like I said earlier, ASGI supports HTTP and WebSockets. With a WebSocket connection, the scope is a little bit different. First, the type says it's a WebSocket connection. The connection information is similar: you get told the scheme, the path, the query string and the headers, but you also get told the subprotocols, which matter for the connection. Then a little bit about the connection, the client and server addresses, and finally the root path. So again, when a client connects to the server, you get called with this scope. And again, you as an application are going to expect some messages from the server and then respond to them. The first thing you need to do with a WebSocket is decide that you want to accept the connection and turn it into a full WebSocket connection. So you send an accept back to the server, and now it is a WebSocket connection. You'll perhaps then receive a bytes frame from the client through the server, or maybe a text frame.
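Putting those WebSocket pieces together, here is a small sketch of an ASGI application that accepts the connection and simply echoes frames back; the message types are from the specification, while the echo behaviour is just for illustration:

```python
# An ASGI WebSocket application: accept the connection, echo any text or
# bytes frames back to the client, and stop once the client disconnects.
async def websocket_app(scope, receive, send):
    assert scope["type"] == "websocket"
    while True:
        message = await receive()
        if message["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif message["type"] == "websocket.receive":
            # A frame carries either "text" or "bytes"; echo whichever is set.
            if message.get("text") is not None:
                await send({"type": "websocket.send", "text": message["text"]})
            else:
                await send({"type": "websocket.send", "bytes": message["bytes"]})
        elif message["type"] == "websocket.disconnect":
            break
```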
So you could get both text and bytes frames, and you can send back any frame you want, at any time, as well. And then, much like with HTTP, you've got to decide when you want to close the connection. So you can send a close back, and once it's closed you get told that the client's disconnected from the server. So again, this is roughly all you need for an ASGI application. There's one more part of ASGI that matters, as a kind of protocol stage, and this is the part where I start to have been more involved in ASGI, so this is more of my bias. One of the issues I found with WSGI is that you can't really say that you need to prepare something before the server starts receiving requests. A good example of this is if you're going to connect to a database: you probably want to create a connection pool, and create it before you receive a single request, because otherwise it's just going to add latency to that very first request. If you use Flask, you probably do this with before_first_request. So the ASGI lifespan protocol allows the server to tell you it's starting: it sends you a lifespan startup message, you as an application do your setup, and then you send back to the server that you're ready, by saying startup is complete. From that point on, the server can start accepting connections. So if you've got, say, a load-balanced set of servers, you can roll them over more easily. Equally, when you're shutting down gracefully, there's the opposite on shutdown. So that's lifespan. Then there are two extensions to ASGI, both of which I've been quite involved with. The first of which is server push: for HTTP/2 connections, you can decide as a server whether you want to push a response to the client before the client asks for it.
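The lifespan handling just described might be sketched like this; the database pool mentioned in the comments is a hypothetical example of the kind of resource you'd manage here:

```python
# Handling the ASGI lifespan protocol: do setup when the server says it's
# starting, confirm completion, and tear down again on shutdown.
async def lifespan_app(scope, receive, send):
    assert scope["type"] == "lifespan"
    while True:
        message = await receive()
        if message["type"] == "lifespan.startup":
            # e.g. create a database connection pool here, before the
            # server starts accepting requests (hypothetical setup step)
            await send({"type": "lifespan.startup.complete"})
        elif message["type"] == "lifespan.shutdown":
            # e.g. close the pool here, as part of a graceful shutdown
            await send({"type": "lifespan.shutdown.complete"})
            return
```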
You often want to push a response when you know the client's going to request it, a good example being that if the client's requested an HTML file, they're probably going to ask for the CSS and JavaScript next; so you send those to the client before they ask, and save a bit of latency. So that's server push. As an application, you simply look at the scope to see if it's got an extension that says the server allows server push, and then you can send a message to the server saying, push this to the client. The next extension is being able to send a different response when a WebSocket connection request is made. When a WebSocket connection is attempted, it starts with an HTTP message that asks for an upgrade. Now, most libraries will allow you to accept that upgrade, or just close the connection if you don't want it. What this extension allows you to do instead is say: I want to reject the upgrade, but with this particular response, so you can be a bit more informative, maybe say that they don't have the right credentials, or that the request is badly formed, or something like that. So again, you just need to look in the extensions (and if you're wondering about the empty dictionary, that's just so there's room for extra options; it's not a typo). You look for this extension, and then you can send messages back to the server like you would for an HTTP response, only they're now prefixed with "websocket". So it's exactly the same: you just send back an HTTP response. So that, in a bit of a rush, is pretty much the whole ASGI spec; you can go read the specification for the exact details, but that's pretty much all of it. Much like WSGI, I think this is starting to have a really good effect on the community. There are a lot of ASGI frameworks starting to exist now. Just to name a few, there's Quart, FastAPI, Responder, Starlette and Django Channels. I think Django itself is becoming async; I know a bit less about this, but it looks like it probably will.
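To go back to the server push extension for a moment, here is a sketch of checking the scope and asking the server to push a stylesheet; the asset path and function name are hypothetical:

```python
# Sketch of the HTTP/2 server push extension: check whether the server
# advertises it in the scope, then ask it to push a (hypothetical)
# stylesheet to the client before the client requests it.
async def push_stylesheet(scope, send):
    if "http.response.push" in scope.get("extensions", {}):
        await send({
            "type": "http.response.push",
            "path": "/static/style.css",  # hypothetical asset path
            "headers": [(b"accept", b"text/css")],
        })
        return True
    return False  # extension not supported by this server; skip the push
```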
My bias here is that I work on Quart, and Quart's very Flask-like, as are most of these. In terms of the ASGI features I've just talked about, there's a strong bias here, because the extensions are my interests; I've written them. But basically all of these frameworks support the basic ASGI spec, so you can go and use any of them with the ASGI servers and you'll have a good time. Looking at the ASGI servers, as far as I know there are currently four main ones: Hypercorn, Uvicorn, Daphne and Mangum. Again, I've got a bias here, because I write Hypercorn. But in terms of the others: Mangum is a serverless one, if you're interested in that; I think Daphne is very popular in the Django community; Uvicorn is the most performant; and my bias is, I would say, that Hypercorn has the most features, though again, maybe features only I'm really interested in, so there we are. In terms of those features, there's support for HTTP/2 in Hypercorn and Daphne. You can do WebSockets over HTTP/2 in Hypercorn, which I don't think you can do elsewhere, and then the server push and WebSocket HTTP response extensions as well. So those are the features I'm interested in, but most people are actually more interested in performance. Indeed, for the ASGI frameworks, and the async frameworks in general, this has been one of the most exciting things for the community. To say a little about that: if we look at the latest TechEmpower benchmarks, at least I think this is the latest, and we look at the top 10 Python entries, six of those are ASGI-based, and indeed the top two are ASGI. Two more are async, and it's only Tornado and Bottle that are WSGI. So I think ASGI, and async in general, have made quite a name for themselves here. So hopefully you've now got a very good idea of what ASGI is, and hopefully I've convinced you that it's the right standard for the community to adopt. I think I might have time for questions. Cool. So, does anyone have anything to ask?
I was just wondering, with this new system, because I'm used to Flask but I've never used any of this: are we able to use less JavaScript now? Because of the interaction we have to build with JavaScript, with things going back and forth in reactive applications, is it possible to do some of that stuff on the server now? I'm thinking of what LiveView is doing over in Elixir.

So I'm not quite sure that it is, but maybe: if there was something that needed WebSockets, which were easier to do in JavaScript, it's quite easy to do them in Python now. So that might mean less JavaScript for you. Any more questions?

Have you heard anything from Sanic about implementing ASGI 3, I guess, would be the right question today?

Yeah, so they've been working on it. I'm told today by Tom, who I can't see... oh, hi Tom... that they're actually quite close. So I think they'll be an ASGI framework, and maybe an ASGI server, soon. Any more questions?

So that was a very interesting overview, thank you very much. What do you think still needs work? Where are the areas of the ASGI spec, or the implementations, that are most in need of improvement at the moment?

I think it's actually had quite a bit of development now, so I think it's in quite good shape. There's nothing that springs to my mind, and I was speaking to Tom about it earlier and I don't think there's anything that springs to his mind either, so I think it's in very good shape now. Any other questions? Okay, then... oh, one more question.

This is a silly question: do I detect the influence of SwiftOnSecurity in the naming of your ASGI server, Hypercorn?

No, I don't actually know what SwiftOnSecurity is, so you saw a pun that wasn't there. I use the hyper sans-I/O libraries, mostly, for it, and then I've tried to keep the interface as similar as I can to Gunicorn, so that's where the name comes from.

That's great, that is. Thanks for the talk, really interesting.
So this wouldn't be a drop-in replacement; you'd need a new framework? So, say my particular stack is Nginx, uWSGI, Python and a Flask server: it would be a new framework and an ASGI server, and then Nginx, which does HTTP/2, and that could be a production stack, that kind of thing?

Yeah, I think this is more an issue of trying to mix synchronous and asynchronous code together. I don't think that's really possible, so I think you have to choose to be async. That was my aim with Quart: Quart has the Flask API, reimplemented with async and await, to make that move easier. But most of these frameworks are very similar to Flask, so if you were to move to, say, Starlette, it would be quite a pleasant and easy experience for you.

We have time for one more question, maybe?

Hey, I have a very simple question. When it comes to the WSGI spec, it's very easy to remember: PEP triple-three, and then quadruple-three. Is there a PEP for ASGI?

There isn't. There's been talk about writing one; I don't think it's progressed much recently, but I hope it will become a PEP. We shall see. You can just search for ASGI, or asgiref, and you should find it. There's the link I showed earlier, so you can also take that if you'd like.

Anything else? We have one more minute? No. Then let's give our speaker a nice warm applause, please. Thank you.