I think it wasn't the best title. So what I really want to do today is talk to you about my journey learning Sanic, and learning the asyncio ecosystem in terms of doing web development and web frameworks. But first of all, just quickly, who am I? I'm a Scottish Pythonista, and speaking of which, it is extremely hot here. And my hotel AC is not working, so if I fall asleep, it's because I didn't sleep well last night. Otherwise, I'm the Python Glasgow organiser. I'm an open source maintainer of a number of projects; probably the most popular is MkDocs, but I've actually been fairly inactive on that lately. I'm an OpenStack developer at Red Hat as my job. And in the past almost 10 years, I've done a lot of Django and Flask development, so I've done quite a lot of web development. So this has been really quite an interesting lesson for me in terms of learning about asyncio web frameworks, because while the actual web development can be very similar, there's quite a specific mindset that you need to take, because otherwise there are pitfalls that you'll hit. But first of all, I just want to talk a bit about how we got to this. OK, just checking it was updating. How did we get to this point where people are able to use asyncio for this kind of development? I think a lot of it comes from the transition that's happening from Python 2 to 3. I feel like as a community we are at the tipping point where most people should either be on Python 3 by now, or they should have a clear transition to Python 3 under way.
And if not, you're going to be in a nasty place before too long, I think. But if you can make the assumption that most people are on Python 3, then most of them are also on Python 3.3 or above, which means they have the ability to use asyncio. So this talk is not really about what asyncio is, but I just want to quickly define it for people if I can, which is a fairly tricky thing to do, because it's something that could have multiple talks itself. It's quite a big subject. But essentially, asyncio provides a way to have single-threaded concurrent code using coroutines, and essentially you have an event loop. So this is something that isn't particularly new. It's not an innovation in asyncio. Twisted has been doing it for a long time. Tornado did it more recently. Node.js made it very popular in the JavaScript world. However, I think the really important thing about asyncio is that it's given everyone a common base to work on. You don't end up with silos where there's all these separate Twisted projects. Hopefully everyone will start moving towards asyncio as the standard event loop, which they can then build an ecosystem around and improve things. And I think that's the most important thing that asyncio has actually done. I have a confession to make: when I initially tried asyncio, about two to three years ago, I essentially dismissed it. I didn't like it. It felt awkward. For those that remember, before Python 3.5 you had to use a decorator to specify coroutines, you had to use yield from, and it just felt kind of awkward to me. It didn't feel like regular Python. But with the addition of the async and await keywords, I feel like it's a lot more natural and a lot more comfortable to use. I'll have some examples of these, so if you're not too familiar with them, you'll see them in a bit.
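As a tiny preview of what that syntax looks like, here is a minimal sketch; it uses the modern asyncio.run helper (added in Python 3.7) for brevity, and the coroutine names are made up for illustration:

```python
import asyncio

async def add(a, b):
    # "async def" defines a coroutine: no @asyncio.coroutine
    # decorator or "yield from" needed, as before Python 3.5.
    await asyncio.sleep(0)  # stand-in for waiting on some IO
    return a + b

async def main():
    # "await" suspends main() here until add() finishes,
    # giving the event loop a chance to run other tasks.
    return await add(2, 3)

result = asyncio.run(main())
print(result)  # 5
```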
But the point is that I've had a much better experience using asyncio this time around. So if you have looked at it in the past and maybe dismissed it for similar reasons, it might be time to take a second look. And that then takes us to Sanic, which is the thing that binds this whole talk together. Sanic is a small, contained, bare-bones web framework. It's got a similar scope to Flask. I don't know if they have any plans to change that over time; I suspect probably not. So Flask is really minimal, but then it's really powerful because of the ecosystem that's built around it. And I think that Sanic over time will have an ecosystem which hopefully builds up and adds similar levels of power. And if anyone's wondering what is special about Sanic and using asyncio, why is it worth the effort compared to Flask, what can you do that you can't do otherwise? I find this quite hard to explain, probably because I'm not an asyncio expert, but essentially the event loop allows you to keep multiple connections open at the same time. That means you can take multiple requests and process them while you're just waiting for IO. So say for example you have a web app with an endpoint that requests something from a database. When your code reaches the point where it makes a request to the database, it can release control; that IO will happen in the background, essentially, and another request can be processed in the meantime, and you'll keep switching between the different tasks on the event loop. The equivalent in something like Flask would be that every request would be processed one by one; it would wait for each request to finish, and they'd be completely sequential, whereas with asyncio they're essentially concurrent.
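To make that concurrency point concrete, here is a small self-contained sketch that fakes the database wait with asyncio.sleep; the three simulated requests overlap on one event loop instead of running back to back:

```python
import asyncio
import time

async def handle_request(n):
    # Simulated database query: while "waiting for IO" the
    # coroutine yields control back to the event loop.
    await asyncio.sleep(0.1)
    return f"response {n}"

async def serve_all():
    # All three "requests" are in flight at the same time.
    return await asyncio.gather(*(handle_request(i) for i in range(3)))

start = time.monotonic()
responses = asyncio.run(serve_all())
elapsed = time.monotonic() - start

print(responses)
# Sequential handling would take ~0.3s; concurrent is ~0.1s.
print(elapsed < 0.25)
```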
For people who don't understand the Sanic reference, the web framework is named after this internet meme, which is maybe a sign of how serious it is, but that fits in quite well with the Python ecosystem, I think. I don't fully understand the meme, I have to say, but essentially I think it was a kid or somebody who had done a really bad drawing of Sonic the Hedgehog, and people just found it hilarious and shared it, and these things explode. I mentioned that there has been an explosion around asyncio recently in terms of frameworks and options, so what made me pick Sanic over the others? There are a whole bunch of other competing web frameworks which are hoping to get users. There were a number of reasons, and honestly they're fairly simple: it just seemed like the easiest to use, and it was the quickest to get started with. As someone that's done Flask before, it seems very familiar, and it just didn't really get in my way. It essentially provides you with a way of doing your requests and your responses, and then you can call out to other parts of your code as you need, which is exactly how it feels when you're working with Flask as well, I think. I started using it initially probably in early January, I think, and I think I was quite lucky as well, to be honest. It seemed like the most active and it seemed like the nicest to use, but looking back at the stats now, it's really the only one that's gained traction, and it's the only one that's really active on GitHub with lots of issues and pull requests and community discussion. So I think I jumped on the correct train there, which was definitely partially luck, but also probably a testament to how easy it was to use. Getting started is as easy as you would expect. There is one catch: I found out that Sanic doesn't actually work on Windows at the moment. The reason for this is that rather than using native asyncio, it actually uses uvloop.
This is a faster implementation of the asyncio event loop written in Cython, and largely, on macOS and Linux, that's just an implementation detail you can ignore. They're compatible, drop-in replacements. But if you try to install Sanic on Windows, you actually get an error at the moment. I thought it would fall back to asyncio, but I found out earlier, when somebody showed me on Windows, that the installation will just fail and complain that uvloop does not support Windows. So I'm not sure if that was a regression or what happened, but hopefully there'll be a way to use it on Windows at some point. So this is the mandatory Hello World, which shows you the very bare-bones API. And again, I'm going to reference Flask quite a lot, but it will seem very familiar to Flask developers. Unfortunately, with the layout of this room, it's quite hard for me to point up there, even with my laser pointer; it's kind of above me. But you can see that Sanic has the concept of apps, and that's probably why I included apps in my talk title. Essentially you define an app to represent what you're building, and then you add routes to it just like you would in Flask. The key difference here is the async keyword. So if you're not familiar with asyncio, this is syntax that was added in Python 3.5, I think, although I did not write down the version here. What that does is tell Python that this function is a coroutine. That means that you can await on it, and it can await on other coroutines internally. But otherwise, this is just a very simple example. It just returns the plain text Hello World, and it's a fully working example. As long as you've got Sanic installed, you can just run that. And when you do, this is the output that you get. This is obviously the ASCII version of the drawing that we saw earlier. And this is one of the things that I think Sanic is doing really well.
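For reference, that Hello World looks roughly like this; this is a reconstruction based on the Sanic API of that era, so treat it as a sketch rather than the exact slide:

```python
from sanic import Sanic
from sanic.response import text

app = Sanic()

# Routes are added with a decorator, just like Flask; the
# difference is the handler is declared with "async def".
@app.route("/")
async def hello(request):
    return text("Hello World")

if __name__ == "__main__":
    # debug=True is what triggers the ASCII-art banner on startup.
    app.run(host="0.0.0.0", port=8000, debug=True)
```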
We could do with some more innovation in ASCII art and entertainment when you're running your apps. Joking aside, it's actually reasonably useful, because that will only appear when you're running the app in debug mode. So it's a good red flag that you've forgotten to turn debug off. Otherwise, there are the standard things you would expect. This is just a quick example showing how you'd return plain text like in the first example, return JSON, and return HTML, which could obviously be something like a rendered Jinja template. But what I don't really want to do is basically talk you through the documentation, talk you through the API. So we're going to try a slightly more interesting example, and hopefully I can talk you through that. I think WebSockets are one of the best ways to demonstrate why an async framework can be useful. For those of you that don't know, WebSockets are a way for your JavaScript to speak to your server in essentially real-time communication. So rather than polling for updates, for example, they'll open a connection when the page loads and then they'll just sit there waiting. Well, actually, they can send data to the server as well, but they'll also receive data from the server. It's two-way communication and it's essentially instantaneous, other than the network latency. It allows you to do real-time websites, which is really good for things like chat or notifications and so on. Anyway, for the WebSockets support in Sanic, you need this extra package, which is an optional dependency, essentially. It's an asyncio WebSocket library, which is really nice. So even if you want to do something with asyncio and WebSockets, but not necessarily with Sanic, I think this is the best option. It's written by, I think, one of the Django core developers originally, but not using Django itself as far as I'm aware.
So when you have installed that, you can then use the app.websocket function, or decorator, sorry, to define a WebSocket endpoint. Again, this is a very simple example. We have the async keyword to tell us it's a coroutine, and then we have the two awaits. I can't actually point, and I can't read that, so I'm just going to have to explain it. If you're new to this stuff, this can seem like quite a strange pattern, because it looks like this function could just lock up indefinitely, because it's a while True, an infinite loop. But actually, every time you get to the awaits, the await websocket.send and the await websocket.recv, the IO for that will happen in the background. It will be waiting there for something to be sent or something to be received, and then it will return to the event loop, and other requests can be processed concurrently. This is why it's so important to have something like asyncio for this, where you can have concurrent requests. You can't really do WebSockets in something like Flask, because it requires you to maintain the connection, and keeping many connections open in Flask just doesn't really work. And the same applies to Django and the like. There are ways to do it in Django now, but what people used to do is run something like Tornado or Twisted as a WebSocket server alongside the Django or Flask server, and they'd speak to each other behind the scenes and send notifications, and you'd have to deal with keeping them all in sync and everything. Anyway, this is a very simple example from the documentation, and I think we can do something a bit better. So, in danger of angering the demo gods, if you could all just try to go to this web address, it'll be kind of interesting to see if this works at all.
It won't use much data, so you can use your regular cellular data, and thankfully, if you're European, you won't have any roaming charges now. But essentially, what this does is show the number of people that are connected to it, and it shows a list of the user agents of the people connected. I'm trying to load it now, so it might take a moment for the Heroku app to start up. Has anyone managed to load it? How many people are connected, does it tell you? Okay, all right, I was worried this might happen, but essentially, it's a bit of fun, and I can still walk you through the code in the same way. 20 plus, that's pretty good. I've done a demo of this talk in Glasgow, and I had far fewer people at it. And I have no idea how many it can handle before it falls over, but it seems to be doing better than the Wi-Fi, which is not too surprising. How many? 31, that's pretty good. So all of your requests are concurrently open and all being handled, and they're sitting in the background waiting for the IO events to finish. And the nice thing about it is how little code is actually required to make this demo. This is a slide where I really needed to be able to point, but I'll just talk through it from top to bottom. The first two variables at the top, connected and user_agents, are essentially tracking the WebSockets that are connected: each WebSocket that connects is added to the connected set, and user_agents is just a dictionary of user agents, which I repeatedly send to the browsers. And we have our coroutine, again, which is called feed, and when the request is opened, the WebSocket is added to the connected set. Then we just take the user agent from the request and add that to the user_agents dictionary. And I have a little print statement, which was useful from the console, and I probably should have removed that. Funnily enough, I said that the last time I gave this talk, and I never removed it.
But then we go into the actual loop, which is happening here, which is sending the data back to the client. So what you can see is I've got a websocket.send, and it's just sending JSON: it's sending the user agents, which is a dict, as I said, and then for the WebSockets I'm just sending the length of the set, so it's just sending the number, which is more efficient. And then we await on that data being sent, and it just happens over and over. And on the next line we have the asyncio.sleep. It's sleeping 0.1 seconds, so that means it's sending the information updates to the client every 0.1 seconds, assuming everything goes okay. The one thing that's really important here, and this is an example of where you need to understand asyncio, is to make sure you don't do anything which is blocking. You can't use, or you could, but it'd be a bad idea, you shouldn't use the time.sleep function here. You have to use asyncio.sleep, because if you used time.sleep, it would just block the event loop at that point. But when you use asyncio.sleep, it releases back to the event loop and allows other requests to be processed. So the thing with something like asyncio is not that you can't write blocking code; it's that you need to make sure your code is written in a way that isn't blocking. And this is one of the gotchas I find, because you might be trying to use a third-party library, but if that library does anything blocking, then that will block your event loop as well. And then in the finally part of the try/finally, we just do some cleanup. We remove the connected WebSocket, and then we remove the user agent and print out the number of connected sockets, for my debugging again. So if anyone still has it open, you'll see that that number will decrease as people start to close it as well. So it's quite nice, you get the real-time update there.
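Putting the pieces just walked through together, the handler looks something like this; it's a reconstruction of the demo code rather than a copy of the slide, so names and details are approximate:

```python
import asyncio
import json

from sanic import Sanic

app = Sanic()

connected = set()   # the currently open websockets
user_agents = {}    # user agent string per connection

@app.websocket("/feed")
async def feed(request, ws):
    connected.add(ws)
    user_agents[id(ws)] = request.headers.get("user-agent", "unknown")
    try:
        while True:
            # Send the connection count and the user agents as JSON...
            await ws.send(json.dumps({
                "connected": len(connected),
                "user_agents": list(user_agents.values()),
            }))
            # ...then yield to the event loop for 0.1s. Crucially this
            # is asyncio.sleep, not time.sleep, which would block every
            # other connection on the loop.
            await asyncio.sleep(0.1)
    finally:
        # Runs when the client goes away and send() raises.
        connected.discard(ws)
        user_agents.pop(id(ws), None)
```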
So for example, this could be a very simple tool to just show people how many other people are looking at a web page, which could perhaps create FOMO, because it's like, 20 people are looking at this hotel, book it now. I think there's one website I go on that does that. And one thing I should note about the WebSocket: when somebody closes the tab, the next time we try to send data to that WebSocket, you'll actually get an exception, something like "WebSocket closed". That's when you break out of the loop and the finally is called. So we never close the WebSocket on the server side; that's only ever done on the client side. And then Sanic just handles that exception as an end-of-request type thing. It doesn't treat it as an error; it's just like, we're done, we're good. And then just for completeness, this is the JavaScript I wrote, and it's kind of horrible. It turns out it's actually quite hard to do something very concise in JavaScript when you want to deal with WebSockets. I'm not really going to talk through this. I mostly added it so that if anyone stumbles across the slides, they've got a mostly complete example they can work with. I actually have all the code in a GitHub repository, so if anyone's interested, I can link you to it. I forgot to add that link to my slides, but it's pretty easy to find as well. But what I found really exciting and really fun about writing this is that the full thing is only 80, sorry, 85 lines of code. That's the Python, the HTML, the JavaScript, and the Heroku config, which is only a couple of lines admittedly. Creating something like this was fun, but you can also see how this could be useful, even though it's such a short example to write. Of course, with something like WebSockets, the complexity comes at the next stage: it's what you do with this.
So say, for example, if every WebSocket was making a request to the database, you could potentially be in danger of overloading your database with connections. On Heroku, in the app that I'm using Sanic for, my non-toy app, not this one, I use Heroku Postgres, and I'm on one of the cheaper tiers, so you only get 20 connections or something. So it's very easy to have so many WebSockets open that you run out of connections. But you can get around these problems using things like PgBouncer, the Postgres connection pooler, and have that running on each of your Heroku dynos, which means you can have potentially 100 connections or so which only translate into one or two real connections. But that's just an example. So this is a very simple demo, but you will find that the complexity does come later. And I did say I'm not going to read through the documentation of Sanic, but I just wanted to highlight a couple of the common features that people have asked me about, like: does Sanic have this, does it not have this? So just to quickly mention them. Class-based views are there. These are very like Flask's class-based views. This means that rather than having a standard function to handle your request, you can have a class which breaks out your POST, GET and the other types of requests, which is quite a nice way of doing it. There are blueprints, and these are again very much like Flask's blueprints. They actually say in the Sanic documentation that they are Flask-like, I think. So they really are trying to mimic the Flask API where it makes sense. The blueprints are a way for you to write reusable apps, which can then be mounted onto endpoints. It's kind of hard to explain, but basically it's for making reusable web apps, or components to go in part of web apps.
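As a rough sketch of what those two features look like, reconstructed from the Sanic docs of the time (so exact details may differ):

```python
from sanic import Sanic, Blueprint
from sanic.response import text
from sanic.views import HTTPMethodView

# A class-based view: one method per HTTP verb instead of
# branching on request.method inside a single function.
class ItemView(HTTPMethodView):
    async def get(self, request):
        return text("list the items")

    async def post(self, request):
        return text("create an item")

# A blueprint groups related routes into a reusable component...
bp = Blueprint("items")
bp.add_route(ItemView.as_view(), "/items")

# ...which is then attached to the app.
app = Sanic()
app.blueprint(bp)
```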
There's also an unopinionated configuration object, which provides multiple ways of loading up the config but essentially acts like a dictionary, so it's easy to store the config how you want. And it's got support for things like cookies and so on, and everything you'd expect. When it comes to testing, you need another optional dependency. This is aiohttp, which, as far as I'm aware, is the most common HTTP-requesting asyncio library. So essentially it's the requests of asyncio, although I did hear that requests is going to have an async-compatible part added to it at some point, but I don't know when. But anyway, for now, this is, I think, the most popular, and Sanic uses it internally to make requests to itself, which allows you to then test your app with it, is my understanding. And this is a very simple example of how you use it, and really, testing with Sanic is simple, because your Sanic layer is such a narrow layer in your app. But the challenges I found with testing were actually much harder in general when using an asyncio app, just because all your code expects to be run on an event loop. That means essentially your tests have to set up an event loop, run it, and then tear it down again, which is quite an overhead for each of your tests. And if any of your tests misbehave, so for example they don't shut down the event loop properly for some reason, then you'll have a test which just blocks, because it's waiting on an event loop that's stuck open. So something I would actually be really interested in, and this is kind of a request out there to anyone that knows about this, is some input on how to best test asyncio or asynchronous apps in general. Somebody should submit a talk on that, or a blog post or something; that's my request. But it's definitely not a fault of asyncio itself; I think it's just more the inherent complexity of your overall architecture as it changes.
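To illustrate the per-test overhead I'm describing, here's a minimal stdlib-only sketch of the set-up/tear-down dance; the coroutine under test is a stand-in, not code from my app:

```python
import asyncio
import unittest

async def fetch_greeting():
    # Stand-in for an async handler you'd want to test.
    await asyncio.sleep(0)
    return "Hello World"

class GreetingTest(unittest.TestCase):
    def setUp(self):
        # Every test pays for creating a fresh event loop...
        self.loop = asyncio.new_event_loop()

    def tearDown(self):
        # ...and must close it again, or later tests can hang
        # on a loop that's stuck open.
        self.loop.close()

    def test_greeting(self):
        result = self.loop.run_until_complete(fetch_greeting())
        self.assertEqual(result, "Hello World")

suite = unittest.TestLoader().loadTestsFromTestCase(GreetingTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```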
This is just a small note, but this is something that I missed when I first was using Sanic, and it greatly impacts the performance. The thing you should be looking at on this slide is workers=8. If you don't provide this, Sanic will just default to one worker. What it means is that it will spawn multiple processes, which will then route your requests across those processes and handle them. So it obviously makes your app run a lot faster. This is quite similar to something like Gunicorn, which will spin up, say, multiple Django or Flask processes and route the requests. But with Sanic you essentially have an event loop in each process, so you've got multiple event loops going on, and you should be able to get quite a lot of throughput doing that. The documentation, or the Sanic developers at least, seem to recommend one worker per CPU core that you want to dedicate to Sanic. So if you have a four-core machine, you might want to give it all four cores and use four workers, or you might have something else running on there, so maybe you only give it two. The ecosystem is probably the biggest sacrifice that you'll make if you head in this direction, compared with using a more established framework. There is a small but reasonable set of extensions listed in the Sanic documentation. That seems to be the best collection of them, but it does feel like your mileage will vary a lot with all of them, and more often than not you end up having to roll your own integrations. I've tried a few. There was the Sanic limiter, which is a rate limiter; that seemed to work pretty well. But then I tried another one to do with session handling, and that didn't work well for me, because it didn't provide the back-end I needed, so I ended up writing my own session handling, which is kind of a pain.
And really I should probably rip that out and release it as, you know, sanic-postgres-sessions or something, but I've not done that yet. If anyone's interested, I could look into doing that. Then you have to start looking at how you would integrate with other systems. I've mentioned Postgres a few times, so it's no surprise that I'm using that. And the thing with Postgres is you need to make sure you're using a non-blocking Postgres library. The most common one is aiopg, and it nicely provides integration with SQLAlchemy, which allows you to use SQLAlchemy over a non-blocking connection. But it only allows you to use certain parts of SQLAlchemy: you can only use the core API, you cannot use the ORM API. For people familiar with SQLAlchemy, you tend to use one or the other; you don't tend to use both. I used the ORM API until I started doing this, and then I had to use the core API. They're both fine, but it's just a pain that you have to be aware of what you can and can't use, otherwise you'll run into trouble. So I think it's kind of interesting to think about when you should consider using Sanic. My talk title is to do with web apps, but I don't think people should really be looking at building a large web app with Sanic. You want to think of it more for high-throughput services, something that you need to go really fast, or something like a microservice would work really well: something that's quite lightweight but is going to get lots of requests. And smaller web apps do work too. It's not to say that you couldn't do a larger web app; it's just that when you start to require things like sessions, you have to roll your own, and then you'll need auth, and I'm not sure there are many authentication packages out there. It just becomes a lot of work.
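A sketch of what that aiopg plus SQLAlchemy core combination looks like; the table definition and the DSN here are made up for illustration, so treat the details as approximate:

```python
import asyncio

import sqlalchemy as sa
from aiopg.sa import create_engine

metadata = sa.MetaData()

# Core-style table definition: with aiopg you stay on SQLAlchemy's
# core API; the ORM (declarative classes, sessions) is off limits.
users = sa.Table(
    "users", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
)

async def list_user_names(dsn):
    # create_engine/acquire are aiopg.sa's non-blocking counterparts
    # to the usual SQLAlchemy engine and connection.
    async with create_engine(dsn) as engine:
        async with engine.acquire() as conn:
            result = await conn.execute(users.select())
            rows = await result.fetchall()
            return [row.name for row in rows]

# Example usage (hypothetical credentials):
# asyncio.run(list_user_names("dbname=app user=app password=secret host=127.0.0.1"))
```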
So maybe as the ecosystem grows, it'll become more like Flask, where you can just add these extensions and it becomes almost like a full-stack framework. But at the moment, I think it works quite well for discrete small services. And I just quickly want to note the other options that are available. This is something which is a bit of a tongue twister to explain, but I think a project that we should be keeping an eye on is Uvicorn. This is by Tom Christie. He's the author of the Django REST framework, so he's very experienced in this kind of area. And Uvicorn is a combination of Gunicorn and uvloop: he's essentially writing an asyncio version of Gunicorn, using uvloop rather than the standard asyncio loop. And this will use the ASGI interface rather than the WSGI interface, which is the standard web server gateway interface used by Flask and Django and everyone else. The reason that this is so important is that the likes of Sanic, for example, kind of have to mangle asyncio on top of WSGI, which was never designed for async. So I would definitely check out Uvicorn, though I know it's not ready yet. Tom actually tweeted a couple of days ago that he thinks it's not ready for prime time, but I think it's definitely going to be a promising option for the future. And with that, I think I'm actually ready for questions, which is good, because I'm almost out of time. So I'm not sure we have time for questions. Thank you. We don't have time for questions, but this is actually the last talk in this hall, so please. Did you manage to successfully configure WebSocket Secure? Yes, the example I had was actually using WebSocket Secure. Heroku magically made the app run HTTPS for me, and you can see, in the top line there, it figures out whether it should use WSS or WS as the protocol for the WebSocket connection. OK, we have problems with that on Sanic, so that's why I'm asking.
But yeah, it seems to just work, which is great. I had very little to do with it. Can you show the configuration file somewhere? Pardon? Do you have configuration files for Sanic? So where do you provide the certificates? So because I ran it on Heroku, it's all handled for me. OK, so that's a different case. I had a question. So maybe you picked Sanic over, for example, aiohttp for speed, right? No, not necessarily for speed. I actually tried aiohttp, which is, I guess, one of the other more popular frameworks. I only experimented with it for a short time, and I think it was probably more just the API. I'm not sure how similar it is; I'm afraid I can't remember the comparison well enough now to comment, really, but I just found I didn't enjoy it as much, perhaps, is the best way to put it. Or it wasn't as easy to get started with, perhaps. OK, so a quick follow-up, maybe a comment or call for comments. Do you feel that we are getting too many frameworks? Or maybe we should focus, for example, on creating libraries, so every framework doesn't have to, you know, repeat the mistakes of every other framework? I mean, are we creating too many? I don't know, the answer is kind of yes and no. I do think this is why Tom's project, Uvicorn, is important, because it's implementing the base layer in, I think, a more correct way. And then maybe over time something like Sanic could move to that, and hopefully they could then remove a bunch of the nastiness they've had to do. And maybe aiohttp could do the same. So, yeah, it's hard to say, because I think there's also a lot of innovation going on at the moment, so it's important for people to experiment in different ways, but I think over time we will start to converge, hopefully. Thanks. How do you deal with logging in an asynchronous environment? That's a good question. I've not done anything particularly special with it, to be honest.
I maybe need to check my logging's not working. Check my logging's working, okay. I'll actually try to answer this here: you can try to keep a unique request ID for each request, and then include it in all the logs, so you can actually combine them and see that these are all connected, and keep them isolated from one another. Anyone else? I can answer the one regarding logging. So Sanic has logging implemented, so you can just import the logging library and make logging calls the normal Python way. It's very hard to override the default logger; I tried and failed. But you can also pass the configuration for the logger from a dict, as is also used by other Python applications: configure from the dict, with the same syntax as Python's standard logging library. Okay. Thanks a lot. Great job.