I hope your conference is going well. I've had a lot of fun here this year, and let's hope this talk teaches you something new. First, a bit about myself: I work as an IT technical consultant, which is just a fancy way of saying that people come up to me with problems and I have to bring them solutions. Inside my company we build products in the field of engineering; some are related to rapid seismic risk evaluation for buildings, others to rapid infrastructure assessment and monitoring, mostly bridges. Does anyone know where this place is? Can someone guess? It's Turin, in Italy, and that's why we're doing things with bridges, because here it's such a hot topic at the moment. These are some places you can travel to in Italy if you're interested, and if you'd like to move to Turin, just give me a call afterwards. This is the schedule for today: a quick introduction to server-sent events, a bit more about their inner workings, a look at the differences from WebSockets, how a generic implementation could be done for an HTTP server in Python (it actually applies to any web server), and then some use cases. First, I have to ask you a couple of questions. Raise your hands if you have ever sent data from the server to your client somehow. Okay. Who of you has used WebSockets? Okay, a lot of you. And server-sent events? Okay, good. This is a good opportunity then; I was hoping for that, because as I recently discovered it's a technology that's basically unknown. As you may already be aware, these are the main mechanisms with which you can send data from server to client: polling, long polling (which is basically an evolution of polling), WebSockets, and server-sent events.
With polling you're in the dark ages. It's a very bad idea, because each time you need some data you make a new request to your server; even when there is no data, you still make requests, and that doesn't scale well. Then somebody thought about long polling, which is slightly better: you make a request to your server, the server keeps the connection open, and when it has some data it writes it into the response and closes it. Then you establish a new request once again. So you've only solved part of the problem, not the whole problem. Then you have WebSockets, which were the new cool thing in town. They are very popular and easy to use with Node.js, for example, where they work right out of the box. With Python you don't have as much luck: you have to use libraries in your applications, especially if you're using Flask, Django, TurboGears, or Pyramid. I can't really comment on Tornado; I think it has them built in. I've used those libraries before, but I ended up scrapping them, and I don't know their current state. And then there are server-sent events. To give you an idea: the server-sent events specification was born in 2013, and WebSockets came along in 2011. That's maybe why you might not have heard about it: WebSockets gained a lot of traction as soon as they came out, because there was nothing on the market that did what they did so well, and server-sent events kind of remained in their corner. This is basically the communication pattern you have with each approach, with their problems and issues. With polling, as I already explained, you use a lot of resources just to create the connection, because the TCP handshake takes a lot of time.
Long polling is a bit better, and then server-sent events and WebSockets basically do the same thing: they keep an active connection. This is the simplest way you can use server-sent events: you have the EventSource API, which you can use in your browser from JavaScript, and you subscribe to an event called message. And this is the simplest Python Flask example I could find, but of course it's not working as-is: I deliberately wrote a blocking data source, because I think the examples you find with a while-True loop that just echoes messages to you are an anti-pattern. With this type of service you should obviously get your data from somewhere: a queue, Redis maybe. Someone before us talked about Kafka, which is also a message queue, so something like that. Or even your database, if you fancy doing that, and there may be some use cases for it. A bit more on server-sent events on the JavaScript side: there are some available handlers, onopen, onmessage, and onerror. They call them properties (I think they're functions) defined on the EventSource object. But I advise not relying only on them, because the server-sent events specification allows you to define your own custom events. So you can attach an event listener for, say, client-connected or client-disconnected (that's a quick example), instead of only using onmessage, onerror, and onopen. These are some popular frameworks that I've mainly used, with some notes on how to do this with each of them. For example, with TurboGears 2 you just have to add the content type text/event-stream in the expose decorator of your controller, and that enables streaming for you.
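To make the "get data from a queue, not a while-True echo loop" point concrete, here is a minimal sketch of my own (not the slide's code): a generator that pulls messages from a `queue.Queue`, used here as a stand-in for Redis or Kafka. In Flask you would wrap such a generator in `Response(sse_stream(q), mimetype="text/event-stream")`; the names `sse_stream` and the `None` sentinel are my assumptions for illustration.

```python
import queue

def sse_stream(q, timeout=1.0):
    """Yield SSE-formatted frames from a queue; a None item ends the stream."""
    while True:
        try:
            item = q.get(timeout=timeout)
        except queue.Empty:
            # No data yet: emit a comment frame so the connection stays warm.
            yield ": keep-alive\n\n"
            continue
        if item is None:  # sentinel chosen here to close the stream
            return
        yield f"data: {item}\n\n"

# Feed the queue (stand-in for Redis/Kafka) and consume the frames.
q = queue.Queue()
q.put("hello")
q.put(None)
frames = list(sse_stream(q))
```

The point of the design is that the endpoint stays a dumb pipe: producers push into the queue from wherever events actually originate, and the generator only formats and forwards.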
With Flask, correct me if I'm wrong, I believe returning a generator automatically enables event streaming; I'm not sure. Pyramid does it in a very explicit manner. And with aiohttp, I just found out that you have to use a library, but basically you can do it; we'll see how to do it with all of them, maybe. So let's have a deeper look inside this specification and what we have in it. Take a generic server, whatever you like to use; let's pick Flask, because it's maybe one of the most popular. The server basically has to respond with these three headers: Content-Type: text/event-stream; Cache-Control: no-cache, because it doesn't make any sense to cache data for such an endpoint; and Connection: keep-alive. Data is always encoded in UTF-8, that's a requirement, and clients expect you to send them data encoded in UTF-8. The body of the response works like this: whenever you have an event, you write into the body a field name, a colon, a value, and a newline (\n) character as a terminator. That's basically how you write every type of event. The defined field names are data, event, id, and retry. data contains the actual payload of the message you're sending. The event name is what allows you to subscribe with addEventListener for a given message type; you can use event to define your custom message, and you pair it up with a data field. And id is an interesting one, because I haven't mentioned it until now: one great thing about server-sent events is that you don't have to care about reconnection. The browser does it for you.
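As a sketch of the wire format just described (field name, colon, value, newline; the defined fields being data, event, id, and retry), here is a small serialization helper. This is my own illustration, not code from the slides; the function name and argument names are assumptions.

```python
from typing import Optional

def format_sse(data, event=None, event_id=None, retry=None):
    # type: (str, Optional[str], Optional[str], Optional[int]) -> str
    """Serialize one server-sent event; multi-line data becomes repeated data: lines."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    if retry is not None:
        lines.append(f"retry: {retry}")
    for chunk in data.split("\n"):
        lines.append(f"data: {chunk}")
    # A blank line (i.e. "\n\n" at the end) terminates the event.
    return "\n".join(lines) + "\n\n"

print(format_sse("hello"))
print(format_sse("breaking news", event="news", event_id="42"))
```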
Basically, if you create an endpoint with server-sent events and connect to it from the browser, and your server goes down, the browser automatically checks for your server at set intervals of time, and as soon as it's back up it automatically reconnects. And id is a field used for synchronization of messages. You can quite easily implement a re-sync system, or pattern, in which you don't lose any of the messages you're trying to send from your server to your client: if you set the id field, the client will automatically send it back to you as a header, the Last-Event-ID header, the next time it connects. So you can recover this ID and maybe make a query to your database to get the appropriate event stream. And retry is a value you provide in milliseconds; it's basically the amount of time that clients need to wait between reconnections. This is how a data packet should look, how the data is formatted. You can see that messages end with a double newline (\n\n): a blank line means the message is finished. You can obviously combine the data, id, and event fields together; the retry field you send only once in a while, when it makes sense for you. That's basically it. I also made a small example of a custom event listener. It's the same idea; the only thing that changes is the part where the yield is, I just added a message that has an event name. A bit more on the specification: it is possible to redirect the request, so if you want to automatically move requests to HTTPS, that's not a problem. As I said before, only UTF-8 encoding is supported. Communication is only done in one direction, server to client.
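To illustrate the re-sync idea from the client's point of view, here is a hedged sketch of a tiny stream parser: it splits a raw text/event-stream payload on blank lines, collects data/event fields, and remembers the last seen id, which a real client would send back as the Last-Event-ID header on reconnect. The function name and the dict shape are mine, not a real library's API.

```python
def parse_sse(raw):
    """Parse a raw text/event-stream payload into event dicts plus the
    last seen id (what a client resends as the Last-Event-ID header)."""
    events, last_id = [], None
    for block in raw.split("\n\n"):
        ev = {"event": "message", "data": []}
        seen_field = False
        for line in block.split("\n"):
            if not line or line.startswith(":"):  # blank line or comment
                continue
            seen_field = True
            field, _, value = line.partition(":")
            value = value.lstrip(" ")
            if field == "data":
                ev["data"].append(value)
            elif field == "event":
                ev["event"] = value
            elif field == "id":
                last_id = value
        if not seen_field:  # comment-only or empty block: no event dispatched
            continue
        ev["data"] = "\n".join(ev["data"])
        events.append(ev)
    return events, last_id

stream = "id: 1\ndata: first\n\nevent: news\ndata: second\n\n: ping\n\n"
events, last_id = parse_sse(stream)
```

On reconnect, the server would read Last-Event-ID from the request headers and replay everything after that id, e.g. from a database or a persisted queue.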
You don't have any way to send data back from your client to the server, but that's basically what HTTP is for: you just make a regular request and you have the same functionality. Of course, that means making a new request, a new communication channel. As I said before, clients always reconnect; you don't have to worry about that problem. And if you want to shut down the stream on a client, you just reply with an HTTP 204 No Content code, and the browser or the client will automatically stop reconnecting to the service. Another thing to keep in consideration is that there is a limited number of connections per site, which browsers define. There was an open issue on Mozilla and Chrome about this, and they marked it as wontfix, because this is actually considered a feature you're expected to deal with. So when you open a server-sent events stream, you are actually using one of these connection slots, and I think it's five, so you basically have four left at that point. This could be an issue for you: if you have users that open your site in seven different tabs, that could be a problem. Someone pointed out that you can use a shared web worker, put your server-sent events connection inside it, and then talk to the web worker to recover your data, so you basically fix the problem. It's a bit of a workaround, but it works. Clients are also always expected to send a Cache-Control: no-cache header, and we already talked about the server headers. As far as coverage goes for modern browsers, this figure was taken two or three days ago: it's 92.6%, which is quite good, I'd say. But if you think you have to support some older browsers, there are some polyfills; I think the library I linked in the slides is the most used one.
With those you basically have support for server-sent events on old browsers, except for a couple of old Android devices, maybe. Of course, the main question: can I use this thing only with a browser? The answer is no. There are different libraries for different languages; I just googled some up, and there are links to them. You can use it with Python, with Android, with iOS; maybe some of you do mobile development. There's also a React Native one. I just wanted to point out that the React Native one is basically a polyfill in plain JavaScript; you don't have bindings to native Android or iOS. That could be an issue if you're trying to do something that works in the background: you can't have it running when your application is closed, because your JavaScript thread is stopped when your app isn't in the foreground. So maybe you'd have to introduce something else. Now let's talk about the main differences between server-sent events and WebSockets; this is maybe one of the cool parts. One of them, server-sent events, can only send UTF-8 encoded messages; the other also supports binary data. You're restricted to HTTP, while WebSockets has a custom protocol you have to manage. Well, the library manages it for you, but the protocol is much more complex than the one you've seen here. Server-sent events is proxy friendly, simply because it's a plain HTTP request; with WebSockets, for example behind NGINX, you may have to add some headers, though with more modern web proxies like Traefik you don't have this issue, so it's not that bad nowadays. We already said there's a built-in mechanism for reconnection and also for re-synchronization of messages. With WebSockets you need to manage the heartbeat yourself. I had a lot of problems in the past with heartbeats; I never managed to get them quite right, so that was one of the issues for me.
With server-sent events it's really hard to detect disconnections on the server side. I didn't talk about this, so let me go back, because I wrote it in an unfortunate place: if you send a line starting with a colon in your response, it's a comment. This is basically how you can check for the presence of your client from the server: you send a small comment at regular intervals of time. It's like the heartbeat on the server side in the WebSocket protocol. WebSockets, of course, fully detect errors and disconnections on both the client and the server side. We already talked about the direction of communication. Now some use cases that are more adequate for this technology. Say you have a dashboard that you need to update once in a while, or maybe a news feed: every once in a while news comes in and something needs to get published to your web page. Push notifications to the browser. Maybe games. For the purpose of this demo, to really try out the technology, I set myself on building a game with this thing; I really wanted to figure out how far you could push it. I actually cheated a bit, because I made a game that wasn't that real-time, so I didn't need WebSockets. As I found out today, for games there are technologies like Socket.IO; I don't know if you've seen the talk today, it's a really good one, because Socket.IO also manages disconnections for you and does a lot of things automatically. Basically, if you have some old code and want to add new things, if you want to do something really quickly that is quite widely supported, and you need to send data from your server to your client, just evaluate whether you can implement it with server-sent events. It could be an alternative. They involve far fewer headaches than WebSockets, especially compared to scaling WebSockets; this scales with your application.
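Going back to the heartbeat-comment trick for noticing disconnections: in WSGI frameworks like Flask, the server closes your streaming generator once the response ends, which raises GeneratorExit inside it; combined with periodic comment frames that force a write on the socket, this gives you a rough server-side disconnect hook. A sketch under those assumptions (names and the state dict are mine):

```python
def stream_with_cleanup(messages, state):
    """Yield events; record cleanup when the consumer closes the generator,
    which WSGI servers typically do once the client disconnects."""
    try:
        for msg in messages:
            # The write is what actually surfaces a dead connection,
            # hence the periodic ": heartbeat" comment frames.
            yield ": heartbeat\n\n"
            yield f"data: {msg}\n\n"
    except GeneratorExit:
        state["closed"] = True  # cleanup hook: release queues, DB cursors, ...
        raise

state = {"closed": False}
gen = stream_with_cleanup(["a", "b"], state)
first = next(gen)  # the first heartbeat frame
gen.close()        # simulates the client dropping the connection
```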
You have fewer dependencies. That's all. You can find some links that I used to compile this presentation, which could be helpful, and some takeaways, which I've really already said. That's all. Thank you. Questions?

Thank you for the talk. Except for the fact that the client automatically reconnects, how is this different from long polling? Could you tell a bit more about this?

Yes, okay, let me get the slide up. You can see long polling is the second line. When you create a connection with long polling, you make a request, and the request remains pending on your server. As soon as some data arrives, the server writes the data to the response and closes the request. As soon as the request closes, the client automatically starts a new connection to your server and then awaits data again. That's basically the difference you were looking for. In the case of server-sent events, it reconnects only if the connection dropped; otherwise the connection stays up. That's how it works.

Regarding polling: polling releases the resources on the server, so the connection can be reused. If you have a lot of clients connected to your server, I guess with server-sent events you will have one connection per client, like WebSockets. You mentioned that polling is very bad, but can't it help in the situation where you have a lot of clients and you don't mind them making requests to the server, so that more clients can be served? Maybe a little bit slower, but they will be served.

So you're asking: "I'd like to stick to polling; would it still be fine if I have the infrastructure to serve it? I won't get immediate responses because I have to poll, but I will release resources on my server so somebody else can use the connection." Okay. Let's say you have a huge spike of traffic.
If you're using polling, then a spike automatically generates a huge number of requests. Maybe you have smart services that automatically scale, and you're just going to get a super high bill. With this one, you actually lower the number of connections: you only have one connection per connected device. Which is persistent? Yes, of course, exactly like WebSockets. I looked a bit into Traefik; it automatically manages some of these connections for you, so some web proxies help you with this, and it's not such a big issue nowadays. I've never tested it with a really large number of devices, only for small applications. But I don't see why I should use polling nowadays, because it gives me a lot of problems: I have to think about what happens if I mess up a timer or something goes wrong, and I end up with all these requests in my log that I don't understand. Why should I use it when I have new instruments that, from my perspective, already work great, are super easy to implement, and give me an API that helps me build something event-driven?

I think the main drawback of this solution is that if you have an architecture with a fixed number of workers, you're going to run out of workers in proportion to the number of connected clients, whereas with the polling solution each poll gets a response (possibly an empty one) and then the workers are free again. So in a way server-sent events is superior to all the other solutions, except in this regard, where it shares the same problem as WebSockets.

Okay, let me cheat a bit. If you're using Gunicorn or something else, you can set it up to use more workers.
For example with gevent you use green threads, so you don't notice this problem as much, but I've never tested it under huge workloads. I would need to run a test with polling and with server-sent events to give you a better answer; that could be a follow-up to this talk, I'll think about it.

I want to add something, because I had experience with WebSockets previously. For me, polling is good when you can refresh your data every minute, say; then you don't have to keep a connection to the server open all the time. But, for example, I was working on a project where we needed real-time data, so we needed that connection. And, to be honest, we were able to handle thousands of connections on a single asyncio server, so that's not a big issue. It works on one thread: we have the GIL, and thousands of connections, and it's not a big problem. Basically, with polling you can do it, but your data will be a little outdated; if you need real-time, you have no choice.

Okay, I agree with you on this one. The same thing you're doing with WebSockets, just to receive data, you can also do with server-sent events; you can implement it on top of asyncio too. The main advantage, I think, is when you have some legacy code somewhere: maybe you don't need to spin up an asyncio app and add another dependency to your project. That's something you may want to take into consideration. Yeah, I also get confused by the name, don't worry. Any other questions? Let's give a big round of applause for our speaker. Thank you.
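As a closing illustration of the asyncio approach discussed above, here is a minimal sketch of my own (not the audience member's code) of the same queue-fed pattern as an async generator; frameworks like aiohttp or Starlette would consume such a generator in a streaming response. The names and the `None` sentinel are assumptions for illustration.

```python
import asyncio

async def sse_events(q):
    """Async generator yielding SSE frames from an asyncio.Queue; None ends it."""
    while True:
        item = await q.get()
        if item is None:  # sentinel chosen here to close the stream
            return
        yield f"data: {item}\n\n"

async def main():
    q = asyncio.Queue()
    for item in ("tick", "tock", None):
        q.put_nowait(item)
    # Collect the frames; a web framework would instead write each frame
    # to the open response as it is produced.
    return [frame async for frame in sse_events(q)]

frames = asyncio.run(main())
```

Because each connection is just a suspended coroutine waiting on its queue, one event loop can hold many of them without dedicating a worker per client, which is the scaling point made in the answer above.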