All right. Hi, everyone. Am I audible? Perfect. So before we begin this talk, I just have one task for you folks. I'll give you the explanation afterwards, but you just have to trust me on this one. I want you to give me your goofiest smiles for exactly 17 seconds. And the time starts now. Smile. I want to see your teeth. Turn to your neighbor and smile as hard as you can. And five, four, three, two, one. The reason behind this was that I wanted a few seconds to settle in, and 17 was the most random number I could think of between 1 and 20. We can have a debate about it afterwards. Before we begin, I've got some Robyn flags and Robyn stickers. Feel free to take them after the talk.

A few things about me. My name is Sanskar. In the morning, I work as a software engineer at Bloomberg, where I help create tools for bond trade evaluation. And during the night, I maintain an open source project called Robyn. Speaking of Robyn, what is Robyn? Simply put, Robyn is a fast, async Python web framework with a Rust runtime.

Let's have a look at the current state of Robyn. Robyn is hosted on GitHub under a BSD 2-clause license. It has around 1,400 stars on GitHub and around 300,000 installs on PyPI. But most importantly, it's under active development. By this point, I'm pretty sure many of you have this question: why another web framework? What makes Robyn actually stand out? Here are some of the key features. Firstly, it's under active development. Secondly, and most importantly, it's not written in just another language. It's written in Rust, by the way. It has a multi-threaded runtime. It has a very simple API, and it's fairly extensible. It's fast: it can serve around 10,000 requests in 0.692 seconds on a dual-core MacBook, which is a very old machine. If you want the latest stats, you can check out the latest release, benchmarked on a much stronger machine. It supports async.
It has multi-threaded file and directory serving, and dynamic URL routing. It supports middleware and WebSockets. It supports something called const requests, and I'll explain what that is. But most importantly, it's community first and truly open source. It follows this ideology to the extent that, if you have followed me somewhere or met me at the conference, you must know that I'm not a big fan of type support. But the community loves it, apparently. So the community shall get what they ask for: it is fully type annotated, and you can use type safety to your heart's content.

Now comes the juicy part: the Robyn story. Why did I invest so much time in making another framework? It was April of 2021, the final year of my university, and my thesis deadline was approaching. And like any other hardworking student, I was spending all my time exploring Reddit. Around then, a big project decided to introduce Rust into the kernel, and Reddit was filled with the "rewrite everything in Rust" meme. And since I was such a dedicated student, I was working on a completely unrelated project called Encrypt text that had a Flask back end. During that time, I used to write a lot of Node.js and React, and the thing that bothered me about Flask was the lack of async support. So I thought to myself: you know what would be cool, Sanskar? If you could create an async-capable Flask. And you know what would be cooler? If it were written in Rust, by the way. Because around that time, I came to know that Ryan Dahl, the creator of Node.js, had created another runtime called Deno, which used Rust. So maybe this language wasn't a bad choice after all. But the most important question of the slide: why the name Robyn? Because if I take care of something called Robyn, I help it grow and I help it develop. That automatically makes me Batman.
So if I'm a little slow in reviewing PRs, or if I'm not responding on the community channels, just light the Bat-Signal, and I have a moral obligation to respond to your question.

OK, now coming to the technical part of the talk. Let's have a look at the traditional Python web app lifecycle. Usually, we have a reverse proxy at the front: something like nginx or Caddy. Behind it sits a web server, a WSGI or an ASGI server. And finally, we have the web framework at the end: Flask, Django, FastAPI, and so on. So let's have a look at a traditional Flask app. I like Flask a lot, and I'm really inspired by its API. How many of you folks know how to create a very basic Flask app? All right, perfect. You import the Flask class, you initialize the app, and you add some decorators to define the routes. And this is how Robyn's API looks: you import Robyn from the robyn module, you initialize it, and you create routes for it. I have made some changes to the API that I felt would make it more useful and friendlier, based on my experience. For example, instead of an app.route method, you have app.get, app.post, and so on to define the routes for your application. Another reason to create an API so similar to Flask's was to reduce the learning curve of exploring a new framework. Most of us are already familiar with these frameworks, and learning a very different one would be a big commitment for everyone.

Now let's have a look at the server aspect of the code. Question to the audience: how many of us know what a WSGI or an ASGI server is? OK, so this is a code snippet of a very basic WSGI app. WSGI stands for Web Server Gateway Interface. The reason WSGI was created, I think around 20 years back, was that there were so many web frameworks at the time, but there was no standard way to actually serve them.
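For reference, a minimal WSGI application of the kind shown on the slide looks roughly like this (a sketch; the callable name and body are just the conventional hello-world example, not code from the talk):

```python
# A WSGI application is just a callable: it receives the request
# environ dict and a start_response callback, and returns an
# iterable of bytes. Any WSGI server can serve it.
def application(environ, start_response):
    body = b"Hello, World!"
    start_response(
        "200 OK",
        [("Content-Type", "text/plain"),
         ("Content-Length", str(len(body)))],
    )
    return [body]
```

Because every framework exposes this same callable shape, any WSGI server (Gunicorn, uWSGI, the stdlib wsgiref) can serve any framework; the framework never has to speak raw HTTP itself.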
And it was getting very hard for the framework maintainers to serve their applications. So WSGI was standardized in PEP 3333, if you folks want to check it out. This allowed framework maintainers to focus on the routing and execution side, while the WSGI server took care of all the dirty parts: handling the HTTP requests, handling the responses, and so on. WSGI and ASGI servers have a lot of benefits. But for Robyn, you do not need a separate ASGI server: Robyn comes with an integrated server. That is one of the reasons it is much faster than everyone else. So this is how the lifecycle of a Robyn app looks. It is as simple as writing python3 app.py. You do not have to decide between choosing the right WSGI or ASGI server: no Gunicorn, no Uvicorn, no Starlette. You just write python3 app.py, you put it behind a reverse proxy, and you have your working server.

Now let us have a look at the architecture to get a sense of what is actually happening. First of all, we have a worker event cycle that does all the heavy lifting for you. This part manages the runtime and passes all the instructions to the Rust code, which then spawns a thread pool in the middle. When we type the command python3 app.py, the Python handlers are converted to Rust objects, as you can see here, and populated into a thread-safe router. When an incoming request reaches the router, the router gets the Rust object from the mapping and passes it to the thread pool, depending on whether it's a sync function or an async function. The handler is executed, and the result is returned as the response. To scale this even further, we have a TCP socket listening for requests, and we can have multiple processes as well as multiple workers. So everything I explained earlier can be scaled across multiple cores of your machine. This is another reason why Robyn is faster compared to other frameworks in the market.
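The dispatch idea described above can be sketched in pure Python. To be clear, this is not Robyn's actual code (that part lives in Rust); it is an illustrative toy with an assumed `ToyRouter` class, showing a lock-protected route table that sends sync handlers to a thread pool and async handlers to an event loop:

```python
import asyncio
import inspect
import threading
from concurrent.futures import ThreadPoolExecutor

class ToyRouter:
    """Toy thread-safe router: a lock-protected route table that
    dispatches sync handlers to a thread pool and async handlers
    to an event loop."""

    def __init__(self):
        self._routes = {}
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=4)

    def add_route(self, method, path, handler):
        with self._lock:
            self._routes[(method, path)] = handler

    def dispatch(self, method, path):
        with self._lock:
            handler = self._routes[(method, path)]
        if inspect.iscoroutinefunction(handler):
            # async handler: run it on an event loop
            return asyncio.run(handler())
        # sync handler: run it on the thread pool
        return self._pool.submit(handler).result()

router = ToyRouter()
router.add_route("GET", "/", lambda: "hello from sync")

async def async_home():
    return "hello from async"

router.add_route("GET", "/async", async_home)

print(router.dispatch("GET", "/"))       # hello from sync
print(router.dispatch("GET", "/async"))  # hello from async
```

The real system differs in one important way: the route table and dispatch loop live on the Rust side, so the lookup and scheduling never touch the GIL.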
Now, this feature, called const request optimization for lack of a better name, was released a week ago. I realized that the encircled part is very GIL-dependent. How many of you know what the GIL is? OK, so the GIL is the global interpreter lock, which in Python accounts for some of the slow performance, because it doesn't allow a truly multi-threaded experience. For simple things like serving, let's say, a hello world string or a static JSON schema, acquiring and releasing the GIL is a big overhead. So we thought: wouldn't it be cool if it could just be eliminated? And in Rust, we were able to serve the response directly from the Rust side without ever invoking the Python object. These are the different ways I could think of to write and serve hello world in Python: we have f-strings, dynamic f-strings, strings that are built on the fly. So I went and disassembled this code, and this is how the Python looks as bytecode. On closer inspection, you see that only a constant is loaded before the return value is returned, and this is replicated in all the patterns below. So automatic const request optimization works by inspecting the bytecode, pre-computing the value, and storing it in the thread-safe router; then, without ever acquiring or releasing the GIL, it serves the response straight back to you. Automatic const request optimization is still in progress, whereas explicit const request optimization is already present in version 0.17.0. This was just too cool for me not to show off and make you folks excited for 0.17.1.

Now, coming to usage, you can simply pip install robyn, or install it via conda. You do not need to have Rust installed on your machine. Now we have a little feature showcase to get you folks excited.
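You can reproduce the bytecode observation from the slide yourself with the stdlib `dis` module. The sketch below (my own illustration of the idea, not Robyn's detection code) flags a handler as "const" when its body compiles down to nothing but loading a constant and returning it:

```python
import dis

def hello():
    return "Hello, world!"

def not_const(name):
    return f"Hello, {name}!"

def is_const_handler(fn):
    """True if the function's bytecode only loads a constant and
    returns it, so its result can be pre-computed once and served
    without ever touching the Python interpreter (or the GIL)."""
    ops = [instr.opname for instr in dis.get_instructions(fn)]
    # Python 3.11+ prepends a RESUME instruction; ignore it.
    ops = [op for op in ops if op != "RESUME"]
    # 3.12+ fuses the pattern into a single RETURN_CONST opcode.
    return ops in (["LOAD_CONST", "RETURN_VALUE"], ["RETURN_CONST"])

print(is_const_handler(hello))      # True
print(is_const_handler(not_const))  # False
```

Once a handler is known to be constant, its response can be computed once at startup, stored next to the route, and served straight from Rust on every request.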
So Robyn supports synchronous functions, because many libraries are still synchronous, and as much as I wanted to make it fully async, we need synchronous support because some library maintainers either can't upgrade their libraries or don't want to. It's a very hot topic and we don't want to start a debate here. It obviously supports async functions, because that was one of the core reasons behind Robyn's origin. I was a React developer as well, and it always bothered me that I was unable to serve React applications from a Flask app very easily. So I created a way to add sub-routes, and you can serve multiple React applications on a single server using the add_directory method. We have static file serving, which is much faster than the native Python file serving. So if you ever want to serve large files from your web server, it will be much faster than the native Python way. We have dynamic URL routing, so if you want route parameters in your route, you can use them like that. It also scales across multiple cores: you can scale it with multiple workers as well as multiple processes if you have any restriction on the number of threads you can use on your server. We have middlewares for logging and authentication, and we all love middlewares. We support WebSockets, so your real-time applications are also supported in Robyn. Const requests, and much more at the GitHub repo.

Now, if you folks are still not excited, we have the performance comparison. But a PSA: this comparison is not meant to demean any framework or the people associated with it. These frameworks are the reason I got involved in the Python ecosystem. But here you go. This is a comparison between Flask, FastAPI, Django, Robyn on one worker, and Robyn on five workers. As you can see, Flask took around five seconds to serve 10,000 requests. FastAPI took four seconds; Django, I mean Django with Gunicorn, took 13.
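The dynamic URL routing mentioned above (route parameters like `/users/:id`) boils down to segment matching, which can be sketched in a few lines. This is an illustrative stand-alone function, not Robyn's implementation:

```python
def match_route(pattern, path):
    """Match a request path against a route pattern whose :name
    segments capture parameters. Returns a dict of captured
    parameters on a match, or None on a mismatch."""
    pattern_parts = pattern.strip("/").split("/")
    path_parts = path.strip("/").split("/")
    if len(pattern_parts) != len(path_parts):
        return None
    params = {}
    for pat, seg in zip(pattern_parts, path_parts):
        if pat.startswith(":"):
            params[pat[1:]] = seg   # capture the parameter value
        elif pat != seg:
            return None             # literal segment mismatch
    return params

print(match_route("/users/:id", "/users/42"))  # {'id': '42'}
print(match_route("/users/:id", "/posts/42"))  # None
```

The matched parameters are then handed to the handler, so a route registered as `/users/:id` sees `id == "42"` for a request to `/users/42`.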
Robyn on a single worker took 1.8 seconds, whereas with five workers, which was the maximum that could be allocated on my dual-core machine, it took only 0.69 seconds to serve 10,000 requests.

Now, looking at what's planned for Robyn's future: I want to make it more performant, add OpenAPI integration because it is a better way to write documentation, add Pydantic support, implement automatic const request optimization, add ORM support, improve the plugin ecosystem, write better documentation, and improve the WebSockets. I also came to realize at this conference that people want template support, like Jinja templates, so we want to support Jinja templates in Robyn, and to add GraphQL integration with Strawberry. I also realized that not many people are on Gitter, and the main reason to have a community communication platform was to make it more accessible, so we'll be migrating the community to Discord. And most importantly, we'll try to increase community involvement. Speaking of the community, join us if you are interested in making PRs, doing reviews, sponsoring, or if you are just curious; here is the link to the community. And special thanks to the people who are already there, the sponsors and the contributors: without you folks, it wouldn't have been possible. Here are some of the important links: the GitHub link, the PyPI link, the website, and the docs. And yeah, star Robyn on GitHub, and let me know if you have any questions. Also, we are hiring at Bloomberg, so.

OK, so we have a bunch of time for Q&A. Do we have any questions for the speaker? Yes, you can go up to the mic.

Audience: Thanks for the talk. How does the Python-Rust glue work in your case? Do you use PyO3?

Sanskar: Yes.

Audience: Great, that answers my question.

Sanskar: My pleasure.

Audience: I've used it recently and I really like it. I really hope we'll be using it a bit more when we want to get some performance from Python.

Sanskar: Yeah, it's a very nice API.
Audience: I'm curious about the performance. Could you elaborate a bit on why you think it's faster, or how it's faster?

Sanskar: I think one of the reasons is that we have moved all the request handling and request validation to the Rust side. Also, the thread pool is a genuinely multi-threaded thread pool, which is not easily possible in Python, because in Python you have to acquire the GIL to execute functions, whereas in Rust you bypass that restriction and execute those Rust objects in a multi-threaded runtime.

Audience: The logo is pretty nice, first of all.

Sanskar: Thank you.

Audience: And second, for the async part, do you support asyncio, or Trio, or something else?

Sanskar: asyncio plus uvloop.

Audience: And you don't like Trio? You don't want to give it a try?

Sanskar: I haven't tried it. I did try adding support for it, but I use a library called pyo3-asyncio, which only supports asyncio. If I find a way to support Trio, I'm happy to.

Audience: OK, thank you. It looks like a great application, and thanks for your talk. How does it compare to pure Rust frameworks like Rocket and Axum?

Sanskar: I did try a comparison with Actix, which is one of the fastest Rust frameworks. Robyn is definitely slower than that, because the Python overhead is still present. So it will be slower than a native Rust application, but I try to make it as fast as possible from Python.

Audience: Thank you for your talk. Two quick questions. You showed that you run the server with both processes and workers. I suppose the processes map to CPUs; then what are the workers? Are they the number of threads?

Sanskar: Yes.

Audience: And the follow-up question: how do you run this in production? What's the preferred way to run this in a production environment?

Sanskar: I don't know the exact number of CPUs, workers, and processes you'd need, since it depends on your configuration.
So usually I do trial and error: I start with twice as many workers as CPU cores and just one process, and then increase or decrease them based on the performance metrics.

And one more thing I forgot to tell you folks: we are running a community challenge right now. There are some open issues on GitHub that are marked with EuroPython. If you create a PR to solve one of them, you get a shirt from me. So this Robyn shirt is up for grabs. That's pretty much it. Feel free to reach out to me. OK, one more question.

Audience: Just following up on the workers and processes. When you configure the processes, are they forked from the Rust side, or do you create them on the Python side?

Sanskar: The processes are on the Python side. The workers are on the Rust side.

Audience: So if you have a single process and multiple workers, do you still have multi-CPU support?

Sanskar: Yeah.

Audience: Is each one of the threads running its own Python interpreter inside the process? How does the work queue between them work?

Sanskar: Every process has its own workers. So depending on the number of processes you create, we scale to that many workers.

Audience: Can you share data amongst those threads?

Sanskar: Yes.

Audience: Like Python data?

Sanskar: Yes, that's how the runtime works, because the router is a thread-safe router that dispatches the functions accordingly.

Audience: Let me rephrase. You can share everything that's internal to Rust, that's fine, because you just need to be thread safe, right? But what about data structures on the Python side? Can you share those across your threads as well?

Sanskar: As long as they're thread safe, I think so, but I've never actually had to share data structures that way, so I'm not really sure.

Audience: No, that's all. Thanks. Very nice talk.

Sanskar: Yeah, sure. OK, thank you folks. That's all for today.
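The tuning heuristic from the Q&A, starting with one process and twice as many workers as CPU cores and then adjusting from measurements, can be sketched as follows (the command-line flags in the comment are illustrative, not a documented Robyn invocation):

```python
import os

def initial_scaling():
    """Starting point for tuning, per the heuristic from the talk:
    one process and 2x CPU-core workers. Measure throughput, then
    adjust both numbers up or down from here."""
    cores = os.cpu_count() or 1
    return {"processes": 1, "workers": 2 * cores}

cfg = initial_scaling()
# e.g. launch the app along the lines of:
#   python3 app.py --processes <processes> --workers <workers>
print(cfg)
```

On the dual-core machine from the benchmark, this heuristic would suggest starting at one process with four workers and measuring from there.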
Host: If you have any further questions, Sanskar will be available in the hallway. And can we have another round of applause for Sanskar? Thank you, everyone. Thank you for the 10 minutes, guys.