Hello everyone, the next talk is from Or Weis, and it's about Python RPC and PubSub over WebSockets. It's a pre-recorded talk; we will play the video, and then the Q&A starts. Or is here as well, and you can ask your questions to him about his presentation. First of all, I have to say that I'm amazed that I have to host the session from home. Have a great talk, everyone. Hi everyone, I'm really happy you are joining me in this talk. We are going to be talking about Python RPC and PubSub over WebSockets. I'm Or Weis, founder of Authorizon, previously at Rookout. I come from a cybersecurity and software engineering background. I love Python, probably just like all of you, and I love building things with Python, and I'm excited about sharing what we've built here with you. Before we dive into the open source packages that I'm going to share with you, and before I go into their design, their concepts, and how to use them, I'm going to talk about the problem space itself. As you all know, the cloud is growing and the edge is exploding. More and more elements of software are eating up the space and becoming more complex. It's no longer just running a simple server on one machine; it's multiple instances, multiple services in the cloud, third parties interacting with one another. It's edge devices, IoT compute devices deployed across the field. It's things running both on-prem and in the cloud, collaborating. Obviously there's the crazy scale of things like Kubernetes and serverless, which change the number of instances you have on the fly, and obviously things that are moving towards the edge. With all of this complexity, our software is becoming more complex, and we need to connect all of these elements together. We need to keep them in sync. We need to find a way to share data between all of them.
We need the state that we have on-prem to sync with the cloud, and what we have in the cloud to also sync to the edge. And the obvious question that remains is: how do we do that? How do we connect and scale everything with this ever-growing complexity? That's what we're trying to solve with these packages, and this is what we'll be talking about, both RPC and PubSub, using WebSockets. Before going into the specific packages themselves, let's look at a specific example, the example that brought me to develop these packages: an open source project called OPAL, which you can find at opal.ac. OPAL deals with the world of authorization. Let's say we have an application and we want to control who can access it, and the different roles and permissions within that app. A modern way to approach this today is to use something called OPA, Open Policy Agent, which is essentially a microservice for authorization. The application can query it at any moment in time to understand if it's allowed or not allowed to perform an action. But as our application scales out, we are confronted with a problem. How do we keep the state of the application, the state of the authorization microservice, and also the state of all the other relevant data sources that we have, our billing service, our sales management, all of those different elements that comprise our solution? How do we keep them in sync with our authorization layer as well as with our application? This is where OPAL comes in. It's essentially a layer to synchronize the application and the authorization layer. It uses both a server for central management and a client that sits next to each policy agent, feeding it the data that it needs. And it does this through a WebSocket PubSub channel, which we'll be talking about in a minute. That channel allows OPAL to keep OPA in sync over time as the application progresses.
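To make that query concrete, here is a minimal, hedged sketch of the authorization question an application might send to a local OPA sidecar. OPA's HTTP API is a POST to /v1/data/&lt;policy-path&gt; with an {"input": ...} body; the policy path "app/allow" and the input fields below are made up for illustration.

```python
import json

# A hedged sketch of the question an app might ask OPA (Open Policy Agent).
# OPA's real HTTP API is POST /v1/data/<policy_path> with an {"input": ...}
# body; "app/allow" and the input fields here are illustrative.
OPA_URL = "http://localhost:8181/v1/data/app/allow"  # assumed local OPA sidecar

def build_opa_query(user: str, action: str, resource: str) -> str:
    """Serialize the question "may this user do this action?" for OPA."""
    return json.dumps({"input": {"user": user, "action": action, "resource": resource}})

payload = build_opa_query("alice", "read", "billing")
# In a real app, POST `payload` to OPA_URL (e.g. with urllib or httpx) and
# read {"result": true} or {"result": false} from the response.
print(payload)
```

The point is that the app offloads the decision: it ships the context to the agent and gets back an allow/deny, which is exactly the state OPAL keeps fresh.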
And with every little interaction, when you're adding a new user, when you're inviting someone new, it will integrate that information into OPA instantly. If you're building authorization for an application, by the way, I really recommend that you look at both OPAL and OPA. So this is one key example of how we need to keep something that's spread out, our entire application with its different authorization sidecars, in sync with things that are running on other clouds in addition to ours. Let's break down the challenges, or requirements, that we have here. First of all, we need this to be real time. We can't afford a polling solution; we can't have everything delayed. When you apply an action in a modern piece of software, you expect everything to apply in real time. For example, in the OPAL case, if you invite a user in, you expect that user to be able to access the application immediately, not at the next deployment or when the next polling interval hits. In addition, we want it to be bidirectional. We want both components to be able to share data with one another. The concept of a server and a client kind of breaks down when you're talking about cloud-native infrastructure; both sides need to be able to trigger one another and share data. An update at one layer, at the authorization component for example in the OPAL case, can affect a different client or a different server. We need all of those to constantly be in sync, so they need to be able to trigger one another in a bidirectional fashion. And we need easy networking. We already mentioned this: we're looking at the cloud, we're looking at the edge, and we need all the different components, no matter where they are, to be able to communicate with one another. So we'll be traversing the entire internet.
We'll be traversing firewalls, traversing routers, entering VPCs and on-prem networks, and no matter where those software elements are deployed, they need to be able to communicate with one another in a bidirectional, real-time way. So we need something that gives us easy networking, that lets us traverse with ease. In addition, we need something that is durable. Because of that widespread area and the large number of connections, a lot of those connections might be severed at one point or another. So we need the system to be durable; we need it to be able to survive and reconnect as things change. And lastly, we need it to be scalable. Especially if you want to run this on Kubernetes or serverless or anything of modern scale today, it always ends up relying on the ability to take one instance and scale it out to many to support higher workloads. So these are all the requirements that we have. Now let's look at how we can use the different elements of the architecture to approach them. Again, we'll be looking at RPC, PubSub, and WebSockets. For the real-time, bidirectional aspect, WebSockets are a really good place to start. First of all, they create a bidirectional channel. While they rely on HTTP, they enable both sides to trigger a conversation on an ongoing channel. So unlike general or RESTful HTTP, where you trigger a request, get a response, and often terminate the session, a WebSocket stays up constantly, and both sides can trigger queries or requests on top of that channel. WebSockets also contribute to easy networking. Because they're based on HTTP, most routers and most firewalls are very forgiving of them. And they also save us a lot of friction around connection direction: clients can trigger an outgoing HTTP connection, passing easily through the firewall, but after the connection is established, because it's bidirectional, the server can also trigger requests on that existing connection.
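That direction trick can be sketched with plain asyncio TCP streams as a stdlib-only stand-in for a WebSocket: the client dials out (the firewall-friendly direction), but once the channel exists, the server pushes a message the client never asked for.

```python
import asyncio

# Stdlib-only sketch: plain TCP stands in for a WebSocket. The client opens
# the connection, then the server sends on it unprompted.

received_by_client = []

async def handle_client(reader, writer):
    # Server side: push a message immediately, without waiting for a request.
    writer.write(b"server-initiated hello\n")
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]  # ephemeral port for the demo
    async with server:
        # Client dials out...
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        # ...and receives a message the server initiated.
        line = await reader.readline()
        received_by_client.append(line.decode().strip())
        writer.close()

asyncio.run(main())
print(received_by_client)
```

WebSockets give the same property on top of an HTTP upgrade, which is why middleboxes tolerate them so well.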
Durability and flexibility we get from RPC. The basic concept of RPC is just being able to call remote functions. So, as you'll see, what we'll be doing is exposing specific Python functions to the other side. By exposing functions, we're essentially connecting the two Python interpreters, enabling any element of code to be shared between them. We can add more functions as we need, both to maintain the durability that we need and to add more capabilities. Lastly, PubSub, the ability to subscribe to events, publish events, and have those reach all of the subscribers, will really enable us to take this to scale. In a minute we'll see that we'll be taking a one-to-one RPC channel, creating many of those, and then, through the extensibility that we get from RPC, connecting them to a PubSub channel, moving from a one-to-one relationship to a one-to-many relationship. So let's dive in. We'll start by looking at the design of the first package that we've built, FastAPI WebSocket RPC. It's called FastAPI, by the way, because one of the fundamental building blocks that we used is the FastAPI server, which I'll elaborate on in a minute. First of all, the basic component of our RPC connection is the RPC channel. Essentially, it's a JSON-based channel that provides the ability to pass both requests and responses for functions. A request will target a specific Python method that we exposed, and in the end it looks something like this. As you can see in this screenshot of the code, we have a server that inherits from RpcMethodsBase, and it exposes specific methods. For example, in this case, we can see it exposes a concat method that takes two parameters, concatenates them, and returns the string value. On the other side, we'll be calling these methods through an interface that looks like this: other, dot, the method name, and calling that method.
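As a rough illustration of that request/response flow, here is a toy, stdlib-only version. The real package passes Pydantic models over a WebSocket, so the message shape here is made up; only the idea (JSON names a method, the method runs, the result comes back as JSON) is the same.

```python
import asyncio
import json

# Toy sketch of the RPC channel idea: a methods class exposes async functions,
# a JSON request names a method and its arguments, and the result goes back
# wrapped as a JSON response. Message fields here are illustrative.

class RpcMethods:
    async def concat(self, a: str, b: str) -> str:
        return a + b

async def handle_request(methods: RpcMethods, raw: str) -> str:
    """Parse one serialized request, invoke the target method, wrap the result."""
    message = json.loads(raw)
    method = getattr(methods, message["method"])
    result = await method(**message["arguments"])
    return json.dumps({"type": "response", "result": result})

request = json.dumps({"type": "request", "method": "concat",
                      "arguments": {"a": "hello", "b": " world"}})
response = asyncio.run(handle_request(RpcMethods(), request))
print(response)
```

Because dispatch is just `getattr` plus a call, exposing a new capability to the other side is nothing more than adding a method to the class.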
So in this case, we'll be doing other.concat and passing the two arguments that we want to concatenate. To build this, we'll be using asyncio. As you can see, our functions here have the async keyword, defining them as coroutines. This really enables our code to be suitable for real time; things run concurrently within our RPC server and RPC client. Obviously, as we mentioned, we'll be using WebSockets to make it bidirectional, and we'll be using Pydantic for serialization. Pydantic works really well with FastAPI; it's a library FastAPI uses natively, and we'll be using it to define our RPC messages, both requests and responses. And we'll be using Tenacity, another cool library, to maintain the durability that we mentioned. First, let's look at a key piece of code in our RPC server. When it receives a WebSocket connection, it reads JSON content out of it and uses Pydantic, with an RpcMessage model, to parse the data. We then basically get a Python message object, and we can check: is it a request or is it a response, and trigger the behavior accordingly. That's really the basic functionality: pass a serialized message over, have it target a specific function with specific parameters, have that invoked, wrap the result into a response, and send it on the wire, where the other side can wait on it, receive it, parse it, and pass it to the calling component. Within the RPC client, which is entered via an async with statement, we have our basic wrapping using Tenacity. We basically call our connect method, wrapped with retry, and we pass in a configuration telling it to constantly retry. So if it fails, if it throws an exception, if it gets disconnected, Tenacity will take care of reconnecting for us. Tenacity provides flexible configuration; we can literally go into it and tell it we want some exponential backoff.
We can tell it to wait ten minutes before trying again, things like that, and FastAPI WebSocket RPC exposes that configuration to the user, so you can also tune it yourself from there. But it also comes with exponential backoff and some randomization out of the box, just as a good best practice. So that was RPC. Now let's see how we take it to the next level. RPC enabled us to have a one-to-one conversation; now we want to enable one-to-many, and we want to leverage RPC to do so. The basic component that we've built within the FastAPI WebSocket PubSub package, which you can find here on GitHub, is the event notifier. It essentially registers subscriptions for callbacks. What you see here are the signatures of the main methods of the event notifier, and the main one that I've opened here is the trigger-callback method. It literally picks up the subscription that's passed to it when you call notify, and it just calls the callback function. That works really well with our RPC implementation, because RPC, as you remember, is just about calling functions. On the subscribing side, using RPC is just accessing the RPC channel, going into other, and calling the subscribe method. So we're literally calling subscribe, the same logic within the event notifier, on the remote component. And when we publish an event, it works the same way: we call back into the remote component and say, hey, publish an event for me. So just by relying on RPC's callback architecture, we were able to plug in something really simple that just remembers who subscribed to which topics, and then calls callbacks on the remote side. In that fashion, we moved really quickly from a one-to-one interaction to a one-to-many interaction.
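The notifier idea is small enough to sketch in a few lines. This is a stdlib-only illustration of the concept, with illustrative method names, not the package's exact API:

```python
import asyncio

# Minimal sketch of an event notifier: remember who subscribed to which
# topics, and on notify, call their callbacks. Method names are illustrative.

class EventNotifier:
    def __init__(self):
        self._subscriptions = {}  # topic -> list of async callbacks

    async def subscribe(self, topics, callback):
        for topic in topics:
            self._subscriptions.setdefault(topic, []).append(callback)

    async def notify(self, topic, data=None):
        # Trigger every callback registered for this topic.
        for callback in self._subscriptions.get(topic, []):
            await callback(topic, data)

events = []

async def on_event(topic, data):
    events.append((topic, data))

async def main():
    notifier = EventNotifier()
    await notifier.subscribe(["steel"], on_event)
    await notifier.notify("steel", data="some data")
    await notifier.notify("guns")  # no subscribers: silently ignored

asyncio.run(main())
print(events)
```

Put this behind the RPC channel and subscribe/notify become remote calls, which is the whole one-to-many trick.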
Lastly, we need to be able to scale out not only the clients. Imagine now we have a PubSub server with many clients, but we may also want to synchronize multiple server instances, because, remember, we want to be able to run this with multiple workers. We want to run this on Kubernetes with multiple instances, or on serverless, Lambda, with multiple instances. So now we need to sync those multiple instances: in case we have one client connecting to server A, but server B is the one triggering the notification, it needs to propagate to the right server and, through that, to the right client. For that, we're using another library called Broadcaster, which is also really cool. It enables you, in an abstract way, to interact with Redis PubSub, Postgres listen/notify, and Kafka, all classic message queues, and we use them essentially as a backbone PubSub. Our own PubSub relies on the backbone PubSub to propagate messages between the different instances of the server. This has a key advantage of shifting the workload. Deploying, let's say, a new Kafka instance can be rather heavy. By shifting the workload to the WebSocket PubSub, we enable most of the work to happen at each server node, and we only go to the backbone PubSub when we have to propagate something. So we can utilize fewer resources. Unlike the FastAPI WebSocket PubSub that we're describing here, Kafka, and even RabbitMQ, require more resources: they require memory, they require storage on disk, they require more redundancy. We can save a lot on those resources by only passing what we need between servers through them. And still, we can connect to them really easily through the Broadcaster interface with our FastAPI WebSocket PubSub package. So let's have a quick review of our stack. We're using FastAPI for the basic server infrastructure.
We're using the websockets library for the client side, because FastAPI, while it's great, has some hiccups in its client implementation. We're using Pydantic for serialization, both for the subscriptions that we're passing on the wire and, as we mentioned, for the messages, both RPC requests and RPC responses. We use Tenacity, as we mentioned, for the durability factor, the ability to reconnect when things fail. And we're using Broadcaster to expand our PubSub across servers, if we need to. Now that we have a basic grasp of what these two packages are, what they deliver, and how they address the problem, let's see them at work. We'll start with the RPC side, and we can see here two simple examples, of both the server and the client, using the package. Let's start with the client code. We can see that the client is running within an asyncio loop and essentially just calling concat on the other side. So we have client.other.concat, we pass the values that we want, then we wait until we get our response and print it out. In addition, we have the server side, where we expose one method, that concat method, taking those two variables and concatenating them, and we attach that to our FastAPI server via an RPC endpoint: we just pass in our concat server definition, exposing those methods, and run this via Uvicorn. So let's just run this and see how it behaves. We'll start with the server, so it can wait for the connection. The server is up and waiting, and when we trigger the client, we can see it run and we can see that we got our concatenated string. It essentially ran on this interpreter, went to the other side, called the function, got the value back, and printed it. Rather straightforward, but still a pretty cool result, and it works smoothly. Now let's move to a slightly more advanced example: bidirectional interactions.
In this case, it's not only the client calling functions on the server side, but also the server calling functions on the client side. And we'll also be using those calls for synchronizing between the two components. You can see here that in addition to the server exposing the concat method, the client now also exposes methods. We have two: we have allow_queries, which is essentially a function that signals an event within the client, telling it that it can start sending requests to the server, and we have allow_exit, which tells the client to wait a random period of time and then exit. The server essentially controls the interaction with the client. The client here does just as we described: it connects to the server and then waits for the server to trigger it by calling that method and setting that event. It waits for that event to be triggered, and otherwise it doesn't continue. It then calls the concat method on the server side just as before, prints the result, and then waits for the server to let it know when it can exit. So let's run this. Oh, one more thing: the server side, in addition to exposing the concat method, uses on_connect to tell the client, after waiting for a while, to trigger its queries, and when concat is called, in addition to returning the value, it also triggers the allow-exit event in the background. Okay, so let's run these examples as well, again starting with the server. We're running the bidirectional server example, the server is now waiting, and we'll run the matching client. You can see that both sides interacted, and then, after that delay, the client exited. So we saw that exact flow, and of course also the concatenated string that we wanted. You can see we can have both server and client calling methods on one another, and also using those calls for synchronization, which is really great when we're working at scale.
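The synchronization pattern in that demo can be sketched in-process, with asyncio events standing in for the RPC calls. Names like allow_queries and allow_exit mirror the demo; the structure is illustrative, and in the real packages these calls travel over the WebSocket RPC channel.

```python
import asyncio
import random

# Toy, in-process sketch: the "server" calls methods on the "client" (modeled
# as setting asyncio events) to tell it when it may query and when it may exit.

results = []

class Server:
    async def concat(self, a, b):
        return a + b

class Client:
    def __init__(self):
        self.can_query = asyncio.Event()
        self.can_exit = asyncio.Event()

    async def allow_queries(self):  # invoked by the server over RPC in the demo
        self.can_query.set()

    async def allow_exit(self):     # invoked by the server over RPC in the demo
        await asyncio.sleep(random.uniform(0, 0.01))  # wait a random period
        self.can_exit.set()

    async def run(self, server):
        await self.can_query.wait()                 # wait for the go-ahead
        results.append(await server.concat("hello", " world"))
        await self.can_exit.wait()                  # wait for permission to exit

async def main():
    server, client = Server(), Client()
    task = asyncio.create_task(client.run(server))
    await client.allow_queries()  # server-side trigger: start querying
    await client.allow_exit()     # server-side trigger: you may exit
    await task

asyncio.run(main())
print(results)
```

The useful property is that either side can gate the other, which is what makes the channel usable for coordination, not just data transfer.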
Let's take it a step further, and now let's talk about our PubSub client and server. Here we have a slightly more advanced example where, in addition to PubSub, we'll also be combining a classic RESTful API server with FastAPI. Let's start with our server example. Our server, just as before, is in FastAPI, and it exposes a PubSub endpoint instead of an RPC endpoint, plus a regular HTTP route at /trigger that sends events through the PubSub. We can see the function that is called: it waits a bit and then publishes an event for guns, waits a bit and publishes an event for germs, and then finally publishes an event for steel, with data attached. This is a reference to a book that I like, by the way, and the data is the author of that book. On the client side, we have the mirror image: a client initiating and waiting for events on both guns and germs, with a callback, on_event, so we'll see this print happen when that callback fires. We also have another, later subscription to steel, with a different callback that prints the data points and also terminates the client, which is similar to our allow-exit behavior from before, and obviously we run all this with asyncio. Okay, let's run this. We start with the server, just having it up and running. We'll have the client running, and nothing will happen here initially, because everything is waiting on our trigger route. So now let's go and hit our server locally. We triggered the server, and we see that, in addition to the browser asking for the favicon, which is interesting, the trigger went through correctly, and it triggered the events on the client side, as well as the final event for steel, which also passed data. Awesome. And now, if we wanted, we could have multiple clients subscribing here. You know what, we can actually do that, why not, it's interesting.
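That flow can be sketched stdlib-only: a handler like the one behind the /trigger route publishes topical events ("guns", "germs", then "steel" with data), and every subscribed client callback fires, so several clients receive the same update at once. Topic names mirror the demo; everything else, including the data value, is illustrative.

```python
import asyncio

# Toy sketch of the demo: an HTTP-triggered publisher fans events out to
# every subscriber of each topic, across multiple "clients".

subscriptions = {}  # topic -> list of async callbacks
log = []

async def subscribe(client, topics):
    async def callback(topic, data):
        log.append((client, topic, data))
    for topic in topics:
        subscriptions.setdefault(topic, []).append(callback)

async def publish(topic, data=None):
    # Fan the event out to every subscriber of this topic.
    for callback in subscriptions.get(topic, []):
        await callback(topic, data)

async def trigger():
    # What the demo's HTTP /trigger route does, minus the web server.
    await publish("guns")
    await publish("germs")
    await publish("steel", data="Jared Diamond")

async def main():
    await subscribe("client-1", ["guns", "germs", "steel"])
    await subscribe("client-2", ["steel"])  # a second client, same topic
    await trigger()

asyncio.run(main())
print(log)
```

Both clients see the steel event, which is the one-to-many behavior the next part of the demo shows live.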
So we can have multiple clients subscribing here, and all of them receiving the updates at once, running in parallel. So we've reviewed the code, and now you know that if you want to use these packages, you can fetch them from GitHub or from PyPI; you can just pip install them: pip install fastapi-websocket-rpc and pip install fastapi-websocket-pubsub, and they're ready to go. You have those code examples also available in the documentation on our GitHub. Really easy to start. So that was our session, I hope you enjoyed it. I invite you to stay a while and listen a bit more, and ask me questions in our breakout room, which you'll be getting a link to in the chat right now. And of course you're welcome to look at these packages that I've shared with you, as well as other projects that I've built that use this: Authorizon, which I've built with Asaf Cohen, which also leverages OPAL and is a solution to manage your authorization; OPAL, which I mentioned, which is the synchronization layer that uses FastAPI WebSocket PubSub to make sure the data reaches your Open Policy Agent; and also Rookout, a previous company that I created, which, while not using these packages, uses a similar pattern to enable you to set breakpoints on the fly in production. Those breakpoints, when triggered, are triggered through a WebSocket connection that runs within your code, waiting for you to set that breakpoint. Cool. Thank you very much. I hope you enjoyed this talk, and I look forward to hearing your questions and getting your feedback, both here at PyCon as well as on GitHub, where you're welcome to open pull requests, issues, and feature requests, and I'll be happy to chat with you. Thank you very much and enjoy the rest of the convention. Hi, Or, how are you? Great. I hope everyone enjoyed the talk. Unfortunately the talk ran a little long, but I think we can fit in one question, which is: why would you use PubSub over WebSockets rather than a message queue?
So why would you use PubSub in general? The PubSub channel gives you that kind of scale: instead of one instance you can have many at once, and with this PubSub solution specifically, as I mentioned in the talk, through our backbone PubSub you can also scale it out on the backend side. I hope that covers the question. Okay. Unfortunately I have some connectivity issues. I don't think we have more time for another question, but you can always reach our breakout room. Also, if you want to chat with me for longer, you can find me via opal.ac; there's a link to a Slack channel there, and I'm happy to answer questions all day long. I have to say that this was a great talk for me. I hope that you liked it and I hope everyone enjoyed it. Thank you.