Răzvan and Liviu are going to talk a little bit about real-time clustering with OpenSIPS. With that, we'll give you the floor. Hi, everyone. Good to see you again. I see a lot of familiar faces here, and I'm glad you came to our presentation to see how we are doing with OpenSIPS 2.4 and what the latest features are that we've been working on. So, real-time clustering with OpenSIPS. Why do we need a cluster? That's obvious. We want high availability. We want our platform to be as scalable as possible. We want geographic distribution, and all of these instances should behave as a whole, as one single, big platform. Right? Everybody wants that. In order to do that, we have to somehow make sure that all those individual instances share some data. They could do it using a shared database, but ideally there should be a mechanism that makes these nodes communicate with one another in a very efficient way, so that we don't need to install anything new or add the latency of a shared database. Well, that's exactly what we've done with OpenSIPS clustering. We've provided a very efficient clustering communication layer, built on top of a custom protocol we call proto_bin. It's binary, so you don't really need to parse anything; everything is serialized in a very, very efficient manner. It's built over TCP, so you don't have to deal with retransmissions or anything like that: the TCP layer ensures that data is properly delivered. It also carries some application-level data, such as link-state information, which gives us heartbeat packets between the nodes, so we can see the entire topology of our cluster. And you'll see that it's very simple to use. But let's see what the features are, why we are looking into this clustering model and why we are investing so much time in it.
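The clustering layer described above is configured through the clusterer and proto_bin modules. Here is a minimal sketch of a single node's configuration; the IP addresses and IDs are made up, and the exact parameter names should be verified against the clusterer module documentation for your OpenSIPS version:

```cfg
# sketch: one cluster node, speaking the binary (proto_bin) protocol over TCP
loadmodule "proto_bin.so"
loadmodule "clusterer.so"

# listen for binary cluster traffic (address is a placeholder)
listen = bin:10.0.0.1:5555

# this node's identity and its membership in cluster 1
# (parameter names as given in the clusterer docs; verify for your version)
modparam("clusterer", "my_node_id", 1)
modparam("clusterer", "my_node_info", "cluster_id=1, url=bin:10.0.0.1:5555")
```

With this in place, other modules (dialog, usrloc, ratelimit) can be pointed at the cluster ID and have their data replicated over the proto_bin links.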
So again, imagine that you have a cluster that can have any topology. For example, you might have nodes spread across the U.S., and if you have fast links between them you can do a full mesh; you'll end up with something like this, right? One of the main features of the clustering model is link redundancy. So, having such a cluster, if one link goes down, this node will still be reachable, even though there's no longer a full mesh, because we'll be able to route any of this information through this other node. That's one of the main features the clusterer module provides. Another one, and this is basically a use case: we've been working on the clusterer module to ensure high availability between nodes. So imagine that a customer here is using this node to make a call, but suddenly that node goes down. Well, using clustering and the dialog replication feature, we are able to easily move the call onto this node, or whatever node is closest to the caller. This is done automatically; you don't really need to do anything beyond setting up a cluster. Anycast, well, is the same thing. A caller here is calling into this node, but somehow the node becomes very, very loaded, so the anycast routing takes it out of the path and moves the route over here. With dialog replication in the cluster, you can easily move the dialog to a different node. Again, this is done instantly and without any further action. Another thing: say you'd like to do a platform-wide calls-per-second limitation. That's really easy, because all these nodes share the calls-per-second information. All you have to do is set up a cluster and ask the ratelimit module to replicate its counters. That's it. And as I said, everything is very, very simple. If you want to scale, you simply power up a new node, you link it to an existing node, and that's it.
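The dialog replication and shared rate limiting mentioned above are both driven by module parameters pointing at a cluster ID. A sketch, with the caveat that the exact parameter names (`dialog_replication_cluster`, `pipe_replication_cluster`) are taken from memory of the 2.4 docs and must be checked against the dialog and ratelimit module documentation:

```cfg
loadmodule "dialog.so"
# replicate dialog state through cluster 1, so another node can
# take over a live call if this one fails (param name: verify in docs)
modparam("dialog", "dialog_replication_cluster", 1)

loadmodule "ratelimit.so"
# share pipe counters across cluster 1 for platform-wide limits
# (param name is an assumption; verify in the ratelimit docs)
modparam("ratelimit", "pipe_replication_cluster", 1)

route {
    # platform-wide limit of 100 calls per second across all nodes;
    # "global_cps" is a made-up pipe name for illustration
    if (is_method("INVITE") && !rl_check("global_cps", "100")) {
        sl_send_reply("503", "Service Unavailable");
        exit;
    }
}
```

The `rl_check()` call itself is the same one used for single-node rate limiting; clustering only changes where the counters live.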
It will start, by itself, to connect to all the other nodes, as far as that is possible. If it isn't possible, it will be alone, like this one. Doesn't matter; it will still be part of the cluster. So, as I said, when inserting a new node, all it has to do is find out the network topology. It can do that by looking into the database, where each node is provisioned, and fetching all the nodes from there. Or it can simply connect to an existing node and start exchanging topology information with that node, because there might be nodes it doesn't know about yet; this happens, for example, when two clusters merge. And after you connect the new node, it will start to query all the other nodes it is able to talk to, in order to create direct, fast links to all of them. As soon as the connection is done, it will start pulling information from one of the nodes. It doesn't have to be a single node; each new node can sync from a different one, whichever is less busy. And once the node is fully synced, it will also be able to replicate new data to the other nodes and, of course, start processing traffic. But that's it. It's as simple as possible: when you want to add a new node, just power it up, link it to an existing node within the cluster, and everything will work out of the box. Well, those are the cluster features, let's call them, that we've been working on. But let's see some practical examples of what we have done, and I will ask my colleague Liviu to present them. Thank you, Răzvan. Hi everyone, my name is Liviu, and I'm going to start off by saying that Răzvan had an interesting idea running all throughout: that everything is simple, right?
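The "link it to one existing node" step above can be sketched as follows: the new node is told about a single neighbor, and the rest of the topology is discovered through that node. Addresses and IDs are placeholders, and the `neighbor_node_info` parameter name should be checked against the clusterer documentation for the DB-less provisioning mode:

```cfg
# sketch: provisioning a brand-new node (node 3) without a database
loadmodule "proto_bin.so"
loadmodule "clusterer.so"

listen = bin:10.0.0.3:5555

modparam("clusterer", "my_node_id", 3)
modparam("clusterer", "my_node_info", "cluster_id=1, url=bin:10.0.0.3:5555")

# one known entry point into the cluster; the topology, the direct links
# to the remaining nodes, and the initial data sync are all bootstrapped
# from here (param name: verify in the clusterer docs)
modparam("clusterer", "neighbor_node_info", "cluster_id=1, node_id=1, url=bin:10.0.0.1:5555")
```

Alternatively, as mentioned, all nodes can be provisioned in a shared database table and discovered from there via the module's `db_url`.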
So everything about the cluster is simple, and this is the same mindset we had when we decided to tackle this distributed SIP user location problem, which, as you all may know, is not something new; people have had to deal with it in one way or another for many years now, and we just want to make it easy. We want to simplify this process as much as possible in the 2.4 release. So this is what people have been looking for. Some customers want to geo-distribute their SIP user location. Others have very volatile subscriber numbers; they might want to easily add new servers, scale them up, or even down in some cases. A lot of them worry about high availability, due to different company policies, and they have to make sure that their service meets certain uptime requirements. And I guess the first one is something that everybody worries about, and you kind of can't get past it these days: the NAT traversal problem, the fact that pretty much all of your user agents are going to be behind some sort of NAT device. And last but not least, the keep-alives that have to be originated by the platform to keep those bindings alive. So, putting all of that together, we ended up with two designs. Everything is described on the website; I'm going to give you the links a bit later. But for now, let's go with the first one. This one is ideal for setups where you have multiple locations, and we are going to use the cluster to handle all of the communication and synchronization between the nodes: the contact sharing and all that. We now have the advantage of being able to skip the SIP part and just pack all the minimal information into these metadata packets, and thus share the state across the whole cluster. We also have the advantage of being able to resize any of these locations, maybe even in a highly dynamic way, without any kind of data corruption ensuing.
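The first design, contact sharing over the cluster, is selected in usrloc through a working-mode preset. A sketch, with the strong caveat that both the preset value and the parameter names below are recalled from the 2.4 usrloc documentation and should be verified there:

```cfg
# sketch: design 1, full contact sharing over the cluster layer
loadmodule "usrloc.so"

# preset that replicates bindings to every node via cluster packets
# (preset name and parameter are assumptions; check the usrloc docs)
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")

# the clusterer cluster through which binding metadata travels
modparam("usrloc", "location_cluster", 1)
```

Everything else, including which node does the NAT pinging for a given contact, is derived from this cluster membership rather than from extra scripting.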
And again, this is for the multiple-locations kind of setup, and the best part about it is that the script does not change at all. So if you're familiar with the save and lookup primitives of the OpenSIPS and Kamailio type of flavor, they stay the same even in a distributed fashion. Just save the subscriber; the cluster will do everything and propagate it to all the locations. And the same with the lookup: it will handle the routing. And okay, I guess there are four more minutes left. So this is what I was talking about with the routing. Let's say Alice has registered and her state has been propagated all throughout the cluster, and now Bob calls her in C, because that's his local PoP. C cannot directly reach A, although it knows about it, right? Due to the NAT. So, as I was saying, when you do lookup from the script, everything will be handled automatically: the SIP INVITE here will actually be translated into a cluster packet, and then it will reach Alice. And regarding the second issue, the NAT keep-alives: they will all be handled only on the A side. Although PoPs B and C know about Alice, they will not do any pinging. That pretty much sums up the first model. And the second one: while the first was more focused on geo-distribution, this one is highly focused on a single big deployment with a large number of subscribers, for which you kind of don't want to hold all of this data inside your OpenSIPS boxes; you'd rather outsource it to some database that offers proper sharding and ways for you to scale both your reads and your writes. And again, a good advantage of this design is that you can have an SBC in front of it and pretty much treat all your data as just duplicated inside the cluster. So to sum this up: again, we have all the location data in the database; we no longer store it in the OpenSIPS memory.
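The point that "the script does not change at all" means the classic registrar logic keeps its shape. A minimal sketch of that unchanged routing logic:

```cfg
route {
    if (is_method("REGISTER")) {
        # stores the binding; in clustered mode the module itself
        # propagates it to the other PoPs
        if (!save("location"))
            sl_reply_error();
        exit;
    }

    # looks up the callee; in clustered mode this may transparently
    # hand the INVITE over (as a cluster packet) to the PoP that
    # owns the NAT binding
    if (!lookup("location")) {
        sl_send_reply("404", "Not Found");
        exit;
    }

    t_relay();
}
```

Nothing here names the cluster: the distribution is entirely decided by the usrloc module parameters, not by the script.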
You can resize your platform at whichever layer: the cluster layer, adding or removing boxes on the fly, and the same with the database. And, a couple more minutes left, the scripting stays the same. Again, this is all documented on the website, under the Development tab. We highly encourage you to go there and check it out; it's quite detailed and you can get a lot more info there. This is a quick run through the NAT traversal behavior for the DB-enabled cluster. The INVITE hits the platform, we end up doing the DB lookup, and from that point we can route it out through the SBC that holds the NAT binding. And with regard to the keep-alives, we can structure our queries in such a way that each box only pings its own slice of subscribers. So again, there's no extra pinging going on, and we minimize the number of OPTIONS messages that we need to push out in order to keep the subscribers alive. So, to sum everything up, there's quite a big bunch of stuff coming in the 2.4 release with regard to clustering. We have a lot of things going on with distributed user location, cluster self-discovery, and dialog syncing. And please, you can easily contribute, or invalidate, refute whatever we say, because there's no right or wrong; it's just a matter of making as many people happy as possible, fitting in with as many requirements and scenarios as we can. Right, so we've been Liviu and Răzvan, and if you want to find out more about OpenSIPS or learn more about the ecosystem around it, please visit the website; you can enroll in the summit that's going to take place in May. There are some discounts going on there; you can check the slides out later and make use of them. I'm pretty sure they will expire pretty soon, so make sure you do. Okay, that was it. If you have any more questions... I don't think we have time for questions. Oh, we're past time? Okay. Yeah, we'll be around. Thank you.
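For completeness, the DB-backed second design discussed in the talk also reduces to usrloc parameters. A sketch only: the preset name, the `cachedb_url` parameter, and the MongoDB URL below are all recalled from the 2.4 documentation and are placeholders to be verified there:

```cfg
# sketch: design 2, location data held in an external, shardable store
loadmodule "cachedb_mongodb.so"
loadmodule "usrloc.so"

# preset for the DB-centric clustered mode
# (preset name is an assumption; check the usrloc docs)
modparam("usrloc", "working_mode_preset", "federation-cachedb-cluster")
modparam("usrloc", "location_cluster", 1)

# the external database that holds the full location data set;
# host and collection names are made up for illustration
modparam("usrloc", "cachedb_url", "mongodb://10.0.1.1:27017/opensipsDB.location")
```

Because the bindings live in the database, each OpenSIPS box can query only its own slice of subscribers for the OPTIONS keep-alives, exactly as described above.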