Hi, my name is Ken Giusti. I'm a developer on the Apache Qpid project, I maintain and develop the oslo.messaging AMQP 1.0 driver, and I'm employed by Red Hat. And I'm Andy Smith. I work with Ken on the messaging driver; prior to working on OpenStack I also worked on some projects around fault management and cloud management systems. [The presenters pause here to work around problems with the projector and demo laptop.] Okay, so in this presentation we're going to talk about new technology coming from the Apache Qpid project called the dispatch router. We're also going to discuss the concept of brokerless messaging in OpenStack and where it fits. We're going to give a demonstration, talk about the work we've done to integrate the dispatch router into an OpenStack environment, and point to additional information if you're interested in what we're trying to do in upcoming releases.

So, what is the dispatch router? It's a new messaging component from the Apache Qpid project, fully open source, licensed under Apache 2.0. It's separate from OpenStack, and it's been in the field for about two years now. What is it? It's a high-performance message-forwarding engine: a message router. Think of it as kind of an IP router, but at the messaging level. Unlike an IP router, it's not dealing with IP addresses; it deals with message addresses and routes based on them. So it sits above IP. And it's made possible by the features of AMQP 1.0. AMQP 1.0 is a lot different from the 0-10 protocol used by the older qpidd, and the 0-9-1 used by Rabbit, in that the AMQP working group got rid of the requirement for a broker. The broker is optional; the protocol is now symmetric, peer to peer. Part of the reason the working group made the broker optional is that, back in the day, when all you had was a hammer, every problem looked like a nail; sometimes you need a different solution than a broker for your particular messaging problem. We believe this project, this technology, will simplify the configuration and operation of very large-scale messaging deployments.

So, talking about the router a little more specifically: as Ken said, a classical IP networking router has three main processes it goes through. One is learning what the network is, understanding what's connected to it. The second is forwarding: once I understand what's connected to the network, as traffic comes in, how do I get it to its destination?
And the third characteristic is how you describe the treatment a router applies to the traffic that goes through the network. The messaging router follows the same concepts. From a learning perspective, life really begins when targets start subscribing to the message bus. That's how the router begins to understand what's attached: what the connections into the network are, who the subscribers are, and which messaging addresses they want to listen on. With that understanding of which targets are attached, the router can then perform the forwarding function. As a source wants to send traffic to a particular destination, over that connection's link the AMQP protocol carries the indication of where the traffic wants to go, and the router directs it to the target. So really, the router is just looking at the addresses in the messages, understanding where the destinations for those messages reside from a target perspective, and then sending the traffic along those links.

The thing to understand is that, from a delivery perspective, ownership is end to end. At no point does the router take responsibility for, or ownership of, the message. That negotiation is between the source and the target, and it's up to the target to send the acknowledgement back to the original source. In terms of delivery, there are different patterns the router supports: it can support a unicast model, but it can also support a very efficient multicast model. And it's stateless in the sense that, again, at no time does the router take ownership of the message. So if a router does fail and resume operation, there's no re-establishment of state across the router infrastructure needed to continue forwarding messages.

Now, a single router on its own isn't all that interesting. The real value of the messaging router comes from the ability to connect routers into peer relationships, so that you can begin to build a topology for your messaging infrastructure. It's very easy to create these peer relationships: two routers connect to each other, learn of each other's presence, and begin to exchange messages. There are basically two classes of messages they exchange. If you're familiar with IP networking, the first is the classical router hello message. Over the hello message, routers indicate to their peers their presence and their current state; there's a concept of an epoch number, so as a router operates and its state changes, it can indicate that to its peers so those peers can take action. The second exchange the routers perform is your classical address update. As the routers learn of the targets attached to them, they advertise the presence of those targets to their peers. That's how the routers begin to understand locality: as they perform their forwarding function, there's an understanding of where a target exists and which router in the network that target resides at. So from a forwarding perspective, a message is examined by a router, which either decides to send it to a next-hop router to reach the destination or, optimally, determines that the destination is locally attached and sends the traffic directly to that target. Again, in all cases, acknowledgement is end to end.
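To make that concrete, here is a toy sketch (ours, not Qpid code) of the two pieces of state a router accumulates from local subscriptions and peer address updates, and how a forwarding decision falls out of it; all of the names are invented for the example:

```python
# Toy illustration of a messaging router's forwarding state. Addresses
# learned from local subscribers resolve to local links; addresses learned
# from peer address updates resolve to a next-hop router.
local_targets = {'rpc/server-a'}                # subscribers attached here
remote_addresses = {'rpc/server-b': 'router2'}  # address -> owning router
next_hop = {'router2': 'router1'}               # owning router -> next hop


def forward(address):
    """Return where a message for `address` should be sent."""
    if address in local_targets:
        return 'deliver on local link'
    owner = remote_addresses.get(address)
    if owner is None:
        # Unknown address: the delivery is released back to the sender,
        # since the router never takes ownership of the message.
        return 'no route: release back to sender'
    return 'forward to peer %s' % next_hop.get(owner, owner)


print(forward('rpc/server-a'))  # deliver on local link
print(forward('rpc/server-b'))  # forward to peer router1
```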
Ownership is end to end; the target is the one that sends the acknowledgement back to the originating source.

What gets very interesting is when you start to build arbitrarily large mesh topologies. Using these same mechanisms, interconnecting routers and establishing peer relationships, with the same processes of learning where targets exist, exchanging that information with peers, and understanding the router topology, you can build a very large-scale mesh interconnect for message forwarding. The immediate benefit of this is obviously scalability, in a couple of dimensions. One is scalability of connection footprint: you're able to take your connection pools and spread them out across the router mesh. But from a forwarding perspective you also get parallelism, in the sense that, as you distribute your source-to-target relationships, the ability of the routers to optimally select and use the shortest path from a source to a destination increases the overall aggregate messaging throughput of the system.

The other thing you get, built into the protocol, is availability. We're all pretty familiar with IP routing and the ability of routers to detect failures in network connectivity and recover from them; the router mesh works exactly the same way. If a failure occurs on an established transfer path through the mesh, the routers immediately detect it, recognize that state has to change across the router network, exchange that information with their peers, and immediately update and route around the failure. So the path that traversed the top of the mesh before now goes through an alternate route automatically, and, again, the key is that it's transparent to the endpoints. There's no interaction with the endpoints; they don't have to be aware of the change in the router network. Another point: since this is all done through peer relationships, there's no concept of split brain and no re-election process when a failure occurs. It's all automatic, and operation resumes immediately when network exceptions occur.

What's nice about the router deployment model is that you can start simply. You can start with a single router, get a feel for how it works and how it might change your interactions with services, and then start to build out more complex data-center topologies where you're looking for additional redundancy. The placement of routers and their association with services is quite flexible. So again, it gives you a lot of benefit in terms of scalability, and as you grow larger, as you want to take on multi-site interconnect, it's a very simple process: really just establishing a peer relationship over TCP between routers across your network, and you can begin to broaden your mesh interconnect. And there's basically no limit: you can create messaging routing tiers, so you could have a core mesh that's then used to distribute to peer meshes across the data centers. Again, the key point here is that it's all based on locality: the forwarding is going to take the shortest path. It's not going to send traffic across the network and back.
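The automatic rerouting described above can be pictured with a toy model: each router recomputes hop-count-shortest next hops from whatever links are currently alive, so removing a link just changes the computed paths. This is our illustration of the idea, not the actual routing protocol:

```python
# Toy model of shortest-path recovery in a router mesh. next_hops() does a
# breadth-first search over the live link set and records, for each
# reachable router, the first hop to use from the source.
from collections import deque


def next_hops(links, source):
    """Return {destination: first-hop router} for the given link set."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    hops, queue, seen = {}, deque([source]), {source}
    while queue:
        node = queue.popleft()
        for peer in graph.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                # Inherit the parent's first hop; a direct neighbor of the
                # source is its own first hop.
                hops[peer] = hops.get(node, peer)
                queue.append(peer)
    return hops


mesh = {('edge1', 'int1'), ('int1', 'int2'),
        ('int2', 'edge2'), ('edge1', 'int2')}
print(next_hops(mesh, 'edge1'))                      # healthy topology
print(next_hops(mesh - {('edge1', 'int2')}, 'edge1'))  # after a link failure
```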
There's no central point of congestion in your message transfer; it's a truly distributed peer-to-peer relationship. So, again, large-scale geographic data centers are a possibility with this.

Now, say your boss comes to you and says, hey, you've got to manage this. Time to update the resume. Management is really important when you talk about these scales; you have to have a good management solution, and the router does. Every router has a management agent running inside of it, for monitoring and for control. This management agent speaks a management protocol on top of AMQP 1.0 that's been defined by the AMQP working group. Essentially, what this means is that if you can reach a router from your management client, you can manage it. As a matter of fact, we have a console client here; you'll see that it talks to all the routers and shows you an aggregate topology and aggregate statistics, and you can drill down into each one of them.

So, here's an example. This is part of the GUI console from the Qpid dispatch router project. It's not in OpenStack as such, but for this demo a friend of ours ported it to Horizon, which is kind of interesting. Here you see aggregate counters, and you see four routers; you can drill down, get statistics, and monitor health. It also gives you a live topological view of your mesh. Those yellow circles are the two groups of clients, if you will, in the setup the screenshots came from. That blue circle is important because that's the console. The console sees the whole thing over that one connection, and from there we can query all the routers. And it doesn't really matter how big your configuration gets. In this one, with the little blue router off the corner edge, the setup was built by a friend of ours, Mark Wagner from the Red Hat Scale Lab. It runs across 20 discrete machines, four virtualized routers per machine, I believe, so we've got 80 routers in this particular topology. This was to test the routing protocols at a larger scale.

So, how do we use this? Where does this integrate with OpenStack? First, I want to talk a little bit about oslo.messaging. This is part of the Oslo project, and it provides an inter-service messaging library. It could actually be used outside of OpenStack too, because it stands on its own, but within OpenStack it's used for messaging: basically, every message you see arriving at or leaving a broker is going through this library. It provides a high-level abstraction of two common messaging patterns: RPC, remote procedure call, and notifications, which is pub/sub. But underneath, you can plug in different message bus technologies. You can use RabbitMQ, you can use qpidd, you can use ZeroMQ; it's transparent to the application above. So we have a choice. Not only that, you can configure one pattern to use one bus and another pattern to use another bus. We're going to look at that in a little bit.

Before we go into that, I want to talk about the high-level messaging services provided by oslo.messaging. There are two: remote procedure call and notifications. These are two very different patterns in terms of messaging. A remote procedure call is a synchronous exchange between an RPC client and a server: the client generates the message; the server receives the message, processes it, and sends back a reply. So, in terms of time, they're tightly coupled. If the server isn't there, the client call fails; they both have to be present at the time of the transaction.
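To make the RPC pattern concrete at the API level, here is a minimal sketch using oslo.messaging; the topic, server, and method names are invented for the example, and the URL is a placeholder:

```python
# Minimal oslo.messaging RPC sketch: one endpoint class on the server,
# one blocking call() from the client.
import oslo_messaging
from oslo_config import cfg


class DemoEndpoint(object):
    def echo(self, ctxt, msg):
        # The return value travels back to the blocked client as the reply.
        return msg


transport = oslo_messaging.get_transport(cfg.CONF, url='amqp://127.0.0.1:5672')
target = oslo_messaging.Target(topic='demo-topic', server='demo-server')

# Server side: register the endpoint and (when started) consume calls.
server = oslo_messaging.get_rpc_server(transport, target, [DemoEndpoint()],
                                       executor='blocking')
# server.start()

# Client side: call() blocks until the reply arrives, which is why both
# peers must be present at the same time.
client = oslo_messaging.RPCClient(transport, target)
# print(client.call({}, 'echo', msg='hello'))
```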
Notifications are completely different. Notifications are asynchronous, in that notifiers can fire off events without worrying about any consumers being present. That means the events have to go somewhere; listeners can come in at any later point and start consuming those messages. This pattern in general is used when two clients are separated in time across the messaging bus, and for that you must use store-and-forward: a message bus that offers store-and-forward capability, a queue. You must use a broker for that, because you need somewhere for the messages to hang out before they get consumed.

All right, let's look at how these two patterns are laid on top of a broker used as a message bus. This is any broker; this isn't just RabbitMQ, this is qpidd too. For notifications, messages simply go off and get queued up. The notifiers can go away; they don't wait for an acknowledgement from a consuming peer and don't need one. Eventually, at some point, the messages are pulled off and consumed by a listener, or multiple listeners.

It's a lot more complicated for RPC. For RPC, there are four transfers that happen for every single call. The request message is originated and queued at the broker. At that point the broker acknowledges the message, and that's significant because the broker has now taken control of the message; it owns it. The client's visibility into whether or not the message is delivered ends at that point, when the acknowledgement comes back from the broker. So you need to make sure those messages get through; in the case of a broker failure, you need some durable state. Eventually the message gets consumed by the RPC server, assuming it's there, and processed. Any reply message is then, again, sent back to the mirror-image queue for the RPC client: the reply is queued, an acknowledgement goes back, and the server doesn't know if anybody has consumed it. The reply sits on the queue, gets pulled off, and an acknowledgement goes back again. So we've got four transfers and four queuing operations for every single RPC call. That's a significant, non-trivial load on the broker.

So, this is what we were able to do in Newton. As I said, oslo.messaging has the ability to configure a different back-end technology per messaging pattern. In Newton, you'll be able to use Rabbit, or your favorite broker, for notifications by configuring a notification transport URL, while the default RPC transport URL can be configured to use this mesh technology, or any other supported technology, purely for RPC traffic. I just want to point out the transport URL, "amqp:". This is the same transport that works with qpidd; the driver talks both to qpidd and to the dispatch router, because it's a protocol-level transport: it speaks AMQP 1.0. It can actually even talk to ActiveMQ, and there's been some work for Artemis, because they all speak 1.0. To the driver, it all looks the same.
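In a service's configuration file, the split described above might look something like this; the hosts and credentials are placeholders, and the exact option names should be checked against the oslo.messaging documentation for your release:

```ini
[DEFAULT]
# RPC traffic goes direct, over the router mesh, via the AMQP 1.0 driver.
transport_url = amqp://user:secret@router-host:5672/

[oslo_messaging_notifications]
# Notifications keep store-and-forward semantics on a broker.
transport_url = rabbit://user:secret@broker-host:5672/
```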
Oh, dear. What could possibly go wrong? Let's see now. They always tell you never to do a live demo. Okay. So, on this laptop I have four containerized routers that interconnect over a virtual ethernet, and I'm going to fire up a little script to bring those up. It's not really a mesh, but it is a network. So we have four routers, and they're specialized: there are two edge routers, which clients connect to, and then there are two interior routers. I've also started a console so we can look at that.

There are a number of command-line tools that you can use to manage and query; qdstat happens to be a querying tool. I'm going to point it at localhost:8888, which is edge one, and have it dump the nodes in the network. As you can see, there are the four routers, and for each router there's a next hop: how to get to it from the router you're currently talking to. So, in order to get from edge one to edge two, it needs to go via demo-interior-one; demo-interior-two isn't on that path. And here's the topology, live. It's a little dynamic here, right? These are the two edges, and this is the console here speaking to the mesh. Those directional arrows are not really directional; they show the direction of the TCP setup. These are bidirectional TCP links. But you can see that, for example, edge one at the corner there is connected to demo-interior-one, even if the names aren't showing up well. Oh, and notice the green line: that shows the optimal path from the red highlighted node. So in real time we can query how to get from edge two to edge one, and that green line shows us what the route computation ended up with. We'll put the edges on top just so we get a clearer picture.

So this is all well and good, but you can go to the statistics overview and see that, other than the hellos and a few management things, there are no real addresses learned at this point. So I'm going to go back to my handy-dandy console and fire off some oslo.messaging clients. They're very simple RPC and notification clients that I've written; there are a few of them. What we should now see is, ah, a whole bunch of addresses just popped up. I can drill down on these things (the screen real estate isn't what I anticipated), and, for example, I know my tool sends debug notifications. There: you can monitor exactly how many messages have been delivered through the mesh, and the real-time rate. Okay? So it works.

Now, come on, we can't leave it at that, right? We've got to break something; that's really what we're here to do today. So let's try sudo docker stop, and remember the path goes through interior one, so: demo-interior-one. At this point, one router has died. If we look back at our qdstat output, interior one still shows up, because the routing protocol probably hasn't timed it out yet, but you can see that the routes now go out through router interior two. That was fairly immediate. The console is not quite as fast, so let me go over to the topology view and see what's going on. Okay, there it is: now we have edge one going through interior two to get back to edge two, and you can see the path has been restored along the secondary path. So, let's bring the dead router back, and I'm sorry about the graphics again. This will take a little longer because the routing protocol has to boot, but you can see, ah, there it is: it's snapped back over to interior one, again because that's the optimal path. And the statistics updated too, so you can see the path is restored along interior one. Here comes the big one: the traffic has been restored. That's it; that's as far as we're going to take the demo. We're going to make these clients and scripts available on GitHub.
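The demo's test clients aren't reproduced here, but a minimal notification pair along these lines would exercise the same addresses; this is an illustrative sketch, and the topic, publisher ID, and URL are made up:

```python
# Minimal oslo.messaging notification sketch: a fire-and-forget notifier
# and a listener that receives events by priority.
import oslo_messaging
from oslo_config import cfg

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='amqp://127.0.0.1:5672')

# Notifier side: events are emitted whether or not a consumer is present.
notifier = oslo_messaging.Notifier(transport, driver='messaging',
                                   publisher_id='demo-client',
                                   topics=['notifications'])
# notifier.debug({}, 'demo.event', {'payload': 42})


# Listener side: one method per priority (debug, info, and so on).
class DemoEndpoint(object):
    def debug(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)


targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(transport, targets,
                                                    [DemoEndpoint()])
# listener.start()
```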
So, as Ken and I go out there and start talking to people about the router, we get a lot of frequent questions: how do you compare and contrast this with using a broker back end, and how do you compare and contrast it with alternative back ends like ZeroMQ? And I think our response is typically: it depends on your use case. What are you trying to get from an architecture perspective? Because as you compare these technologies, they're all different; they're all going to put you in different places at the end of the day.

As we take a step back and think about deploying the router, for RPC initially and eventually for other architectural models down the road, one of the clear benefits of the router is scalability. The ability to easily add and remove router capacity, configuration, and topology is a core attribute of the router: the protocol is built to make it automatic, to make it easy to put routers in place in a very flexible way. It's a peer-to-peer distributed model, so as you think about expanding in different ways, the router is going to have some benefits in that regard.

The second benefit of the brokerless, mesh-based approach is availability, because availability isn't a separate concern in the router design. It's built into the way it works: the way it exchanges information over its peer relationships, the way it constantly learns who's attached to the router mesh, so that when changes in mesh topology or connectivity occur, it takes automatic action to recover and restore service.

And the last dimension to think about, again from an architectural perspective, is ultimately what topology you're shooting for in the end state. For a local data center, converged infrastructure where everything is localized, the router may not be where you want to end up architecturally; you may be fine with what you have today, or with an alternative back end. But if you're really trying to think about incremental growth, both within the data center and horizontally across multi-site locations, the router is a technology to consider, because with its locality-based message forwarding it's going to make optimal use of the resources and really optimize the way messages get through your infrastructure.

So those are some quick summaries. And then, in addition to the oslo.messaging driver and the router itself, the other thing we're looking at is how you consume this technology in a general fashion. The thing has to be consumable; we can't just give you a router component, tell you how to change the transport URL, and leave you on your own.
Really, we're also looking at the holistic view across OpenStack: how do we begin to account for installation coverage, and how do we make the transport easily configurable across all the core services. So today, in addition to trying to cover all the packaging fronts, you can go try this out with the DevStack plugin. There's documentation on how to do a standalone setup, use AMQP 1.0 as the back end, and get the router deployed, and you're off and running. We're working on TripleO and other installers for the future, but the other thing we have is a Puppet module in OpenStack that's integrated across the core services. So there's an easy way, using Puppet (and there's an Oslo Puppet module that's also quite helpful), to configure, through the service configuration, the back end and the parameters that are specific to the router. I can also mention that there are some pretty good monitoring and management tools, qdstat and qdmanage, and certainly the Horizon console plugin is on the roadmap. In addition to just giving you the router topology, I think the real value of that plugin will be to show you the relationship of the router infrastructure back to the core OpenStack services themselves: to give you an understanding of how your OpenStack services are using and consuming the messaging infrastructure, to get a sense of performance and operation.

Okay, so, some references. This is where we introduced the blueprint, and it goes into much more detail than we've talked about here, so if you're interested in the justification of the approach taken, down at a code level, it's something you might want to check out. If you want to use it in DevStack or deploy it directly, look at the oslo.messaging developer documentation; there's a page on AMQP 1.0 that describes how to use the router. It also describes how to use qpidd if you're interested in that: same driver, two different back ends. And if you're interested in the project and want to talk to the project engineers, the Qpid dispatch project has a web page, IRC channels, and all the information you need to read the documentation and talk to the engineers who have been working on this.

So, what's next? This is the first release of this technology in OpenStack, so we need people to literally kick the tires. We're looking for people to beat on this, demo it, POC it, and come back and work with us to improve it: to better understand their operational needs and tune the driver, and perhaps the router, into optimal configurations. Immediately, in the next term, both on the Qpid dispatch project side and in the oslo.messaging driver, we're going to be working on improvements to throughput and latency performance generally, and we're going to refine some tooling, with some changes in measurement techniques for direct messaging. The goal of the project is to have the network scale into the thousands of router nodes. That was 80, so we've got some work to do, but that's the goal, and we'll get there soon enough, I believe. On functional integration, there are installers we're working on, to be completed: TripleO, Fuel, and Puppet configuration. We've got some work in there, and the Horizon plugin, as Andy mentioned, for router management.
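For the DevStack path mentioned above, enabling it is roughly a one-liner in local.conf; this is a sketch from memory, so check the plugin's README for the current name, URL, and variables:

```ini
[[local|localrc]]
# Pull in the AMQP 1.0 messaging plugin (deploys the dispatch router /
# AMQP 1.0 driver in place of the default Rabbit transport).
enable_plugin devstack-plugin-amqp1 https://git.openstack.org/openstack/devstack-plugin-amqp1
```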
As far as what happens with brokers in the long term, that's really a question for the Qpid project, not oslo.messaging or OpenStack. The intent is to complement brokers. The idea is to use a router mesh as your on-ramp and your backplane for messaging, and to plug services into it, for example a broker. From OpenStack's point of view, this would allow us to intelligently route RPC traffic between clients and servers directly over the mesh, but also to route notifications to the broker and source them from the broker. That way you can virtualize your broker instances; you could actually have multiple brokers using different namespaces. Again, to be determined. And, as I said, notification messages would be routed to the broker, and consumers of those addresses would pull from the broker. All right, and that's all we have for now. Okay, thank you. I think there's a mic for questions. Yes, so, any questions? Is there a microphone out there? Thank you.

You talked about the fact that the routers calculate the best path through the mesh between two given endpoints, and you showed it recalculating that in the case of a failure. I was wondering, does it deal intelligently with congestion, et cetera? Especially in the OpenStack context.

That's a very good question. I didn't talk that much about the AMQP 1.0 protocol, but it can track, and the router does track, outstanding message queues, basically. If you have, for example, multiple destinations consuming a certain address, it can not only balance across them; it can get feedback on how backed up those queues are, if you will, and it will vector deliveries to less busy consumers.

So you demoed the high availability for a single router, but what about clients? Can a client reconnect to a different router?

Oh, yes, absolutely, for the clients at the edge of the mesh. In oslo.messaging specifically, and this isn't unique to this particular driver, the library supports multiple hosts to choose from, so if the connection to one host fails, the client will fail over by itself. So you can home these between, say, two equal-cost-path routers: it will use one, and should that go away, on keepalive failure or connection drop, it can vector over to the other. Fault tolerance at the edge is handled by the oslo.messaging library itself, and the internal mesh handles everything within it.

And the client would register to the routers with basically the same client ID, so the message gets rerouted? Or does a message that's in flight get lost?

Well, if the path goes down, the message can be nacked by the router that was attempting to deliver it along the path that just failed. So if it sees the failure, it will send the message back with a negative acknowledgement. The recovery is left to the client: the client needs to decide the best way to handle it, but it's guaranteed that the message was never delivered, so if retransmission is possible, that can be done. In terms of messages lost: if you don't have truly redundant hardware, things can certainly get lost; in cataclysmic failures, things will get lost. Those will also be signaled back to the client, but they are in doubt, right? It will be a negative acknowledgement, but in doubt, and it's up to you whether you want to retransmit or not. If it's idempotent data and it makes no difference, you can retransmit it; if it's something more important, you have to elevate that to the application.

Okay, thank you.
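On the client side, that recovery decision might look roughly like this; an illustrative sketch only, not driver code, and retrying blindly is only safe for idempotent operations:

```python
# Retry an RPC that the bus reports as never delivered. oslo.messaging
# surfaces delivery problems as exceptions such as MessageDeliveryFailure.
import oslo_messaging


def call_with_retry(client, ctxt, method, attempts=3, **kwargs):
    """Retry an idempotent RPC when delivery is negatively acknowledged."""
    for attempt in range(attempts):
        try:
            return client.call(ctxt, method, **kwargs)
        except oslo_messaging.MessageDeliveryFailure:
            # A nack means the call never reached the server, so a
            # retransmit cannot duplicate work for idempotent calls.
            # Anything more important belongs to application-level recovery.
            if attempt == attempts - 1:
                raise
```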
So, in one of your slides you have this diagram between data centers with what I'd call one router out of each data center. Is that the assumption? I'm wondering, would this also work if you have more than one router per data center? Is there any restriction in the protocol that says you have to have only one? And my second question: if I can summarize the benefit of this over what we have, Rabbit and qpidd, it's basically parallelism, right, and the fact that a router is smart? Is that basically it? Is there anything else?

Yeah, I think the intent is that there are right tools for the right job. Brokers are really good at brokering and storing messages, and this is meant to complement that. As you use them together, the router mesh will handle the forwarding and fault recovery of the paths; that's all it does, the forwarding and routing. Integrate it with a broker, and the broker does the brokering. So it's more about balancing your load as you need it: if you need queuing, you use the broker; if you need forwarding, you use the router. Flexibility, basically.

Okay, one last one. You talked about scaling this to thousands of nodes at one point; I'm wondering if you have done any baselines or comparisons for how you think this will scale up to a thousand, for example?

Well, we don't quite have the hardware to do a thousand at this point, but the 80-router network was fine in terms of CPU utilization and memory. All the routing protocol timers and configuration parameters are configurable, so we believe that when you start going higher, we're going to have to come up with different configurations and deployments in order to reach that goal.

I see. And because this is flexible in terms of reconfiguring the mesh, it would be possible?

Yep. And the current version, I think, is just hard-coded to a limit of something like 128. So again, as Ken said, from a protocol analysis the understanding is that architecturally it can scale to the thousands. One thing is to change the implementation, and then at some point to do the testing and validation. The other concept that might become part of the router architecture is areas: you may want some isolation in the interaction of routers across the mesh, and that's another capability the upstream Qpid project is looking at.

Okay, thank you.

Hi, yeah, just wondering how your current latency and throughput numbers compare to Rabbit.

Going direct, in and out, no broker: the router is somewhere in the tens of microseconds, in and out; I have the pcap files on this laptop. And I believe RabbitMQ is about 100 microseconds, but that's a simple benchmark right there. At scale, I don't know.

You didn't test that when you had the Scale Lab topology up and running?

No, I didn't; that screenshot was sent to me this morning. But if you're interested, we can certainly ask the folks upstream and figure something out.

Yeah, that'd be great.

Yeah, sure. We're using the Ocata release to really focus on performance improvements, like I said before. This was the getting-it-to-work phase, right?
Then we've got to come back, and there are tweaks we can do on both ends, in the driver and in the router.

So, from your presentation I didn't get a sense of how you impose neighbor relations, because this is all TCP/IP, you know, so anyone is potentially a neighbor of another router. How do you impose the topology?

Well, that's largely up to you and what your physical layout is like, in terms of TCP interconnect. So for example, in this geographic distribution, you would probably have very fast 10-gigabit switched ethernet between the nodes that are local, and maybe dual-redundant fiber, perhaps, or some high-bandwidth cross-continental link, between sites. The other thing to consider, though, is that they're not all running on the same subnet, and those subnets may not be reachable from each other. We may have unroutable subnets in one data center and unroutable subnets in the other data center. This doesn't work at the IP level; it works at the protocol level. You could have a 10.x.x.x network here and a 10.x.x.x network there, and the routers can bridge that. They're aware of their connections, but they don't see the whole TCP picture: they're aware of the other routers talking to them via their TCP connections, but they don't know what the IP configuration beyond that is. It could be totally different IP configurations, or the same IP configuration in disparate clusters.

So my question was actually even before that: when you have these two routers, potentially multiple hops apart, how do you know which router is a neighbor of the other router, so that you exchange your hello messages? Do you impose it with some configuration?

It is configuration. There is an initial configuration file that lays it out. There's a concept of connectors and listeners: a listener is a TCP service whose address is used by the neighbor, and a connector can be configured to connect to a particular listener on another router. That's really the only TCP configuration you have, and it can also be updated and managed dynamically via management. So you don't need topology discovery, basically; between routers, you can derive it from your configuration file. Okay, thank you. Great question.
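A pared-down sketch of that connector and listener configuration might look like the following; the names, hosts, and ports are illustrative, not a complete qdrouterd.conf:

```
# On router A: accept inter-router connections from peers.
listener {
    host: 0.0.0.0
    port: 20001
    role: inter-router
}

# On router B: actively connect to router A's listener.
connector {
    host: router-a.example.com
    port: 20001
    role: inter-router
}
```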
So these routers would need a floating IP, so we can connect to them across clouds? The routers would need an IP, yeah; if they're going to talk across the network, they're going to have to have one. They need to be able to route between their connections, but not to the next hop at the IP level: you could have a different subnet there, with different IP space. Okay, great.

So the question is, why should I move from a brokerless model like ZeroMQ to this router model? What are the typical advantages I would get out of this, especially when right now we're not aware of the performance of this router-based model at super-high scales?

Well, maybe don't. ZeroMQ is a kindred spirit in solving this problem, in eliminating queuing from RPC. And for that matter, if the broker is fine for your load, use the broker; use RabbitMQ, it's the known quantity in messaging buses. But ZeroMQ and the router are kind of different in deployment. One thing is that in the router system, the intelligence is managed by the network itself, and the endpoints are fairly dumb; it looks like a broker to them. They just connect to where they need to connect to, and the mesh takes care of forwarding, routing, and recovery. The ZeroMQ driver uses direct TCP connections, so it's got very low latency, but that intelligence is pushed to the edge, where the address mapping has to come into play. So the drivers in oslo.messaging have to keep that knowledge there, and it uses, I think, a matchmaker service for that, which may be fine; like I said, if it's working for you, it's working for you. What we envision this for is: if you have multiple disparate data centers that perhaps have overlapping IP addresses, or you expect to start small and grow to overseas or geographically distant sites, it might be a better alternative. Again, it's about your use case. Thank you.

So, one more question; there's one right over here. As clients are introduced into the mesh, or as they go down, do updates need to propagate across the whole routing infrastructure? How does that work? For example, right now a source is sending some messages to a client, and there's a way that if a message doesn't get delivered, the source automatically knows it was not delivered. But do other sources, maybe sources that had previously been talking to this client, also get notified that the client is not available anymore, so that they don't send it any more messages? Or would they have to wait, for example, to send a message, have it get lost, and then be notified? How would that work?

Well, it's the second; it's the latter. AMQP 1.0 has a concept of message credit, and it's optimistically granted in batches to producers. So if the router says this consumer downstream can take ten messages and you're given that credit, you can use all of it; you don't get feedback from the endpoint per message. And as a matter of fact, there could be multiple endpoints sharing the same address, in the case of fan-out or, more importantly, load balancing, so if one of them goes away, you wouldn't necessarily want to know about it. Anything in flight is up for grabs, as is anything yet to be transmitted.

Okay, so the concept of multicasting can also be extended to a kind of load balancing, where, for example, if you have a consumer application sitting behind an address, multiple instances would share the same identity in the network, consuming from what in the old terminology would be the same queue? So it looks like a single target? Single target, yeah. Okay. And just to add to your first question: if the target is mobile, if the target comes back, the routers will again learn that it's attached to a different router and advertise that.

My concern is that between data centers you might have an expensive link, where there's quite some criticality in that link and bandwidth is expensive. If there are also a lot of control messages going on, they would be consuming that bandwidth as well. That's just my concern.

Yep. So for that use case, if you have a target for an address in your local data center and one in a remote data center, you can bias it so that traffic will only go to the remote one in the case that the local one is not reachable, in that failure mode. So it will optimize: it's not going to send traffic across to the other data center unless it really knows it can't reach the target any other way. And no, there's not a great number of control notifications bouncing to and fro.
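As an aside on the credit mechanism mentioned a moment ago, it can be pictured as a simple per-link counter; this is a toy illustration of the idea, not the protocol implementation:

```python
# Toy illustration of AMQP 1.0 link credit: a receiver grants credit in
# batches, and a sender may use all of it without per-message feedback.
class Link:
    def __init__(self):
        self.credit = 0

    def grant(self, n):
        # Receiver side: a flow frame granting a batch of credit.
        self.credit += n

    def can_send(self):
        # Sender side: may transmit only while credit remains.
        return self.credit > 0

    def send(self, message):
        assert self.can_send()
        self.credit -= 1  # each transfer consumes one credit
        return message


link = Link()
link.grant(10)            # "this consumer can take ten messages"
while link.can_send():
    link.send('event')    # the sender drains the batch unprompted
```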
Okay, thanks. We're out of time. Apologies for the initial presentation challenges, but thank you very much. Yeah, thanks a lot.