Hi, this is Tom. And I'm David. We are both working on a project called dbus-broker. Over the years we have always been told that D-Bus is fine, and today I want to tell you: well, D-Bus is fine. It works. Many of us use it; many of you probably use it. We use it at Red Hat for a lot of things, and a lot of our software has relied on it for many years now, for over a decade. So if we've been using D-Bus, why do we keep talking about it, and why do we want to talk to you about it today? To explain that, I want to start with a short demo.

Imagine you use the tool dconf. dconf is a very simple tool to set, retrieve, and query global variables. In this example we use our terminal to write the value 2017 into a variable, the All Systems Go! year, and then we can use dconf to query it as well, and it returns the value. Now fast-forward one year. We of course want to update that variable to 2018, but this time we want to use the underlying bus call that dconf uses for writes, a bus call to the dconf service. Here we have an example where we use a command-line tool to do the same thing: we call a method called Change on dconf. Most of the upper part can be ignored; as you see in the last line, we set the variable to the year 2018. To verify it, we try to read it with the old tool, and we notice it wasn't updated. We don't know why; if we look into it, everything works. So we just try again: we run the same busctl call, read the value again, and suddenly it appears. If you try to reproduce this on the command line, it's very easy to reproduce.
So what happened here? For some reason the message did not reach the dconf service. This is of course unexpected, and of course this is not what dbus-daemon does all the time; there is some special case in which dbus-daemon drops that message, and this is a very crafted example. But it's still unexpected. The keen reader might notice that we have one line that says expect-reply=false. That's the D-Bus feature that says: I'm sending a method call, but I'm not interested in the reply. Usually it just means the other side doesn't have to send a reply, so you save a few cycles and don't have to assemble a reply. But with this flag we were able to craft a situation where, for no particular reason, dbus-daemon decided to just spuriously drop the message. This is definitely against intuition, and it makes for an edge case that is hard to predict and hard to deal with, because there is really no good reason to drop this message. Edge cases like this have existed in dbus-daemon for a long time, and people have dealt with them in the past. Sure, some of them are edge cases for some people, but other people may end up debugging some weird situation for two hours just because they hit that edge case.

And the thing is, this is not a problem inherent to the D-Bus specification. It is just an edge case we found in dbus-daemon, the reference implementation. For this reason, and many other reasons like it, we decided to investigate whether we could write a replacement for the reference implementation that doesn't have the same problems.
So we set out to find the right design principles that we could use to still implement the same specification, still be a drop-in replacement, but get around some of the issues we discovered here.

First, one thing we want to achieve is deterministic behavior. What is really happening in this example is a race condition. If you are in the very special situation that you have an activatable service that is not yet activated, you send a message to it, and the client that sent the message disconnects from the daemon before the target service has been started, then that message is dropped. So there is timing involved: if you had waited a bit longer before the command-line tool exited, it would have worked. This is not documented in the specification, obviously, and it would take quite a lot of effort to figure out why on earth this random message was randomly dropped. This kind of thing we obviously don't want: you perform an action on the dbus-daemon, then afterwards you do something else, unrelated, and it affects the result of what you did.

Moreover, we really don't want to drop messages silently. By that I mean that if one peer sends a message to another peer and for whatever reason it doesn't go through, then either the destination or the sender should be notified about the problem. I'm not talking about writing some message to a log; one of the two parties should know that something was dropped. We are not on the network, sending UDP packets where you know that, of course, packets can be dropped. We are on the local machine; there is really no excuse for packets to just go missing with nobody told about it.
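The activation race just described can be made concrete with a toy model. This is a hedged sketch in Python, not actual dbus-daemon or dbus-broker code; the class, the method names, and the queueing details are invented purely to illustrate how the outcome depends on whether the sender is still connected when activation finishes.

```python
# Toy model of the activation race: a message sent to a not-yet-activated
# service is parked until the service starts; the old dbus-daemon behavior
# was to silently discard parked messages whose sender disconnected first.

class ToyBroker:
    def __init__(self):
        self.activation_queue = []   # messages waiting for the service to start
        self.delivered = []          # messages the service actually received

    def send_to_activatable(self, sender, message):
        # Service is not running yet: park the message, activation begins.
        self.activation_queue.append((sender, message))

    def sender_disconnected(self, sender):
        # The edge case: parked messages from a now-gone sender are dropped,
        # and neither side is told about it.
        self.activation_queue = [
            (s, m) for (s, m) in self.activation_queue if s != sender
        ]

    def service_activated(self):
        # Activation finished: flush whatever is still parked.
        self.delivered.extend(m for (_, m) in self.activation_queue)
        self.activation_queue.clear()

broker = ToyBroker()

# Fast path: the command-line tool exits before the service finished starting.
broker.send_to_activatable("busctl", "Change(year=2018)")
broker.sender_disconnected("busctl")
broker.service_activated()
print(broker.delivered)   # [] -- the write was silently lost

# Slow path: the sender stays connected long enough.
broker.send_to_activatable("busctl", "Change(year=2018)")
broker.service_activated()
print(broker.delivered)   # ['Change(year=2018)'] -- now it arrives
```

The only difference between the two runs is how long the sender stays connected, which is exactly why the demo succeeds on the second attempt.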
We have limited resources on our machines, so it can of course happen that we just don't have the resources to perform some action, like sending a message. Sometimes messages must be dropped; the point of the earlier principle is just not to drop them silently. So say you are doing a method call on somebody, and there is just not enough memory available to do it; you have to figure out what to do. If you sent a method call, that's simple: the daemon can reply to the sender that it is out of memory, or out of its quota, and can't do it. Straightforward. But what if you received a method call, you send a reply to it, and then there are not enough resources to transmit the reply? This is different. What happens with dbus-daemon is that the daemon tells you that your reply failed to be transmitted. But that is not really what you want, because firstly, nobody is listening for the daemon to tell them about replies to replies, and secondly, it wasn't really your fault: somebody else triggered you to send the reply, and they were not able to receive it. So what we do is: when an action happens on the broker, we figure out which user or which peer is responsible for the action, and we blame, or account, the responsible user, not just the one that performed the action. If a reply is being sent, it's the destination that's responsible; if a method call is being sent, it's the sender that's responsible.

Lastly: the D-Bus daemon, or dbus-broker, keeps state about all the peers connected to the bus. There can be matches they have installed because they are interested in broadcasts, there can be outstanding method-call replies, and so on and so forth. And what we want
to make sure is that if two peers are talking to each other, the state of independent peers that have nothing to do with it does not affect things like performance. Let's look at an example. Again we use a simple command-line tool to make a method call: we call the Ping method on systemd. This is a standard method that most D-Bus clients implement, and we're just picking systemd here at random; it makes no difference which daemon we talk to. So you send the call with busctl and it returns: you send a Ping and you get a Pong back. It replies; it's just a simple echo test. Now, we made a little client that demonstrates our little problem. We call our own client, named noise, and we pass it the name we want to send noise to, basically. The point here is that this could be running as a different, unprivileged user. There is nothing interesting going on here, really; we are just doing some action on the bus, targeted at systemd. Now we try the same thing again, we call Ping again, and this time it doesn't return. Somehow this client, which is unprivileged, running as a separate user, and should not be able to affect the interaction between our user and systemd, is able to stop systemd from replying. We just have to cancel the call. And it's not just this one method call: with this test client we are able to basically mute systemd, or any client on the bus; we can stop them from sending any messages at all. That's bad, right? An unprivileged client that sends messages targeted at some other client should not be able to make it stop sending anything at all. So, to go back to what's actually happening here:
noise is doing the same thing: it sends Ping to systemd, but it never reads out the replies. So we send Ping to systemd, systemd answers, but we don't read the answer, and the buffers inside the daemon grow. Because dbus-daemon does not distinguish between method calls and method replies in its accounting, the peer that gets blamed for the growing buffer is the wrong one: dbus-daemon concludes that systemd is sending too many messages, so it blocks systemd from sending any further messages. In dbus-broker we do it the other way around: we track who is responsible, and we tell that peer that it is no longer allowed to send messages, while everything else keeps working just fine and unrelated peers are not affected.

So the dbus-broker project is our alternative to dbus-daemon, where we try to follow the principles Tom explained, plus some more principles which we discussed in detail in our DevConf talk this year. If you're interested in why we picked these principles and how we follow them, you're more than welcome to ask us or to look at that talk. dbus-broker today is ready to be used. There is a Fedora change request, it has been accepted, and dbus-broker will become the default in Fedora 30, as it is scheduled right now. You can already use it: on our web page, at the end, there are instructions. It is as simple as installing the package and running one systemctl enable command, and it should work as a drop-in replacement for dbus-daemon with no observable differences. If there are bugs, you're more than welcome to report them to us.

As the last part of this talk, we want to show some of the benchmarks we did.
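The accounting difference behind the noise demo can be sketched with a toy model. This is illustrative Python, not the broker's real quota code; the quota value, the peer names, and the counters are all made up, purely to show how blaming the sender of a reply versus the peer responsible for it picks a different victim.

```python
# Toy model of the two accounting policies. Replies to noise pile up inside
# the daemon because noise never reads them. dbus-daemon charged the queued
# data to the peer that *sent* each message (systemd), so systemd got
# throttled; dbus-broker charges it to the peer *responsible* for it (the
# caller the reply is addressed to), so only noise gets throttled.

QUOTA = 3  # max queued replies per accounted peer (made-up number)

def run(policy):
    queued = {"systemd": 0, "noise": 0}   # queued replies accounted per peer
    muted = set()
    for _ in range(5):                     # noise sends 5 pings, reads nothing
        if "systemd" in muted:
            break                          # systemd may no longer reply at all
        # systemd's reply to noise sits unread in the daemon; who pays for it?
        # (For simplicity the pings keep flowing; only the target differs.)
        accounted = "systemd" if policy == "blame-sender" else "noise"
        queued[accounted] += 1
        if queued[accounted] > QUOTA:
            muted.add(accounted)
    return muted

print(run("blame-sender"))       # {'systemd'}: dbus-daemon mutes the victim
print(run("blame-responsible"))  # {'noise'}: dbus-broker mutes the culprit
```

The same five unread replies accumulate in both runs; only the accounting target changes, and with it, which peer ends up muted.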
So when we had the final implementation of dbus-broker, the most basic benchmark we could do was: send a unicast from one client to another and measure how long it takes. The second one is a pipelined call, where we send many method calls without waiting for them to complete. And the third one is a round trip, where we send a message and wait for the reply, so two messages are sent. In all the benchmarks we did, we observed a two to three times speed-up compared to dbus-daemon. Other than that, the basic benchmarks don't show any algorithmic change behind the speed-up: in all three cases we are observably about three times faster than dbus-daemon, which surprised us and made us a bit happy.

The next example was just connecting to the system bus. We have two benchmarks again: one where we just connect to dbus-daemon or dbus-broker, and a second one that also includes sending a first message, because a lot of what people do with D-Bus is write a command-line application, or something similar, that just connects, sends one call or a bunch of calls, and disconnects again. And again, we see a three times speed-up with dbus-broker compared to dbus-daemon.

But beyond raw performance, as we said before, we want to make sure that everything on the bus scales properly, so that things that do not affect your operation in a semantic sense also don't affect it when the messages actually run. So, two more benchmarks. We created a very simple measurement of a single message, but varied what kind of background state the daemon holds. In one example we installed a lot of matches for objects on the system bus, kept increasing the number of matches, and then looked at how long it takes to send a single broadcast that doesn't match any of them. We tried to use a common
case. So we don't try to construct a fake example that would never happen; instead, we imagine the case where many objects exist on the system. Of course people install matches for those objects, but there might be an event firing for only one of them. As it turned out, dbus-daemon scales linearly with the number of matches you install on these different object paths, even though, when you send one broadcast, it affects at most one of those matches. In dbus-broker we made sure it always scales: in the plot it looks constant, but it actually scales logarithmically. So on the system bus you can install matches that are unrelated to a specific broadcast: on dbus-broker they won't affect the behavior, while on dbus-daemon, in quite a lot of cases, they affect the behavior linearly.

Another example: instead of installing many matches in the background, we made a lot of outstanding method calls. So there are many clients running in the background which just send method calls and wait for the replies, and then we send a single method call and measure how long that one method call takes. Again we see linear behavior for dbus-daemon, and constant behavior for dbus-broker; in this case it really is constant. It means that regardless of how many other outstanding method calls there are in the background, if you send one method call, it completes in constant time on dbus-broker. With one caveat: we only measure the CPU time of that one call, and the other method calls are not running in parallel.
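The match-rule scaling difference can be illustrated with a toy lookup model. This is a hedged sketch: per the talk, dbus-broker's lookup is logarithmic (suggesting ordered lookup structures), while a Python dict gives constant-time buckets instead, but the point, that unrelated matches stop costing per-broadcast work, is the same. The paths and counts here are invented.

```python
# Toy sketch of broadcast match lookup. A daemon that walks its full match
# list per broadcast does work linear in the number of installed matches;
# indexing matches by object path means unrelated matches cost nothing.

from collections import defaultdict

matches = [f"/org/example/object{i}" for i in range(10_000)]
broadcast_path = "/org/example/object123"

# Linear policy: compare the broadcast against every installed match.
checked_linear = 0
hits = []
for path in matches:
    checked_linear += 1
    if path == broadcast_path:
        hits.append(path)

# Indexed policy: bucket matches by path, then look at one bucket only.
index = defaultdict(list)
for path in matches:
    index[path].append(path)
checked_indexed = len(index[broadcast_path])

print(checked_linear)    # 10000 -- grows with every unrelated match
print(checked_indexed)   # 1     -- independent of the unrelated matches
print(hits == index[broadcast_path])  # True: same delivery result either way
```

Both policies deliver the broadcast to the same subscriber; only the amount of work spent ignoring the other 9,999 matches differs, which is the linear-versus-flat behavior in the benchmark plots.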
They're just outstanding: we started them beforehand, and then measured only the time used for that one method call.

All of these benchmarks show how we try to follow our principles: things run independently of each other, state does not leak between unrelated peers, and we don't drop messages spuriously. We believe this is crucial to avoid nasty errors that you would otherwise have to debug for a long time. That's why we wrote the dbus-broker project, why we believe it is ready to be used today, and why it will be used in Fedora 30. We have already heard from several people who used it in their test environments, and we are more than happy to hear further reports of people deploying it.

Q: Hi, I just wanted to ask: when you say you rewrote the daemon, does that mean it's a daemon rewrite using libdbus, or did you write the whole thing?

A: No, I don't think there is any shared code between dbus-broker and the dbus project. libdbus is part of the dbus repository, and a lot of the details of the daemon implementation are actually part of the libdbus implementation; for instance, accounting and so on is all done in libdbus for dbus-daemon, so if you wanted to change that, you would actually have to change libdbus. Furthermore, libdbus and dbus-daemon use a private API which is not accessible to external projects. So no, there is currently no shared code between the two projects.

Q: There is some upstream work going on in dbus-daemon to implement container support, which can be used for Flatpak portals and other container use cases. What are your plans for that in dbus-broker?
A: Our plan is to implement everything that is in the spec; that is our policy. Whenever something is in the spec, we will adhere to it, even where we believe it breaks our guarantees. Right now there are things the spec allows that would break some of our principles, but we follow them anyway, because they are in the spec. We regularly comment on any issues on the upstream bug tracker that we are aware of, and try to make sure that the people working on the upstream implementation know our position. For the container work, I think we commented on all of it; we are basically fine with the idea and the concept. There are some details: for instance, Tom commented that it should run as a separate bus name rather than on the same bus name, and the other major one was probably about object paths. Otherwise we are okay with it, and I think we stated all of that in the public comments.

Q: But you haven't started implementing it?

A: No. So far the comment from upstream was that they are still not sure whether this is the final draft, so we did not hurry to implement it; but it is not something we are opposed to.

Q: I have a question. With dbus-daemon there is a problem with restarting the daemon, in that many clients are not able to survive the disconnect from the daemon. I wonder if some form of transparent restart is tackled by dbus-broker?

A: Currently we do not support that. There is an open issue, opened some time ago in our issue tracker, where somebody explained the wish for this, and in principle we think it would be nice for that to be possible.
We would like to be able to do it, but at the same time it would require a huge amount of work, so it is not high on our list of priorities. We wouldn't be opposed if somebody turned up and made it happen, but at the same time I don't see it happening any time soon. Thanks.

Q: There is a weakness in how D-Bus handles idle-exit of auto-started services: there is a race condition where you try to idle-exit, but someone has queued you a message and you haven't received it yet. Did you ever look at fixing that?

A: Yes, you mean idle-exit. There is actually an RFC in the D-Bus bug tracker where we proposed something for this. The idea is basically to get something similar to socket activation, but for D-Bus: to avoid name activation and instead hand the service a socket that is pre-connected and fully set up, and do normal socket activation of services. That would be our preferred proposal. It does change the semantics quite a bit, and it doesn't make it trivial for other services to adapt. I do remember that we talked about this quite a lot during the kdbus days; I don't exactly remember what our conclusions were, but right now we don't behave any differently from dbus-daemon there. I think the conclusion was that, as far as we have figured out, with the current D-Bus specification it is not possible without extending it. So we need some extension, and we have proposed extensions, but there is nothing we could just magically fix.

Any other questions?

Q: Have you thought about taking more of a part in the spec process and its maintenance?

A: Yes, that is something we would like to do, and we have participated a bit already.

Q: You have definitely participated quite a bit, but I mean taking more ownership, maybe.

A: That would be, yes, absolutely something we are interested in.
Oh, I'm interested in that this Don't want to like put words into your mouth, but uh, thanks You