Okay, thank you very much. This is a brief announcement, which is good in the interest of time, because we seem to be quite late on schedule, so I will try to make it even more brief and keep it relaxed. It's not going to be very technical, because these are just ideas that we had in the consensus lab, mostly my colleague Enric Moniz. I'm presenting his work; he unfortunately couldn't come, so I'm here to present the ideas he came up with on how to scale asynchronous randomized Byzantine agreement.

So what is the goal? The goal of this project of ours is a Byzantine agreement protocol that is optimally resilient to Byzantine failures, meaning we tolerate as many Byzantine nodes in the system as is theoretically possible, or at least come close to it asymptotically. We want to tolerate an adaptive adversary; many of you are already familiar with this concept, and we'll get back to it on the next slide anyway. We want it to be scalable, and we want it to be asynchronous, which, by the famous FLP result already discussed in Eleftherios's talk, means that the algorithm needs to be randomized, because deterministic algorithms for asynchronous consensus cannot exist. All right, so let's look at the system model.
I will just skim over this. We want an algorithm that works in the asynchronous system model, where we have n processes and f of them can be Byzantine, meaning they can fail arbitrarily, and we want optimal resilience: the system size only needs to satisfy n ≥ 3f + 1. Now, I said a Byzantine process can fail arbitrarily, and of course this is not quite true, because you always put some restrictions even on a Byzantine process; for example, it is computationally bounded, so it cannot invert cryptographic hashes. Also, the adversary, which does control the scheduling of messages in the system, must not be able to remove messages after the fact: the adversary cannot look at some process's message, see its content, and then decide, "Oh, I actually cancel this message." Even with these restrictions, the adversary is still pretty strong. We further assume that there is a public-key infrastructure and that any message sent from a correct process to another correct process will eventually be delivered.
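As a concrete aside (my own illustration, with hypothetical helper names, not from the talk), the optimal-resilience bound n ≥ 3f + 1 translates into a couple of one-liners:

```python
def max_byzantine_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds (optimal resilience)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """A quorum of n - f processes: any two such quorums intersect in at
    least f + 1 processes, hence in at least one correct process."""
    return n - max_byzantine_faults(n)
```

For example, a system of 7 processes tolerates 2 Byzantine faults and uses quorums of 5.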
All right, so the basic ideas we are exploring in our algorithm are basically a combination of Bracha's asynchronous Byzantine agreement algorithm and Algorand. Bracha's algorithm is an ancient algorithm, I think it's older than me actually. It is a round-based algorithm: there are asynchronous rounds, and in each round the processes try to agree in three steps, and if they don't, they flip a coin. Algorand came up with this nice idea, or I don't know whether it was first used there, but there it was definitely made popular: in order to scale, you have a big system and a committee sampled using VRFs, and each node only communicates once with the rest of the system. This counteracts the adaptive adversary: nobody knows that I'm supposed to talk, so they cannot DoS me unless they DoS everybody, and when I do talk, I've already done my job, so the adversary can then DoS me and it doesn't affect the protocol anymore, because I have nothing more to say. We combine these two approaches to get a robust multi-valued randomized agreement.

All right, just a small recap on how we want to approach this. We start from Bracha's randomized binary agreement, a protocol that operates in asynchronous rounds with three steps in each round. In the first step, I, as a node, look for a proposal that can be decided. In the second step, I check that no proposal other than mine can be decided, if anything gets decided at all. In the third step, I either confirm that, or I see that we actually still disagree and I need to update my proposal. In each step I reliably broadcast what my current proposal is and wait for others to share theirs.

My PhD advisor, well, before he was my PhD advisor, came up with a nice analogy for this in his course. Imagine people walking in a narrow corridor. This must have happened to you: you walk down the corridor, some person comes from the opposite direction, you bump into each other, you go right and the other person goes right, then you both go left, and you keep doing this. If each time you pick a random direction, eventually everybody will go to their right or to their left, and then you can pass. And this is how you actually circumvent the FLP impossibility. It ties in perfectly with what Eleftherios was saying in his talk, the first talk of the summit: the FLP impossibility states that you cannot reach deterministic consensus only if you start from disagreement, but if you start from a state where you already agree, then you can confirm that and reach consensus. And this is exactly that: we keep jumping from left to right and don't know where to go, so to break this pattern we just say, okay, I pick a random position, and when we happen to pick a consistent random position, then we have agreed, or then we can agree.

Now, the problem with this approach is that it is just binary consensus, so there are only two possible values: I go left or I go right. And it's expensive: each of these rounds involves an n-squared message exchange and potentially a coin toss, which is also an expensive operation. So that was the background; now, how do we try to tackle these issues to get a more efficient and better algorithm? These are really just the high-level ideas, because this is fresh work in progress. In Bracha's binary agreement, the coin is flipped in the so-called bivalent state, when we know that both one and zero could potentially be decided as the result of our consensus.
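To make the round structure concrete, here is a heavily idealized Python sketch of my own (not the lab's code): no Byzantine behavior, asynchrony modeled only as each process sampling n - f of the n messages per step, and the three steps collapsed into two phases.

```python
import random
from collections import Counter

def run_bracha_binary(values, f, rng, max_rounds=100):
    """Idealized Bracha-style randomized binary agreement among correct
    processes only. Each round: (1) adopt the majority of n - f collected
    proposals; (2)+(3) keep a value only on overwhelming support,
    otherwise follow a common coin. Returns (value, rounds) or (None, max_rounds)."""
    n = len(values)
    for rnd in range(1, max_rounds + 1):
        # Step 1: everyone broadcasts; adopt the majority of what arrives.
        values = [Counter(rng.sample(values, n - f)).most_common(1)[0][0]
                  for _ in range(n)]
        # Steps 2-3 (collapsed): keep a value only with > (n + f)/2 support
        # in the collected sample; in a bivalent situation, follow the coin.
        coin = rng.randint(0, 1)  # idealized common coin, same for everyone
        new_values = []
        for _ in range(n):
            top, cnt = Counter(rng.sample(values, n - f)).most_common(1)[0]
            new_values.append(top if cnt > (n + f) / 2 else coin)
        values = new_values
        if len(set(values)) == 1:  # everyone now proposes the same bit
            return values[0], rnd
    return None, max_rounds
```

With n = 7, f = 2 and a split start such as [0, 0, 0, 0, 1, 1, 1], repeated coin flips unify the proposals within a few rounds; if all processes start with the same bit, that bit is kept (validity).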
And termination is guaranteed if all correct processes get the same coin result: if I randomly choose to go to my right and the others also randomly choose to go to their right, then we can pass the corridor. I mean, this analogy could be extended to more people; eventually it will break, but this is the mental model.

All right, so what we do to go from binary consensus to multi-valued consensus is this: if multiple values can be decided and we need to flip a coin, we still flip a binary coin between one and zero, but we map the outcome. Zero means I keep whatever I was proposing, and one means I take the smallest valid proposal I've seen so far. So in the next round I adopt the smallest proposal I've seen, and eventually this converges to the smallest proposal that everybody could try to agree on, using only a binary coin flip. This is the high-level idea.

Okay, so we got from binary consensus to multi-valued consensus. How do we reduce the message complexity? Well, we try to implement the reliable broadcast using gossip, which slightly changes the safety properties of what we get as a result to probabilistic ones. And if we do need to flip a coin, we use a scalable and resilient shared-coin implementation to make that cheap as well. This is really work in progress too; the ideas are based on a paper called "Not a COINcidence" that presents an efficient shared-coin implementation.

Now, more on this scalable and resilient shared coin. The ideas improve on that existing work. Processes exchange random values, relying on verifiable random functions, and the core idea of that work, which we also adopt, is what they call a common core of processes: when they exchange
commitments to their shares, there is an ordering on them, a notion of a minimum, and with high probability the commitments that processes collect will contain the global minimum of all the values that have been proposed. So if all correct processes happen to include the minimum value, then all of them pick that value, which means they end up in a state where they can decide in the next round. This is the high-level idea of it. We also adopt the Algorand approach, where only a few randomly selected processes actually send messages and everybody listens.

So, to summarize: we use ideas from Bracha's agreement as the base structure of the protocol, and we integrate ideas from Algorand and from the "Not a COINcidence" work to make things more efficient. From Algorand we use VRFs to select random committees and circumvent the possibility of an adaptive adversary DoSing specific nodes. From the other work, "Not a COINcidence", we adopt the idea that if we choose the minimum value from the collected ones, there is a significant probability, not one, but significant, and regardless of how big the system is, that we actually happen to choose the same minimum value. So this is what we are exploring and working on, just as a brief announcement, and we'll keep you posted when we have more details on the protocols. Thank you very much.

Thanks for the talk. Do you have any initial insight about what sort of performance you'd expect, like per-round complexity, or how fast it would terminate in expectation?

In expectation it should be a constant number of rounds, because the coin itself, regardless of the size of the system, should have a constant expected success probability.

True, but you have maybe a linear number of proposals, and then maybe
different people see different ones, and you're flipping to kind of agree on the lowest one, so it seems like maybe it would be super-constant.

What is super-constant?

Bigger than constant.

Ah, yes. Well, we didn't do the proper analysis of this yet, but for the number of rounds I would still say the expectation could be constant. I guess it doesn't depend on the size of the system, because the minimum value either is in the set of what you collect or it's not, with some constant probability, and then the number of times you need to repeat it in expectation to actually get it should be constant.

The communication is not constant, though. Would it be quadratic per round?

We definitely aim for sub-quadratic per round, so something like lambda times n, the security parameter times n, maybe, or it could be n log n if you use gossip for broadcasting and everybody does that. But again, I just presented the high-level idea, and it's not even my work; I was properly briefed on it, but we're trying to figure it out at the same time.

[inaudible]

Yes, this is exactly what we are trying to figure out now. It's very interesting, thanks.
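Finally, to illustrate the "constant probability regardless of system size" point from the Q&A, here is a toy Monte Carlo sketch of my own (names hypothetical, not the protocol's). It assumes, as in the common-core idea, a set of n - f processes whose committed values every correct process is guaranteed to collect; agreement on the minimum then succeeds whenever the global minimum lands in that core, which happens with probability about (n - f)/n, roughly 2/3, independent of n:

```python
import random

def core_contains_min(n, f, rng):
    """One trial: n committed random values (idealized VRF outputs) and a
    common core of n - f indices that every process receives. If the
    globally minimal value is in the core, all processes that pick the
    minimum of their collected set agree on it."""
    values = [rng.random() for _ in range(n)]
    core = set(rng.sample(range(n), n - f))
    return min(range(n), key=values.__getitem__) in core

def success_rate(n, f, trials=2000, seed=42):
    """Empirical probability that picking the minimum yields agreement."""
    rng = random.Random(seed)
    return sum(core_contains_min(n, f, rng) for _ in range(trials)) / trials

# The rate stays near (n - f)/n, about 2/3, whether n is 10 or 1000.
```

This matches the speaker's answer: under these assumptions the per-attempt success probability does not shrink as n grows, so the expected number of coin rounds stays constant.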