Okay, so hi everybody. Welcome to the last MPC session of this Eurocrypt. We have three talks. The first one is Round-Optimal Byzantine Agreement by Diana Ghinea, Vipul Goyal, and Chen-Da Liu-Zhang, and Chen-Da is presenting. Okay. Can you hear me? Yes, okay. Yeah, so thanks for the introduction. I'm very happy to be here today in the Round Optimality session of MPC. So I'm going to spend the next 20 minutes talking about Byzantine agreement, which is a fundamental building block in the design of MPC protocols. And I'm going to focus on round efficiency, and hopefully convince you that one can achieve BA in a very efficient and yet conceptually simple manner. So what is Byzantine agreement? It is a distributed protocol among n parties, where each party has its own input, and they want to jointly compute an output with two guarantees. The first guarantee is that all the honest parties have to output the same value. And the second guarantee is that if all honest parties input the same value x, then this is the value that they should jointly output. Of course, these two guarantees need to hold even when a subset of the parties is corrupted. Achieving each of these guarantees independently is very easy, but achieving both at the same time is not trivial. So in this paper, we focus on the synchronous model: we consider the setting where parties have synchronized clocks and are connected via a complete network of point-to-point authenticated channels with some known delay. In this setting, we typically describe the protocol as proceeding in rounds: when someone sends a message in some round, it is guaranteed to be delivered before the next round. And in this synchronous model, the crucial efficiency metric is the round complexity: how many rounds do we need to achieve Byzantine agreement? So let's look at the landscape.
So we have known for a long time that for deterministic protocols, Byzantine agreement can be achieved in T + 1 rounds, where T is the corruption threshold. And this is not very good, because it means that the round complexity is linear in the number of parties for the usual thresholds n/3 and n/2. Luckily, we also know that for randomized protocols the landscape is actually much better: the longer you run a protocol, the higher the probability of achieving Byzantine agreement. On the feasibility side, we know that if you run a protocol for something like C times R rounds, where C is a constant, then there are solutions that achieve Byzantine agreement with very high probability, namely except with probability 2^-R. This was initially achieved by Feldman and Micali, and since then there has been a significant line of works improving this constant under different assumptions and different corruption thresholds. We have also known for almost 40 years the lower bound by Karlin and Yao, which says roughly that if you have a linear number of corruptions, then in R rounds any BA protocol incurs error at least of the form R^-R. So the question is apparent: can we close this gap? Can we go from 2^-R to R^-R? This was not known for any linear number of corruptions, not even for static corruption. And this is what we do in this work. We present a protocol that essentially closes this gap for any fraction of corruptions up to n/2, and we also support adaptive corruptions. Note that the threshold here is up to (1 - epsilon) times n/2, so there is still a small gap, and the constant actually depends on this epsilon.
So you can think about it as one over epsilon or something like that. Before we get into how we solve this, let me start with a little bit of background, which will explain one of the most prominent paradigms to design BA protocols. This was initially introduced by Rabin and used by many, many other people, and I refer to it as the seminal paradigm. The first thing we do is run a weak form of agreement, which I call weak consensus, also known as crusader agreement. The idea is that parties start with an input bit, and we expand the output domain to a domain of size three, where parties can output either a bit or bottom. Bottom is here to capture situations in which parties don't know what to output. So what does weak consensus achieve? Validity is exactly the same as for Byzantine agreement: if all the honest parties input the same bit, then this is the bit that they output, both for zero and for one. But consistency is weakened slightly: it is fine if some parties output zero and others bottom, and it is also fine if some parties output one and others bottom. There is a little bit of slack, but it is not allowed that some honest parties output zero while other honest parties output one. This is depicted in the picture as an array of three positions: the honest parties output a value in the green region. And believe me, weak consensus can be achieved in a very simple way, in a constant number of rounds, even deterministically. The paradigm then bootstraps from weak consensus to full Byzantine agreement via a common coin. How does this work? We first run weak consensus, each party obtains one of these three values, zero, one, or bottom, and then we flip a common coin.
And for this talk, I'm going to assume that the common coin is a completely uniform random value between zero and one, and that everyone gets the same coin value. So the paradigm works as follows. We first run weak consensus, and then we all jointly flip a coin. If you don't know what to do with your output, meaning your output of weak consensus was bottom, then you simply take the value of the coin. Otherwise, you have some confidence, so you stick to your value: if your output of weak consensus was either zero or one, you stick to it; if it was bottom, you take the coin value. And then we repeat. The effect of this paradigm is that if all honest parties started from the same input, then after weak consensus they all output that input bit, no one ever listens to the coin, and this value gets propagated throughout all iterations. And that's what we want. If parties start with different values, at least weak consistency guarantees that the outputs of honest parties lie within the green region, two consecutive slots in the picture. So it can be something like zero and bottom, and in this case, if we are lucky and the coin hits zero, then we reach agreement, because the zero guys stay at zero and the bottom guys take the value of the coin, which is zero. Therefore the probability of agreement here is at least one half. What is important is that the output value of weak consensus is independent of the value of the coin; that's why we toss the coin after weak consensus. And at least intuitively, with this paradigm the iterations are independent, and each iteration takes at least two rounds, one for weak consensus and one for the coin. So the limitation is that the best we can achieve with this paradigm is agreement with probability 1 - 2^-R within 2R rounds. So how can we go beyond this?
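The iterated paradigm just described can be sketched as a toy simulation. This is a minimal, idealized model, not the actual distributed protocol: `weak_consensus` and `common_coin` are modeled as ideal functionalities with exactly the guarantees stated above, and all names are illustrative.

```python
import random

BOTTOM = None  # models the "don't know" output of weak consensus

def weak_consensus(inputs):
    # Idealized: if all parties agree, everyone outputs that bit.
    # Otherwise, honest outputs lie in {b, BOTTOM} for a single bit b
    # (here modeled pessimistically with b = 0 chosen per party).
    if len(set(inputs)) == 1:
        return list(inputs)
    return [random.choice([0, BOTTOM]) for _ in inputs]

def iterate(inputs, rounds):
    values = list(inputs)
    for _ in range(rounds):
        values = weak_consensus(values)
        coin = random.randint(0, 1)  # common coin, flipped AFTER weak consensus
        # Confident parties keep their bit; undecided parties adopt the coin.
        values = [coin if v is BOTTOM else v for v in values]
    return values

# Validity: pre-agreement is preserved, the coin is never consulted.
print(iterate([1, 1, 1, 1], rounds=3))  # always [1, 1, 1, 1]
```

Each iteration reaches agreement with probability at least 1/2 (the coin lands on the side the confident parties hold), and once reached, agreement persists, matching the 1 - 2^-R bound after R iterations.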
So to break this barrier, we are going to use a generalized paradigm that was initially introduced by Fitzi, myself, and Loss last year. It is something slightly different. Instead of running a weak consensus protocol, we run a so-called proxcensus protocol, which is a generalized primitive: instead of expanding the domain to size three, which is what we do in weak consensus, we expand it to an output domain of size S, where S is some parameter. Proxcensus again guarantees two properties. The first one is validity: we look at the domain as positions within an array of size S, and validity guarantees that if the input is zero, then the output of proxcensus is the leftmost slot, the one in green, and if the input is one, then you get the rightmost slot. Consistency simply says that the outputs of honest parties lie within two consecutive slots; they can be anywhere in the array, but they need to be together. And indeed, this proxcensus primitive generalizes many of the weak agreement primitives we have seen in the literature, including the weak consensus I've talked about, but also others, such as graded consensus. So how can we use proxcensus to achieve Byzantine agreement? We run the proxcensus protocol, similar to the seminal paradigm, and then we flip a coin, but this time the coin is multivalued, a value between one and S minus one. Then we run an extraction procedure which, based on the output of proxcensus and the value of the coin, extracts a bit. In this picture, the extraction procedure is actually very simple: the coin just indicates an inner position where we cut the array. So for example, if the coin takes the value three, we cut the array at that position.
And if the output of proxcensus was at the left of this cut, then we interpret the output of the protocol as zero, and if your output position was at the right, then the output is one. So why does this work? First, validity: if all honest parties input the same bit, let's say zero, then by validity of proxcensus all the honest parties output the leftmost slot. And because we only cut at an inner position, it doesn't matter where you cut: all the honest parties lie on the left side, meaning all honest parties output zero. It's similar for one. Why do we have consistency? If you look at the picture, proxcensus guarantees that the outputs of honest parties lie within two consecutive slots, which means there is only one coin value that will cause disagreement; every other cut pushes all the parties to the same side. In this picture, the bad coin would be C equals two. Like this, we reach agreement except with probability one over the size of the coin domain, that is, one over S minus one. Okay, so the trick will be: can we expand far enough? Can we achieve R^R slots within R rounds? In that case, just by flipping one coin, we achieve BA and we are done; we don't need to iterate or anything, one coin is enough. And this is exactly what we do in this paper: we have a protocol that achieves an expansion to R^R slots in a very efficient way. So I'm going to talk about this expansion now. How does it work? The parties look at this huge array of R^R positions, and depending on their input, they position themselves on one side or the other. If the input is zero, they position themselves on the leftmost slot, the one in green, and if the input is one, they position themselves at the other end.
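The extraction step just described is simple enough to check numerically. Below is a toy sketch, with illustrative names: honest proxcensus outputs sit in two consecutive slots, a multivalued coin picks an inner cut, and disagreement occurs only when the cut falls exactly between the two honest slots.

```python
import random

def extract_bit(slot, coin_cut):
    # Parties left of the cut output 0, parties at or right of it output 1.
    return 0 if slot < coin_cut else 1

S = 8                       # array size (the paper targets S = R**R)
left, right = 3, 4          # honest outputs: two consecutive slots
trials = 100_000
disagree = 0
for _ in range(trials):
    c = random.randint(1, S - 1)  # coin cut at an inner position
    disagree += extract_bit(left, c) != extract_bit(right, c)

# Only the single cut c = right separates the honest parties,
# so disagreement happens with probability 1/(S-1).
print(disagree / trials)    # ~ 1/7
```

For S = R^R slots this single coin flip already gives agreement except with probability roughly R^-R, which is why no iteration is needed.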
And the idea is, okay, the honest parties are still very far away, but we do a bunch of iterations to try to bring the honest parties together; they need to be next to each other to achieve proxcensus. So how does this work exactly? Well, a first idea would be: I simply distribute my position in the array to everyone. I tell everyone I'm in the leftmost slot, and then we somehow take the average. Of course, if everyone were honest, we would immediately reach the same slot, because we would all be averaging the same set of values. This doesn't work because, obviously, some parties are cheating: they can send different values to different parties, so we take averages over different sets and end up at different positions. But then we just need to distribute the values in a bit more clever way and do the average in a bit more clever way, and this is how we solve it. So how do we distribute a value? To distribute a value, we again use a weak form of agreement primitive. It's kind of a proxcensus primitive too, but for a single sender, and you can also look at it as a kind of gradecast protocol, for those who know about this. The idea is that if the sender is honest and tries to distribute a position X, then everyone gets X. However, if he is dishonest, then some parties get some position Y and other parties get what I call Y tilde, something that looks like Y, or bottom. So the honest outputs lie within two consecutive slots, just like in proxcensus: either the honest parties get Y and Y tilde, or the honest parties get Y tilde and bottom. And what is interesting here is that the honest parties also keep track of who is cheating.
And here, basically, if you receive a Y tilde or a bottom, you know that the sender is cheating, and we are going to exploit this; that will be key in our protocol. The distribute step will also guarantee that if everyone locally recognizes the sender as corrupted, then the best the sender can do is distribute bottom. This is a guarantee of the distribute step. So how does the protocol work? First, the parties distribute the positions where they are, and then we take some sort of average. You don't need to understand exactly how this average works, but I'm going to say a few words. Essentially, you look at the values that you got. The bottom values are useless, so you just throw them away; call their senders the set C0, and you know these C0 parties are corrupted. From the remaining, non-bottom values, you take an average, but you discard the T minus C0 largest and the T minus C0 smallest. The idea is that the average over the remaining values is actually an average over a sub-interval of the honest interval that you had in the beginning. So for example, if after discarding bottom values you have this array, then you trim the top and bottom and average the remaining ones. Okay, so here is an example of the protocol. You distribute your position to everyone. You do this discarding and trimming. In the next iteration, you again distribute your new position, discard and trim, and so on. And the effect is that, because every time you average over an interval that is a subset of the initial honest interval, validity is guaranteed: if all honest parties start at zero and you average over a sub-interval of zero, you get zero. Consistency is slightly more tricky.
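The discard-and-trim average can be sketched as follows. This is a toy illustration of the rule described above, not the paper's exact procedure: the function name, the encoding of bottom as `None`, and the concrete numbers are all illustrative.

```python
BOTTOM = None  # a bottom value identifies its sender as corrupted

def trimmed_average(received, T):
    """One averaging step: drop BOTTOMs, trim T - C0 extremes per side."""
    non_bottom = sorted(v for v in received if v is not BOTTOM)
    c0 = len(received) - len(non_bottom)   # senders already caught cheating
    k = T - c0                             # extremes left to distrust
    trimmed = non_bottom[k:len(non_bottom) - k] if k > 0 else non_bottom
    return sum(trimmed) / len(trimmed)

# 7 senders, corruption threshold T = 2: one corrupted sender distributed
# BOTTOM, another injected an extreme value; honest positions lie in [10, 20].
received = [10, 12, 20, 15, 18, BOTTOM, 999]
avg = trimmed_average(received, T=2)
print(avg)  # 16.25, inside the honest interval [10, 20]
```

Trimming T - C0 values per side ensures the extreme value 999 never influences the result, so the new position stays within the honest interval, which is exactly the validity argument above.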
The point here is: how much do the parties converge? The best the adversary can do is transmit these Y tildes and bottoms, because then some honest parties take Y tilde into account while other honest parties get bottom and discard the value. That's the only way the adversary can cause a little bit of disagreement. And if you look closely, what happens is that all the honest parties then recognize the sender as corrupted, because he sent Y tilde and bottom. Therefore, in the next iteration, Pi is identified by all honest parties, and the next time he tries to distribute something, he can only distribute bottom. So you can harm the protocol a little bit, but only once, and that will be the key. The picture of the protocol is something like this: the honest parties start at very different positions in the beginning, and then they converge. The amount of convergence depends on how much the corrupted parties try to cheat, but they can only do it once. In the paper, we show that on average we can shrink the interval by a factor of one over R, and therefore achieve our result. Going back to my initial picture: we do an expansion to R^R slots, then we cut, and we achieve agreement in O(R) rounds, except with probability one over R^R. There are some concrete numbers: for small corruption thresholds, our protocol is actually pretty fast, I would say, but for large corruptions there are some hidden constants, this one over epsilon factor. Okay. Thank you. Questions? I have a question. Maybe I missed it: what assumptions are you using? Yeah, in the paper we need a really good coin, that's crucial. So we need some trusted setup for threshold signatures, or unique threshold signatures, something like that.
Okay, so another question: with the optimal threshold, is the error probability just too high? So the best threshold you can achieve for BA is n/2, right? We have this small gap, (1 - epsilon) times n/2. So please close this gap! Okay, thank you. Let's thank the speaker again. Thank you.