Hello, I'm Lefteris. I work a lot on consensus protocols, and at some point we realized that all these protocols assume a setup that consensus needs in order to finish. So we started looking into what happens when that setup is bad: when your adversary tries to actually not let you finish, or tries to actually get your secret key. I'm going to present basically a series of papers that we have done in the last four years, more or less. I work at Mysten Labs and I'm also a professor at IST Austria. So, this is not the main innovation, but I want to give you a bit of an idea of how we started. I would really start with the idea of randomness. What we really cared about was scalable consensus algorithms. If you look at the work done in the last few years, you always see the assumption that you have a (2f+1)-threshold secret-shared key that the consensus uses in order to reduce its communication complexity. Whether this is HotStuff, which gets you to linear efficiency, or Jolteon, a newer protocol that is deployed in a couple of systems, or, in full asynchrony, the state-of-the-art VABA, which gets you to n squared efficiency, which is actually the lower bound. The problem is that if your setup is bad, things are really bad: you get broken safety, the adversary can impersonate the whole consensus, the whole blockchain, to a client that doesn't know better, give completely bogus answers, and double-spend anything they like. So what we did in our series of works is fix the setup instead, and get it down to n cubed, which we think might be the lower bound, but we haven't really proved it. And because it's decentralized, you can't really have a bad setup: you actually get security end to end, from the beginning, where you know the validators, all the way to the end. And maybe, if I manage to fit it in, I will show you how we can even refresh our setup very efficiently, in n squared steps, right? So, I guess most of you who work on randomness already know how secret sharing works, but let's go over it at a very high level. The idea of secret sharing is that you have some secret, say a secret key, and you want to give shares, parts of the secret, to multiple parties. How do you do it? Well, you choose a threshold, call it t. You pick a polynomial of degree t and make sure that the polynomial evaluates to the secret at the point zero. Then you just evaluate this polynomial at different points and give those evaluations to your parties. As a result, thanks to Lagrange, we know how to take those points, interpolate the polynomial back once we have enough of them, and recover the secret key, okay? So that's the secret, and these are the shares. The shares can be as many as you like; you don't need exactly as many as the degree of the polynomial, you just need one more than the degree to recover the secret. So we start with plain secret sharing, which works, but it assumes an honest dealer. With a dishonest dealer, you have no idea whether the secret is completely leaked, or whether a well-defined secret even exists. Then we get verifiable secret sharing, where we add cryptographic commitments to the secrets; this lets us check that the shares are correct and know exactly what to interpolate.
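To make the polynomial picture concrete, here is a minimal sketch of Shamir secret sharing over a prime field. It is a toy of mine, not code from the talk: the prime, the threshold, and the number of parties are illustrative choices.

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n):
    """Shamir-share `secret` with a degree-t polynomial: any t+1 shares recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]  # f(0) = secret
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 to get the secret back."""
    secret = 0
    for xi, yi in shares:
        lam = 1  # the Lagrange coefficient of share (xi, yi) at x = 0
        for xj, _ in shares:
            if xj != xi:
                lam = lam * (-xj) % P * pow(xi - xj, -1, P) % P
        secret = (secret + yi * lam) % P
    return secret

shares = share(secret=42, t=2, n=5)                    # 5 shares, any 3 recover
assert recover(shares[:3]) == recover(shares[2:]) == 42
```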
And if we get into asynchrony, there is even asynchronous verifiable secret sharing, with multiple implementations by now. The way it works is that you also share the secret shares, so parties that did not get their secret shares at the beginning of the protocol can still ask around and collect them, okay? And once you have this amazing thing, you can make your secret the private key of a public key, and then you can do cool stuff like threshold signatures that are constant size. You can have a whole group, say 100 parties, and if they all agree, or a threshold of them that you fixed at the beginning agrees, you can produce a constant-size signature saying: those parties have signed this statement. And even cooler, if you use a unique signature scheme like BLS or RSA, then because the signature is deterministic, you can use it as randomness. The signature is unforgeable, so you cannot predict it before you see it, and as a result you can actually use it as a random number. Why is this cool? Well, let's see how a very simple consensus protocol, something like HotStuff, works when you have a leader party. The leader asks the rest of the parties: hey, can you commit to this statement, this block? The parties check that everything is consistent and the transactions are valid; cool, so each one signs back with its own signature. Now the leader collects this proof, which is basically a concatenation of signatures, so linear size, and broadcasts it to convince everyone that this is now a block we can append to our blockchain. The problem is that the proof is linear size, so if you broadcast it to everyone you're already at n squared, so not really that efficient, especially in partial synchrony. So what we do if we have a threshold signature scheme is very simple. You first secret-share the key of the group, and then the parties just sign back using their partial shares, the x_i's. And if you have enough of them, for consensus the quorum is usually two-thirds, you can interpolate them, get a constant-size signature under the group's secret key x_0, and send it to everyone; now it is just one message. So even if you broadcast it to everyone, that's linear, and that's really the best you can get for consensus when you want n parties to replicate a message; you can't really go below that.
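Here is a toy sketch of that partial-signature interpolation, which is also where the shared coin comes from when the scheme is unique like BLS. Real systems use BLS over pairing groups so the partials are publicly verifiable; this stand-in uses a small prime-order subgroup of the integers mod q, so every constant is illustrative and nothing here is secure.

```python
import hashlib, random

Q, R = 1907, 953           # Q = 2R + 1: squares in Z_Q* form a group of prime order R
G = 4                      # 2^2 mod Q, a generator of that order-R subgroup

def shamir(secret, t, n):
    """Shamir shares over Z_R: a degree-t polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(R) for _ in range(t)]
    return [(x, sum(c * pow(x, k, R) for k, c in enumerate(coeffs)) % R)
            for x in range(1, n + 1)]

def hash_to_group(msg):
    """Toy hash-to-group: H(m) = G^{sha256(m)}."""
    return pow(G, int(hashlib.sha256(msg.encode()).hexdigest(), 16) % R, Q)

def combine(sigs):
    """Lagrange interpolation 'in the exponent': prod sigma_i^{lambda_i} = H(m)^s."""
    out = 1
    for xi, si in sigs:
        lam = 1  # Lagrange coefficient of point xi at x = 0, computed mod R
        for xj, _ in sigs:
            if xj != xi:
                lam = lam * (-xj) % R * pow(xi - xj, -1, R) % R
        out = out * pow(si, lam, Q) % Q
    return out

s = 123                                        # the group's secret key
shares = shamir(s, t=1, n=4)                   # threshold 2-of-4
h = hash_to_group("block 42")
sigs = [(x, pow(h, y, Q)) for x, y in shares]  # partial signatures sigma_i = H(m)^{s_i}
assert combine(sigs[:2]) == combine(sigs[2:]) == pow(h, s, Q)
```

Any two partials here combine to the same deterministic value H(m)^s; since no one can compute it before seeing enough partial signatures, it doubles as an unpredictable random coin.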
Okay, the problem, as I said: safety is broken the moment your setup is broken, because if the adversary knows the secret key, they can just claim the committee decided to commit to a different block, and double-spend whatever they feel like. Cool. Now, why do we also like this in asynchrony? In asynchrony we get the same compaction as before, so we can reduce the communication complexity by a factor of n, but this randomness also helps us circumvent the famous FLP impossibility, which more or less says that in asynchrony you cannot terminate consensus if your protocol is deterministic. So how do we circumvent it? Well, you can get a random coin, and these threshold signatures are also random coins, if you like to look at them that way. The coin makes sure the adversary has no idea who the proposer is until it has already had to commit to a set of proposers, and as a result you can get liveness. Now, why is this bad if your setup is compromised? Safety is broken as before, but liveness is broken too. And the problem with liveness being broken is that in asynchrony you can never really know that your liveness is broken. All you can say is: maybe I'm unlucky and my randomness was not as good as I expected, so I will keep trying. And I keep trying, and I can never say: let's stop and check that everything is fine. If liveness is broken, you can't tell until the end of time. So quite bad as well. How do we fix these protocols? We use the idea of distributed key generation. Instead of having one party share a secret, every party shares a secret: P1 shares a secret, P2 shares another secret, P3, P4; every party shares its own. You can add up the secret shares into a combined secret and say: as long as one of the parties was honest, its randomness randomized the full result, and so our setup is not compromised. Cool. The problem comes when you go to asynchrony, because in asynchrony you have faults. So you can only wait for n-f parties, because if I try to wait for more in asynchrony, I have no idea whether the rest will ever message me. And now you get this problematic thing: you have to decide which n-f parties to wait for. I can wait for one set, someone else can wait for another set; we end up with different n-f parties. If we say we terminate now, well, we no longer have the same secret, and as a result we can no longer do anything that makes sense. So it seems we need to reach consensus just to agree on which set of n-f parties we're going to use in order to bootstrap our asynchronous consensus, which is kind of circular. One way around it is to say: I don't care about asynchrony, it's a very theoretical model, nobody cares; I will just assume the system is synchronous once in a while. A fair assumption; we can talk offline about whether it actually holds all the time. If you assume that, you can do something like this amazing work from 2009, where you basically just take a consensus protocol, it used to be something like PBFT, now you can take something like HotStuff, and use it in partial synchrony to terminate the DKG. So it works, and it is not that bad, but you need to assume weak synchrony, which, as I said, may be good or bad, but it's not really interesting when you do research; you want to do the hard thing. And it also only allows for thresholds of f+1, while to get the compaction of consensus you need thresholds of 2f+1, because that is the quorum, the two-thirds, not the one-third, okay? So this is the starting point, but we want to do better. And to do better, we really had to look into what a trusted setup gives you and what consensus actually needs. A trusted setup gives you a secret key whose privacy is safe against 2f parties: even with 2f compromises, your key is not leaked. But in consensus we already know that with f+1 compromises you are completely screwed, you get forks, so it really doesn't make sense to protect that hard. On the other hand, the key is live with n-f parties, which also matches what consensus needs. So we looked at whether we can weaken the setup, so that it no longer needs to be trusted, to match consensus, and this is basically what allows us to get to a functional ADKG. The first work where we tried to do that was back in 2019-2020.
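Coming back to the core DKG move for a second (every party deals a secret, and each party adds up the shares it received): it works because Shamir sharing is linear. A sketch, reusing share(), recover(), and the prime P from the Shamir snippet above:

```python
import random

n, t = 4, 1
dealer_secrets = [random.randrange(P) for _ in range(n)]
dealings = [share(s, t, n) for s in dealer_secrets]  # dealings[d][i]: party i+1's share from dealer d

# each party locally adds up the shares it received from every dealer
combined = [(i + 1, sum(dealings[d][i][1] for d in range(n)) % P)
            for i in range(n)]

# the sums form a sharing of the sum of all dealt secrets, so one honest
# (uniformly random) contribution is enough to randomize the whole key
assert recover(combined[:t + 1]) == sum(dealer_secrets) % P
```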
And the key insight was really: let's try to get these different thresholds. And we had to build multiple building blocks under these assumptions. The first building block is what we call high-threshold asynchronous verifiable secret sharing. High threshold because it allows you to generate keys with a reconstruction threshold of 2f+1. But this is really a threshold for signing: if f+1 nodes are compromised at the setup phase, you are already lost. And this is basically why it works, because if you tried to also get a higher privacy threshold, you would not be live. So our key is private as long as at most f parties are malicious, and our key is live if 2f+1 parties are honest: exactly matching the quorum-intersection rules of consensus. How does it work? More or less like most secret sharing protocols. First you have a secret. You do some secret sharing, verifiable secret sharing in our case, and you get n secret shares. Now you say: okay, but this is not enough; we might have slow parties that need to recover their shares. Because you want a 2f+1 threshold on this first secret, you need basically three out of four of the secret shares to recover it; so you go and reshare each share. The key here is that unlike the initial secret, which has a recovery threshold of 2f+1, the shares are reshared with a recovery threshold of f+1. As a result, you just need f+1 parties to help some slow party get its share of the secret. So let's see an example. We have four parties, P1, P2, P3, P4. P3 is malicious; P4 is asleep, so it has no idea what's happening. Okay, so we secret-share the secret. P1 and P2 say: okay, yes, we've terminated. P3 also says it has terminated, but it will not participate later. So what we have is these shares s1, s2, s3, and these sub-shares, the y's, okay? Now P4 wakes up and says: oh, I need to recover my share. I ask P1 and get y_{4,1}; I ask P2 and get y_{4,2}. If the sub-share threshold were 2f+1, this would not be enough, right? It would not be live. But luckily, because we have reduced the threshold of the shares of the shares, this is enough. We can recover s4, and now we actually have three out of four of the shares and can recover the secret. And this kind of allows us to increase the threshold to 2f+1 even for protocols like the initial, partially synchronous one: just using this high-threshold AVSS together with the protocol from Kate and Goldberg, we can get a setup for HotStuff or Jolteon that is actually n cubed log n.
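The example above, as code, again reusing share() and recover() from the Shamir sketch. The secret lives on a degree-2f polynomial (2f+1 shares reconstruct it), while every share is reshared on a degree-f polynomial, so a slow party needs only f+1 helpers to get its share back:

```python
f, n = 1, 4
top = share(secret=7, t=2 * f, n=n)          # shares s1..s4; 2f+1 = 3 recover the secret
sub = [share(y, t=f, n=n) for _, y in top]   # reshare each s_i into sub-shares y_{i,j}

# P4 was asleep: f+1 = 2 helpers hand over their sub-shares of s4
s4 = recover(sub[3][:f + 1])
assert (4, s4) == top[3]                     # P4 got its original share back

# with s4 recovered, 2f+1 shares exist again and the secret is recoverable
assert recover([top[0], top[1], (4, s4)]) == 7
```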
So cool, we moved on, but again, we really want to get into the hard problems here, and this is where it gets really interesting. I can't really go deep into the paper, because then I would take an hour for just one paper, but the basic idea is that once we have this AVSS, we start building blocks on top of it. The first building block we do over AVSS is what we call a weak distributed key generation. It is weak because it might never terminate in asynchrony: you always kind of think it has terminated, and then you might be surprised and find there is more, and you keep going. It works very simply. Every party runs an AVSS protocol: P1, P2, P3. Once I have seen at least f+1 parties terminate their secret sharing phase, I say: okay, I'm willing to run this weak DKG with, say, P1 and P3, the ones I have seen terminate. But it can be that later I also see P2 terminating, so I update my proposal: now P1, P2 and P3 have all terminated, so I can use the new key with three secrets inside. Every party broadcasts such proposals, and if you see 2f+1 parties with a matching set, P1, P2, P3 in our case, you say: okay, I'm going to try to run consensus with this key that contains the secrets of P1, P2 and P3. But I might update my proposal later: while I'm trying to run consensus with the P1, P2, P3 key, I may receive another termination, so I say: okay, now I also want to try with P1, P2, P3, P4, right? This can keep growing. The only limitation is that you can only announce larger key sets: you cannot announce P1, P2, P3, then go back to P1, P3, then forward again to P1, P4. The set of secrets in your key has to be increasing, and this is what eventually lets us converge. Why does this work? Because if I'm honest and I have seen P1 terminate its asynchronous secret sharing, then by the totality property of the protocol, everyone will eventually see the same thing. So they will eventually propose the same set, and as a result we will eventually converge. All these eventuallys can stretch to eternity, though; that's why it is weak, it is not guaranteed to terminate.
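A sketch of that monotone announcement rule of the weak DKG, as I read it from the talk; the class and method names are my own, not the paper's:

```python
class WeakDKGProposer:
    """Tracks terminated sharings and only ever announces growing key sets."""

    def __init__(self, f):
        self.f = f
        self.terminated = set()   # dealers whose AVSS this party saw finish
        self.announced = set()    # union of every set announced so far

    def on_avss_terminated(self, dealer):
        self.terminated.add(dealer)

    def propose(self):
        """Announce a key set once more than f sharings terminated; never shrink it."""
        if len(self.terminated) <= self.f:
            return None
        self.announced |= self.terminated      # supersets only
        return set(self.announced)

p = WeakDKGProposer(f=1)
p.on_avss_terminated(1); p.on_avss_terminated(3)
print(p.propose())        # {1, 3}: run the weak DKG with P1 and P3
p.on_avss_terminated(2)
print(p.propose())        # {1, 2, 3}: a strictly larger key, never a smaller one
```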
So let me jump over the rest of that paper and look a bit at the remaining pieces. The second key piece is an eventually perfect common coin, which basically uses the weak DKG output to flip a coin. If everyone agrees on the coin, we can terminate consensus; if we disagree, we try again, and the nice thing is that, the way it works, we can disagree at most f+1 times. Okay, so let's move forward. For this to work, because we have disagreements, we need a binary agreement protocol that can handle disagreement. Fortunately there is a ton of research in distributed computing on binary agreement, and the one we could use was the one from Mostéfaoui et al., which has n squared communication complexity. So: n squared communication per binary agreement, run for n parties in order to agree that each party has terminated its secret sharing, gives n cubed; times the f+1 rounds in which we can disagree gets you to n to the fourth. You can play around a bit with error-correcting codes to get down to n cubed log n, but that's all. And actually, if you plug it into VABA, you still only get n to the fourth, because the weak DKG can't go lower. So cool, but it's not really that efficient. So we tried to get practical solutions, and once you know the problem can be solved, you have the space to start actually solving it well; the first question was really whether it could be solved at all. So what changed in this practical ADKG: there are two things we had to change inside the black boxes to reduce the communication complexity to n cubed. One is that the two-dimensional secret sharing I described is quite inefficient, because you need two dimensions of sharing, each of order n, and you have to broadcast the commitments, so you can't really get much below n cubed log n even with a lot of optimizations. So instead, we managed to reduce it to n squared per sharing, n cubed in total, by using somewhat fancier cryptography: you encrypt and then prove your secret shares, and use zero-knowledge proofs to show that the degree of your polynomial is the one you want. Check the paper to see exactly how it's done; it's not novel cryptography, you can find it in other fancy crypto papers. The second thing we had to get rid of was the fact that our weak DKG is kind of too weak. We could try to binary-search over the f disagreements to converge in log n of them; we tried a bit, and I think it's possible, we just didn't persist, but you would still need a lot of rounds of disagreement. So we completely changed our approach. And again, all the time we want to answer this question: how are we going to bootstrap our consensus? So how does the practical ADKG work? Again, n parallel secret sharings, as always, but now, instead of trying to figure out which common key we are converging on, each party just announces a belief. Node one says: I believe one and two terminated correctly. Node two says: I believe two and three terminated correctly. Everyone chooses a set of f+1 parties that they believe terminated correctly, and broadcasts it to everyone else, committing: this is what I believe, okay? Once everyone broadcasts, every node has a view of what everyone else believes. So node one knows its own belief, that {1, 2} is the correct key set, let's say; node two has received the belief of node one and has its own; node three has also received the belief of node one and has its own. Of course, parties can be malicious, so they may not broadcast correctly. So now every party has announced what it believes is a good key to use, and instead of trying to converge on a single key, we run n binary agreements: for each node, we agree on whether that node has terminated, using its announced belief of who else has terminated. So you run a binary agreement on the secret sharing of node one, using its proposed key set {1, 2}; you run a binary agreement on whether node two correctly shared its secret, using its belief that parties two and three terminated correctly; and so it goes. If a party was honest and correctly did this step, you have a nice key, it has a correct belief, so you can terminate the binary agreement, and you have a consistent strong key. That's beautiful. The problem comes when a node is either malicious, so it didn't announce a correct belief, or has simply crashed, because then you don't have a secret key, so you don't have a coin to use in the binary agreement. And this is where it becomes interesting; to really solve it, you need to dig deep into FLP. Any grad student of distributed computing who hasn't actually read the proof will tell you that FLP says consensus, or at least deterministic consensus, is impossible in asynchrony. But what the proof actually says is that consensus is impossible in asynchrony if you start in what is called a bivalent state: if you start with a real question, say half of the parties propose zero and half propose one, so the execution can go both ways. If you start with everyone already pre-agreeing on zero, or pre-agreeing on one, it will terminate; there is no question to answer. So we solve it basically by exploiting this: if a malicious or crashed party did not propose a key set, no one will enter the corresponding binary agreement believing that it has one. Everyone will enter with zero, and as a result you don't need a coin for those instances.
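The rule that feeds the n parallel binary agreements, as I understand it from the talk; the function shape and data layout are my own:

```python
def aba_input(dealer, proposals, my_terminated):
    """My 0/1 input to the binary agreement about `dealer`.

    proposals: dealer -> the key set it announced (absent if it never spoke)
    my_terminated: dealers whose secret sharing finished at this node
    """
    key_set = proposals.get(dealer)
    if key_set is None:
        return 0   # crashed or silent dealer: every honest node inputs 0,
                   # the good case where binary agreement needs no coin
    return 1 if key_set <= my_terminated else 0

proposals = {1: {1, 2}, 2: {2, 3}}   # dealer 3 never announced anything
print([aba_input(d, proposals, {1, 2, 3}) for d in (1, 2, 3)])   # [1, 1, 0]
```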
Concretely, you can use one of the newest binary agreement protocols, the one from Tyler Crain, which is what we call good-case coin-free: if everyone starts by proposing zero, you don't need a coin; you terminate in two rounds and output. As a result, for the honest parties we flip coins and decide zero or one, and for the malicious parties we decide zero, because everyone enters with zero, and we can terminate the DKG. And this gets us down to n cubed. So we are actually practical. We implemented it, and as you can see, especially for the low threshold, f+1, it's quite scalable: even for 128 parties a DKG takes about 40 seconds and uses around 10 megabytes of bandwidth, right? So if you're running it once a day, it's nothing. You can actually use it. We even have comparisons with drand, and we actually perform better than drand, although drand does not run asynchronously, of course. This was the drand of 2021; we haven't benchmarked the latest implementation. But it is practical, at least for f+1. The real extra cost comes with higher thresholds, because of all the crypto you're using; as you see, for 2f+1 especially, the time explodes: for 64 parties you already need two and a half minutes. So in a paper that we're going to present at USENIX Security, we attack this last problem, getting an ADKG with higher thresholds, because for consensus we really need those higher thresholds, right? The basic idea, and again, go and see the paper if you care exactly how it works, is that instead of doing the initial secret sharing at the higher threshold, like we did with the high-threshold AVSS or the practical ADKG, you share multiple secrets at the same time, all with threshold f+1. Then you can use all the same algorithms, and afterwards harvest the sharings, using a hyper-invertible matrix, to create the coefficients of a higher-degree polynomial. So basically you secret-share two polynomials of degree f, agree on the sharings like before, using basically the same ideas as the ADKG, and then put them through a matrix multiplication to get out the coefficients of a 2f+1-threshold polynomial; then you do a round of broadcast. That is roughly how it works, but I'm not going to go deep into it. And you can get any threshold you want, from f+1 to 2f+1, while only paying the cost of sharing and committing to secrets at threshold f+1. As a result, if you look at the numbers against the practical ADKG, in the 2f cases you see the gap is like eight to nine x: of course it's more costly than doing a single sharing, but the cost is basically proportional to the degree you want, instead of paying a huge extra cost for the commitments.
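A simplified illustration of why sharing at threshold f+1 is enough to end up with a higher-threshold key: two degree-f sharings can be combined locally into a sharing of a higher-degree polynomial that carries the same secret. The paper's actual construction pushes many sharings through a hyper-invertible matrix to also extract randomness; this sketch, reusing share() and recover() from the Shamir snippet, replaces that with the simplest linear combination I could write, so read it as intuition for the threshold-lifting step only.

```python
f, n = 1, 4
p0 = share(secret=9, t=f, n=n)   # a degree-f sharing that carries the secret
p1 = share(secret=0, t=f, n=n)   # a degree-f sharing of auxiliary randomness

# each party i locally computes q(i) = p0(i) + i^f * p1(i): q has degree 2f,
# so reconstructing the key now takes 2f+1 parties, yet only degree-f
# sharings were ever dealt and committed to
lifted = [(x, (y0 + pow(x, f, P) * y1) % P)
          for (x, y0), (_, y1) in zip(p0, p1)]

assert recover(lifted[:2 * f + 1]) == 9   # q(0) = p0(0): same secret, higher threshold
```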
If you run for a long time, you also want to refresh your keys, and once you have a key, you can refresh it much more efficiently. Why? Because now you have a coin, so you can use this coin to terminate consensus, and as a result you no longer need to run all these binary agreements every time; you can just run real consensus. And the cool thing is that once you do sampling, you can sub-sample very aggressively, because your two-thirds threshold goes back to one half: you no longer have an agreement to decide, you only need to make sure that at least one honest party refreshes. That makes the proactive refresh, the APSS, even more efficient than the ADKG. So it is really practical: we can now set up keys without making any synchrony assumptions. And let me just say a bit about this last thing, which is how we use all of this at Mysten Labs, in our new blockchain called Sui. The basic idea is that you don't need to do all payments with consensus; you can have what we call consensus-less payments using reliable broadcast. You have a transaction, you send it to the validators, they process it, you collect your proof that they processed it, you send it back to the validators, they execute based on the fact that a quorum has processed it, and you terminate. Super simple. And the cool thing is that you can use this workflow to run things like lotteries and a lot of other randomness-based stuff, again without needing consensus. It works very simply: if you have set up a secret-shared key, it runs on basically the same pipeline. You have a user transaction that also sends a seed for the randomness; the parties process the transaction and sign the seed with their partial shares; you collect the certificate and also interpolate the partial signatures, and that is the randomness; you send the randomness for execution, and you terminate. The only important thing is that the client is the one who decides whether the protocol finishes. As a result, you need to put the money in upfront and claim it back at the execution step, because if you were to pay afterwards, you could just decide not to. But I think that's it, and I'm happy to discuss any of those topics offline or during the questions. Thank you very much.
Host: Are there any questions from the audience?
Q: Yeah, I actually have a couple of questions. First, when you mention trusted setup, do you just mean a common reference string model, or what is it?
A: I mean a trusted dealer.
Q: A trusted dealer?
A: A dealer that knows a secret, does the secret sharing, and then sends the shares to everyone.
Q: Yeah. And then, you said you tested against drand. We've had feedback, I don't even remember who told us, but basically they told us that when they tried running a DKG with drand, I think with 256 nodes, which we've never done so far, they encountered some bug in drand itself. Did you run such big DKGs, and did you see such bugs?
A: No, we tried 128 and it didn't terminate; that's why the comparison stops at 64. But it didn't terminate because of time: after 3 minutes you say, okay, let's kill it, I don't want to pay AWS anymore. So it could just be very, very slow. Either way, it was already far beyond what we cared about.
Q: And the final question. As I mentioned earlier, we really want to move to an asynchronous DKG setup, and we are reworking the way we do DKG and everything. Currently we do a Pedersen DKG and resharing following one of the Pedersen-based papers; I don't even know where the resharing comes from, it's very basic. Do you have a recommendation of what we should implement in practice if we were to change our DKG, knowing that we want to reshare the same secret we already have from the Pedersen DKG into the new scheme?
A: So, a DKG has two components: how you do the commitments and the initial sharing, and how you agree on it. You can more or less keep the same commitment scheme, with Pedersen if you want, and just look into the black box of how to agree. It's really completely modular, so you can take what you have, change the message pattern, and get it asynchronous; you usually don't need to change the crypto. You might pay a bit more in communication complexity, but I guess you wouldn't care much. If you want to keep the crypto, you keep the crypto; nothing ties the two together in a way that forces you to co-design them.
Q: So what you're saying is we should just stick with what we have?
A: You can stick with what you have on the crypto side, and really look into the message patterns and how to build a protocol that is asynchronously safe and live, basically.
Q: Do you have a specific one in mind, a good practical one?
A: I would say check our papers for the most practical ones.
Q: Which one?
A: The practical ADKG. I guess the threshold you want is f+1; you don't care about high thresholds.
Q: Yeah, we might want...
A: When you say threshold, you mean the threshold of malicious nodes, right?
Q: Yes. So we might want more than that, maybe.
A: I think the thing is that in asynchrony...
Q: For liveness...
A: In asynchrony, at the setup phase, that's all you can get.
Q: Okay. Then let's take it offline.
A: Yeah, okay, sure.
Host: Another question at the back here.
Q: Thank you for the presentation. Do you have fundamentally different requirements for consensus when producing new, I don't know what you call them, new pulses of randomness, versus resharing, where you want to reshare the key? There, I would guess, you actually want almost all the parties, not just a majority, to get a new share of the key.
A: Not really, because what you can do is make sure that a majority gets the shares, and they can help the parties that didn't show up get theirs whenever they do show up. As a result, if your threshold is high enough, they can recover later.
Q: What if the first majority, just because the second majority was late, what if the first majority includes malicious parties that will no longer cooperate?
A: That's why you want 2f+1 to say: okay, I have a secret key, right? Then f of them can drop dead and the remaining f+1 can help the slow ones recover. So you need to play with the thresholds.
Host: Any final questions?
Q: Yeah, so your protocol takes multiple rounds, and that's what makes it challenging. If you go for a sort of non-interactive version, using a publicly verifiable encryption kind of approach, does that make the asynchronous problem straightforward?
A: It's still challenging. It's not really the rounds; the problem is that you need to somehow harvest randomness from inconsistent proposals. So it won't really make it easier; there are different trade-offs, I think, also in the complexity, because if you want to publicly verifiably share the secrets, there is an order-n overhead in message size by definition, since you need all the encrypted shares and everyone must verify them. There are really good protocols doing that, and they end up with similar complexities.
Q: Right, right.
A: And a fully non-interactive version is not really asynchronous: to be fully non-interactive you would need a blockchain, so you're already assuming one is sitting there, right? You're putting everything on a broadcast channel.
Q: If you're assuming a broadcast channel, you mean.
A: Yeah, if you have a broadcast channel then this is irrelevant; the whole point here was that you don't have a preset broadcast channel.
Q: Yeah, I mean, yeah. Okay, we can talk about it.
A: Yeah, that's fine. Good.
Host: Oh, we can take more questions, we're ahead of time, I think.
Q: I want to understand this better. It seems like the threshold design is really tailored to the blockchain use case. I guess what you end up with is: if you actually had f+1 malicious parties, they could use their f+1 shares to reconstruct all the sub-shares for everyone else, the 2f+1 shares of the second layer, and then get the actual secret. Is that right?
A: Yeah, basically. If the parties were malicious at the setup phase already, then they can just take the transcript from the beginning and figure out what happened. If they only get compromised later, and you have the higher threshold, then they have already deleted those by-products, and you actually survive at your real threshold, even if it's 2f+1. So it depends on when you compromise them.
Q: Wait, when are they able to delete the by-products? Don't they need those to bootstrap offline parties, or asynchronous parties? Is that not true?
A: It depends. For the first protocol I showed, yes; but if you do something like encrypt-then-share, each sub-share is encrypted to the public key of its party, right? So you can't really do anything with it; you can't open it up, you can only hand it over.
Q: Got it, that's interesting. Okay.
Host: Okay, we'll take one more.
Q: Sorry, maybe this is very basic, I just want to make sure I understand what you're solving. Are you just doing VSS? You call it distributed key generation, but I didn't hear the words DSA or El Gamal, so is it just shared randomness, just VSS? What are we doing again?
A: No, no, we're sharing a private key; we're secret-sharing a private key, and you can then use, say, El Gamal encryption under that key.
Q: So the primitive... so you just share randomness; you want to do Shamir secret sharing of a secret, that's your basic task? Sorry, I just want to make sure I get the basic problem.
A: Yes, you want to generate a private key; shared randomness, if you want to call it that.
Q: So you don't want it one-time.
A: Yes, but you keep it secret so you can use it again. And then, with the last thing that I didn't really present, the resharing, you can refresh it: you can either share zero, so the key stays the same, or you can actually run it with a new key; then everything is new, you just lose the old interface.
Q: I'm just curious, because the largest n I've seen was like 600, which is a negligible number, so I'm trying to understand why. Sorry, I'm not a distributed-systems guy, but you're fighting over this n to the fourth and n cubed, while my understanding is that the problem with 60 parties is just waiting for round trips; whether it's n cubed or n to the tenth is irrelevant, it's kilobytes, it's totally irrelevant. So I'm trying to understand why you're fighting for this. Is it a theory question, or is this a bottleneck? Like, 10 kilobytes, megabytes; and also on the computation side, 128 cubed is, like, what? I'm just trying to understand, because n is like a hundred computers, which run billions of instructions a second, so why is this important, at a high level? Why does something not terminate in three minutes, like you were saying? I would imagine that for n equals 100, even an n-to-the-tenth protocol would terminate in under one second. So something didn't terminate in three minutes; what is the problem in this field? Is it just round trips, computers adversarially going offline? To me n cubed is super fast for n equals 100; it's not even a question to optimize.
A: So I think the point is that you would want to refresh this key basically every hour, let's say, right? And also it's asynchrony, right? These numbers work because all messages are delivered on time; if you start having a shaky network, the more messages you have to send, the fewer arrive on time, the more you have to resend, and it can very easily fan out.
Q: In general I get your point, but, I could be wrong, everybody is doing it, so I'm sorry for asking basic questions.
A: I think it's twofold. One, it is still quite slow: you can't really generate keys in a second or a millisecond. Yes, we run it at the pace we do now because that's what we can; but if you could generate fresh randomness in milliseconds, you could build a lot of other blocks and applications that today basically no one will even implement. You want to push what is possible and see what people do with it, right? And the second is that we use this complexity as a kind of guideline, you know: let's get more efficient.
Q: No, it's a beautiful theory question; sorry, I'm just trying to put on a practical head, which I'm failing at. So in practice, is it really this n cubed and so on that matters, or is the round trip the real cost and n cubed versus n to the fourth a red herring? Why do you hit these practical problems when you run your experiments?
A: So I can tell you, because ADKG is not really a thing that is very
practical yet, and we have seen the same thing in consensus, right? PBFT was the holy grail for many, many years, and the problem was that it had an n cubed view change, which is a lot of messages: if you have a buffer, you can't really have 8 million, or 8 billion, messages sitting there waiting to be processed; you're going to start dropping them, and as a result the protocol will never terminate, because the model assumes messages keep arriving and the network is reliable, right? So there are practical implications to this kind of complexity, especially because we are in asynchrony.
Host: Let's take it offline now, but thank you.
A: Thank you.
Host: Wonderful, thank you very much. Please put your hands together once more for Lefteris.