Hello, welcome to my talk. My name is Daniel Escudero and I'm going to present my joint work with Ivan Damgård and Divya Ravi on information-theoretically secure MPC against mixed dynamic adversaries. I should mention that I am now at J.P. Morgan AI Research, but this work was done while I was at Aarhus University. So let's first recall what secure multi-party computation is. In this setting we have a set of n parties, which I'm labeling P1, P2, P3 up to Pn, and they are connected by point-to-point channels. So they have peer-to-peer channels, and their goal is to securely compute a certain function. Each Pi has a secret input, which we denote by xi, and we consider an adversary that corrupts t out of the n parties. The idea is that the parties exchange a lot of messages over these channels, they interact among each other, and at the end of the protocol execution they obtain the output of a function of their choice evaluated on their inputs, and only this output is revealed: nothing else about their inputs leaks. So this is the general goal of MPC, the general premise, and we have several results, in particular in the domain of information-theoretic security, which is what we are concerned with in this work. We have the following two main results. A passive adversary corrupts parties in a passive way, meaning that it can see the messages they receive, the messages they send, and also their internal state, but it cannot change their behavior: the corrupted parties still follow the protocol specification faithfully. For a passive adversary, which is a very weak adversary, we know that we can get perfect security assuming that 2t < n.
Perfect security means that the protocol is essentially unbreakable: no matter how many resources the adversary has, how powerful it is, it will not be able to break the protocol, because everything is perfectly hidden. Now, against an active adversary, which is a more realistic adversary that can change the behavior of the corrupted parties arbitrarily, and hence the one we really would like to consider in practice, we obtain the following results. If we want perfect security, then we have to accept a stronger bound on t: instead of 2t < n we need 3t < n, so the adversary can only corrupt a third of the parties if you want perfect security. If you settle for statistical security, which is a slightly weaker notion of information-theoretic security, meaning that the protocol has a very small chance of failing, you can get back to the 2t < n regime. These are standard results, and an interesting fact is that you can also consider mixed adversaries, which were introduced in a very important work from Crypto '98. The idea is that we now consider an adversary that simultaneously corrupts some parties actively and some parties passively. On top of that, we also consider what we call fail-stop parties: parties the adversary does not control, whose state it cannot see and whose behavior it cannot alter. The only thing the adversary can do to these parties is make them stop, so it can cause them to crash and stop sending and receiving messages. It's a way of cheating through these parties, but of course not as harmful as the adversary being able to read their state or modify their behavior completely. So when we consider mixed adversaries, the work I'm citing here proves the following results.
For perfect security, we can get it as long as 3ta + 2tp + tf < n. This is exactly a generalization of the bounds we knew before: perfect security was possible if 2t < n when the corruption is passive, and if 3t < n when the corruption is active, and we can see both here. By setting tp and tf to 0 we get 3ta < n, and by setting ta and tf to 0 we get 2tp < n. So this is a strict generalization of the previous results. For statistical security we get a similar inequality, except that now the factor on ta is 2 instead of 3, so 2ta + 2tp + tf < n. These are very important results, and this is the first work that considers mixed adversaries, that is, adversaries that simultaneously corrupt parties actively and also passively. So let's try to understand in more detail what these results mean. They mean that for every tuple (ta, tp, tf) you consider such that 3ta + 2tp + tf < n — I'm talking about perfect security here just for illustration — there exists a protocol that is perfectly secure against ta active corruptions, tp passive corruptions, and tf fail-stops. So again, for every tuple you choose, if it satisfies this inequality, then there exists a protocol with these properties. And the general idea in this paper is actually relatively simple: just use Shamir secret sharing with the degree equal to ta + tp. Shamir secret sharing has the property that no set of shares of size at most the degree leaks anything about the secret, and in particular an adversary corrupting ta parties actively and tp parties passively can learn exactly ta + tp shares, because it can see the states of those parties.
Then we can see that privacy holds in this case, because we're choosing the degree so that no set of ta + tp shares reveals anything about the secret, and the adversary only sees the states of ta parties plus tp parties. Also, because of the bound 3ta + 2tp + tf < n together with this choice of degree, you can show that error correction holds: for error-correction techniques to work, a certain inequality must hold between the number of errors, the number of shares you have at the end, and the degree, and that inequality holds with this choice of parameters. Error correction is the method that allows you to recover the result at the end of the protocol. So this is the general reason why this choice of d works. Again, in the mixed-adversary setting we just saw, for every tuple there exists a protocol. For example, if you set n = 8 and choose ta = 2, tp = 0 and tf = 1, then 3ta + 2tp + tf = 7, which is less than 8, the number of parties. So in this case we know, because of the theorem, that there exists a protocol supporting two active corruptions, no passive corruptions, and one fail-stop corruption. If instead you choose ta = 0, tp = 3 and tf = 1, the sum with the appropriate weights is again 7, so less than 8, and there exists another protocol — possibly an entirely different protocol π′ — supporting no active corruptions, three passive corruptions, and one fail-stop corruption. So this is just an illustration of what these results mean: you choose a tuple, and there exists a protocol; you choose another tuple satisfying the same inequality, and there exists another protocol, and in principle it can be different. What we want to study in this work is dynamic MPC.
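To make these examples concrete, here is a small sketch (my own illustration, not code from the paper) that checks which corruption tuples satisfy the perfect-security bound:

```python
# Checks whether a corruption tuple (t_a, t_p, t_f) satisfies the
# perfect-security bound 3*t_a + 2*t_p + t_f < n from the mixed-adversary
# result; every feasible tuple has its own protocol in the non-dynamic
# setting.
def perfectly_feasible(n, t_a, t_p, t_f):
    return 3 * t_a + 2 * t_p + t_f < n

n = 8
print(perfectly_feasible(n, 2, 0, 1))  # 3*2 + 0 + 1 = 7 < 8 -> True
print(perfectly_feasible(n, 0, 3, 1))  # 0 + 2*3 + 1 = 7 < 8 -> True
print(perfectly_feasible(n, 2, 1, 0))  # 3*2 + 2*1 + 0 = 8, not < 8 -> False
```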
So in a nutshell, we can see dynamic MPC as a way of reversing the order of the quantifiers. In the previous result we had: for every tuple, there exists a protocol. Instead, we want: there exists a single protocol such that for every tuple, we get some notion of security. So we want to reverse the order of the quantifiers. We don't want to design a protocol specific to every single tuple you can come up with; we want a single protocol that works for every tuple. As an example, we would like a result of the following type: there exists a protocol such that for every tuple satisfying the appropriate inequality, the protocol is perfectly secure against an adversary corrupting ta parties actively, tp parties passively, and with tf fail-stops. Now, this result will not hold exactly as I stated it, but we will be able to show more concrete results about when and how it holds. This is just an example of the type of result we would like, and it's very different from the one before: before we had "for every tuple there exists a protocol", and now we want "there exists a protocol that works for every single tuple". I want to begin by illustrating why this is difficult, why it's more challenging than what we had before. The main reason is that the protocol does not know the concrete values: you want to design a protocol that works for any tuple such that some property holds, say 3ta + 2tp + tf < n, and you know nothing about ta, tp, tf except that they satisfy this inequality. As an example of why this is hard, let's go back to the case n = 8 and assume you want to design a dynamically secure protocol while still using the same paradigm of Shamir sharing with some fixed degree.
So you use Shamir with this degree, and let's say you choose exactly degree d = 2. In this case, if the tuple (ta, tp, tf) happens to be (2, 0, 1), then you get privacy, because the maximum number of shares the adversary learns is 2 + 0 = 2. And you also get error correction; you can check that the error-correction bound holds. This is essentially the construction you would get from the previous mixed-adversary papers. But now, the protocol you design with this d should work for any tuple, not only this one. So what if you also consider the new tuple (0, 3, 1)? This d will not work for it. The new tuple still satisfies the bound, because 3·0 + 2·3 + 1 = 7 < 8, so your protocol should be secure against it, but it isn't: the degree is 2 and the adversary can see 3 shares, because it corrupts 3 parties passively. So there is no privacy. Sure, there is still error correction — trivially, because there are no actively corrupt parties — but there is no privacy. So if you choose d = 2, it will not work against this tuple; it may work against some tuples, but not this one in particular. So let's say you change to d = 3: you increase d, because the problem was privacy, and now everything should work. Indeed, now you have privacy, and you can show that you also have error correction, again because ta = 0.
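The tension between the two tuples can be checked mechanically. Here is a sketch (my own illustration, using the standard Shamir conditions: privacy needs degree d ≥ ta + tp, and correcting up to ta wrong shares among the n − tf shares that arrive needs n − tf ≥ d + 2·ta + 1):

```python
def private(d, t_a, t_p):
    # the adversary reads t_a + t_p shares; privacy needs d >= that many
    return d >= t_a + t_p

def correctable(n, d, t_a, t_f):
    # Reed-Solomon-style error correction: n - t_f shares arrive,
    # and up to t_a of them may be wrong
    return n - t_f >= d + 2 * t_a + 1

n = 8
for d in (2, 3):
    for t_a, t_p, t_f in [(2, 0, 1), (0, 3, 1)]:
        print(f"d={d} tuple={(t_a, t_p, t_f)} "
              f"privacy={private(d, t_a, t_p)} "
              f"correction={correctable(n, d, t_a, t_f)}")
```

With d = 2, the tuple (2, 0, 1) passes both checks but (0, 3, 1) loses privacy; with d = 3 it is the other way around, with correction failing for (2, 0, 1).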
But now, with the choice d = 3, your protocol works for the second tuple, yet for the first tuple it simply will not work: you do have privacy, because the degree 3 is larger than 2, the maximum number of shares the adversary gets, but it is possible to check that the error-correction bound no longer holds in this case. So the conclusion from this slide is: if you choose degree 2, your protocol works for the first tuple but not the second, and if you choose degree 3, it works for the second but not the first. This is the challenge in designing dynamically secure protocols: you cannot simply fix one single tuple, one single paradigm, one single degree. You have to be more dynamic — as the name implies — a little more flexible in the way you design your protocol, and this is basically the main difficulty of these techniques. I want to mention that there are previous works in this direction, in the direction of designing dynamically secure protocols. There is a work from Crypto '13 by Hirt et al., which actually introduced the concept of a dynamic adversary, and they consider these adversaries in the computational security setting, where you make computational assumptions. This is in contrast to our setting, where we consider information-theoretic security. A very recent work by Patra and Ravi ('19) again considers the computational setting, but this time, in contrast to the work on the left, they focus specifically on round complexity: they prove lower bounds on the round complexity, and they prove that certain protocols they design are optimal with respect to those lower bounds.
I should also mention another work by Hirt et al. that, even though it does not explicitly consider a dynamic adversary, is still related, because it considers general adversary structures, and it is not hard to see that a dynamic adversary is just a larger, more flexible adversary structure. This is exactly what they consider in that work. We give a more thorough comparison to this paper in our work, so I invite you to read it there, but the conclusion is basically that they do achieve some feasibility and impossibility results; however, because they consider arbitrary adversary structures, their results are polynomial in the size of the structure, and for threshold structures like the ones we consider here, the size of the structure is actually exponential — very large. So our results are more meaningful feasibility results for the concrete case of threshold structures. Okay, so what are our results? For statistical security, we have to assume the bound from the non-dynamic setting — the bound from the mixed-adversary work I cited at the beginning — since otherwise we cannot even hope to get protocols in the dynamic setting if they cannot be obtained in the non-dynamic setting. In this concrete scenario, statistical security, we get the following two results. For secure function evaluation, which we abbreviate SFE — the task where parties just provide inputs and get an output — we show that protocols with guaranteed output delivery (GOD) are possible to design. So this is a very good positive result. But unfortunately, even if you only require fairness, not even GOD, you need a number of rounds that grows linearly in n, asymptotically speaking.
So the number of rounds of the protocol grows linearly in n, and this is bad because in the non-dynamic setting you can have protocols with very low round count if the function is simple enough — for example, if the function is in NC1. In those cases, NC1 or NC0, you can get constant-round protocols, but in the dynamic setting this is impossible. So this shows a strict separation between the non-dynamic setting and the dynamic setting. Now, for reactive MPC — where you provide input, get output, and may then reuse this input again, and which can be obtained basically from SFE and verifiable secret sharing — fairness is possible, but GOD requires in addition that r + f < n, where r is the maximum number of states the adversary can read, ta + tp, corresponding to the active and passive parties, and f is the maximum number of parties the adversary can cause to crash, ta + tf, which are the active parties and the fail-stop ones. So this is for statistical security. For perfect security, where we assume the corresponding bound from the non-dynamic setting — essentially the same as before, but with a 3 instead of a 2 — we show that SFE is actually impossible, and this is a very interesting result. Even with abort it's still impossible, and even if the adversary is restricted to not make any fail-stop corruptions, it's still impossible. Fortunately, it is possible in some cases: for example, if you restrict the adversary to not make any active corruptions, we show it's possible, and alternatively, if the adversary does not corrupt any party passively, it's also possible. But we did not pin down the exact conditions under which it is possible.
For reactive MPC, we show that GOD verifiable secret sharing — verifiable secret sharing where the output is guaranteed to be obtained at the end — is impossible, and this remains the case even if there are no fail-stop corruptions. However, we show that GOD VSS is possible in some cases. For example, if we restrict ta to be zero, so no active corruptions, and we also assume an extra bound, we can show that it's possible, and in fact we can show that this extra bound is also optimal in this case. Similarly, if there are no passive corruptions, we need to assume another extra bound, and we show that you indeed need to assume that bound. I'm also not mentioning it on this slide, but fair VSS is possible whenever secure function evaluation with perfect security is possible. So these are, overall, our results, and now I would like to proceed to describing the techniques we use to achieve them. Let me begin with our result on statistically secure SFE with GOD; we show that this is possible. First, the setting: again, we assume the bound for dynamic mixed adversaries in the statistical setting, but furthermore, for simplicity, we will assume that tp = 0. So that term disappears and the bound becomes 2ta + tf < n. This makes sense and is not artificial: both passive and active parties have the same weight in the statistical bound — both have a factor of 2 — so for the adversary it costs the same, in terms of threshold, to corrupt a party actively or passively. It therefore makes sense to consider the stronger adversary that only corrupts parties actively. So we have this bound, and we also assume, in addition, that parties can detect when another party fail-stops. Again, this is not artificial.
For example, we can have a broadcast channel on which parties constantly broadcast heartbeats announcing that they are alive. Whenever a fail-stop party crashes, it simply stops sending these heartbeats, and whenever a party stops sending heartbeats, it is considered to have fail-stopped. So here is the general layout of our protocol; I'm going to start with the big picture and then consider the smaller details. First, we assume as a black box a non-dynamic protocol — a traditional, classical protocol just like the ones we have always known to exist — with GOD, tolerating ta < n/2 active corruptions. So this is a non-dynamic protocol tolerating a dishonest minority, and we know many such protocols. Now, the parties provide their inputs to this protocol, and during the execution they restart it if some fail-stop is detected. I want you to notice that because 2ta + tf < n, if you remove a fail-stop party, you remove one from n and one from tf, so the new values still satisfy the same inequality. Finally, at the end, the protocol produces output. Now, what is the output? It is not exactly z — z being the result of the function we want to compute — but a shared version of z, which I'm going to describe in a moment. Why do we want a shared version of z instead of z in the clear? Suppose we modified the protocol so that instead of producing shares of z, it produced z itself. The problem is that the adversary may cause some fail-stops towards the end of the protocol: when the protocol is about to end, it can make a lot of parties fail-stop, and then the adversary learns the output while the other parties do not. This would not be fair.
In particular, it would not satisfy guaranteed output delivery. The reason this can happen, even though the inner protocol is assumed to satisfy guaranteed output delivery, is that it satisfies this property in the non-dynamic setting: it is assumed to tolerate active corruptions, but not fail-stops. This is the problem, and this is why we need a different approach with a shared version of the output, which I'm going to describe now. At the end of the protocol execution — and again, if some fail-stop is detected in the middle, the protocol restarts, which is not a problem: by the properties of the protocol, absolutely nothing is leaked about the output at any time, so you can always restart; the adversary may change its inputs, but it never learns anything related to the output, also because the output is shared rather than in the clear — the second step is that the parties engage in a protocol that enables them to reconstruct this shared version of z. After the reconstruction they indeed obtain z in the clear, which is what we ultimately want. What properties do we want from this reconstruction protocol? Mainly two. First, the protocol has to be fair, meaning that either the reconstruction succeeds or the adversary learns nothing. This is not trivial to achieve, because, as I mentioned, if we simply tried to output the result in the clear, this would not be fair: the adversary can always make a lot of parties crash at the end, learn all their values, and prevent the honest parties from learning the result.
This is a good moment to mention, by the way, that we assume the adversary is rushing: it can choose to first hear the messages from the crashing parties and then make them crash before those messages reach the honest parties. This is why getting fairness is really, really difficult, and it is actually one of the main complex parts. The second property we want is that if the protocol aborts — if it does not succeed — then, in addition to the adversary learning nothing, the parties identify some set of actively corrupt or fail-stop parties. So the protocol doesn't simply abort and say it failed; it also outputs a set of parties guaranteed to be actively corrupt and a set of parties guaranteed to have fail-stopped. At that point, when they can identify these parties, they can simply restart. So we do the same in the second part: the parties run the reconstruction, and if at some point it aborts, they go back to the beginning of the protocol, now removing the identified active parties and the identified fail-stop parties. This is secure to do because of the fairness property: nothing is leaked about the output whenever an abort occurs — the adversary never, ever learns anything about the output — so we can go back to the beginning and restart the protocol. And you can intuitively see that since every restart kicks out some parties, eventually the reconstruction succeeds and results in the output. So basically, this is the layout, the template, and what we are missing right now is just how to get this reconstruction box in such a way that it is fair and in such a way that whenever it aborts, it identifies some parties.
And again, just to mention it, the hard part is getting fairness. Intuitively, reconstructing values in a fair manner is difficult when the adversary can first get the messages and then choose to prevent the other parties from getting them. This is the main interesting part. So how do we get fair reconstruction? We use the following notion of sharing. There is a random value r that is given to the parties over point-to-point channels, and a series of secret-shared values. The square brackets denote Shamir secret sharing together with message authentication codes (MACs) that allow the parties to detect incorrect shares. The details are in the paper, but for now it's good to think of it as Shamir secret sharing with additional information that lets parties detect whenever an actively corrupt party sends an incorrect share. So the shared representation of z consists of r together with shared values z1 up to zl, where the sum of all these values, z1 up to zl, gives you the masked version of z, that is, z − r, and each zi is secret-shared with the degree corresponding to its index: zl is shared with degree l, z(l−1) with degree l − 1, and so on down to z1, which is shared with degree 1. Here l is chosen to be the maximum number of actively corrupt parties the adversary can have. In particular this is private: the adversary cannot learn zl, because it has a very large degree and the adversary cannot get enough shares, and because it cannot learn zl, it cannot learn z.
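The sharing just described can be sketched in runnable form. This is my own simplified illustration: it uses plain Shamir sharing over a prime field and omits the MACs that, in the actual construction, let parties detect incorrect shares.

```python
# Levelled sharing sketch: z is masked as z - r, the mask is split into
# summands z_1 + ... + z_l, and z_d is Shamir-shared with degree d.
import random

P = 2**61 - 1  # a prime modulus

def shamir_share(secret, degree, n):
    # degree-d polynomial with constant term `secret`, evaluated at x = 1..n
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def lagrange_at_zero(points):
    # interpolate the unique polynomial through `points` and evaluate at 0
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def levelled_share(z, l, n):
    r = random.randrange(P)
    parts = [random.randrange(P) for _ in range(l - 1)]
    parts.append((z - r - sum(parts)) % P)  # so that sum(parts) = z - r
    # z_d is shared with degree d, for d = 1..l
    return r, [shamir_share(zd, d, n) for d, zd in enumerate(parts, 1)]

def levelled_reconstruct(r, levels, n):
    masked = 0
    for d in range(len(levels), 0, -1):       # degree l down to 1
        pts = [(x, levels[d - 1][x - 1]) for x in range(1, d + 2)]
        masked = (masked + lagrange_at_zero(pts)) % P
    return (masked + r) % P

n, l, z = 8, 3, 1234567
r, levels = levelled_share(z, l, n)
print(levelled_reconstruct(r, levels, n))  # -> 1234567
```

Note how the reconstruction already proceeds from the highest degree down to the lowest, which is exactly the order used in the fair reconstruction described next.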
These types of sharings are called levelled sharings, and they were introduced in the original work I first cited on the dynamic MPC setting. With this type of sharing, whenever we want to reconstruct a secret-shared value, we do it as follows: for d = l down to 1, the parties reconstruct the degree-d level. So they begin by trying to reconstruct zl, then the next one, and so on until the last one, z1. Because we're using Shamir secret sharing with MACs in the honest-majority setting, where you only have error detection, not error correction, each of these reconstructions can fail. Either the reconstruction succeeds, or, if it fails, the good thing is that — because of the MACs and the way everything is designed — the parties are able to identify kA active parties and kF fail-stop parties such that the degree satisfies d ≥ n − kA − kF. This is rather standard when using MACs with Shamir secret sharing, but to give you a very rough intuition of why it works: whenever you try to reconstruct a sharing of degree d, everyone announces their shares and their MAC tags, and maybe the actively corrupt parties and the fail-stop parties don't do this. So from the n shares that would be announced, you have to subtract the ones that are not sent, which are the ones corresponding to the actively corrupt parties and the fail-stop parties; the parties therefore end up with at least n − kA − kF shares. If this is at least d + 1, then they can reconstruct, because they have more shares than the degree. So if they could not reconstruct, it is because this quantity was not large enough, that is, n − kA − kF ≤ d.
This is the intuition for why, if the reconstruction of zd with degree d fails, it is because enough parties misbehaved actively or fail-stopped that the number of shares obtained at the end was not sufficient; that number is upper-bounded by d. So this is the property of the secret sharing scheme: you reconstruct starting with the largest degree down to the lowest, and if one of these reconstructions fails, the parties can identify a certain number of active parties and fail-stop parties such that n − kA − kF ≤ d. This will be important later on. So let me claim the following. Assume that d > 1. We claim that if the reconstruction of zd with degree d results in abort, then the adversary does not learn the next value, z(d−1). And this is enough to get fairness, for the following reason. Maybe, when the parties are reconstructing zd, the adversary gets to learn zd while no one else does — again, reconstruction of plain Shamir sharings is hard to make fair. This may sound like a breach of fairness, but it is not, because the claim shows that even though the adversary learns this value, it will not learn the next one, and the adversary needs all of them to be able to get z. So if the adversary cheats in the opening of zd, it will not get z(d−1). By the way, this assumes d > 1; the case d = 1 is handled slightly differently, and I defer you to the paper for the details. So how do we prove this claim? First, we know that if the reconstruction of zd with degree d results in abort, it is because there are at least kA active parties and kF fail-stop parties such that n − kA − kF ≤ d, and the honest parties can identify these sets of parties.
So in particular, if there are kA identified active parties — and we know nothing about tA, the number of parties the adversary chose to corrupt actively, except that we identified at least kA of them — then tA must be greater than or equal to kA, and the same for tF: tF must be greater than or equal to kF, because we identified kF fail-stop parties. Now we can combine all these inequalities, together with the one we started with, the main inequality for statistically secure MPC, 2tA + tF < n, and we essentially get that d > tA. So the adversary has fewer than d shares. Why does this matter? The adversary disrupted the reconstruction of zd, so the protocol stopped there; regarding the shares of z(d−1), the only thing the adversary has is the shares of the corrupt parties, because it never received any message for that reconstruction — the protocol stopped before it. So the adversary only has tA shares, but tA < d, and to reconstruct a degree-(d − 1) sharing you need at least d shares. This shows that the adversary, who only has tA shares, does not have enough to reconstruct this sharing, and the claim follows. With this, we conclude the proof for the upper bound. I would like to recall again what this means: we can get a statistically secure protocol for secure function evaluation in the dynamic setting, and I roughly gave you an idea of how it works, a general template. Now I would like to bring your attention to the lower bound for these protocols. The interesting thing is that, as you may have noticed, this protocol needed to restart a lot: going back to the template, the protocol may have to go back many times.
So if something went wrong, you go back, and in the worst case, you can go back a number of times that is proportional to the number of parties, because in every rerun a new corrupt party is identified. So the protocol has a round count that is proportional to n. And what we show here is that having a round count proportional to n is actually inherent. To show this, we're going to suppose that there exists a dynamic, fair, statistically secure protocol with exactly n/4 rounds, and we're going to show that this is not possible. So let's get started with the lower bound. To this end, consider first the following scenario. We have the n parties here, and the adversary chooses the following corruptions: it chooses to corrupt actively the first n/4 parties, it chooses to make the middle n/2 − 1 parties fail-stop at some point, and the remaining parties, which you can compute to be n/4 + 1, will remain honest. Again, here we're assuming that 2·t_A + t_F < n, and you can see that these choices satisfy this bound, because 2·(n/4) + (n/2 − 1) = n − 1, which is less than n. So this is the first scenario; this is the adversarial structure, this is what the adversary chooses. And the strategy in the actual protocol execution looks as follows, where R, by the way, is the number of rounds. In round one, the corrupt parties just follow the protocol normally, and the fail-stop parties also follow the protocol normally; they won't crash yet. In round two, it will be the same. The protocol execution will continue normally even up to round R − 1. But in round R, when the parties send to each other the last messages they are supposed to send, the adversary will cause the actively corrupt parties to stay silent, and it will cause the fail-stop parties to also stay silent.
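As a quick sanity check, the scenario-1 counts (assuming, for this sketch, that n is divisible by 4) add up to n and respect the corruption bound:

```python
# Scenario 1 from the lower bound (hypothetical n divisible by 4):
# n/4 actively corrupt, n/2 - 1 fail-stop, the rest honest.
def scenario1(n):
    t_a = n // 4            # actively corrupt parties
    t_f = n // 2 - 1        # fail-stop parties
    honest = n - t_a - t_f  # remaining honest parties
    return t_a, t_f, honest

n = 16
t_a, t_f, honest = scenario1(n)
assert t_a + t_f + honest == n
assert honest == n // 4 + 1         # as claimed in the talk
assert 2 * t_a + t_f == n - 1 < n   # the bound 2*t_A + t_F < n holds tightly
```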
What does this mean? Let's analyze what this situation tells us. Here in the last round, these parties will not talk. But remember, because the adversary is rushing, the adversary will actually get all the messages. At the end of the round, the fail-stop parties are supposed to send some messages; the adversary will get those messages, but the honest parties will not. So let's analyze the situation. The adversary learns the output. Why does the adversary learn the output? Because the adversary behaves normally during the whole protocol execution, and, as I just mentioned, it gets the last messages. So from the point of view of the adversary, the execution just happened successfully: the adversary received all the possible messages in the protocol, and because the protocol is supposed to be correct, this gives the adversary the ability to compute the output. Now, because the adversary learns the output and the protocol is fair, the honest parties also must get the output; this is the definition of fairness. If the adversary gets the output, the honest parties must get it as well. However, the interesting thing is that the honest parties, even though they got the output, did not get any message in the last round, because that's the strategy the adversary chose. The messages the honest parties get here only come from among themselves; they never heard from the fail-stop parties or the actively corrupt parties in the last round. What does this mean? If the honest parties got the output but never got any message in the last round, it's because they knew the output already in the previous round. So here in round R − 1, the honest parties already know the output. And well, this is a particular strategy that we chose, with the adversary corrupting the first set of parties actively and so on.
But if you rearrange for any subset, you can essentially prove the same for any set. By the way, when you do the count again, there are n/4 + 1 honest parties. So any set of n/4 + 1 parties that you choose already knows the output in round R − 1, not only this particular subset down here of size n/4 + 1. Any subset of size n/4 + 1 already knows the output in that round, not in the last round: a subset of this size does not need to go to the last round; if they get together, they already know the output. So that was one scenario. Now consider this second scenario. Instead of corrupting n/4 parties actively, the adversary will corrupt n/4 + 1 actively. In addition, because it's corrupting one more active party, it has to subtract two fail-stop parties for every new active party. So instead of corrupting n/2 − 1, it will corrupt n/2 − 3 fail-stop parties. And the rest will be honest; how many are the rest? You can check it will be n/4 + 2. This is the new adversarial choice, the choice the adversary makes regarding the actively corrupt parties and the fail-stop parties. The strategy in the protocol execution will be like this. Round one will be as if nothing happened; everyone will behave honestly, and all the rounds up to round R − 2 will be like that. But in round R − 1, the actively corrupt parties will remain silent, and the fail-stop parties will remain silent. The honest parties will only hear from among each other, right? Because they never get the messages from these parties over here, and these parties remain silent from then onwards. So let's analyze this execution. It's similar to before. The adversary controls n/4 + 1 parties; this is what the adversary is controlling, and it can see their state.
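The trade in scenario 2 generalizes: each extra active corruption costs two fail-stop corruptions, so 2·t_A + t_F < n stays satisfied for every step. A small sketch of the counts (again assuming n divisible by 4 for illustration):

```python
# Scenario k of the lower-bound argument: n/4 + k active,
# n/2 - 1 - 2k fail-stop, the rest honest.
def scenario(n, k):
    t_a = n // 4 + k          # one more active party per scenario...
    t_f = n // 2 - 1 - 2 * k  # ...paid for by two fewer fail-stop parties
    return t_a, t_f, n - t_a - t_f

n = 16
for k in (0, 1, 2):
    t_a, t_f, honest = scenario(n, k)
    assert 2 * t_a + t_f == n - 1    # the bound is preserved for every k
    assert honest == n // 4 + 1 + k  # scenario 2 (k = 1) has n/4 + 2 honest
```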
And even though the corrupt parties don't speak here, because the adversary is rushing, it will get all the messages: the messages coming from the fail-stop parties and also from the honest parties. So, because of the previous result, the adversary corrupts n/4 + 1 parties and in round R − 1 gets all the messages, and the previous conclusion leads us to determine that the adversary will learn the output in round R − 1. So the adversary gets the output in this execution. Why? Just to recap: it gets all the messages in round R − 1, and it corrupts a set of size n/4 + 1, which is enough to get the output in round R − 1 because of our analysis of the previous scenario. So now, from fairness, because the adversary gets the output, the honest parties have to get it as well in round R − 1. But once again, they don't receive any message in that round; they only got the messages from the earlier rounds. So, conclusion: the honest parties knew the output already in round R − 2. So the parties already knew the output in the previous round. You can see that we can continue the pattern: we can consider a new scenario in which the adversary corrupts not n/4 + 1 but n/4 + 2 parties actively and then adapts the fail-stop corruptions appropriately, and this will bring the adversary to know the output even one round earlier. If we iterate this argument, we can show that the adversary already knows the output in the first round; you can essentially collapse this argument down to the first round. But this is impossible. You cannot have a protocol in which the adversary learns the result in the first round, because then it can always take the messages from the honest parties and use them to evaluate the function on any inputs of its own choice.
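The iteration above can be sketched as a simple count (illustrative only, not a protocol): a supposed R = n/4-round protocol loses one round per scenario, and the fail-stop budget allows enough scenarios to push knowledge of the output all the way back to round 1.

```python
# Round-collapsing induction as a count: scenario k needs a fail-stop
# budget t_F = n/2 - 1 - 2k >= 0 and pushes the round by which honest
# parties must know the output back by one.
def collapses_to_round_one(n):
    rounds_known = n // 4             # assumed round count R = n/4
    k = 0
    while n // 2 - 1 - 2 * k >= 0:    # enough fail-stop budget for scenario k
        rounds_known -= 1             # output known one round earlier
        if rounds_known <= 1:
            return True               # known already in round 1: contradiction
        k += 1
    return False

assert all(collapses_to_round_one(n) for n in (16, 40, 100))
```

Since the budget supports roughly n/4 scenarios, any n/4-round protocol collapses, which is the contradiction the lower bound needs.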
And this leads to what we call the residual attack, which is not something allowed by the MPC functionality. Okay, so this is essentially the general idea of the lower bound. A lot of details are left out and not discussed here, but I invite you to read the paper to learn about them. I also want to mention the overview of our results; this is a table from the paper. We discussed secure function evaluation with guaranteed output delivery in the statistical setting, and we also discussed that it requires a number of rounds linear in n. Some of the results are similar in terms of the upper bound, but all these other results I didn't have time to discuss, so I invite you to read the paper for details on them. With that, I would like to conclude. Thank you so much for your attention. I'm open to questions, and I invite you to read the paper as well. Thank you so much.