The last talk of the session is "Unconditionally-Secure Robust Secret Sharing with Compact Shares" by Alfonso Cevallos, Serge Fehr, Rafail Ostrovsky and Yuval Rabani. Please go ahead, Serge. Thank you. Thank you, Jens. Thank you to the audience for staying till this last talk of the day. Yeah, this is joint work with Alfonso Cevallos, Rafail Ostrovsky and Yuval Rabani. As you all know, with a secret sharing scheme we can split a secret into n shares so that, for some parameter t, any t of the shares give no information on the shared secret, while any t plus 1 of the shares uniquely determine the secret and, ideally, allow us to recover it efficiently. A well-known example is Shamir's secret sharing scheme, where the shares are computed as evaluations of a polynomial of degree at most t with the secret as constant coefficient. Here, privacy and reconstructability follow immediately from Lagrange interpolation. Now here, and in general in secret sharing, reconstructability requires that the shares are correct, right? However, in a malicious environment where players may be dishonest, this may not be the case. So this motivates the notion of robust secret sharing, where the ordinary reconstructability property is replaced by a robust reconstructability property, which requires that the set of all n shares uniquely determines the secret even if t of the shares are incorrect. In robust reconstruction, we feed all n shares into the reconstruction, but t of them might be incorrect. Note that I assume the dealer to be honest. Taking care of a possibly dishonest dealer leads to the notion of verifiable secret sharing, which I do not consider here. An immediate application of robust secret sharing is to secure data storage.
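As a concrete illustration, here is a minimal sketch of Shamir sharing with Lagrange reconstruction in Python; the prime field, parameter choices, and function names are my own, not from the talk.

```python
import secrets

P = 2**61 - 1  # prime modulus for the field; an arbitrary illustrative choice

def share(secret, n, t):
    """Split `secret` into n shares with threshold t (any t shares leak nothing)."""
    # Random polynomial of degree <= t with the secret as constant coefficient.
    coeffs = [secret % P] + [secrets.randbelow(P) for _ in range(t)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t+1 correct shares."""
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P
    return s
```

Any t plus 1 of the returned shares reconstruct the secret, while any t of them are distributed independently of it.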
Obviously, with a robust secret sharing scheme a user can store sensitive data on a set of servers so that if up to t of the servers are corrupt, no information is leaked to the adversary, and whenever he wants, the user can recover the data from the servers even if the corrupt servers provide rubbish. Now, it's not too hard to see, and I'll say a few words about that in a few minutes, that if this threshold parameter t is smaller than n over 3, robust secret sharing can easily be obtained: you can just do plain Shamir sharing and use Reed-Solomon decoding in the reconstruction phase. On the other hand, if t equals n over 2, or is even bigger than that, then it's easy to see that robust secret sharing is not possible. In this talk, I'm going to focus on the area in between, where robust secret sharing is possible but comes at some price: we will have some overhead in the share size, and we will have to live with a small but positive error probability. I'm going to consider sort of the extreme case in this range, where n is 2t plus 1, and I'm going to consider unconditional security, so we do not make any computational assumptions. So what is known about robust secret sharing schemes in this setting? Surprisingly little. There's a well-known scheme by Rabin and Ben-Or which goes back to 1989. Their scheme has an overhead in the share size of order k times n, ignoring logarithmic factors, where k is the security parameter and n is the number of players. There is another scheme, due to Cramer and myself, which does better on the overhead: it has an overhead of only k plus n rather than k times n. The downside of that scheme is that it is inefficient: the reconstruction is exponential in the number of players, so it's not really useful in practice, if you wish. So our result here is a new robust secret sharing scheme that combines the positive points of these two schemes into one scheme.
So it's a robust secret sharing scheme that has overhead of order k plus n and has efficient sharing and reconstruction procedures. Okay, so this is the further outline of my presentation. First I'm going to briefly discuss the simple case where t is smaller than n over 3, I'll explain how the Rabin and Ben-Or scheme works, I'll show how our scheme works, and then I'll say a few words about the proof before I conclude. So, first the simple case where t is smaller than n over 3, or, in the extreme, where n is 3t plus 1. Let's take a Shamir sharing of a secret s, but now t of the shares are incorrect, and of course we don't know which ones are correct and which ones are incorrect. To illustrate how and why the robust reconstruction works, I'm going to divide these shares into three blocks. The first block consists of t plus 1 correct shares; by Lagrange interpolation, these t plus 1 shares already determine the sharing polynomial. The second block consists of the remaining t correct shares, so these are sort of t redundant correct shares. And the third block consists of the t faulty shares. Now, Reed-Solomon decoding tells me that if the number of faulty shares is not bigger than the number of redundant shares, then the set of all shares, consisting of the correct and incorrect ones, uniquely determines the sharing polynomial, and I can even recover it efficiently using the Berlekamp-Welch algorithm. So this pretty much solves the simple case where t is smaller than n over 3. Okay, so now we move on to the more tricky case where, in the extreme, n equals 2t plus 1. In the Rabin and Ben-Or scheme we also have a Shamir sharing of the secret, but on top of that, every share comes along with a list of authentication keys and authentication tags with respect to some information-theoretic message authentication code, where the j-th tag of the share s_i authenticates the share s_i and can be verified using the i-th key that comes along with the j-th share s_j.
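The Reed-Solomon decoding step for t smaller than n over 3 can be made concrete with a small Berlekamp-Welch implementation over a prime field; this is my own sketch (the field size, helper names, and toy parameters are assumptions, not from the talk).

```python
P = 257  # small prime field, purely illustrative

def solve_mod_p(A, b):
    """Gaussian elimination mod P; returns one solution (free variables = 0)."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], P - 2, P)
        M[r] = [v * inv % P for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(v - f * w) % P for v, w in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

def poly_divmod(num, den):
    """Polynomial division mod P (den monic); coefficients in increasing degree."""
    num, d = num[:], len(den) - 1
    q = [0] * max(len(num) - d, 1)
    for i in range(len(num) - 1, d - 1, -1):
        c = num[i] % P
        if c:
            q[i - d] = c
            for j, dc in enumerate(den):
                num[i - d + j] = (num[i - d + j] - c * dc) % P
    return q, [c % P for c in num[:d]]

def berlekamp_welch(xs, ys, deg, e):
    """Recover a degree-<=deg polynomial from points of which <= e are wrong.
    Needs len(xs) >= deg + 2*e + 1 (e.g. n = 3t+1 shares, deg = t, e = t)."""
    # Unknowns: Q of degree <= deg+e and a monic error locator E of degree e,
    # with one linear equation Q(x_i) = y_i * E(x_i) per point.
    A, b = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, P) for j in range(deg + e + 1)]   # Q coefficients
        row += [-y * pow(x, j, P) % P for j in range(e)]   # lower E coefficients
        A.append(row)
        b.append(y * pow(x, e, P) % P)                     # monic term of E
    sol = solve_mod_p(A, b)
    Q, E = sol[:deg + e + 1], sol[deg + e + 1:] + [1]
    f, rem = poly_divmod(Q, E)                             # f = Q / E, exactly
    assert all(c == 0 for c in rem)
    return f
```

With n = 7 = 3t+1 evaluations of a degree-t = 2 polynomial, any t = 2 corrupted values are corrected and the constant coefficient, the secret, comes back out.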
Okay, so the security of the MAC then guarantees that an incorrect share s_i will not be consistent with the authentication keys, except with small probability epsilon. Now, there are different choices for such message authentication codes. Common to all of them is that, as is easy to see, if you want an error probability of 2 to the minus k, then the size of the tags and keys must be at least k bits. Now, because every share comes along with n keys and n tags, and every key and tag consists of k bits, we get this overhead of order k times n. Okay, the reconstruction of the Rabin and Ben-Or scheme: the idea of the reconstruction is to try to filter out the bad shares and use the good shares with Lagrange interpolation to get the original secret back. Specifically, every share is accepted if and only if it is approved by at least t plus 1 players, meaning it is consistent with the authentication keys of at least t plus 1 players, and then the accepted shares are used with Lagrange interpolation to compute the secret. Now, it's easy to see that this way the shares of the good players will get accepted, and the bad shares of the dishonest players will be rejected with high probability. Okay, so that's how the Rabin and Ben-Or scheme works. Now I'm going to show you how our new scheme works, and you have to watch very carefully to see what the difference is. Did you see it? There is no difference. So the sharing phase of our new scheme looks exactly the same as the sharing phase of the Rabin and Ben-Or scheme. The only difference is that we use smaller keys and smaller tags. So we reduce the size of the keys and tags essentially by a factor n, and this gives us immediately the claimed savings.
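The kind of information-theoretic MAC involved here can be illustrated with the standard one-time polynomial MAC; this is my illustration, the talk does not fix a specific construction. The key is a random pair (a, b) in a field of roughly 2^k elements, the tag of a message m is a·m + b, and a forged tag for a different message verifies with probability exactly 1/|F|, so keys and tags of about k bits give error probability about 2^-k.

```python
import secrets

P = 2**61 - 1  # field of size ~2^61, so roughly 61-bit security (illustrative)

def mac_keygen():
    """One-time MAC key: a random pair (a, b) over the field."""
    return (secrets.randbelow(P), secrets.randbelow(P))

def mac_tag(key, msg):
    a, b = key
    return (a * msg + b) % P

def mac_verify(key, msg, tag):
    return mac_tag(key, msg) == tag
```

Changing the message by some delta shifts the correct tag by a·delta, which is uniformly random from the forger's point of view; in the scheme, player j holds the key and player i holds the tag on his own share.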
Now, of course, if I reduce the sizes of the keys and tags, I weaken the security of the MAC, and indeed, in this new scheme, bad shares may be approved by some of the honest players with noticeable probability, and then you see that the Rabin and Ben-Or reconstruction fails. So in order to overcome that, we have to come up with a new, better reconstruction procedure, which more carefully inspects the consistency graph that describes which player approves which share. Now, to illustrate how our new reconstruction procedure works, I'm going to discuss an example situation that could occur during the reconstruction. Say that the first share s_1 is approved by all n players; then most likely it is correct, so we're going to accept it. Now let's say the second share s_2 is approved, meaning it is consistent with the authentication keys, by players 1 up to t plus 1, but it is not approved by the remaining t players. This could be because the remaining t players are dishonest, so we still have to accept that share. Now say that the third share s_3 is approved by only t players. This means there is at least one honest player that has not approved that share, and hence player 3 must be dishonest, so we're going to reject that share. Now, the important thing to note is that now that we've identified player 3 as a cheater, we can actually conclude that player 2 must also be a cheater: the second share was approved by only t plus 1 players, but in the meantime we've realized that one of these players is actually dishonest. So there must be one honest player that does not approve the second share, and therefore player 2 must also be dishonest.
Now, the Rabin and Ben-Or reconstruction does not take this reasoning into account: once a share is accepted, that's a done deal, it stays accepted. Our new reconstruction procedure does take this reasoning into account and reconsiders accepted shares once we have gained new information on players being dishonest. In other words, the difference between the two reconstruction procedures is as follows: the Rabin and Ben-Or reconstruction accepts every share that is approved by t plus 1 players; in our reconstruction, we accept every share that is approved by t plus 1 players with accepted shares, and on top of that, we then use Reed-Solomon decoding on the accepted shares. Formally, the reconstruction looks like this. Things to note are: we maintain a set of so-called good players, which to start with consists of all the players; when deciding whether to accept a share or not, we only count the votes of the players in the set Good; once we realize that a player is dishonest, we kick him out of this set Good and we restart deciding which shares to accept and which ones to reject; and, as I said before, we do Reed-Solomon decoding on the shares of the players that end up in this set Good. Now, the main theorem says that if the MAC is epsilon-secure, then our scheme is delta-robust, where delta is bounded by this expression here. The important thing is that delta is not of order epsilon, as is the case in the Rabin and Ben-Or scheme, but of order epsilon raised to some power that is linear in n. This means that we can save a factor n in the size of the keys and tags and still get an exponentially small error probability. Okay, so as you could see, our new scheme is a very simple and rather natural adaptation of the Rabin and Ben-Or scheme. However, proving its security, so proving this theorem here, turns out to be quite non-trivial, and there are two reasons that make the proof tricky. One reason is that it's not clear what the optimal strategy
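The checking loop just described, maintaining the set Good, counting only votes from inside it, and restarting after every removal, can be sketched as follows; this is my own rendering of the procedure from the talk, and the final Reed-Solomon decoding on the surviving shares is omitted.

```python
def checking_phase(approval, n, t):
    """approval[j][i] is True if player j's key approves player i's share.
    Returns the set Good of players whose shares go into Reed-Solomon decoding."""
    good = set(range(n))
    restart = True
    while restart:
        restart = False
        for i in sorted(good):
            votes = sum(1 for j in good if approval[j][i])
            if votes < t + 1:      # not enough support from players in Good
                good.remove(i)     # player i is identified as dishonest
                restart = True     # vote counts changed: re-check everything
                break
    return good
```

Honest players always approve each other's correct shares, so the t plus 1 honest players can never be removed; the test below replays the s_2/s_3 example, where rejecting one share triggers the rejection of another.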
is for the dishonest players. In the Rabin and Ben-Or scheme, it's quite easy to see that the optimal strategy for the dishonest players is to hand in an incorrect share for every dishonest player. Here, in our scheme, it may actually be advantageous for some dishonest players to hand in a correct share, because such a, what I call, passive cheater is guaranteed to stay in this set Good, and therefore he can support, meaning vote for, the bad shares of his fellow dishonest players. So the more such passive cheaters there are, the easier it gets for the bad shares to get accepted, because they need fewer votes from the honest players; but on the other hand, it also means that more bad shares need to survive in order to fool the Reed-Solomon decoding. So there is some trade-off, and it's not clear where the optimum is. The other thing that makes the proof tricky is that there are circular dependencies: whether a bad share s_i gets accepted or not depends on whether the other bad shares get accepted or not, because it depends on whether it gets votes from these bad players or not, and vice versa. This means you cannot individually analyze the probabilities for the bad shares to get accepted and apply a union bound, as you do in the Rabin and Ben-Or analysis; if you try, you run into a circularity. So the proof in the end is going to look like this, at least that's the proof that we came up with. First, some notation: I write A and P for the sets of active and passive cheaters, I write H for the set of honest players, and I write S for the set of players that survive the checking phase of the reconstruction procedure, so this S is the set of players whose shares go into the Reed-Solomon decoding; I use boldface notation for S to indicate that I'm going to treat it as a random variable. Now, because of the Reed-Solomon decoding, it's easy to see that the error probability is given by the probability that more active cheaters survive than
passive cheaters, because otherwise the Reed-Solomon decoding is going to take care of the few bad shares that survive. This also means that we may assume that there are more active than passive cheaters to start with, so the number of active cheaters is more than t over 2. Okay, now we're going to compute, or bound, this error probability as follows. First, we write this probability as the sum of the probabilities that exactly L active cheaters survive, where L ranges over the appropriate range. Now we pretty much have to write out what this probability is, and if you think about how our scheme works: exactly L active cheaters survive if there exists a subset of size L of the active cheaters so that every member of that subset gets sufficient support from the honest players. If you do it carefully, you get this lengthy expression here. Now, the thing we have control over is the last part of the expression: we know that the probability that a bad share gets approved by an honest player is upper bounded by epsilon. In order to make use of this bound, we have to strip off the quantifiers in front of this expression. We can strip off the existential quantifiers by using a union bound, and we can strip off the for-all quantifiers by noting that the corresponding events are independent. There is another level of existential and for-all quantifiers, and we end up with some nasty sum involving binomial coefficients. So now we have to get our hands dirty: use some clever rewriting, some clever bounding, make use of the lower bound on the number of active cheaters that we have, and in the end we get the claimed bound. Of course, these last three steps are non-trivial; if you want to see the details, you have to look at the paper. Okay, so summarizing: we show the first robust secret sharing scheme for n equals 2t plus 1 with a small overhead in share size, an overhead of order k plus n rather than k times n, as was
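At the level of detail given in the talk (the precise expression and the final bound are in the paper), the bounding strategy can be written schematically as follows; the notation below is my paraphrase, not the slide's exact formula.

```latex
\Pr[\mathrm{fail}]
  \;=\; \Pr\bigl[\,|A \cap \mathbf{S}| > |P|\,\bigr]
  \;=\; \sum_{\ell > |P|} \Pr\bigl[\,|A \cap \mathbf{S}| = \ell\,\bigr],
\qquad
\Pr\bigl[\,|A \cap \mathbf{S}| = \ell\,\bigr]
  \;\le\; \underbrace{\binom{|A|}{\ell}}_{\text{union bound over the }\exists\text{ subset}}
  \;\cdot \prod_{\text{subset members}}
  \Pr\bigl[\text{this bad share gets enough honest approvals}\bigr]
```

Each factor in the product is then bounded via the independence of the honest players' verifications, with each single approval happening with probability at most epsilon, and after the rewriting and bounding steps this yields a delta of order epsilon raised to a power linear in n.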
the case in the Rabin and Ben-Or scheme, and with efficient sharing and reconstruction procedures. The scheme is a simple and natural adaptation of the Rabin and Ben-Or scheme, but the proof turns out to be non-standard and non-trivial. Now, it's still not clear whether you can squeeze the overhead down to the proven lower bound, which would be of order k. So far we have two schemes that get close to it, but both feature a gap to the proven lower bound that is linear in n, actually for different technical reasons, so it's not clear whether this gap is inherent or not. That's what I wanted to tell you. Question: is the reconstruction linear, quadratic...? Could you repeat the question, please? Question: the complexity of the reconstruction phase: you go through all the n shares, and whenever you find a dishonest player you start again, so you get something quadratic in n? I haven't looked into what the exact running time is, but I guess probably n squared plus n times k, or something like that; nothing huge. So the question was asking for more detail on the computational complexity, and, yeah, I haven't looked into optimizing things, but my guess would be around quadratic running time. Any further questions? Okay, thank you.