OK. So the last talk of the session is "Essentially Optimal Robust Secret Sharing with Maximal Corruptions" by Allison Bishop, Valerio Pastro, Rajmohan Rajaraman, and Daniel Wichs, and Valerio will give the talk. Thanks for the intro. So I'll talk about robust secret sharing against as many corrupt players as possible. So what is a secret sharing scheme? I'm going to talk about threshold secret sharing. A secret sharing scheme is a pair of algorithms. One algorithm, the sharing algorithm, takes the secret and outputs a bunch of shares. The other algorithm, the reconstruction algorithm, takes the shares and reconstructs the secret. We want two properties. One of them is t-privacy, meaning that any set of t shares reveals no information about the secret. And we also want (t+1)-reconstruction, meaning that as soon as you add one more share, you get the secret back; in other words, t+1 shares completely determine the secret. The textbook example is Shamir's secret sharing. In order to share, you sample a uniform polynomial of degree t which evaluates to the secret at the origin, and the shares are nothing but the evaluations of this polynomial at public points. In order to reconstruct, you can do polynomial interpolation or Reed-Solomon decoding. What about robust secret sharing? You just add the extra property that I'm going to describe now. Suppose that t shares are selected by an adversary, who can then corrupt them arbitrarily. Even in this scenario, we want the reconstruction to be correct, up to some error probability. So what is known about robust secret sharing? Let's say that k is the security parameter, so the error delta is 2^(-k); the secret size is m, and n is the number of players. Here I'm going to plot the share size against the number of corruptions. It is known that with an honest majority you can have robust secret sharing; with more than half corruptions, it's not possible.
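The Shamir scheme just described can be sketched as follows. This is an illustrative toy over a small prime field, not the construction from the paper; the modulus and the evaluation points 1..n are my choices for the example.

```python
import random

P = 2**31 - 1  # a Mersenne prime; the field GF(P), chosen here for illustration

def share(secret, t, n):
    """Sample a uniform degree-t polynomial f with f(0) = secret;
    share i is f(i) for the public points 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation
            y = (y * x + c) % P
        return y
    return [f(i) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from t+1 (x, f(x)) pairs."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t+1 of the n shares suffice: `reconstruct(list(zip([1, 2, 3], share(42, 2, 5)[:3])))` recovers 42, while any 2 shares alone are uniformly distributed.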
Since we have a threshold structure, it is not possible to have a share size smaller than the secret size. If you think about it, the threshold property tells you that t shares give no information about the secret, but if you add one more, that one share will completely determine the secret. So that one share has to be at least as big as the secret. For positive results, we have that Shamir becomes a robust secret sharing scheme when the number of corruptions is small enough, because you can think of the shares as a Reed-Solomon encoding, and that supports up to a third of the positions being in error. So what happens beyond that? There is another positive result, by Rabin and Ben-Or. This result is for the case of maximal corruptions; it also extends to fewer corruptions, but I'll just focus on this case. It has an overhead of k times n in the share size. A negative result is a lower bound that says that in the case of maximal corruptions you must have an overhead of at least the security parameter k. Another positive result, by Cevallos, Fehr, Ostrovsky, and Rabani, is an elegant refinement of Rabin-Ben-Or that shrinks the overhead to O(k + n). And in between, this happens: you can combine a ramp secret sharing scheme by Cramer, Damgård, and others with Shamir, and you get a robust secret sharing scheme for a constant fraction of corruptions that achieves essentially optimal share size. So the only thing left is the maximal-corruption case, and what we do is essentially fill that gap: we have a scheme that achieves Õ(k) overhead. OK, so I'm going to focus a little bit on the Rabin-Ben-Or scheme, just to give you an idea of how it works, and then move to our scheme. The Rabin-Ben-Or scheme uses two building blocks: Shamir secret sharing to achieve privacy, and message authentication codes (MACs) to achieve robustness. OK, here, just a notational thing: if I have a key and you have a message-tag pair that verifies under my key, I'm going to draw a green arrow between me and you.
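One way to instantiate the MAC building block just mentioned is the standard one-time polynomial MAC over a field; this particular instantiation is my illustration, not necessarily the exact MAC used by either scheme.

```python
import random

P = 2**31 - 1  # prime field modulus, chosen for illustration

def mac_keygen():
    """One-time MAC key (a, b), uniform in GF(P)^2."""
    return (random.randrange(P), random.randrange(P))

def mac_tag(key, m):
    """Tag for field element m under key (a, b): a*m + b mod P.
    An adversary who must forge without seeing the key succeeds
    only with probability about 1/P per guess."""
    a, b = key
    return (a * m + b) % P

def mac_verify(key, m, tag):
    """Green edge iff the (message, tag) pair verifies under the key."""
    return mac_tag(key, m) == tag
```

A valid pair verifies, and changing the message invalidates the tag unless the forger gets lucky with `a`, which is why a corrupted share is caught by honest key holders with overwhelming probability.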
If your message-tag pair is invalid under my key, I'm going to draw a red arrow. So just keep that in mind. As I said, Rabin-Ben-Or uses Shamir first, and then this is what happens for authentication. Let's focus on two players, i and j. We sample a key z_ji for player j, and we give player i a tag, under this key, on player i's share. It means that player j authenticates player i's share. We do this for all j and for all i. OK, so this is how you share. How do you reconstruct? There are going to be three types of players. Honest players: there are t+1 of them, and they don't touch their shares, so their shares are exactly how the dealer defined them. The corrupt players are split into two: those who don't touch their Shamir share, and those who instead corrupt it. So let's see how the authentication graph is colored. Edges from honest players to honest players are all green, of course, so if I'm an honest player, I'll have at least t incoming good edges. If I'm one of the players who corrupted their share, which I call active, well, all the honest players are going to accuse me, so I'll have at most t-1 incoming good edges, namely the ones that come from my other corrupt friends. If I'm the last type of player, which I call passive, I'll get at least t incoming good edges, because my Shamir share is valid. So one way to reconstruct is: let's find the subset H ∪ P, the honest and passive players, and use the Shamir portion to reconstruct. If you can identify H ∪ P, you're in good shape. So how do you do that? Think about it as a graph game: you're given a graph with these coloring rules, and you need to find H ∪ P. One way to do that is to select exactly those players that have at least t incoming good edges. That's it. So the high-level idea is that Shamir's secret sharing gives you privacy, the authentication graph gives you robustness, and essentially you're compiling all this into a graph game.
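The selection rule just described — keep exactly the players with at least t good incoming edges — can be sketched like this. The adjacency-matrix representation of the colored graph is my own assumption for illustration.

```python
def recover_h_union_p(n, t, good_edge):
    """good_edge[j][i] is True iff player j's key verifies player i's
    (share, tag) pair, i.e. there is a green edge j -> i.
    Honest and passive players have at least t good incoming edges
    (from the t+1 honest players, minus themselves); active players
    have at most t-1 (only from their corrupt friends)."""
    selected = []
    for i in range(n):
        good_in = sum(1 for j in range(n) if j != i and good_edge[j][i])
        if good_in >= t:
            selected.append(i)
    return selected
```

On a toy instance with honest players {0, 1, 2}, passive player 3, and active player 4, the rule keeps exactly {0, 1, 2, 3}, from which standard Shamir reconstruction succeeds.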
In order to reconstruct, you solve the graph game to retrieve H ∪ P, and then use standard Shamir to reconstruct the secret. We're going to use a similar approach for our scheme, and in this talk I'll just talk about how to recover H ∪ P from the graph. There is a lot in the paper about how to define the graph, so please read the paper. Good. So how does this graph interact with the overhead? In the Rabin-Ben-Or case, the graph is complete and the MACs are long. Long MACs mean that the overhead depends linearly on k, and the complete graph means that the overhead depends linearly on n, because each vertex stores n edges. In the refinement by Cevallos and others, they managed to shrink the overhead by having short MACs; the graph is still complete, so you still see the n term. So what do we do? We use a sparse graph and also short MACs. OK, so what's our graph game? It looks like this: the dealer samples a graph, and the adversary adaptively corrupts players and decides which of the corrupted parties are active and which are passive. He then has the ability to change the outgoing edges from the active players arbitrarily; we call this property zero. What about the coloring properties? Honest and passive players authenticate each other. Honest players accuse active players, and active players accuse honest players. And notice that the edges going out from H are hidden from the adversary; that's another interesting and important property. OK, so let's play the game and retrieve H ∪ P. OK, I'm the adversary: I corrupt t players, and decide that these are active and these are passive. Property zero tells me I can change the outgoing edges from the active players, so I'll do that. Then the coloring properties kick in: honest and passive authenticate each other, honest accuse active, active accuse honest, and everything else that you see in black can be colored arbitrarily. So how do we recover H ∪ P? Well, it turns out that H ∪ P, in this setting, is the largest self-consistent set.
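The characterization of H ∪ P as the largest self-consistent set suggests a direct (inefficient) reconstruction: enumerate subsets from largest to smallest and return the first one whose members all authenticate each other. A brute-force sketch, exponential in n and for intuition only:

```python
from itertools import combinations

def largest_self_consistent(n, good_edge):
    """Return the largest set S of players such that every ordered
    pair (j, i) in S with j != i has a green edge j -> i.
    Exponential time: tries subsets from size n down to 1."""
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(good_edge[j][i] for j in S for i in S if j != i):
                return set(S)
    return set()
```

On the toy coloring with honest {0, 1, 2}, passive {3}, active {4}, any set containing both an honest and an active player has a red edge, so the maximum self-consistent set is exactly {0, 1, 2, 3} = H ∪ P.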
Self-consistent meaning that players in this set only authenticate each other. So essentially we have a reconstruction algorithm, more or less. But let's see why that's the case. Property one tells me that the set of honest and passive players is self-consistent, so that's promising. What about other sets that are as large? Well, they must contain some active players and some honest players, and the probability that such a set is self-consistent is bounded by the probability that the honest component of the set is disconnected from the active component of the set, by property two. You can evaluate that probability using the uniformity of the honest edges, and this quantity is small enough that it survives a union bound over all such sets. So H ∪ P is very likely the largest self-consistent set. Good. So we just solved the problem, essentially, except that the algorithm to find the maximum self-consistent set may be inefficient. How do you make it efficient? The way we do it is essentially by producing a very short list of candidate sets, and the way we achieve this short list is by using two algorithms that I call budget and bisection. The budget algorithm is effective if the number of passive players is large; the bisection algorithm is effective in the complementary case, where the number of passive players is small. Here is a very high-level idea of how to prune in the case of a large P. We define a budget of possible accusations for the active players, and we just reject everybody who is accused too much. Then, since A is small, active players are accused by more than half of their incoming edges; this is because more than half of the players are honest. And conversely, if you are an honest player, you don't get many accusations, because there are not too many active players (this is on average, by the way). So you can define a threshold that sits between these two quantities and use the gap to eliminate everybody who is accused more than the threshold.
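The thresholding idea behind the budget pruning might be sketched as follows. This is a much-simplified caricature under my own assumptions (a single fixed threshold of half, a plain adjacency matrix), not the paper's actual budget algorithm, which is more careful about how accusations are charged.

```python
def budget_prune(n, good_edge, threshold=0.5):
    """Simplified sketch: discard every player accused (red incoming
    edge) by at least a `threshold` fraction of the other players.
    When few players are active, active players are accused by all
    honest players and land above the threshold, while honest and
    passive players receive only the few active accusations."""
    survivors = []
    for i in range(n):
        accusations = sum(1 for j in range(n) if j != i and not good_edge[j][i])
        if accusations < threshold * (n - 1):
            survivors.append(i)
    return survivors
```

On the same toy instance, the single active player collects three accusations out of four incoming edges and is pruned, while everyone in H ∪ P survives.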
So if you're accused too much, you're out. That is, more or less, how the budget algorithm works. What about the other case, when there are just a small number of passive players? This is very interesting, I think. First of all, we forget about the bad edges. Then notice that the cut between honest players and corrupt players has just a few cross edges; this is by properties two and three. So you could hope for an algorithm that bisects the graph into two sets of about the same size along a small cut, and there is an actual algorithm, by Räcke, that finds a bisection with at most a logarithmic factor more cross edges than the minimum cut for sets of this size. This means that, since we have this property, the number of cross edges is bounded by the minimum cut times log n, and this can be set up to be a constant fraction. This means that, essentially, one of the two sets is mostly honest, and it turns out that it's honest enough that there is a simple method to expand it to the whole set H ∪ P. So essentially we are almost done: if you have the five properties that I told you about at the beginning, you just run the budget and bisection algorithms in parallel, get the short list of candidate sets, pick the maximum self-consistent set among them, and use Shamir to reconstruct from the Shamir portion. And this turns out to be essentially what we do for the graph part. There are tons of other questions that you may ask, for example how to define the graph, and the way we do it is by using list decoding, universal hashing, and private MACs. In particular, list decoding and universal hashing are used to define something called robust distributed storage, which really helps in defining the graph in a robust way. What's the size of the shares in this instantiation? If you do it naively, it turns out to have an overhead of k squared, but this can be optimized to Õ(k) by parallel repetition with lower security. So that's pretty much it. Thanks.
We have time for questions. Is the model here adaptive, or is it just static? It's adaptive. If you settle for non-efficient reconstruction, is there a way to get rid of the tilde? To get rid of the...? Of the polylog factors depending on k. Not that I am aware of. Yeah. Any other question? No question? You're using a random graph in order to do your analysis. I have a gut feeling that if you use a constant-degree expander, it might be a good substitute for the random graph, and it might give you some advantages. Have you considered looking at expander graphs? Yeah, we looked into them, but I think the properties are more or less the same for us, so it doesn't quite give you an advantage. I would say, if you pick a random graph, it's fine, and probably an expander works too. Most random graphs are expanders, but an expander might give you something provable, whereas in your case there is a certain probability that your construction will not work: you have some probability of error if you choose a random graph that has bad properties, and that's the main advantage I would like to get from the expander, guaranteed expansion properties. But the graph can't be public, so there has to be some randomization. You'd have to choose, from a sufficiently large collection of possible expanders, the one you want; then each one of them will have some good properties. I think that's probably a good point, yeah. OK, so let's thank the speaker again. So this concludes the session, and now there are some announcements. Yes, wait, before you leave, please. A quick announcement for the rump session. The rump session is taking place today. This is your wonderful rump session chair, together with me this year. The rump session will take place in this building. A small shift of time was announced yesterday because of the IACR board meeting, so please check the website for when exactly everything will take place. What is a rump session?
A rump session is about flash talks. Everybody can submit their own talk, and we have a submission system open, so please do submit your contribution. Flash talks means about two to three minutes each. There are some constraints that are advertised on the website, so please check the details there. It can be scientific, but it doesn't have to be; what's important is that it's entertaining for everybody. As I said, there are a couple of time constraints, check the website for those, and there are also some technical constraints, so have a look and come tonight. Thank you.