Hello, everyone. Welcome to this talk. I'm going to talk about constructing locally leakage resilient linear secret sharing schemes. This is joint work with Hemanta Maji, Anat Paskin-Cherniavsky, and Tom Suad. So in this talk, we're going to talk about locally leakage resilient secret sharing schemes. In the classical setting of secret sharing, you have a dealer that takes a secret S and samples N secret shares, S1, S2, ..., SN. The security notion guarantees that the joint distribution of any unauthorized set of secret shares is uncorrelated with the secret S. But what if the adversary leaks local information from every secret share? For example, it learns a bit Bi from every secret share Si. Is the joint distribution of the leakage correlated with the secret S or not? Locally leakage resilient secret sharing schemes ensure that the joint distribution of the leakage is uncorrelated with the secret S.

So why do we study leakage resilient secret sharing? It turns out that this is a very useful primitive and is connected to many other fields. For example, there is this fascinating problem of repairing an error-correcting code, where the objective is to learn minimal information from every secret share and still fully reconstruct the secret from this information. Intuitively, if you are able to reconstruct the secret S, then the secret sharing scheme is not leakage resilient. On the other hand, leakage resilience asks for a much stronger security guarantee: not only can you not reconstruct the secret, but the information you learn from every secret share actually reveals no information about the secret. Leakage resilient secret sharing has also been used as a building block to construct secure multi-party computation protocols that are resilient to local leakage attacks, and it has been used to build other primitives such as non-malleable secret sharing schemes.
So since the introduction of leakage resilient secret sharing, there have been two main research directions. The first is to construct new secret sharing schemes that are designed to be leakage resilient. We have a fascinating body of works that construct secret sharing schemes robust against many very sophisticated leakage families. However, these constructions usually incur a significant overhead and lose algebraic structure, such as linearity, and these crucial algebraic structures can be very important for the applications of secret sharing schemes. Therefore, there is another line of works that investigates the leakage resilience of prominent secret sharing schemes, such as Shamir secret sharing and additive secret sharing. Since applications of secret sharing usually use such prominent schemes, this line of work has significant impact on real-world implementations, and our work belongs to this line of research.

So let me set up the context for this work first. In this work, we consider the Massey secret sharing scheme corresponding to a random linear code. The Massey secret sharing scheme works as follows. Given any code C, which is just a subset of the vector space, you can construct a secret sharing scheme like this: to sample the secret shares of a secret S, you sample a random codeword from the code C, conditioned on its first coordinate being identical to the secret S. The rest of the coordinates are the corresponding secret shares. It turns out that if the code C is linear, then the Massey secret sharing scheme is also linear, and, conversely, every linear secret sharing scheme is the Massey secret sharing scheme corresponding to some linear code.
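To make the construction concrete, here is a small Python sketch of Massey secret sharing. It assumes, for simplicity, that the generator matrix is systematic in its first coordinate (the first row starts with 1, every other row starts with 0), which lets us condition the codeword's first coordinate on the secret by fixing the first message symbol; the tiny prime field and parameters are illustrative placeholders, not values from the talk.

```python
import random

P = 101  # a small prime field F_p, purely illustrative

def massey_share(G, secret):
    """Massey secret sharing for the linear code generated by the rows of G
    over F_p. Assumes G is systematic in coordinate 0 (row 0 starts with 1,
    all other rows start with 0), so the message (secret, r_1, ..., r_k)
    yields a random codeword whose first coordinate equals the secret."""
    msg = [secret % P] + [random.randrange(P) for _ in range(len(G) - 1)]
    codeword = [sum(m * row[j] for m, row in zip(msg, G)) % P
                for j in range(len(G[0]))]
    assert codeword[0] == secret % P
    return codeword[1:]  # the n secret shares

# Shamir sharing is the Massey scheme of a Reed-Solomon code: row i of G
# lists the powers x^i at the evaluation points 0, 1, ..., n.
n, k = 5, 2
G = [[pow(x, i, P) for x in range(n + 1)] for i in range(k + 1)]
shares = massey_share(G, 42)
```

The Reed-Solomon generator matrix above happens to be systematic at coordinate 0, which is why the same sketch also illustrates the Shamir example mentioned next.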
So for example, the well-known Shamir secret sharing scheme is nothing but the Massey secret sharing scheme corresponding to the Reed-Solomon code, and the additive secret sharing scheme corresponds to the parity code. So we consider the Massey secret sharing scheme corresponding to a random linear code. How do you sample a random linear code? You just sample the generator matrix G uniformly at random. Note that since we consider an exponentially large field F, when you sample a random matrix G over F, the resulting code is maximum distance separable (MDS) with overwhelming probability. And when the code generated by G happens to be MDS, the Massey secret sharing scheme corresponding to G is nothing but a threshold secret sharing scheme with N parties and reconstruction threshold K plus one, where K plus one is the dimension of the code.

Okay, so now we're ready to present our results. Let lambda be the security parameter, which represents the size of each secret share. Remember that every secret share is an element of the field F; therefore, the size of the field is roughly two to the lambda. And we assume M bits are leaked from every secret share. Our result is as follows. You pick any N to be the number of parties, you pick any K such that K plus one is the reconstruction threshold, and you let M be any constant. If you have the guarantee that K, roughly the reconstruction threshold, is greater than half of N, the number of parties, then the Massey secret sharing scheme corresponding to a random matrix G is M-bit locally leakage resilient, except with exponentially decaying probability. We do need the number of parties N to be less than the security parameter to ensure that G is MDS with high probability. As a representative example, we can set the reconstruction threshold to be one-third of lambda and the number of parties to be half of lambda.
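The MDS property mentioned here can be checked by brute force on toy parameters: a generator matrix G defines an MDS code exactly when every maximal minor is invertible. Here is a Python sketch under that standard characterization; the Mersenne prime and the tiny dimensions are placeholders standing in for the exponentially large field and the real parameters.

```python
import itertools
import random

P = 2**31 - 1  # a Mersenne prime standing in for an exponentially large field

def det_mod(M, p):
    """Determinant over F_p via Gaussian elimination."""
    M = [row[:] for row in M]
    n, d = len(M), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d = d * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)  # inverse by Fermat's little theorem
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(M[r][j] - f * M[c][j]) % p for j in range(n)]
    return d % p

def is_mds(G, p):
    """G generates an MDS code iff every (k+1) x (k+1) minor is invertible."""
    k1 = len(G)
    return all(det_mod([[row[j] for j in cols] for row in G], p)
               for cols in itertools.combinations(range(len(G[0])), k1))

k1, n1 = 3, 8  # dimension k+1 = 3, length n+1 = 8 (toy sizes)
G = [[random.randrange(P) for _ in range(n1)] for _ in range(k1)]
```

With these toy sizes there are only 56 minors to check, and a uniformly random G fails to be MDS with probability roughly 56/P, which matches the "overwhelming probability" claim in spirit.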
So our second result complements our first result: we show a bottleneck for the existing analytic approaches. In their seminal work, Benhamouda et al. introduced an innovative Fourier analytic approach, which is adopted by all existing works, including this one, to prove leakage resilience. We show that this approach is bound to fail when the reconstruction threshold is less than half of the number of parties. In detail, this approach uses a Fourier analytic expression as a proxy to upper bound the statistical distance. We consider the leakage function to be the indicator function of quadratic residuosity; that is, the leakage function outputs one if the corresponding secret share is a quadratic residue and zero otherwise. We prove that, for any linear secret sharing scheme, this Fourier analytic proxy is at least one for this leakage function. Remember that this proxy is used to upper bound the statistical distance; if it is at least one, it always gives an inconsequential bound. Due to this, the first result I just presented to you is actually the optimal result one can hope to prove using the existing technical approach, and proving leakage resilience when K is less than half of N requires significantly different ideas, even just for this one function, the indicator of quadratic residuosity. This question is well motivated because Shamir secret sharing based MPC protocols are multiplication friendly when the degree K is less than half of the number of parties. And in some ongoing works, we use a combinatorial argument to show that you can actually prove leakage resilience when K is less than half of N, if the leakage family L is small. Okay, so before I present the technical highlight, let me summarize some of the relevant prior works.
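For a concrete picture of this counterexample leakage, here is a Python sketch of the quadratic-residuosity indicator over a prime field, computed with Euler's criterion; the convention that the function outputs 0 on the zero element is my assumption for the sketch, not something fixed in the talk.

```python
def qr_leakage(share, p):
    """One-bit leakage: output 1 iff `share` is a nonzero quadratic residue
    modulo the odd prime p, via Euler's criterion:
    share^((p-1)/2) = 1 (mod p) for nonzero quadratic residues."""
    if share % p == 0:
        return 0  # convention for the zero element (assumption)
    return 1 if pow(share, (p - 1) // 2, p) == 1 else 0
```

For example, over F_7 the nonzero quadratic residues are exactly {1, 2, 4}, so this leakage function outputs 1 on those shares and 0 on {3, 5, 6}.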
So in the original work by Benhamouda et al., they show that given any MDS code G, the Massey secret sharing scheme corresponding to G is leakage resilient when M bits are leaked from every secret share, as long as K is greater than a constant fraction of N, where this constant fraction depends on the number of bits leaked from every secret share. Roughly, when one bit is leaked from every secret share, you need K greater than 0.85 times N, and this fraction increases as the number of leaked bits increases. In particular, the Shamir secret sharing scheme is one-bit leakage resilient if K is greater than 0.85 times N. To compare their work with ours: their construction is deterministic, because their proof works for any fixed MDS code G, and they prove leakage resilience as long as K is greater than delta_M times N, where delta_M is at least 0.85; our construction is randomized, with exponentially small failure probability, and we prove that K need only be at least half of N.

In another recent work of ours, we consider the Shamir secret sharing scheme with randomly chosen evaluation places, and we consider a severely restricted leakage model that we call physical-bit leakage. In this model, every secret share is stored in its natural binary representation, and the leakage function can learn the bits stored at specified locations. For this restricted leakage family, we show that, with overwhelming probability, Shamir secret sharing with randomly chosen evaluation places is leakage resilient even if the reconstruction threshold is only two, the number of parties is any polynomial in the security parameter, and the number of bits leaked from every secret share is an arbitrary constant M. Note that this work also employs the existing Fourier analytic approach, and it proves leakage resilience even when K is less than half of N.
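To contrast physical-bit leakage with general one-bit leakage, the restricted model can be sketched in a few lines of Python; indexing positions from the least-significant bit is my choice for the sketch.

```python
def physical_bit_leakage(share, positions):
    """Physical-bit leakage: each share is stored in its natural binary
    representation, and the adversary reads only the bits at the specified
    positions (position 0 = least-significant bit)."""
    return tuple((share >> i) & 1 for i in positions)
```

The point of the restriction is visible here: such a function can only read stored bits, so a predicate like quadratic residuosity, which depends on all bits of the share in an algebraic way, is not expressible in this family.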
However, this result does not contradict the bottleneck we presented, because it only considers physical-bit leakage. In particular, the counterexample we raised, testing whether a field element is a quadratic residue or not, cannot be computed through physical-bit leakage. To put this work into perspective: it also considers a randomized construction, but the code generator matrix is sampled not fully at random but from a structured distribution, and it considers a severely restricted leakage family, namely physical-bit leakage, but it proves a very strong result: the reconstruction threshold can be a constant as low as two, with an unbounded number of parties.

Okay, so now let me give you a very brief technical overview. Recall that we want to prove that the Massey secret sharing scheme corresponding to a random matrix is leakage resilient as long as K is greater than half of N. The major challenge in this result is that a typical union bound does not work. For example, consider this very straightforward proof strategy: you fix a leakage function L, you prove that most codes are secure against this L, and then you union bound over all the possible choices of the leakage function L, and that gives you the result. However, this won't work. Why? Because the total number of leakage functions is very large. For example, let's assume that you are learning one bit from every secret share. Then each leakage function L has the field F as its domain and {0, 1} as its range, so the number of leakage functions for every secret share is two to the power |F|, and the total number of leakage functions over N shares is the N-th power of this. The important point is that the number of leakage functions is doubly exponential, because the field size is singly exponential.
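This counting gap is easy to verify in log scale; the concrete parameter choices below are illustrative placeholders, and I count constructions via a (K+1) x (N+1) generator matrix, matching the "roughly K by N" dimension from the talk.

```python
lam = 128              # security parameter: |F| is about 2^lam
n, k = 64, 33          # toy choices with k > n/2

field_size = 2 ** lam

# One-bit leakage per share: all functions F -> {0,1}, i.e. 2^|F| of them,
# so log2(#leakage functions per share) = |F|; n shares multiply the count:
log2_num_leakage_fns = n * field_size              # doubly exponential in lam

# Constructions are determined by a (k+1) x (n+1) generator matrix over F:
log2_num_constructions = (k + 1) * (n + 1) * lam   # singly exponential in lam

assert log2_num_leakage_fns > log2_num_constructions
```

Even at these toy sizes, the leakage-function count has about 2^134 bits in its exponent while the construction count has only a few hundred thousand, so a union bound over leakage functions is hopeless.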
In comparison, if you look at our family of constructions, a construction is solely determined by the generator matrix, which has dimension roughly K by N. So the number of constructions is |F| to the power K times N, which is singly exponential. The number of leakage functions is thus much, much larger than the family of constructions, so you cannot hope to use a union bound over the choices of L to prove leakage resilience. The technical novelty of our proof is that we identify a new set of tests. The proof chooses gamma, sigma, and A, which are constants. A test is specified by a product space V, which is V1 cross V2 cross ... cross VN, where every Vi is of a constant size gamma. We say a codeword C is bad, or fails this test V, if a large fraction of its coordinates fall inside the corresponding Vi; that is, if the set of indices i where Ci is in Vi is too large, we say this codeword is bad. And we say that a code G passes the test if only a few codewords, fewer than A to the power N for an appropriately chosen constant A, are bad.
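In code, the test described here might look like the following Python sketch; the concrete values of sigma (the "large fraction" threshold) and the bad-codeword budget are placeholders for the constants chosen in the actual analysis.

```python
def is_bad(codeword, V, sigma):
    """A codeword fails the test V = V_1 x ... x V_n if at least a sigma
    fraction of its coordinates c_i land inside the corresponding V_i."""
    hits = sum(1 for c_i, V_i in zip(codeword, V) if c_i in V_i)
    return hits >= sigma * len(codeword)

def passes_test(codewords, V, sigma, budget):
    """A code passes the test if fewer than `budget` of its codewords
    (roughly A^n in the talk) are bad."""
    return sum(1 for c in codewords if is_bad(c, V, sigma)) < budget

# Toy usage: n = 4 coordinates, each V_i of constant size gamma = 2.
V = [{0, 1}, {0, 1}, {0, 1}, {0, 1}]
```

Enumerating all codewords as `passes_test` does is of course only feasible at toy scale; in the proof this quantity is bounded analytically, not computed.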
So to give you some intuition about this test: if you fix leakage functions L1, L2, ..., LN, where Li is the leakage function on the i-th share, then Vi will be the set of field elements that have a large Fourier coefficient corresponding to Li. And if a code passes all the tests V, we prove that it is leakage resilient against all leakage functions. The intuition behind this is that, for any leakage functions, this code will have only a few codewords with many coordinates having large Fourier coefficients. This set of tests, this definition, is inspired by the literature on pseudo-randomness: intuitively, if an object has low correlation with all the Fourier characters, you can think of it as being pseudo-random. Here, we are saying that a code G is pseudo-random if it does not have too many codewords that are not pseudo-random.

The benefit of defining this set of tests is that the number of tests is actually much smaller. Here's a simple counting argument: remember that a test is specified by a product space V where every Vi is of a constant size gamma, so the number of tests is roughly |F| to the power gamma times N, which is singly exponential; in comparison, the total number of leakage functions is doubly exponential. Given this set of tests, we prove leakage resilience as follows. First, you fix any test V and prove that most Gs pass this test; this is based on a combinatorial argument. Second, you use a union bound, thanks to the fact that the number of tests is small, to prove that most Gs pass all the tests. Third, for any code G that passes all the tests, we prove that G is leakage resilient; this step is based on the existing Fourier analytic argument introduced by Benhamouda et al., and this step is where we inherently require K to be greater than half of N.

Finally, let me say a few words about the bottleneck we proved. Recall that we show that the existing approach cannot prove leakage resilience when K is less than half of N, and in particular it cannot even prove leakage resilience for this single function, testing whether a secret share is a quadratic residue or not. If you are familiar with the Fourier analytic approach, the intuition is that the quadratic residue function is the function that maximizes the L1 norm of the Fourier coefficients; this is intuitively why it is the most devastating case for the current proof strategy. As I mentioned earlier, in some ongoing works we partially make progress on resolving this, using a purely combinatorial argument to prove that, for any small enough leakage family L, a random code G is leakage resilient against this L, where L could potentially contain this quadratic residue function. We also have other ongoing works that identify the optimal leakage attacks in appropriate settings. So with that, I would like to conclude my talk, and I will refer you to the full version of our paper for the additional subtleties in the proof. Thank you.