Hello. This talk is about secret sharing. We're going to cover some basic concepts in the field, and we're going to see the best secret-sharing scheme to date, with complexity 1.5 to the n. This is joint work with Benny Applebaum from Tel Aviv University.

To begin, let's introduce the concept: what is a secret-sharing scheme? We have a randomized dealer, which holds some secret s and wants to share it among n parties. It does so by sending one message, a share, to each party; in the example here we have six parties. We want every authorized coalition to be able to recover the dealer's secret from the messages it received, and we want every unauthorized coalition to learn nothing about the secret from the messages it received. We call the list of all authorized coalitions the access structure. Throughout the talk we'll think of access structures as Boolean functions, where every authorized set is mapped to one and every unauthorized set is mapped to zero. Notice that these functions are always monotone: if some set, here the set of parties one, two, and three, is authorized, then adding parties keeps it authorized, because the larger set only has more information. In this sense we can think of every access structure, or function, as the list of all of its minimal authorized sets. The other side of monotonicity concerns unauthorized sets: if a set is unauthorized, then all of its subsets are obviously also unauthorized, since they have less information. From this perspective, we can think of an access structure as the list of all of its maximal unauthorized sets.

Moving on after these preliminaries, the most known and studied access structure is the threshold one.
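To make these definitions concrete, here is a small Python sketch (not part of the talk; the access structure below is a made-up example) of an access structure as a monotone Boolean function given by its minimal authorized sets:

```python
from itertools import product

# Made-up access structure on 3 parties: minimal authorized sets {0,1} and {2}.
minimal_authorized = [{0, 1}, {2}]

def f(x):
    """The access structure as a Boolean function: 1 iff the coalition
    encoded by the bit-vector x contains some minimal authorized set."""
    coalition = {i for i, b in enumerate(x) if b}
    return int(any(m <= coalition for m in minimal_authorized))

def is_monotone(f, n):
    """Adding parties never turns an authorized coalition unauthorized."""
    return all(f(x) <= f(tuple(max(a, b) for a, b in zip(x, y)))
               for x in product((0, 1), repeat=n)
               for y in product((0, 1), repeat=n))
```

The brute-force monotonicity check simply verifies that the function value can only grow when more parties join a coalition.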
The T-out-of-n threshold function, or access structure, is the one where every subset of the parties of size at least T is authorized and every set of size at most T minus one is unauthorized. Threshold access structures were introduced by Shamir and Blakley in 1979, and since then they have had the interesting property that the shares are relatively small, only logarithmic in the number of parties.

This leads to a long-standing open problem, the one we're trying to make progress on here: what is the worst-case share size over all access structures? That is, how much storage do the parties need in such a protocol for an arbitrary access structure? The problem was first considered by Ito, Saito, and Nishizeki in 1987, who showed how to build a secret-sharing scheme for every access structure with share size 2 to the n. It took over 30 years to improve this result; the barrier was broken only by Liu and Vaikuntanathan three years ago, who showed how to reduce the exponent by just a little bit, suggesting that much more is probably possible. That inspired a line of work by Applebaum, Beimel, Farràs, myself, and Peter, this work included, in which we bring the exponent down to 0.585. As in many information-theoretic settings, there is a huge gap between the upper bound and the lower bound, and we don't know where the truth lies: it could be that some access structures require exponential share size, and it could also be that every access structure admits polynomial share size. There is, however, an important family of schemes for which this gap is much smaller: linear secret-sharing schemes, where every share must be a linear combination of the secret and the random elements. For linear schemes there is a much higher, also exponential, lower bound, so the gap is only between the constants in the exponents. We improve the state-of-the-art linear exponent as well, but only by a little bit.
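As a reminder of how threshold schemes work, here is a minimal sketch of Shamir's scheme over a prime field (an illustration, not the talk's construction; the field size and parameters are arbitrary choices):

```python
import random

P = 2**61 - 1  # a Mersenne prime; assumption: the secret fits in this field

def share(secret, t, n):
    """Shamir t-out-of-n sharing: a random degree-(t-1) polynomial with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return {i: poly(i) for i in range(1, n + 1)}  # party i holds poly(i)

def reconstruct(shares):
    """Lagrange interpolation at x=0 from any t shares (dict: party -> value)."""
    s = 0
    for i, yi in shares.items():
        num, den = 1, 1
        for j in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse of den
    return s
```

Any t of the n parties can interpolate the polynomial and recover the secret, while t-1 shares are consistent with every possible secret.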
The techniques we use here are perhaps more interesting than the improvement itself. We also have a variety of results on secret sharing for general access structures. In the rest of the talk we'll present how we get to the upper bounds, focusing mostly on the first one, and we'll mention the other results briefly. But first we need a bit of preliminaries: how to do secret sharing via formulas, the secret-sharing scheme of Liu and Vaikuntanathan, and slice functions, which are in the title of the paper and the talk.

We start with secret sharing via formulas. Secret sharing has very useful closure properties. The first is for OR gates: if we know how to share a secret according to two functions F1 and F2, then it is easy to share a secret according to F, the OR of both functions. We do it as follows: to share a secret s, we share it twice independently, once according to F1 and once according to F2. Then any set of parties authorized by F1 or by F2 can recover the secret, which is exactly the behavior we want. The second property is for AND gates, and it is quite similar. In this case, say the secret s is one bit; we share a uniformly random bit r as the secret according to F1, and r XOR s as the secret according to F2. Then a set can recover s only if it is authorized by F1 and also by F2. In both cases, the share size for F equals the sum of the share sizes for F1 and F2. We can apply this recursively, giving more complex decompositions of a function F into many functions. This idea runs through many of the known secret-sharing schemes today, including the scheme of Liu and Vaikuntanathan (LV), which we'll now talk about. Their scheme was based on two main ideas.
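The OR/AND closure properties turn directly into a scheme for any monotone formula; the following sketch (my illustration for one-bit secrets, with a hypothetical tuple encoding of formulas) shows the recursive dealing:

```python
import secrets

def share_formula(node, s):
    """Share a one-bit secret s along a monotone formula.
    node is ('var', party) | ('or', f1, f2) | ('and', f1, f2).
    Returns a dict: party -> list of share bits."""
    shares = {}
    def go(node, sec):
        kind = node[0]
        if kind == 'var':
            shares.setdefault(node[1], []).append(sec)   # leaf: hand the value over
        elif kind == 'or':
            go(node[1], sec)                              # share sec independently
            go(node[2], sec)                              # under both children
        else:  # 'and'
            r = secrets.randbits(1)                       # split sec into r and r^sec
            go(node[1], r)
            go(node[2], r ^ sec)
    go(node, s)
    return shares
```

For the formula (1 AND 2) OR 3, party 3 recovers the secret alone, while parties 1 and 2 recover it by XORing their shares, and party 1 or 2 alone sees only a uniform bit.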
The first idea was a very efficient secret-sharing scheme for slice functions, which we'll define in the next slide; it had sub-exponential share size for any slice function. The second was a formula of size less than 2 to the n whose leaves are these slice functions, which have efficient secret-sharing implementations. We present this because we will soon do similar tricks with other kinds of functions.

First let's define slice functions; many of you have probably met them before. The k-slice function is one which outputs one on every input of weight larger than k, zero on every input of weight smaller than k, and is unrestricted on inputs of weight exactly k. We'll represent slice functions by the rhombus on the right, and generally represent functions graphically like this from now on: every vertical layer represents the sets of a certain size, green means those sets are mapped to one, and red means they are mapped to zero.

Now we finally get to some new ideas. We saw the formula of LV; the constant in the exponent, representing the size of the formula, is amazingly below one, but still quite high. We offer a different approach, in which we define functions that are similar to slices, with some important variations. These functions will be much harder to implement in secret sharing: their share size will not be sub-exponential, and the highest exponent for any such slice will be 1.5 to the n. But it will be very easy to compose these special slices; the size of the formula will be linear in n. These special slices actually have a simpler definition than you might expect: for upslices, they are monotone k-DNFs, which means they are functions all of whose minterms are of a specific size k.
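As a quick sanity check of the definition, a k-slice can be written down directly (a sketch; `f` here is an arbitrary Boolean function that only matters on weight-k inputs):

```python
def k_slice(f, k):
    """The k-slice of f: outputs 1 above weight k, 0 below weight k,
    and agrees with f on inputs of weight exactly k."""
    def g(x):                     # x is a tuple of bits
        w = sum(x)
        if w > k:
            return 1
        if w < k:
            return 0
        return f(x)
    return g
```

For example, the 2-slice of 3-bit majority is majority itself, since majority is already determined this way outside the middle layer.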
So here is the definition. If we take a function F, its k-upslice is simply the function whose minterms are the minterms of F of size exactly k. On inputs of weight below k the output is zero; on inputs of weight exactly k the k-upslice equals F; and everything above that is determined only by what happens in the k-th layer. We represent upslices by this shape, where the two bottom parts are exactly like a slice and the upper part is different. This definition is useful because any function F can be decomposed into its n upslices with one OR gate of fan-in n. Analogously, we define downslices, which are monotone k-CNFs, functions all of whose maxterms are of size exactly k. The k-downslice of a function F is the function with exactly the maxterms of F of size k. Analogously to before, the top two parts are identical to a slice function, and the behavior on small sets is determined by what happens in the k-th layer; here F can be decomposed into its n downslices with one AND gate of fan-in n.

So how do we actually do anything with these upslices and downslices? Basic schemes for both can be derived from our work with Applebaum, Beimel, and Peter from last year, giving the share sizes shown here, which may be easier to understand via this graph representation. We'll focus on downslices, where the constant in the exponent increases as the parameter of the slice increases. If we generated a secret-sharing scheme for every access structure from this directly, the exponent we'd get in the end would be n, exactly because these high downslices cost too much. So what we do in our paper is reduce the cost of these high downslices, and then compose all the downslices of a given function.
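The decomposition of F into its upslices can be checked by brute force on small inputs; this sketch (my illustration, exponential-time and only meant for tiny n) builds each k-upslice from the weight-k minterms and ORs them together:

```python
from itertools import product

def upslice(f, k, n):
    """The k-upslice of monotone f: keeps only the minterms of size k.
    g(x) = 1 iff x contains a weight-k input y with f(y) = 1."""
    layer = [y for y in product((0, 1), repeat=n) if sum(y) == k and f(y)]
    def g(x):
        return int(any(all(b <= a for a, b in zip(x, y)) for y in layer))
    return g

def or_of_upslices(f, n):
    """Recombine: F equals the OR of its upslices (one OR gate)."""
    slices = [upslice(f, k, n) for k in range(n + 1)]
    return lambda x: int(any(g(x) for g in slices))
```

For 3-bit majority, the 2-upslice already reproduces the function and the 3-upslice only accepts the all-ones input; their OR is majority again.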
We use downslices because for upslices the situation is even worse: all upslices from half of n downwards already have an exponent of n. What we'll do is reduce the exponent of these high downslices, realize all downslices of a function below half with the tools from the previous paper, and give a reduction from the higher downslices to the half downslice. That's the general idea of the new scheme.

We'll now show this reduction. The idea is as follows. Say we want to realize some high downslice F. We pick many functions F_T which are half downslices, choosing them so that the following two conditions are satisfied. The first is that every half downslice we choose is at least as large as the original function on every input, where the original function, I remind you, is a high downslice: if F(x) equals one, then F_T(x) must also equal one for every T. The second is that for every maxterm of F, there is some chosen function that equals F on this maxterm. Why do we want these two conditions? Because then the following equality holds: F is the AND of all of these functions, so we can decompose F with AND gates for secret sharing, as we've seen before, and we are done. Why are the two conditions enough for the equality to hold? If F(x) is one, then by the first condition F_T(x) is also one for every T, and the equality holds. If F(x) is zero, then x is contained in some maxterm, and by the second condition and monotonicity at least one function F_T also outputs zero on x; since this is an AND gate, the equality again holds. It is only left to see how we find and define these F_T's so that they satisfy the two properties. We'll show an example of this reduction from high downslices to the half downslice for a specific high downslice, so that the parameters make sense: we'll look at a specific 0.8n-downslice.
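In symbols, the two conditions and the resulting decomposition read as follows (my notation; M ranges over the maxterms of the high downslice F):

```latex
% Condition (1): every chosen F_T dominates the high downslice F.
\forall x \;\; \forall T : \quad F(x) \le F_T(x)

% Condition (2): every maxterm M of F is hit by some F_T.
\forall M \;\; \exists T : \quad F_T(M) = F(M) = 0

% Together (using monotonicity of the F_T's) they yield the AND-decomposition:
F \;=\; \bigwedge_{T} F_T
```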
The functions F_T we choose are defined as follows: every function F_T is defined by a set T of 0.6n parties. When F_T receives an input x, some of which lies in T and some of which doesn't, the function outputs whatever F outputs on the input that agrees with x on the complement of T but has all ones on the T part. For example, given a complicated input with many ones and zeros, F_T simply puts ones in the whole part that belongs to T and outputs whatever F would have output on this padded input. We can first easily see that every such F_T is at least as large as the original function: for every input it only increases the input, putting in more ones, and then answers as the original function would; since these functions are monotone, this means the output can only be larger. The key point is that if an input contains all of the bits of T, meaning it already has ones wherever the set T lies, then F_T answers exactly as F does on that input: the padding changes nothing, and it simply responds as F responds. That's the hint for how we satisfy the second condition, that for every maxterm at least one function must be equal to the original function. The way we do it is via combinatorial covers, an idea that was already used for secret-sharing schemes in the Eurocrypt paper of Applebaum et al. We pick many such functions, many such T's, so that every set x of size 0.8n, which in particular includes all of the maxterms, contains at least one of these T's. If this is the case, then for every maxterm, the F_T such that T is contained in that maxterm will equal F on it. Then the two conditions we talked about in the previous slide hold, and we can use the decomposition of F from the F_T's, copied here.
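Here is a toy verification of the whole reduction (my illustration with made-up small parameters: n = 5, a size-4 downslice in place of 0.8n, sets T of size 3 in place of 0.6n, and a trivial cover consisting of all 3-sets):

```python
from itertools import combinations, product

# Made-up small parameters: n=5 parties, a d=4 downslice, |T|=3.
n, d, t = 5, 4, 3
maxterms = [{0, 1, 2, 3}, {0, 1, 2, 4}]   # hypothetical maxterms, all of size d

def F(x):
    """The downslice: 0 exactly on subsets of its maxterms, 1 elsewhere."""
    ones = {i for i, b in enumerate(x) if b}
    return 0 if any(ones <= m for m in maxterms) else 1

def F_T(T):
    """Pad the T-coordinates with ones, then apply F."""
    return lambda x: F(tuple(1 if i in T else b for i, b in enumerate(x)))

# A trivial cover: all 3-sets, so every 4-set certainly contains one of them.
cover = [set(c) for c in combinations(range(n), t)]
fts = [F_T(T) for T in cover]

# The two conditions hold, so F equals the AND of the F_T's on every input.
ok = all(F(x) == min(g(x) for g in fts) for x in product((0, 1), repeat=n))
```

The `min` over the F_T values is exactly the AND gate of the decomposition, and `ok` confirms the equality on all 2^5 inputs.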
This gives a legitimate secret-sharing scheme for the high 0.8n-downslice F. Only one thing remains hidden: why are these F_T's half downslices, that is, why are they easier to implement than a general function? We claim that these F_T's are actually half downslices on a smaller input: they are 0.2n-downslices over 0.4n parties. The 0.4n part is the easier one to understand, since F_T really depends only on the smaller part of the input, as it always puts ones in the rest. And we can observe that all maxterms of any such F_T are of size 0.2n. Why? Say we have a set of ones in the complement of T of size larger than 0.2n. F_T answers on it as F would on these ones plus the 0.6n ones in T, and the total number of ones is then bigger than 0.8n, which is the downslice parameter of the original function; therefore the downslice always outputs one on such inputs. The same holds in our graphical representation: if the set of ones is too large, the function outputs one; sets of size exactly 0.2n in the input of F_T correspond to sets of size 0.8n in the original function, so they can be unrestricted, ones or zeros; and a set can only be zeroed out if it is contained in a maxterm of size 0.2n, which corresponds to 0.8n in the original function.

This actually sums up the entire idea. The share size of such a scheme for 0.8n-downslices is the number of 0.6n-sets required to cover all the sets of size 0.8n, times the cost of the secret-sharing schemes for the F_T's, which is the share size of downslices over 0.4n parties with downslice parameter 0.2n. And this gets us to the graph we've seen before.
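The cost computation just described can be summarized as follows (my notation, not the talk's; Cov(n, a, b) is the minimum number of b-sets needed so that every a-set contains one of them, and greedy covering designs achieve the binomial ratio on the right up to polynomial factors):

```latex
\mathrm{cost}\bigl(0.8n\text{-downslice over } n \text{ parties}\bigr)
  \;\le\;
  \mathrm{Cov}(n,\,0.8n,\,0.6n)\,\cdot\,
  \mathrm{cost}\bigl(0.2n\text{-downslice over } 0.4n \text{ parties}\bigr)

% Covering designs: every 0.8n-set must contain one of the chosen 0.6n-sets.
\mathrm{Cov}(n,\,a,\,b) \;=\; \tilde{O}\!\left(\binom{n}{b} \Big/ \binom{a}{b}\right)
```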
We see here that the most expensive downslice is the two-thirds downslice, and it has the exponent we've already mentioned many times: 1.5 to the n, or 2 to the 0.585n.

Now we'll go at a faster pace through the other results presented in the paper, the first being the linear scheme we mentioned. In this case we had to beat the exponent of 0.77n by Applebaum et al., and if we try the exact same tricks as before, with the reduction from high downslices to low downslices, the numbers just don't add up: it gives a higher exponent than before. To get to 0.76n we had to use other tricks, which rely on properties of dual access structures, so we'll briefly define those. The dual of a function f is the function which, on an input x, looks at what f outputs on the complement of x and outputs the opposite. The verbal explanation might sound complicated, but a few examples will show that it's actually pretty simple. For example, the dual of a threshold function is also a threshold function, with a different parameter: if all sets of size at least T are authorized by f, then their complements, which are the sets of size smaller than n minus T plus one, are unauthorized by the dual. The same happens with the zeros, which turn into ones. Basically, looking at these shapes, to move to the dual function we just flip the shape over on its head and then replace every one with a zero and every zero with a one. We can also see that the dual of a slice function is a slice function with a different parameter, and that the dual of upslices is downslices: the zeros turn into ones, the unrestricted layer remains unrestricted in the dual, and everything that was induced upwards in the upslices of F is now induced downwards in the dual of f. It is an old open problem in secret sharing whether the optimal share sizes for an access structure and its dual are the same.
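The dual operation and the threshold example are easy to check in code (a sketch, brute force over all inputs for a small n):

```python
from itertools import product

def dual(f, n):
    """The dual access structure: f*(x) = 1 - f(complement of x)."""
    return lambda x: 1 - f(tuple(1 - b for b in x))

def threshold(t):
    return lambda x: int(sum(x) >= t)

# The dual of a t-out-of-n threshold is an (n-t+1)-out-of-n threshold.
n, t = 5, 2
f_star = dual(threshold(t), n)
same = all(f_star(x) == threshold(n - t + 1)(x)
           for x in product((0, 1), repeat=n))
```

Here the 2-out-of-5 threshold dualizes to the 4-out-of-5 threshold, matching the "flip the shape and swap zeros and ones" picture.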
The problem is resolved for linear secret-sharing schemes, and we can use this theorem to improve our linear schemes. What we do is realize upslices more efficiently via duality: we know that downslices can be realized more efficiently, so we move to the dual, which takes us to upslices, and realize them there. We then define a recursive formula for every function f over downslices, upslices, slices, and other types of functions, and the size of this formula is 2 to the 0.76n.

Another insight we had is that, for secret sharing of downslices and upslices, the average case is easier than the worst case. On the left you can see the expressions for the worst case and the average case for upslices, but again it is easier to understand by looking at the graph, and we see that the average case is much easier: for example, for the middle slice the exponent in the worst case is n, while in the average case it is n over two. A similar phenomenon exists for downslices, and we can show the same thing even for linear schemes; the graphs of course look a little different in each case. It should also be said that similar results were obtained by Beimel and Farràs for upslices with a constant k; here we give the full picture for every possible upslice and downslice.

The last thing we're going to talk about is the gap theorem, which is really nice because it connects all the things we've talked about throughout this presentation. It basically says that at least one of the following three gaps must hold. The first is the average-case to worst-case gap, which we just talked about. The second is the duality gap, which means that there exists a function with a better secret-sharing scheme than the optimal scheme for its dual. The third is the non-linearity gap, which means that general secret sharing has exponentially smaller share sizes than linear secret sharing.
We show that one of these three gaps must hold by showing that if the first two are ruled out, then the third one holds. Why is that? First, we just saw that if there is no gap between the average case and the worst case for upslices, then the upslice with the worst exponent has an exponent of n over two, and for every function we can compose all of its upslices into a scheme with an exponent of n over two. Then, if the duality gap is also ruled out, we can use more tricks, which we won't go into, to bring the exponent down a little further. And this already implies a gap between general secret sharing and linear secret sharing, since linear secret sharing has a lower bound of 2 to the 0.5n.

So this is basically it. We've talked about the upper bounds and the different types of schemes we show in the paper, but the main open questions remain open: can we achieve sub-exponential general secret sharing, and can we improve the lower bound? Making progress on either of these questions would be a great achievement, and we will, I think, try to do it. Hope to see you next time, and thanks for listening.