Hi, this talk is about computational shortcuts for information-theoretic PIR, and it is joint work with Yuval Ishai, Victor Kolobov, and Russell Lai.

Homomorphic secret sharing (HSS) is an information-theoretic analog of homomorphic encryption, and it has many use cases, including delegating computation and low-communication MPC. The most basic setting of HSS consists of a single client and k servers, where the client holds a secret input to a function f. Perhaps the function is too complicated for the client to compute on its own, so it applies a secret sharing that splits the input into k parts and has the servers compute instead. Each of the servers turns its input share into an output share by applying a local evaluation mapping that takes in a representation of the function being computed. At the end, they send all of these output shares back to the client, and the client applies a decoding procedure to recover the actual value. It is important to note that HSS is defined for function families, and by that we really mean families of representations of functions, because it is the representation that is actually used in the scheme.

Several important metrics of the scheme that we care about are the input share size alpha, the local evaluation time tau, and the output share size beta. For the scheme to be useful it has to be correct, meaning that at the end the client recovers the correct value with probability 1. By t-privacy we mean that any set of t shares among the k shares should information-theoretically hide the original secret; more formally, the distribution of that set of shares is the same regardless of the actual value of the secret. As in the case of computational homomorphic encryption, we require the output shares to be compact, meaning that their size should be small. This is needed to avoid trivial constructions where the servers simply append a representation of f to the share and postpone all the computation to the decoding step.

In this talk we focus on perfectly correct and t-private protocols. We also require the protocols to have additive reconstruction, where the decoding is a simple addition; this is an extreme form of compactness. It is useful because, for instance, if we have an HSS for a function f and one for a function g, we can obtain an HSS for f+g by simply telling the servers to locally add up their output shares.

Compared to the success in the computational world, where we have fully homomorphic encryption, there are very few known information-theoretic HSS schemes in the literature. First, we have HSS for linear functions, which follows from any linear secret sharing scheme. We also have HSS for functions represented by low-degree polynomials, which follows from the multiplicative property of secret sharing schemes such as Shamir or CNF sharing. But the number of servers in such schemes has to scale linearly with the degree, and this is not what we want, because we want to know what can be done with a constant number of servers. (By size here we mean the size of the representation of the function, which in our setting is usually correlated with the computational complexity of f.) Finally, HSS for general truth tables is known as private information retrieval. PIR is the setting where a client holds an input to a truth table and wishes to retrieve a single bit. Here we use capital N to denote the size of the truth table, which is 2 to the little n. There are roughly three generations of PIR protocols.
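Before going through those generations, here is a minimal sketch of the additive-reconstruction property just mentioned, assuming plain additive sharing over a prime field. In a real HSS the output shares would come from the servers' local evaluation; the modulus and values here are illustrative only.

```python
import random

P = 2**61 - 1  # a prime modulus; illustrative choice, not from the talk

def share(value, k):
    """Additively share a value among k servers: k random-looking parts
    that sum to the value mod P. Any k-1 of them reveal nothing."""
    parts = [random.randrange(P) for _ in range(k - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def decode(output_shares):
    """Additive reconstruction: decoding is a single modular sum."""
    return sum(output_shares) % P

# If server j holds output shares b_f[j] of f(x) and b_g[j] of g(x),
# then b_f[j] + b_g[j] is a valid output share of (f+g)(x):
# the servers add locally, with no interaction.
k = 3
b_f, b_g = share(10, k), share(32, k)   # stand-ins for f(x)=10, g(x)=32
b_sum = [(a + b) % P for a, b in zip(b_f, b_g)]
assert decode(b_sum) == 42
```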
The first generation encodes the truth table in a manner similar to a Reed-Muller code, and it achieves a share size of roughly N to the 1/k for k servers. The second generation is based on recursion and is slightly better, but the input share size is still exponential in n for a constant number of servers. The third-generation protocols are based on matching vectors, and they hugely improved the situation: the share size is now sub-exponential for the minimal number of 3 or 2 servers, depending on whether you also want the output share size to be short.

Now let's compare HSS with fully homomorphic encryption, to see why it is an interesting primitive in addition to being unconditionally secure. The main drawback of HSS is that it requires many shares and the servers must be non-colluding. Even worse, for slightly more complicated functions the share size is already super-polynomial, compared to the computational world, where we have FHE and the ciphertext size is polynomial in the security parameter and the input size. But HSS has many attractive features which FHE does not have. As we'll see on the next slide, concretely HSS has lightweight computation and communication, owing to the fact that no complex crypto operations are involved and there is no overhead in a security parameter. It also allows efficient and public decoding, where the decoding is often just a single addition over the output shares, and this allows easy extensions to settings with multiple clients and so on.

The concrete communication complexity of HSS for point functions is given in the following table. The communication complexity is worse asymptotically, as it is exponential, but concretely, for small domains, these schemes are competitive with computational counterparts such as FSS (function secret sharing), which in turn is built from one-way functions. In fact they are competitive for record sizes up to the millions and billions. Therefore it is worthwhile to see if we can further improve the efficiency of these schemes.

For any known PIR scheme, the evaluation time is linear in the size of the truth table. The state of the art is that the matching vector protocol implies a three-server HSS whose shares are sub-exponential for any function represented by its truth table. Such schemes are already interesting because the share size is independent of the computational complexity of the function, which overcomes the so-called circuit-size barrier in the information-theoretic regime. However, the local evaluation time is exponential regardless of the actual function being evaluated. It is natural to ask: can we make PIR evaluation faster given the structure of f? It can be shown that for unstructured f, exponential time is necessary. This motivates the notion of PIR shortcuts, meaning alternative ways to carry out the same input-output mapping as the local evaluation mapping, but in time substantially better than the naive exponential-time evaluation. If we can obtain PIR shortcuts, then this automatically gives non-trivial HSS for the corresponding function families.

Now let's take a look at our results. For the first-generation Reed-Muller PIRs, where the share size is N to the 1/k, we construct shortcuts for simple functions where the number of ones in the truth table is easy to count. Such functions include, for example, truth tables whose ones form a total of at most L contiguous segments. This implies HSS with better efficiency for those functions.
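To fix notation before going on, here is one way to phrase the shortcut notion in symbols. This is my paraphrase of what the talk states later ("linear in the input share size and in the representation size"); the precise definitions are in the paper. For a function f with representation size L, the naive evaluation in the k-server Reed-Muller scheme versus a strong shortcut is:

$$
\tau_{\mathrm{naive}}(f) \;=\; \Theta(N),
\qquad
\tau_{\mathrm{strong}}(f) \;=\; \tilde{O}(\alpha + L) \;=\; \tilde{O}\!\left(N^{1/k} + L\right).
$$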
(And by the way, there is not much hope of constructing HSS for those functions with better efficiency without actually giving a better PIR, if you want to improve the share size.) When we move to slightly more complicated functions, where the ones in the truth table are hard to count, we run into fine-grained complexity hardness. In particular, any shortcut would imply a better counting algorithm for CNF formulas, which would falsify the strong exponential time hypothesis (SETH), a standard assumption in fine-grained complexity.

The situation is, perhaps surprisingly, very different when we consider matching vector protocols. We can show that even for the all-ones function, the computation cannot be sped up to sub-exponential time unless the ETH fails. So here the hardness arises from the structure of the matching vector protocol, and in particular from the structure of the underlying combinatorial object, the matching vector family, and not from the function being evaluated, as it did in the first case. This result builds on the hardness of graph counting problems. Finally, we present possible ways to circumvent the hardness of matching vector protocols. These, however, do not actually give shortcuts, because we are rebuilding the protocols, and this comes at significant costs, such as increasing the number of servers.

Let's take a closer look at the positive results. Here we use the three-server Reed-Muller protocol with square-root-of-N communication as an example. The first family for which we have shortcuts are functions whose truth table consists of a bounded union of contiguous segments of ones. For such functions we obtain so-called strong shortcuts, which are virtually the best one can hope for: linear in the input share size and also linear in the representation size. Here is an example of such a union of segments. Segment functions are useful because they can encode multiple comparison relations at once, and their generalization to higher dimensions is straightforward; here is an example of a union of four 2D intervals in a square.

To understand how the shortcut works, let's look at how the evaluation map is defined in the three-server Reed-Muller PIR. The server treats the truth table as a square of side square root of N and is given two vector variables, each of length square root of N. It has to compute the following degree-2 polynomial, where each monomial corresponds to the entry at row i1 and column i2 and is multiplied by an extra coefficient, namely the value of the function at that point. At the end it has to sum over every entry, giving an evaluation time of capital N.

So for which functions can we speed up this computation? It is natural to consider combinatorial rectangles, because the monomials range over exactly such a product shape, and in fact we can factor the polynomial if the ones form a combinatorial rectangle. If we compute according to this second expression, by summing first and then multiplying, this takes square-root-of-N time, which is a huge improvement over the original capital-N time. We can generalize this to a union of disjoint combinatorial rectangles, because the HSS is additive, and as the rectangles are disjoint, they do not interfere with each other when we add them up. Perhaps surprisingly, this basic observation gives rise to all of our shortcuts for Reed-Muller PIR. 2D intervals are evidently combinatorial rectangles, so there are shortcuts that run in time square root of N times L.
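Concretely, the server's work is the bilinear form $\sum_{i_1,i_2} f(i_1,i_2)\, y^{(1)}_{i_1} y^{(2)}_{i_2}$, and when the ones of $f$ form a rectangle $A \times B$ it factors as $\big(\sum_{i_1\in A} y^{(1)}_{i_1}\big)\big(\sum_{i_2\in B} y^{(2)}_{i_2}\big)$. The following sketch just checks this algebra: the modulus is an illustrative choice, and random vectors stand in for the servers' actual query shares.

```python
import numpy as np

p = 97  # illustrative small prime; the real scheme fixes its own field

def eval_naive(F, y1, y2):
    """Naive evaluation of sum_{i1,i2} F[i1,i2]*y1[i1]*y2[i2]:
    Theta(N) work for a sqrt(N) x sqrt(N) truth table F."""
    return int(y1 @ F @ y2) % p

def eval_rect_union(rects, y1, y2):
    """Shortcut when the ones of F are a disjoint union of combinatorial
    rectangles A x B: each rectangle contributes
    (sum_{i1 in A} y1[i1]) * (sum_{i2 in B} y2[i2]),
    so L rectangles cost O(L * sqrt(N)) instead of Theta(N)."""
    total = 0
    for A, B in rects:
        total += sum(int(y1[i]) for i in A) * sum(int(y2[j]) for j in B)
    return total % p

# Sanity check on a 4 x 4 truth table (N = 16), two disjoint rectangles.
rng = np.random.default_rng(0)
y1, y2 = rng.integers(0, p, 4), rng.integers(0, p, 4)
rects = [(range(0, 2), range(1, 3)), (range(3, 4), range(0, 1))]
F = np.zeros((4, 4), dtype=np.int64)
for A, B in rects:
    F[np.ix_(list(A), list(B))] = 1
assert eval_naive(F, y1, y2) == eval_rect_union(rects, y1, y2)
```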
A further observation is that we can improve this square-root-of-N-times-L complexity to square root of N plus L, turning the multiplication into an addition, because the summations now range over contiguous intervals: we can first take prefix sums over the vector variables and then answer each of these range summations in constant time. If we consider one-dimensional intervals, they are first mapped to two-dimensional intervals in the square, and each one-dimensional interval maps to at most three 2D intervals (a partial first row, a block of full rows, and a partial last row). Therefore a strong shortcut for intervals automatically gives a strong shortcut for segments. When we generalize this approach to more servers, our shortcuts work for high-dimensional intervals only when the dimension divides the number of servers minus one; it is an interesting but perhaps challenging open problem to understand whether we can get shortcuts for any dimension with a constant number of servers.

The second family of functions has more of a computational flavor. We show that for any decision tree over the variables with L leaves, we have shortcuts. The shortcuts that we obtain, despite being much better than the naive evaluation, are not as strong as in the previous case, so we call them weak. A decision tree is a computational model with a tree structure: each internal node tells you to go either left or right depending on the value of the variable queried at that node, and at the end it outputs either one or zero. Every decision tree can be converted into a disjoint DNF formula, in which the terms are pairwise disjoint, meaning that no two terms can be simultaneously satisfied by a single assignment to the variables. That our shortcuts work for any such formula follows easily from combinatorial rectangles, since each conjunctive term maps to a combinatorial rectangle in the truth table, and the positions of the ones are disjoint by the assumption that the DNF terms are disjoint. Therefore we have a shortcut that runs in square root of N times L. By further exploiting the structure of these summation ranges, which are specified by the DNF terms, we can improve the complexity in two ways: in the first improvement we reduce the multiplicative constant by roughly a cube root, and in the second we do more preprocessing and bring the constant down to one. In either case, though, these are not strong shortcuts. So an interesting problem is how to retrieve these summations specified by conjunctive terms faster; this has a data-structure and algorithms flavor.

In the paper we also consider geometric families such as convex shapes and approximations thereof, and these find applications such as privately deciding whether a client is within reach of a list of locations. As a brief summary of our positive results, we have shown the existence of shortcuts for several simple but useful function families. The next natural question is: what about other functions? Which functions specifically admit shortcuts in Reed-Muller PIR? This remains an intriguing open problem, and we are able to answer it partially. In the case of Reed-Muller schemes, one cannot hope to get shortcuts for functions whose satisfying assignments are difficult to count. This is because any shortcut for a function implies counting the number of satisfying assignments in its truth table in roughly the same time complexity: running the evaluation on the all-ones vectors outputs exactly the number of ones, modulo the field characteristic.
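Stepping back to the interval shortcut for a moment, here is a minimal sketch of the prefix-sum refinement described above, with the same illustrative modulus; the interval endpoints are hypothetical, chosen only to exercise the code.

```python
import numpy as np

p = 97  # same illustrative prime as before

def prefix_sums(y):
    """P[t] = y[0] + ... + y[t-1] (mod p), so any contiguous range sum
    y[a] + ... + y[b-1] becomes P[b] - P[a], answered in O(1)."""
    P = [0]
    for v in y:
        P.append((P[-1] + int(v)) % p)
    return P

def eval_interval_union(intervals, y1, y2):
    """Shortcut for a disjoint union of L two-dimensional intervals
    [a1,b1) x [a2,b2): O(sqrt(N)) prefix-sum preprocessing, then O(1)
    per interval, for O(sqrt(N) + L) in total."""
    P1, P2 = prefix_sums(y1), prefix_sums(y2)
    total = 0
    for (a1, b1), (a2, b2) in intervals:
        total += (P1[b1] - P1[a1]) * (P2[b2] - P2[a2])
    return total % p

# Cross-check against the direct entry-by-entry summation.
rng = np.random.default_rng(1)
y1, y2 = rng.integers(0, p, 4), rng.integers(0, p, 4)
intervals = [((0, 2), (1, 3)), ((3, 4), (0, 1))]
direct = sum(int(y1[i]) * int(y2[j])
             for (a1, b1), (a2, b2) in intervals
             for i in range(a1, b1) for j in range(a2, b2))
assert eval_interval_union(intervals, y1, y2) == direct % p
```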
Returning to the hardness: by the exponential time hypothesis and its strong variant, counting the number of satisfying assignments of a CNF formula, or of a general DNF formula, cannot be done in sub-exponential time. So we obtain the following corollaries. Assuming the exponential time hypothesis, for a large number of servers there is no strong shortcut for non-disjoint, general DNF formulas; and if the strong variant of the exponential time hypothesis is true, then for any number of servers no weak shortcuts exist at all for such DNF formulas. Note that this does not contradict our positive results for disjoint DNF formulas, because for disjoint DNF formulas the number of satisfying assignments is indeed easy to count.

Now that we have a rough understanding of the landscape of shortcuts in Reed-Muller schemes, let's move on to the state-of-the-art matching vector schemes. Our negative result for matching vector schemes can be summarized as follows: performing the required computation, even for the all-ones function, is impossible in sub-exponential time unless the ETH fails. (Note that in this scheme the input share size is sub-exponential.) Also note that our result only applies to a specific instantiation of the matching vector family. Why do we care about the all-ones function? It tells us that it is hopeless to obtain shortcuts like those in the Reed-Muller case, because this trivial function is a special case of all the function families we considered. This impossibility result signifies that the hardness comes from the structure of the matching vectors rather than from the function being evaluated. Also, by the required computation we mean this formula here, which ranges over capital N entries: this specific input-output mapping has to be carried out. In other words, we are ruling out shortcuts for the original protocol, and the servers cannot cheat by computing another function; otherwise the all-ones truth table would have a trivial HSS.

We prove our hardness by reducing from the fine-grained complexity of the induced subgraph counting problem, which is parameterized by a graph property and a size parameter. The input consists of a graph with u nodes, and you want to know the number of induced subgraphs with W nodes that satisfy the graph property; you only care about the parity, hence the parity sign at the beginning. In our setting we are interested in the predicate that tests whether the number of edges in the subgraph is congruent to delta modulo m. As an example, look at this 8-vertex graph: we want to know how many 4-node induced subgraphs have an even number of edges. The naive algorithm is to enumerate every 4-tuple in the graph, which runs in roughly u to the 4 time. A result from parameterized complexity states that for many non-trivial predicates this problem cannot be solved in time u to the little-o of W unless the ETH fails; that is, if the ETH is true, then the naive algorithm cannot be improved substantially. For our purposes we have to refine the analysis behind this theorem, and we show, for the specific predicate that tests whether the number of edges is congruent to a fixed value modulo 511, and assuming the ETH, that the problem is already hard for some parameter W of this order. This is enough to imply the hardness for Grolmusz's instantiation of the matching vector family. However, it also makes our result a bit restricted, because it only applies to Grolmusz's instantiation.
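To make the counting problem concrete, here is the naive brute-force algorithm that these hardness results say is essentially optimal. The graph here is random, since the slide's example graph is not reproduced in the transcript.

```python
import random
from itertools import combinations

def induced_count_parity(adj, W, m, delta):
    """Parity of the number of W-node induced subgraphs whose edge
    count is congruent to delta mod m, by brute force over all
    W-subsets of vertices -- roughly u^W time on a u-node graph."""
    u = len(adj)
    count = 0
    for S in combinations(range(u), W):
        edges = sum(adj[a][b] for a, b in combinations(S, 2))
        count += (edges % m == delta)
    return count % 2

# The talk's example shape: an 8-vertex graph, counting 4-node induced
# subgraphs with an even number of edges (m = 2, delta = 0).
random.seed(0)
u = 8
adj = [[0] * u for _ in range(u)]
for a, b in combinations(range(u), 2):
    adj[a][b] = adj[b][a] = random.randint(0, 1)
print(induced_count_parity(adj, W=4, m=2, delta=0))
```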
But there is actually not much reason to believe that the situation is different for more general instantiations of matching vectors. We therefore make the following conjecture, which essentially states that for any reasonable choice of m, delta, and W, the problem cannot be solved in time much better than the naive algorithm. It is an open problem whether this conjecture is true, and there is some partial evidence from parameterized complexity supporting it. For our purposes we are mostly interested in the case where m and delta are chosen to be 511 and 0 and W is on the order of the square root of r, because this corresponds to the regime of the matching vector family with the best asymptotic complexity.

Finally, let's briefly describe how we can modify the matching vector protocols to allow shortcuts. These are not actually shortcuts of the original schemes, because we have to change the protocols themselves, but they may be of independent interest, as the methods apply to a broad class of PIRs. First, we observe that there is a tensoring operation implicit in Reed-Muller PIRs, which composes a PIR with itself d times to create a hypercube structure in the computation. We have to increase the number of servers in this setting, and we can look at the matching vector protocol as an example: there is a blow-up in the number of servers, but the input share size is still sub-exponential. Now we can support virtually all the shortcuts that we had in the Reed-Muller setting, in the same complexity. The evaluation time for general functions is still exponential, not sub-exponential, because, as we have shown, the matching vector structure is hard to utilize; what we are utilizing here is the structure present in the Reed-Muller case instead.

Secondly, we introduce the technique of parallel composition of PIRs, which enables us to evaluate special forms of DNF formulas. The crucial observation here is that each conjunctive term of a DNF formula corresponds to a point function on a restricted domain, and point functions can be evaluated super fast in the PIR scheme. Therefore the client can set up multiple PIR queries corresponding to the different restricted domains, and the function can be quickly evaluated if there are not too many restricted domains. This is already powerful enough to express segments, but not decision trees; for those we also need to incorporate randomness and sacrifice perfect correctness, but then the computation indeed has sub-exponential complexity.

To summarize, we studied the notion of shortcuts in PIR, and we showed that they are possible in the Reed-Muller setting for simple but useful function families. These results are clean and all follow from a single shortcut for rectangles. But when we move to harder functions or more complicated schemes, the fine-grained complexity of counting comes into play. The hardness arises for different reasons in the two cases, but both can be based on the SETH or the ETH. We also proposed tensoring and parallel composition as ways to obtain shortcut alternatives. For more concrete complexity comparisons and extensions, please refer to the paper. Thank you for joining our presentation.