of-Squares Meets Program Obfuscation, by Boaz Barak, Zvika Brakerski, Ilan Komargodski, and Pravesh Kothari, and Ilan is going to give the talk.

Hi, thanks for having me. I'm Ilan, and I'll talk about limits on low-degree pseudorandom generators. This is joint work with Boaz, Zvika, and Pravesh. Even though this talk is in the obfuscation session, I will barely talk about obfuscation; I won't even define it. I'll mostly talk about pseudorandom generators.

So what is a pseudorandom generator, or in short, a PRG? A PRG is a function that expands n bits to m bits. We will denote it by G, so G goes from n bits to m bits. It's convenient for this talk to think about each output bit as its own function: G_i is the function that gets the n input bits and outputs a single bit, 0 or 1. So G_i maps n bits to one bit, G maps n bits to m bits, and G is a collection of m functions G_i. The pseudorandomness of the PRG, the security of this primitive, is defined by saying that any computationally bounded adversary cannot distinguish G applied to a random seed from a truly uniform string of length m. This is the security definition.

It's a basic primitive in cryptography. It's the building block in constructing pseudorandom functions via the GGM construction, and we also know that, assuming one-way functions exist, we can build a PRG with arbitrary polynomial stretch by the HILL construction. So this is really one of the basic cryptographic primitives that's used everywhere. A natural question, given that this is such a basic primitive, is: how simple can this primitive be? What do I mean by simple? There are many ways to define simplicity of a primitive. Specifically for a PRG, one way is to ask for the size of the circuit that computes it; another is the depth of the circuit; and there may be more. One central notion of simplicity that was considered in the past two decades is something called locality.
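As a warm-up for the locality discussion that follows, here is a minimal sketch of how a d-local PRG is evaluated: every output bit reads only the d input positions assigned to it. The index sets and the XOR-AND predicate below are invented for illustration; this is a toy, not a secure construction.

```python
def eval_local_prg(x, index_sets, predicate):
    """d-local PRG: output bit i reads only the d input positions
    listed in index_sets[i] and applies a fixed predicate to them."""
    return [predicate([x[j] for j in s]) for s in index_sets]

# Toy instance: n = 4 seed bits, m = 5 output bits, locality d = 3.
index_sets = [(0, 1, 2), (1, 2, 3), (0, 2, 3), (0, 1, 3), (1, 3, 0)]

def predicate(bits):
    # XOR-AND style predicate (invented for the demo, not secure).
    return bits[0] ^ (bits[1] & bits[2])

output = eval_local_prg([1, 0, 1, 1], index_sets, predicate)
```

A real candidate would of course use much larger n and carefully chosen predicates; the point is only that each output bit touches d = 3 of the inputs.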
What is a local pseudorandom generator? It's the same kind of function, mapping n bits to m bits, but it's simple in the following sense: each output bit, each function G_i, is a local function. G_i doesn't look at all n input bits; it looks only at d specific ones. So there is some mapping S_i that lists the indices relevant for computing the i-th output bit. This is a local pseudorandom generator.

What do we know about them? Do they exist? Maybe they don't. What are the trade-offs? Can we get an arbitrarily small d together with an arbitrarily large expansion? All of these questions are of central interest, both for practical reasons, because local PRGs are much easier to implement, and because they have lots of theoretical applications that we'll discuss.

We have positive and negative results about this primitive. The positive results are summarized as follows. First, if you assume that one-way functions exist in NC1, namely that there is a one-way function computable in bounded depth, then there is a pseudorandom generator with constant locality that expands n bits to n + n^epsilon bits, for some small constant epsilon. This is by Applebaum, Ishai, and Kushilevitz from 2006. The second result is not a real construction but a candidate, a very generic approach towards constructing local pseudorandom generators initiated by Goldreich in 2000. It's a family of assumptions, or candidates, saying that it's plausible that there is a function with only constant locality mapping n bits to some fixed polynomial, think of it as n squared, that is a PRG. Goldreich suggested that such a function may be one-way, but follow-up works suggested it might even be pseudorandom.

We also have negative results. The negative results are the following. If you want locality two, there's actually nothing you can do.
There's no PRG that expands even by a single bit. If you want locality three, Cryan and Miltersen showed that you can have only linear stretch: you can stretch n bits to maybe 2n bits, but not much more than that. For d equal to four the same holds, by Mossel, Shpilka, and Trevisan. And most importantly for us, for general d they gave an upper bound on how much a local PRG can stretch: it's at most something like n^(d/2). That's the best you can hope for from a local PRG with locality d.

As I said, this primitive has tons of applications; here are just three of them. One application is very efficient constructions of public-key encryption schemes, by Applebaum, Barak, and Wigderson in 2010. Later, local PRGs were also used to construct efficient MPC protocols. And most relevant to this work is the recent line of works constructing indistinguishability obfuscation (iO) from this primitive. I won't really tell you what iO is; all you need to know is that it's our dream. If we can get it, we will solve all of our problems and we can all go home.

So the theorem that Lin, and independently Ananth and Sahai, proved roughly two years ago is that this magical primitive that we all hope exists can be based on the following two assumptions. The first assumption is that there is a local pseudorandom generator with locality D that maps n bits to n^(1+epsilon) bits. The second assumption is another creature that we'll call degree-D multilinear maps. I will not define them; just remember that for D equal to two, this is what we all know and love, called bilinear maps.

Let's see the implications of this theorem. First, let's plug in D equal to two and see what happens. On the one hand, we have bilinear maps: we have candidates, we believe they're secure in some sense, we're fine. Unfortunately, as we just saw, there is no locality-two PRG with such stretch. What about D equal to three or four? The situation is even worse.
Not only is there no such PRG, we also don't have satisfying candidate three- or four-linear maps. So we'd have to go to D equal to five, but there we don't have candidate multilinear maps either, though we do have candidate PRGs. So the situation is not so good.

This is where you'd think things end, but luckily we have the beautiful work of Lin and Tessaro from last year showing the following result. It's actually the same theorem as the previous one, with an extra word: block-local, D-block-local. They show that iO exists based on the following two assumptions: again degree-D multilinear maps, and a PRG that stretches n bits to n^(1+epsilon) bits with the property that it is D-block-local.

What is block locality? It's the same as locality, except that now the inputs to the PRG are not bits; they come from a large alphabet Sigma. So the PRG maps Sigma^n to {0,1}^m. One convenient way to think about it: each block is b bits, so the alphabet has size 2^b, and block locality means that each output bit depends on only a few blocks, while inside a block it can do whatever it wants. In terms of plain bit locality this is pretty big: each output bit can read D times b input bits and compute an arbitrary function of them. What Lin and Tessaro need is a PRG with block locality two, or D in general, that maps n times b bits, which is the size of the input now, to 2^(3b) times n^(1+epsilon) bits. This is what they need for their construction to work. And their observation is that the lower bounds of Mossel-Shpilka-Trevisan and Cryan-Miltersen do not apply here; as far as was known when they wrote the paper, this primitive might exist even for D equal to two, which is kind of spectacular, because it would imply iO from bilinear maps.
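To make block locality concrete, here is a toy sketch of a 2-block-local PRG: blocks are b-bit symbols (encoded as integers below), each output bit names two blocks, and inside those blocks it may compute an arbitrary function of all 2b bits. The concrete predicate and block pairs are invented for the demo.

```python
B = 3  # block size b = 3 bits; alphabet size 2^b = 8

def eval_block_local_prg(blocks, block_pairs, predicate):
    """2-block-local: output bit i depends only on the two whole blocks
    named in block_pairs[i], but arbitrarily on the bits inside them."""
    return [predicate(blocks[j], blocks[k]) for j, k in block_pairs]

def predicate(u, v):
    # An arbitrary (invented) function of the 2b bits in the two blocks.
    return 1 if (u * v) % 7 < 3 else 0

blocks = [5, 1, 6, 3]                       # n = 4 blocks of b = 3 bits
assert all(0 <= v < 2 ** B for v in blocks)
block_pairs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
output = eval_block_local_prg(blocks, block_pairs, predicate)
```

Note that in plain bit locality each output here reads 2b = 6 input bits, which is why the bit-locality lower bounds mentioned above don't directly apply.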
Our results, in a nutshell, are that this primitive does not exist for D equal to two, so you need a different approach to construct iO from bilinear maps. We also have a possibly positive result: we believe this primitive actually exists for D equal to three, meaning that the only thing between us and iO is a trilinear map, which sounds not so hard.

Here are our results in slightly more detail. We have a couple of attacks, depending on the model. The strongest result, in the hardest model, is the first line of this table: if your PRG stretches n times b bits to 2^(2b) times n bits, then no matter what predicates you use, no matter what underlying graph you have, and no matter whether the predicates are different or equal, the PRG will be broken; we have an algorithm that distinguishes a random string from the output of the PRG. If you restrict the predicates to be the same on all output bits, so the G_i's are equal for every i, then we have an even better attack and rule out such PRGs even with stretch 2^b times n. And if the predicate and the graph are random, with possibly different predicates, then we can also rule out stretch 2^b times n. So those are the results. The first one already breaks the assumption of Lin and Tessaro; the others have better parameters, but we don't know how to get iO from that range of parameters. I'll come back to this in the summary.

As a bonus, we also give a very simple and appealing candidate for a three-block-local PRG with very small block size, just O(1), and polynomial stretch. You can look at the paper for more details about that.

These are our results. In the rest of this talk I'll give you an overview of how we achieve the first result, though with 2^(3b) rather than 2^(2b); this will be enough to show you the main ideas.
I should also mention that the second result was also obtained independently by Lombardi and Vaikuntanathan, at TCC last year.

So we're going to break a PRG, right? What's the security game for a PRG? There's an adversary that gets either G on a random seed or a uniform string, and it has to distinguish. What does it mean to distinguish? The probability that it outputs one on the image of the PRG should be noticeably different from the probability that it outputs one on a uniform string. This is the game, the pseudorandomness game.

We will actually do something even harder than that; we'll break it in a stronger sense, something we call image refutation. What is image refutation? We require the adversary, or rather this is what we'll actually achieve, to always output one on inputs Z from the image of the PRG. But if Z comes from the uniform distribution, it outputs one only with very small probability. You can think of it as a one-sided analog of pseudorandomness. This is stronger than distinguishing: on the image we always say one, while on a uniform string we almost never do, so the two acceptance probabilities differ noticeably. We actually certify that elements are outside the image. Another upside is that we can handle preprocessing of the inputs, exactly because we're doing image refutation. This is actually useful for Lin and Tessaro's paper, they actually used it, so it's another advantage for us.

So here's the proof idea. We work in two steps. In the first step, we take our local PRG and massage it to get sparse, low-degree polynomials, where we now think of each output as an algebraic polynomial and not as a Boolean function.
That's what we're going to do. Once we do that, we have a collection P_1, ..., P_m of polynomials, each of low degree and each sparse, with only s monomials, say. In the second step, we compute the following quantity. Given an input Z, the string we want to decide on, is it in the image of the PRG or just random, we compute

val = max over all inputs x of the sum over i of Z_i * P_i(x).

We're just going to maximize this sum over x. Let's assume for a second that we can do this in polynomial time; I'll explain how in a minute. A priori it seems to take exponential time, because we have to go over all x's, but we can actually do it in polynomial time. Assume we can do these two steps; I claim we're done. Why? If Z is in the image of the PRG, val is going to be large. If Z is not in the image, then with high probability val is going to be small. So we just look at val: if it's high we output one, if it's low we output zero. That's the algorithm, super simple.

Let's look at step two first. We're given an input Z and we need to decide whether val is large or small; let's see why this is enough. Assume the polynomials are sparse, because we said our transformation will result in sparse polynomials. First observation: if Z is indeed in the image of the PRG, there is some x that maps to it, right? That x satisfies the following: if you plug it in, P_i(x) equals Z_i for every i.
So the sum is just the sum of Z_i squared, which is m; here the Z_i are plus or minus one, just for simplicity, instead of writing bits in the exponent. So val will be m if Z is in the image of the PRG. If Z is not in the image but random, then for every fixed x we have a sum of random plus-minus-one signs times the values P_i(x): some terms are plus, some are minus, and most of them cancel out. Using concentration inequalities, a Chernoff-Hoeffding bound for non-Boolean random variables, together with a union bound over x, you can show that the sum is bounded by roughly the square root of n times s times m, where n is the input size, m is the output size, and s is the number of monomials, which bounds the largest value P_i can take. So you see: when Z is in the image of the PRG we get m, and when Z is random we get, with high probability, something like sqrt(n * s * m). This is enough for us, because if m is bigger than n times s, there is a gap, and we can just check val and decide. That's step number two.

How do we actually compute this maximization? This is where we use SOS, sum of squares, and we use it as a black box. There is a result of Charikar and Wirth saying that you can take any degree-two polynomial and efficiently approximate its maximum over the hypercube up to a logarithmic factor. This is a generic result. I'm not going to talk about the logarithmic factor; it just goes into the stretch of the PRG, and I'll ignore it for the rest of the talk. So that's how we do step two.

How do we do step one? Remember, what we're trying to do is take a block-local PRG and translate it into sparse polynomials of low algebraic degree.
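The two steps can be sketched end to end on a tiny instance. Here exhaustive search over x stands in for the Charikar-Wirth SDP approximation (which is what makes the real attack polynomial time); everything else, the plus-minus-one convention, the val statistic, and the threshold test, follows the description above. The concrete predicates are invented for the demo.

```python
import itertools

def refute(Z, polys, n, threshold):
    """Image refutation: accept (output 1) iff
    val = max_x sum_i Z_i * P_i(x) reaches the threshold.
    Brute force over {0,1}^n stands in for the Charikar-Wirth
    approximation used in the actual polynomial-time attack."""
    val = max(sum(z * p(x) for z, p in zip(Z, polys))
              for x in itertools.product((0, 1), repeat=n))
    return 1 if val >= threshold else 0

# m = 6 outputs in {-1, +1}, each a 2-local predicate of n = 3 seed bits.
xor = lambda a, b: (lambda x: 1 - 2 * (x[a] ^ x[b]))
land = lambda a, b: (lambda x: 1 - 2 * (x[a] & x[b]))
polys = [xor(0, 1), xor(1, 2), xor(0, 2), land(0, 1), land(1, 2), land(0, 2)]

seed = (1, 0, 1)
Z_image = [p(seed) for p in polys]    # in the image: val = m = 6, accepted
Z_off = [1, -1, 1, -1, 1, -1]        # not in the image: val stays small
```

On Z_image the maximizer recovers val = 6 = m, the sum of Z_i squared at the hidden seed, so the refuter accepts; on Z_off the best achievable value is strictly smaller, so it rejects.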
A priori, as I said, you might think this won't work, because if you take a two-block-local PRG, any two-block-local function, viewed as a polynomial over the input bits, can have large degree, as large as 2b, since it reads 2b bits. So what we're going to do is preprocess x; that's why I said preprocessing is important for us. We preprocess x in the following way: we take each block of size b and expand it out, tensor it out, defining a new input x' on 2^b times n variables, where each new variable is an indicator of which of the 2^b values that block of b bits took. So we take x, consisting of n blocks, expand it to x', of size 2^b times n, and work with this modified PRG G'. And now it's easy to observe that if you started with block locality L, you end up with algebraic degree at most L, and the number of monomials in each G_i is not too big, only 2^(2b) for L equal to two.

You might think there's a caveat: we preprocessed the input, so maybe the new G' is not really a PRG, because feeding G' a random input is not like applying G to a random input. But this is where our image refutation kicks in. The image of G', if you think about it for a second, is contained in the image of G, and our refuter always outputs one on strings in the image; only outside the image does it output zero, with high probability. This is exactly where we use the fact that we are doing image refutation.

So overall, plugging in the parameters: the gap condition was m bigger than n times s; here the sparsity s is 2^(2b) and the new input size is 2^b times n, so we break any such PRG whose output length m is at least 2^(3b) times n. Let me summarize, and leave you with two open questions.
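Step one, as just described, is mechanical enough to write down. Each b-bit block is tensored into 2^b indicator variables, exactly one of which is set; any predicate on L blocks then becomes a degree-L polynomial in the indicators with at most 2^(Lb) monomials, since P(v, w) = sum over pairs (u, u') of P(u, u') * ind(v = u) * ind(w = u'). A minimal sketch:

```python
def expand_blocks(blocks, b):
    """Tensor each block (an integer in [0, 2^b)) into 2^b indicator
    bits; exactly one indicator per block is 1, marking its value."""
    out = []
    for v in blocks:
        out.extend(1 if v == u else 0 for u in range(2 ** b))
    return out

# n = 3 blocks of b = 2 bits become 3 * 2^2 = 12 indicator variables.
x_prime = expand_blocks([2, 0, 3], b=2)
```

Only the image matters here: x_prime ranges over a strict subset of {0,1}^(2^b * n), which is exactly why image refutation, rather than plain distinguishing, is needed.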
So to summarize, it seems we ruled out the plausible approach of constructing iO from degree-two block-local PRGs. There are two very nice open questions. The first: we didn't get the tightest result we hoped for; we didn't rule out stretch 2^b times n in the worst-case model with different predicates. This is open, and it also leaves some room to play in constructions of iO, if you can construct iO from this smaller stretch. The second question, which I find really interesting, is to come up with new ways of constructing iO. It seems like we've almost exhausted one path; let's find new ideas.

Q: These objects that you come up with at the end, the polynomials that you said are maybe not a PRG, but such that the image of the PRG is included in their image: what are they exactly? Maybe they are interesting objects in themselves. Could they be used instead of a PRG? I don't know, do you have an idea?

A: Can you repeat the question?

Q: The polynomials you constructed at the end, you said they might not be a PRG.

A: Yes.

Q: But maybe they are interesting in themselves, for some other application.

A: Maybe.

Q: Okay, and my question is about this block-local PRG. Can the blocks overlap, for example? Would that make sense?

A: For the construction of iO we need real block locality, with non-overlapping blocks, exactly as I defined it. If you could do it from a relaxed notion of PRG with some overlap, that would be great. It depends how you define overlapping blocks, what overlapping means and how much they overlap. We didn't analyze it; I think it might actually be a good way to overcome our lower bounds.

Q: In terms of the original motivation for locality: locality was supposed to mean each output reads few inputs, for an efficient implementation of the PRG.
Q (continuing): Intuitively, we went from locality to D-block locality, and these become different assumptions. So I don't know whether this notion of D-block locality, for example, could be used in the same kinds of applications as normal locality.

A: It depends on the size of the block, so I don't know, but I assume it could.

Okay, another quick question. Yeah.