Hi, my name is Noam Mazor and this talk is about simple constructions from regular one-way functions. This is joint work with Jiapeng Zhang. One-way functions are the minimal assumption for complexity-based cryptography. We can use one-way functions to construct many primitives, including pseudorandom generators, pseudorandom functions, encryption, zero-knowledge proofs, universal one-way hash functions, and more. Yet the efficiency of these constructions does not allow us to implement them in practice. For efficiency, we often want to consider the seed or key length and the number of calls to the one-way function, and, if we can, we want to make the calls to the one-way function non-adaptive. In this work, we focus on PRG and universal one-way hash function constructions, for which there are gaps between the best lower and upper bounds.

Let's start by presenting the parameters of the best constructions of PRGs. The efficiency of a pseudorandom generator construction out of a one-way function can depend on the exact assumption we have on the structure of the function. For example, if we assume that the function is a permutation, we can use the Goldreich-Levin hardcore predicate to construct a PRG using one call and linear seed length. If we only assume that the function is regular, meaning that every image has the same number of pre-images, we can construct a pseudorandom generator using ω(1) calls and ω(n) seed length, by generalizing the above construction from permutations. However, in order to implement this construction, we need to know the regularity parameter r, the number of pre-images every image has. If we do not know this regularity parameter, we say that the function is unknown-regular, and in this case the best construction we have uses a linear number of adaptive calls to the one-way function and linear seed length.
This is due to a work of Haitner, Harnik, and Reingold that uses a method called the randomized iterate, introduced by Goldreich, Krawczyk, and Luby. Yu, Li, and Weng later improved the seed length by a log factor, with a minor cost in the call complexity, by using a different but similar approach that converts any unknown-regular function into a known-regular function on a different domain. Lastly, the seminal work of Håstad, Impagliazzo, Levin, and Luby showed that any one-way function can be used to construct pseudorandom generators. The parameters were improved by Haitner, Reingold, and Vadhan, and by Vadhan and Zheng, up to n^3 calls and n^3 seed length in the case that the calls are adaptive, or n^4 seed length in the case that the calls are non-adaptive.

Before moving on to the best constructions of universal one-way hash functions from one-way functions, let me remind you what a universal one-way hash function is. This primitive is a family of functions indexed by a key k such that the functions are shrinking, and yet it is hard to find a collision: for every x, with high probability over the key k, it is hard to find x' such that C_k(x) = C_k(x'). If we want to construct a universal one-way hash function out of a one-way function and we know that the function is a permutation, we can do it using one call to the function and a key of linear size. As in the PRG case, this generalizes to regular one-way functions. If the function is unknown-regular, the best construction we have uses n adaptive calls to the one-way function and a key of linear size. Interestingly, this is done with the same randomized-iterate method that was used in the best construction of PRGs. Lastly, Rompel proved that we can use any one-way function to construct universal one-way hash functions, and Haitner, Holenstein, Reingold, Vadhan, and Wee improved the parameters, up to n^7 key length and n^13 calls.
This is still far from the best constructions of PRGs, and the only construction we know is adaptive. Let's now see the best lower bounds we have. We have linear lower bounds on the number of calls for both PRG and universal one-way hash function black-box constructions from one-way functions. This should be compared with the best upper bounds we have, which are n^3 in the case of PRGs and n^13 in the case of universal one-way hash functions. Moreover, these lower bounds hold also if we make a stronger assumption on the one-way function, namely that it is an unknown-regular one-way function. In this case, we know that the lower bounds are tight, as they match the constructions based on the randomized-iterate method. These constructions, however, are adaptive. So we can ask whether this adaptivity is necessary. In other words, can we construct a PRG or a universal one-way hash function from an unknown-regular one-way function with a linear number of non-adaptive calls?

In this work, we give an answer to this question by showing non-adaptive constructions of both PRGs and universal one-way hash functions based on unknown-regular one-way functions. The constructions are tight with respect to the number of calls and are relatively simple, but the seed or key length is longer: the seed or key length in our constructions is n^2, which should be compared with the linear seed or key in the adaptive constructions from unknown-regular functions.

So let me show you the constructions. Let f be an unknown-regular one-way function, and let's start with the PRG construction. Let H be a two-universal hash family from 2n bits to n + log n bits. We need additional properties from this family, for example in order to use Goldreich-Levin, so think about random matrices. The PRG is defined as follows: it takes as input a description of a hash function h and t inputs to the one-way function, x1 to xt, and it outputs h.
Then, for every pair of inputs xi and x(i+1), it outputs h applied to (xi, f(x(i+1))). Notice that while the input of G contains t blocks of n bits each, the output of G contains only t - 1 blocks of n + log n bits each. This is the reason we need to take t large enough, larger than n / log n, which is tight with the lower bound.

Let's now see the universal one-way hash function construction. To construct a universal one-way hash function, it is enough to construct a single function that is shrinking and for which it is hard to find a collision for a random input. We are going to define such a function. This time, let H be a two-universal family from 2n bits to n - log n bits. The construction is very similar to the PRG one, except for two additions: the first is f(x1) at the beginning of the output, and the second is the last input xt at the end of the output. Again, in order to get a shrinking function, we need to take t large enough, larger than n / log n.

In the rest of this presentation, we are going to see the security proofs of the constructions. Before that, let me show you the main observation behind the constructions. Let f be an r-regular function from n bits to n bits. We know that given f(x), there are exactly r possible values for x; this is because of the regularity. But moreover, observe that there are exactly 2^n / r possible values for f(x); this is because there are 2^n possible inputs, and every output is mapped to by exactly r distinct inputs. So we can ask how many possible values there are for the pair (x1, f(x2)) given f(x1). Our main observation is that there are exactly r times 2^n / r possible values for this pair, which is exactly 2^n values. And this is cool because it means that r cancels out.
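To make the two constructions concrete, here is a minimal Python sketch. The toy parameters, the specific 2-regular function f, and the names prg, uowhf, and apply_h are my own illustration; the talk only fixes the structure (random-matrix hashing over GF(2) and the block layout of the outputs), not an implementation.

```python
import secrets
from collections import Counter

N = 8            # toy input length n; a real instantiation is much larger
T = 6            # number of one-way-function inputs; the talk takes t > n / log n
M_PRG = N + 3    # h output length for the PRG: n + log n (log2(8) = 3)
M_UOW = N - 3    # h output length for the UOWHF: n - log n

def f(x: int) -> int:
    # Toy stand-in for the one-way function f: {0,1}^n -> {0,1}^n.
    # x -> 157*x mod 2^n is a bijection; clearing the low bit makes it 2-regular.
    return ((x * 157) % (1 << N)) & ~1

def rand_matrix(rows: int, cols: int) -> list[int]:
    # A hash function h is a random rows x cols matrix over GF(2), one int per row.
    return [secrets.randbits(cols) for _ in range(rows)]

def apply_h(h: list[int], v: int) -> int:
    # Matrix-vector product over GF(2): output bit i is <row_i, v> mod 2.
    return sum((bin(row & v).count("1") & 1) << i for i, row in enumerate(h))

def prg(h: list[int], xs: list[int]) -> tuple:
    # G(h, x1..xt) = h, h(x1, f(x2)), h(x2, f(x3)), ..., h(x_{t-1}, f(x_t)).
    blocks = [apply_h(h, (xs[i] << N) | f(xs[i + 1])) for i in range(len(xs) - 1)]
    return (tuple(h), *blocks)

def uowhf(h: list[int], xs: list[int]) -> tuple:
    # C_h(x1..xt) = f(x1), h(x1, f(x2)), ..., h(x_{t-1}, f(x_t)), x_t.
    blocks = [apply_h(h, (xs[i] << N) | f(xs[i + 1])) for i in range(len(xs) - 1)]
    return (f(xs[0]), *blocks, xs[-1])
```

With this layout, G consumes t blocks of n bits and emits t - 1 blocks of n + log n bits (plus h, which is part of both input and output), so it stretches exactly when t > n / log n, matching the talk's parameter choice.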
This fact turns out to be very useful when considering unknown-regular functions. We are now ready to see the security proof of the PRG. Let H be an appropriate two-universal family and let G be the PRG we defined. Recall that we want to show that the output of G is indistinguishable from uniform. By a simple hybrid argument, it is enough to show that the last block of G looks uniform given the previous blocks. But the only dependency between the last block of G and the previous ones is h and f(x(t-1)). So we only need to show that given f(x(t-1)) and h, the output of h on (x(t-1), f(xt)) looks uniform. By the observation from the previous slide, given f(x(t-1)) there are exactly 2^n possible values for the pair (x(t-1), f(xt)). Thus the pair (x(t-1), f(xt)) has exactly n bits of min-entropy given f(x(t-1)), so we can extract n - O(log n) bits that are statistically close to uniform. That is, the first n - O(log n) bits of the output of h are statistically close to uniform. To show that the entire n + log n bits of the output of h look uniform, we need to extract an additional O(log n) bits. For that we can use Goldreich-Levin and get that the last O(log n) bits are pseudo-uniform as well. This concludes the proof: as we said, by a simple hybrid argument it follows that all the blocks are pseudo-uniform, and thus G is a PRG.

Let me now show you the security proof of the universal one-way hash function. This proof is a bit more technical. Let H be an appropriate two-universal family and let C be the function we defined. Recall that we want to show that it is hard to find a collision for C on a random input. As in the PRG security proof, the proof reduces to showing that it is hard to find a collision for one block of the output of C.
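The counting observation this proof leans on can be checked by brute force on a toy family of regular functions. The function make_regular_f below is my own toy example, not from the talk; it just makes r inputs share each image so we can count the pairs (x1, f(x2)) directly.

```python
N = 8                      # toy n; the domain is {0,1}^n
DOM = range(1 << N)

def make_regular_f(r: int):
    # Toy r-regular function on n bits: inputs are grouped into blocks of r,
    # and every block maps to one image (r must divide 2^n).
    return lambda x: (x // r) * r

for r in (1, 2, 4, 16):
    f = make_regular_f(r)
    images = {f(x) for x in DOM}
    assert len(images) == (1 << N) // r        # exactly 2^n / r possible images
    # Fix an image value f(x1) and count the pairs (x1, f(x2)) consistent with it.
    y1 = f(0)
    preimages = [x for x in DOM if f(x) == y1]  # exactly r candidates for x1
    assert len(preimages) == r
    pairs = {(x1, f(x2)) for x1 in preimages for x2 in DOM}
    assert len(pairs) == 1 << N                # r * (2^n / r) = 2^n: r cancels out
```

Whatever r is, the pair (x1, f(x2)) ranges over exactly 2^n values given f(x1), i.e. it carries exactly n bits of min-entropy, which is what lets the construction work without knowing r.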
That is, given (h, x1, x2), it is hard to find x1' and x2' such that f(x1) = f(x1') and h(x1, f(x2)) = h(x1', f(x2')). We call such x1' and x2' a collision for (h, x1, x2). We should also require that f(x2) is not equal to f(x2'), as otherwise it may be easy to find such a collision, but I am going to ignore this for this talk. Note that if we are able to show that it is hard to find a collision for (h, x1, x2), it follows that it is hard to find a collision for the next block as well. This is because, as it is hard to find a collision for f(x1) and h(x1, f(x2)), one can only find a collision for the part which ends in the next block, for which f(x2') is equal to f(x2). And so one needs to find a collision for the same function again, this time with x2 and x3. This is ruled out by the same argument, and so on for the other blocks. So we see that showing that it is hard to find a collision for (h, x1, x2) is enough in order to show that it is hard to find a collision for the entire function C.

We now turn to showing that it is hard to find a collision for (h, x1, x2). Assume toward a contradiction that there is an efficient algorithm A that, given (h, x1, x2), can find a valid collision x1', x2'. We are going to use such an algorithm in order to invert the one-way function, which is of course a contradiction. The idea is that given an image y that we want to invert, we can construct an input (h, x1, x2) for A for which one of the collisions is x1' together with a pre-image of y, for some x1'. As this collision contains a pre-image of y, A should not be able to find it. The next step is to show that the number of possible collisions is small, and therefore the probability that A outputs this specific collision, x1' and a pre-image of y, is noticeable. This is of course a contradiction.
So let's start by counting how many valid collisions there are for a random (h, x1, x2). By the observation we saw before, there are exactly 2^n possible values for the pairs (x1', f(x2')) such that f(x1') = f(x1). Since H is a two-universal hash family from 2n bits to n - log n bits, the probability that a random h maps (x1', f(x2')) to the same value as (x1, f(x2)) is n / 2^n. So we have 2^n possible values for the pair (x1', f(x2')) with f(x1') = f(x1), and for each such pair the probability that h maps it to the same value as (x1, f(x2)) is n / 2^n. Overall, on average, we have n such values that collide with (h, x1, x2). Since there are only n such values, we can hope that A will output the pre-image of y with probability about 1/n. However, we still need to construct the input for A, which is (h, x1, x2), such that x1' together with a pre-image of y is a possible collision. To do so we also need to use A, so overall we are going to use A twice: once in order to construct the input, and a second time in order to find the pre-image of y.

So let's now see the inversion algorithm. The inverter gets as input an image y that we want to invert, and in the first step it chooses a random hash function h and random x1 and x2. Next, it applies A on (h, x1, x2) in order to get x1' and x2'. Now x1' is a pre-image of f(x1). In the third step, the inverter chooses h' such that h'(x1, f(x2)) = h'(x1', y), so that x1' together with a pre-image of y is a valid collision for (h', x1, x2). Next, the inverter applies A on (h', x1, x2) in order to get x1'' and x2''. If we are lucky, x1'' is equal to x1', and in this case x2'' is a pre-image of y.
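The two-call reduction above can be sketched in code. Everything here is my own schematic toy, not the paper's exact algorithm: a brute-force collision finder A stands in for the assumed adversary, f is a toy 2-regular function that is of course not actually one-way, and h' is found by rejection sampling rather than by solving the linear constraint directly.

```python
import random

N = 6                      # toy n; the brute-force A below enumerates 2^n values
MASK = (1 << N) - 1
OUT = N - 2                # toy stand-in for the n - log n output length of h

def f(x: int) -> int:
    # Toy 2-regular function on n bits (clearing the low bit); NOT actually hard.
    return x & ~1 & MASK

def rand_h() -> list[int]:
    # h is a random OUT x 2n matrix over GF(2), one int per row.
    return [random.getrandbits(2 * N) for _ in range(OUT)]

def apply_h(h: list[int], v: int) -> int:
    return sum((bin(row & v).count("1") & 1) << i for i, row in enumerate(h))

def A(h, x1, x2):
    # Brute-force stand-in for the adversary: find (x1', x2') with
    # f(x1') = f(x1), h(x1', f(x2')) = h(x1, f(x2)), and f(x2') != f(x2).
    target = apply_h(h, (x1 << N) | f(x2))
    cands = [(a, b) for a in range(1 << N) if f(a) == f(x1)
             for b in range(1 << N)
             if f(b) != f(x2) and apply_h(h, (a << N) | f(b)) == target]
    return random.choice(cands) if cands else None

def invert(y: int):
    # The talk's inverter: call A once to learn x1', force h' to make
    # (x1', pre-image of y) a valid collision, then call A again.
    x1, x2 = random.getrandbits(N), random.getrandbits(N)
    got = A(rand_h(), x1, x2)
    if got is None:
        return None
    x1p = got[0]                                  # a pre-image of f(x1)
    for _ in range(1000):                         # rejection-sample h' with
        hp = rand_h()                             # h'(x1, f(x2)) = h'(x1', y)
        if apply_h(hp, (x1 << N) | f(x2)) == apply_h(hp, (x1p << N) | y):
            break
    else:
        return None
    got = A(hp, x1, x2)
    if got and f(got[1]) == y:
        return got[1]                             # x2'' is a pre-image of y
    return None

# On this toy instance the reduction typically succeeds in a noticeable
# fraction of the trials, mirroring the ~1/n success probability argument.
y = f(random.getrandbits(N))
hits = sum(invert(y) is not None for _ in range(60))
```

A single trial can fail for benign reasons (A's candidate set is empty, or y happens to equal f(x2)), which is why the demo runs many independent trials.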
So we are going to output x2''. In the paper we show that x1' and x1'' are equal with non-negligible probability, which concludes the proof.

To conclude, in this work we introduced non-adaptive constructions of both PRGs and universal one-way hash functions from unknown-regular one-way functions. The constructions are simple and tight with respect to the number of calls, but use a longer seed or key. Lastly, we have a few open questions. The first one is: is the seed length in our constructions optimal? In other words, can we construct a PRG or a universal one-way hash function with a linear number of non-adaptive calls and yet use only a linear seed or key length? The second question is: can we find better constructions of universal one-way hash functions from arbitrary one-way functions? Maybe get the parameters closer to the best constructions of PRGs, or construct a universal one-way hash function without using adaptive calls. The last question is to find better lower bounds for both PRG and universal one-way hash function constructions. Thanks!