Thanks for the introduction. This is joint work with my colleagues in China, and I will be talking about constructions of universal one-way hash functions from several classes of one-way functions, from specific ones to more general ones. I will skip the definition of one-way functions because it is too fundamental for this conference: these are functions that are easy to compute and hard to invert in the average-case sense.

Before I introduce universal one-way hash functions, I would like to mention that they only guarantee a weaker version of collision resistance, which we call target collision resistance. You can see here that I highlight the difference between standard collision resistance and target collision resistance. In the standard sense, a hash function is collision resistant if it is hard to find any collision. In the target case, the adversary has no control over the target point: it is hard to find a collision with a random point. So this is a weaker notion, but it suffices for many applications. I also want to mention that we are talking about a family of hash functions, not a single function; otherwise there are trivial attacks, e.g., a non-uniform attacker can simply hard-code a pair x and x' that collide under a single function.

Target collision resistance is strictly weaker: collision-resistant hash functions are, by definition, already universal one-way hash functions, and universal one-way hash functions can be constructed from one-way functions, whereas there are separations showing that collision-resistant hash functions cannot be based on one-way functions. These are very useful objects: universal one-way hash functions suffice for many important applications, such as basing digital signatures on minimal assumptions, and many more, e.g., in public-key encryption and statistically hiding commitment schemes.

A more interesting phenomenon is that universal one-way hash functions and PRGs are dual objects. I will pronounce the acronym UOWHF as "woof," because the full name is very time-consuming to pronounce. So these two are dual objects, with the equivalence to one-way functions established by Naor-Yung and Rompel and improved recently in the Eurocrypt 2010 paper. The duality: for a one-way function there is no necessary relation between input and output length; a PRG is always expanding, its output longer than its input; and a UOWHF is shrinking, since otherwise it is trivial. The feasibility result for PRGs was established by HILL in the 1999 paper and recently improved so that any one-way function on inputs of length n implies a PRG with seed length roughly n cubed. But in the case of universal one-way hash functions, the key length is very much longer, like n to the power of 7. We are not going to improve this in this paper; we follow a slightly different line and construct UOWHFs from special classes of one-way functions.

Here I tabulate our results and compare with the literature. In the case that the underlying one-way function is a one-way permutation, everything is already done optimally. But if we generalize a little bit, to one-to-one one-way functions, then this term is not optimal, and in this paper we improve it to optimal. For known-regular one-way functions we also improve by a log factor, up to a super-constant factor; let me say that this super-constant can be any super-constant, for example log log n.
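To make the difference concrete, the two security games can be written as follows. This is one standard formalization (notation mine), with the target x sampled uniformly, matching the "random point" variant used in this talk:

```latex
% Standard collision resistance: the adversary sees h, then picks both points.
\Pr_{h \leftarrow \mathcal{H}}\left[ (x, x') \leftarrow A(h) \;:\; x \neq x' \,\wedge\, h(x) = h(x') \right] \leq \mathsf{negl}(n)

% Target collision resistance: the target x is fixed (at random) before h is seen.
\Pr_{x \leftarrow \{0,1\}^n,\; h \leftarrow \mathcal{H}}\left[ x' \leftarrow A(x, h) \;:\; x' \neq x \,\wedge\, h(x) = h(x') \right] \leq \mathsf{negl}(n)
```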
We believe this is hard to improve further, and the super-constant factor would be just an artifact of the definition. If we additionally assume that we know the hardness of the underlying one-way function, we can actually get optimal parameters, with almost security-preserving reductions.

Then we generalize a little more. If the underlying one-way function is any regular one-way function, things are already done in the recent Asiacrypt paper by AGV, with linear output length and almost-linear key length; this is due to Shoup's domain extension. I believe this log factor is hard to improve, and it also matches a recent lower bound on the number of calls to the one-way function when the underlying regularity is not known. So in our last construction we do not improve anything, but we generalize, we weaken, the assumptions: we show that we can base UOWHFs on a class of functions we call weakly regular one-way functions, which arguably behave more like arbitrary one-way functions. We will come back to this later.

Here are a few standard tools, like universal hash functions; I will skip the formal introduction. For example, a universal hash function can simply be implemented by multiplication over a finite field, followed by a truncating function. The well-known hashing property, the leftover hash lemma, says that when we have a source whose entropy is slightly more than the output length of the hash function, we get output statistically close to uniform randomness. This is well known. Another hashing lemma, which is less known, is this: when we have a random variable whose max-entropy (defined as the logarithm of the support size) is less than the output size, bounded away by some gap d, we get unconditional target collision resistance. This means that even unbounded adversaries, given a random h and a random x, cannot find a collision, because the function is almost always injective except on a negligible fraction. Here you can also observe the duality in the definitions: min-entropy versus max-entropy, and the gap between output length and entropy goes in opposite directions.

First, let me recall the construction by Naor and Yung based on one-way permutations. Suppose we have a one-way permutation and a length-preserving universal hash function, together with a truncating function that outputs only the first n − s bits. The construction is very easy: we just apply f, followed by h, and then the truncating function. That's it, and the hash function h is the key describing the constructed UOWHF. This is the quantitative statement of the result. Note the shrinkage: the number of bits we compress is reflected in the security term, so in general you can shrink at most O(log n) bits.

But this reduction does not generalize to one-to-one one-way functions. Suppose we have a one-to-one one-way function, and it is expanding. Then its output size may be an arbitrary polynomial, so if you discard that many bits, you incur an exponential loss, which essentially kills the security term and makes your reduction useless. That is the difficulty, and that is why they came up with another, closely related construction.
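To make the "finite-field multiplication plus truncation" hash and the Naor-Yung composition concrete, here is a toy sketch with 8-bit parameters of my own choosing; the permutation f is just a placeholder:

```python
# A toy sketch of "universal hash = finite-field multiplication + truncation",
# plus the Naor-Yung composition trunc(h(f(x))). All parameters here are my
# own illustrative choices; f stands in for a one-way permutation.

import secrets

IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8) modulo IRRED."""
    r = 0
    for i in range(8):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(14, 7, -1):  # reduce degrees 14..8
        if (r >> i) & 1:
            r ^= IRRED << (i - 8)
    return r

def h(key: int, x: int) -> int:
    """h_key(x) = key * x over GF(2^8); for uniform key and any x != x',
    h_key(x) XOR h_key(x') is uniform, so a k-bit truncation collides w.p. 2^-k."""
    return gf_mul(key, x)

def f(x: int) -> int:
    """Placeholder for a one-way permutation on 8 bits (NOT actually one-way)."""
    return x ^ 0b10101010

def naor_yung(key: int, x: int, s: int = 1) -> int:
    """UOWHF candidate: keep the first 8 - s bits of h_key(f(x))."""
    return h(key, f(x)) >> s

key = secrets.randbits(8)
print(bin(naor_yung(key, 0b10110011)))
```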
So the assumption is that we have an arbitrary one-to-one one-way function, and it may be expanding. We also have many hash functions, each of which compresses the input by just a single bit. We apply the expanding one-way function and then compose all these hash functions one by one, until eventually the output size is smaller than the input size; then we get a UOWHF. This is the statement. The output size is linear, but the key length of this UOWHF is not ideal, because we have so many hash functions. The use of many hash functions seems very artificial, just to facilitate the proof. So our question is: can we derandomize this, ideally down to a linear-size key?

That brings us to our first construction. The assumption is the same: we have a one-to-one one-way function, a length-preserving universal hash function, and a truncating function that outputs only the first n − s bits. What we do is simply compose these three functions, discarding as many bits as we need, and we claim that this is already a family of UOWHFs.

Let's see the proof. The assumption, the (t, ε) one-wayness of f, is equivalent to the following: for any PPT algorithm, the probability that it inverts this y is bounded by the product of these two terms. Where does this factor come from? It is defined by this game: here we are not sampling a random image, but a uniformly random l-bit string. The outputs f(x) are only sparsely distributed over this set; the fraction of valid images among all l-bit strings is only this fraction, essentially 2^n over 2^l. That is why we have to multiply by this factor. Then we have this lemma: any algorithm that breaks the TCR implies an inverter, of almost the same efficiency, that breaks the one-wayness, and so we reach a contradiction.

Here is the proof sketch. We define the inverter as follows. This y* is sampled uniformly at random, and it is the image we want to invert. We sample x, and we sample h such that the first n − s bits of the XOR of these two strings are zero, so that when we apply the truncating function to both strings, they collide with each other; the rest can be sampled at random. If f(x) already equals y*, we just return x, but that is unlikely, otherwise we are too lucky; otherwise we invoke the collision finder and return the x' it outputs. The trick here is that we use this factor to cancel the security loss: in this construction we discard l − n + s bits, which would incur a huge security loss, but the good thing is that this factor cancels it. It remains to show that the inverter inverts y* with this probability, which we show as follows. First we claim that the sampling defined in the algorithm is equivalent to this sampling: first sample x, h and v uniformly at random, and then determine y*. This proof is given in the paper. Then the probability that the inverter inverts y* is equal to this term.
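The cancellation just described can be written out explicitly. A sketch in my notation, with f : {0,1}^n → {0,1}^l injective and the construction truncating to n − s bits (so l − n + s bits are discarded):

```latex
% A uniform l-bit y* is a valid image with probability 2^n / 2^l, and
% conditioned on being valid it is distributed as f(U_n), hence
\Pr_{y^* \leftarrow U_l}\left[ A(y^*) \in f^{-1}(y^*) \right]
  = \frac{2^n}{2^l} \cdot \Pr_{x \leftarrow U_n}\left[ A(f(x)) \in f^{-1}(f(x)) \right]
  \leq 2^{n-l} \cdot \varepsilon .

% The reduction pays 2^{l-n+s} for the discarded bits (the event v' = v
% below), so the two factors cancel up to 2^s, the same loss as in the
% one-way permutation case:
2^{l-n+s} \cdot 2^{n-l} \cdot \varepsilon \;=\; 2^{s} \cdot \varepsilon .
```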
If we consider the first two terms, this is exactly target collision resistance: because the first n − s bits are all zero, applying the truncating function to these two strings yields a collision, so this term is bounded, by assumption, by the target collision resistance. For the second term, comparing this string with that one, the first string is exactly the same; to make the second string the same, we must have v' = v, and since v is uniformly sampled, the probability is this term. This concludes the proof.

Our next construction is from a regular one-way function with known hardness, from which we construct UOWHFs. So we have a regular one-way function, we know its hardness, and we know its regularity, say it is 2^r-to-1. We also have two hash functions: in this case we hash not only the output but also the input. To see why, look at the construction: here the function f is no longer injective, so there already exist x and x' that collide on f(x); that is why we must also hash the input. So we use two hash functions, and the sum of their output lengths equals n − s, to make the overall function shrinking. The parameter s' is not decided yet, but we already have this quantitative statement. And this is actually why we need the knowledge of the hardness: if we set s' depending on that knowledge, we can make this term negligible, and the result is almost optimal except for the square root. That is our second construction, and it can be adapted to handle almost-regular functions. The proof is not that hard: if we find a collision, then, since this is a composition of f and the hashes, the collision either already collides on f, or it does not collide on f but eventually collides on the hash. So the advantage is bounded by two terms.

Then we move on to our next construction. We assume a known-regular one-way function: now we know the regularity, but we do not know the hardness. Does our previous construction still work? It does not. Here is the difficulty: we do not know epsilon, but we need to decide s'. If we are conservative and just set it to log n, then this term is not negligible, only inverse-polynomial. So s' has to be super-logarithmic; but if we set s' to be super-logarithmic, the 2^{s'} factor will eventually kill the security term. That is the difficulty. The solution is a standard trick: we run a few (super-constant many) copies, and this super-constant is reflected in the final construction. The security bound now has two terms: one is n^{-q}, and the second is negligible. So we have to set q to be super-constant, and any super-constant q will do. That seems artificial, but we achieve almost optimal parameters up to a super-constant factor. Then we move on.
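As a rough picture of this "hash the input and the output" construction, here is a sketch; the toy 2^r-regular f, the Toeplitz-style hash, and all concrete sizes are stand-ins of mine, not the paper's exact instantiation:

```python
# Sketch: for a 2^R-regular one-way function f, hash both the output f(x)
# (via h1) and the input x (via h2); the two output lengths sum to n - s,
# so the whole function shrinks by s bits. Sizes below are illustrative.

import secrets

N = 16        # input length n (toy size)
R = 4         # known regularity: every image has 2^R preimages
S = 2         # total shrinkage in bits
S_PRIME = 3   # slack parameter; in the paper it depends on the hardness
LEN_H2 = R + S_PRIME        # bits kept from the input side
LEN_H1 = (N - S) - LEN_H2   # bits kept from the output side

def f(x: int) -> int:
    """Placeholder 2^R-regular function: drops the low R bits (NOT one-way)."""
    return x >> R

def universal_hash(key: int, x: int, out_bits: int) -> int:
    """Toy Toeplitz-style universal hash, keyed by 2N random bits."""
    acc = 0
    for i in range(N):
        if (x >> i) & 1:
            acc ^= key >> i
    return acc & ((1 << out_bits) - 1)

def uowhf(k1: int, k2: int, x: int) -> int:
    """G_{k1,k2}(x) = h1(f(x)) || h2(x), an (n - s)-bit string."""
    return (universal_hash(k1, f(x), LEN_H1) << LEN_H2) | universal_hash(k2, x, LEN_H2)

k1, k2 = secrets.randbits(2 * N), secrets.randbits(2 * N)
print(bin(uowhf(k1, k2, secrets.randbits(N))))
```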
What about any regular one-way function, what can we do? The good thing is that this is already done in the recent Asiacrypt paper; they already have this construction. Here x is no longer the input: the input is all these b's, b_1, b_2, all the way to b_{n+1}, and the output is y. The hash functions are keyed by two strings, x and s, produced by Shoup's generator; this is essentially Shoup's randomization technique. So these n hash functions are not necessarily independent: they can be generated from a very short seed. We get a family of UOWHFs whose key length is n log n and which compresses the input by a single bit. The parameters we get: linear output length, key length n log n, and a linear number of calls. The price to pay, if we do not know the regularity, is this almost-linear number of calls to the underlying regular one-way function.

Now we are going to generalize this a little, because one can certainly generalize it to almost-regular one-way functions, but that is not good enough. So we introduce the notion of weakly regular one-way functions, which we actually introduced in a TCC paper. We group the images into different sets according to their preimage sizes: say Y_j is the set of all images whose preimage size is roughly 2^j. In this language, a regular one-way function means that all images concentrate on one single set, for example Y_max; that is the definition of a regular one-way function. Almost-regularity means the images concentrate on a few neighboring sets whose indices span at most O(log n). These two cases are the regular one-way functions that AGV12 handles. We are going to assume much less, at least seemingly much less: we assume these one-way functions are weakly regular. That is, there exists a set that carries a noticeable fraction of the weight, and all the remaining images have preimage size smaller than that, so all the sets beyond it have zero weight. This seems much weaker. Of course one can further generalize to the weakly almost-regular case, but we prove our construction only under this assumption.

The overall idea: given a weakly regular one-way function, we first construct a family of almost-regular one-way functions, and we just plug this family into the previous construction, since that construction already handles almost-regular one-way functions. By doing that, we reduce the problem to constructing almost-regular one-way functions from weakly regular one-way functions. Note that here the intermediate one-way function is a randomized object, a family of one-way functions. In general, randomized one-way functions do not make much sense, but here they serve only as an intermediate object. So we get key length n log n, output length linear in n, and polynomially many one-way function calls.
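In symbols, the decomposition and the weak-regularity condition read roughly as follows (my reconstruction from the informal description above):

```latex
% Group the images by preimage size: Y_j collects images with ~2^j preimages.
\mathcal{Y}_j = \left\{\, y \in \mathrm{Im}(f) \;:\; 2^{j-1} \le \left|f^{-1}(y)\right| < 2^{j} \,\right\}

% Regular: all weight on a single Y_j. Almost regular: weight on O(log n)
% neighboring sets. Weakly regular: some maximal set carries noticeable
% weight, and no image has a larger preimage size:
\exists\, j^{*}:\quad
  \Pr_{x \leftarrow U_n}\!\left[ f(x) \in \mathcal{Y}_{j^{*}} \right] \ge \frac{1}{\mathrm{poly}(n)},
  \qquad
  \Pr_{x \leftarrow U_n}\!\left[ f(x) \in \mathcal{Y}_{j} \right] = 0 \;\; \text{for all } j > j^{*}.
```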
And how do we construct such almost-regular one-way functions from weakly regular one-way functions? We use the randomized iterate by HHR: we iterate the function, each application followed by a hash function, many times, and we use a bounded-space generator to generate all these hash functions from a very short seed of length O(n). Then we prove the one-wayness and the almost-regularity of the result.

To conclude, we still did not solve the problem of constructing UOWHFs from any one-way function more efficiently. Actually, you know that the currently best known UOWHF construction is dual to the PRG construction from about ten years ago, but PRGs have been improved recently. So the question is really: can we efficiently construct UOWHFs using a technique dual to the recent constructions, these two recent STOC papers? I am not very optimistic about that: despite the expected symmetry and duality, there are also some complications. That's it.
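Before I really finish, let me leave you with a rough sketch of the randomized iterate I just mentioned, only to make it concrete. This is a toy instantiation of mine, not the construction from the paper: the real construction derives the hash keys from a bounded-space generator with an O(n)-bit seed, and f would be the weakly regular one-way function.

```python
# Toy sketch of the HHR randomized iterate: interleave f with universal
# hashes, y_0 = f(x) and y_i = f(h_i(y_{i-1})). Fresh random keys are used
# here for simplicity; the actual construction generates h_1..h_k from a
# bounded-space generator with a short, O(n)-bit seed.

import secrets

N = 16  # toy input length

def f(x: int) -> int:
    """Placeholder for the weakly regular one-way function (NOT one-way)."""
    return (x * 0x9E37) & ((1 << N) - 1)

def toeplitz_hash(key: int, x: int) -> int:
    """Toy universal hash on N bits, keyed by 2N random bits (Toeplitz style)."""
    acc = 0
    for i in range(N):
        if (x >> i) & 1:
            acc ^= key >> i
    return acc & ((1 << N) - 1)

def randomized_iterate(x: int, keys: list[int]) -> int:
    """Return the last randomized iterate y_k."""
    y = f(x)
    for k in keys:
        y = f(toeplitz_hash(k, y))
    return y

keys = [secrets.randbits(2 * N) for _ in range(N)]
print(hex(randomized_iterate(secrets.randbits(N), keys)))
```

Bon appétit. Thank you very much.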