Hi, I'm Veronika Slívová, and I will be talking about average-case hardness in TFNP from one-way functions. This is joint work with Pavel Hubáček, Chethan Kamath and Karel Král.

The class TFNP was defined in 1991 by Megiddo and Papadimitriou. It is an abbreviation for Total Function Nondeterministic Polynomial: a complexity class of total search problems which can be solved in nondeterministic polynomial time. We can define a verifier for the class TFNP, similarly to the verifier for the class NP. It is a polynomial-time algorithm C such that C, when running on two inputs, a string i representing the instance and a string s representing a possible solution, returns 1 if s is a solution for the instance i and 0 otherwise. In contrast to an NP verifier, it satisfies one more condition, called totality: for every instance i, there exists some solution s of polynomial length, so C, when running on i and s, returns 1.

The class contains many interesting problems for which we do not know a polynomial-time algorithm, so there has been a lot of effort toward proving average-case hardness in TFNP from various cryptographic assumptions, as you can see on the slide. But the question we have been interested in is whether one can base the hardness of a TFNP problem on an unstructured assumption such as one-way functions or injective one-way functions.

Let me go briefly through the previous work. In 2015, Bitansky, Paneth and Rosen constructed a hard-on-average distribution from one-way functions and indistinguishability obfuscation. Two years later, in 2017, Komargodski and Segev came up with a construction from injective one-way functions and a private-key functional encryption scheme. In the same year, 2017, Hubáček, Naor and Yogev constructed a hard-on-average distribution from one-way functions and a derandomization-style assumption. But all of these assume not only one-way functions but also some other assumption.
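To make the verifier-plus-totality definition concrete, here is a small Python sketch — not from the paper; the problem and names are purely illustrative — using the classic pigeonhole search problem, which is total by the pigeonhole principle:

```python
# Toy TFNP problem: PIGEONHOLE. An instance is a list of n+1 values,
# each in range(n); two equal values (a "collision") must exist,
# so the verifier below is total. Names are illustrative only.

def verifier(instance, solution):
    """C(i, s): return 1 if s is a solution for instance i, else 0."""
    j, k = solution
    if not (0 <= j < k < len(instance)):
        return 0
    return 1 if instance[j] == instance[k] else 0

def find_solution(instance):
    """Brute-force witness of totality: a solution always exists."""
    seen = {}
    for idx, value in enumerate(instance):
        if value in seen:
            return (seen[value], idx)
        seen[value] = idx
```

Totality here is a mathematical fact about the problem itself; in the black-box setting defined next, it has to hold for every choice of the oracle.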
And we were interested in basing the hardness just on the assumption of one-way functions or injective one-way functions. On the other hand, there is also an impossibility result. It was proven by Rosen, Segev and Shahaf in 2017 that if you are able to construct a hard-on-average distribution in TFNP from one-way functions, then the instances must have many solutions. But note that their result holds also for collision-resistant hash functions, and for those the construction is known. So this does not really rule out the possibility of constructing a hard-on-average distribution from one-way functions. In our work, we rule out the possibility of such a construction, at least for simple constructions.

So now, let me define a fully black-box construction of a hard-on-average distribution of a TFNP problem from an injective one-way function. It consists of two algorithms, C and R. C is the TFNP verifier: it gets as input two strings, i, the instance, and s, the possible solution, and returns 1 if and only if s is a solution of the instance. R is the security reduction: it gets as input a challenge y, which is a string from the image of f, and should return the preimage of y under f. Both C and R get f in a black-box way, which means that they can query it as an oracle. R gets access to another oracle called Solve, which solves the TFNP problem: when R queries it on an instance i, it should return some string s such that the verifier on input i and s returns 1.

So what should these algorithms satisfy so that they form a black-box construction of an average-case hard TFNP problem? First, R and C are polynomial-time algorithms. Second, C, as a TFNP verifier, should satisfy a correctness requirement: C is always total. This means that for any injective one-way function f and any instance i, there should exist some string s which is a solution, i.e., C, when running on input i and s with oracle access to f, returns 1. R, as a security reduction, satisfies security.
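One minimal way to model the black-box access in this definition is to hand C and R the function f only as a callable they can query. This is a hypothetical modeling sketch, not code from the paper:

```python
class BlackBox:
    """Wraps a finite function table so that algorithms can only query
    it as an oracle; it also records which points were queried."""
    def __init__(self, table):
        self._table = dict(table)   # hidden from the algorithms
        self.queries = []
    def __call__(self, x):
        self.queries.append(x)
        return self._table[x]

# A toy verifier with oracle access to f: s solves instance i iff f(s) = i.
def C(i, s, f):
    return 1 if f(s) == i else 0

f = BlackBox({0: 2, 1: 0, 2: 3})   # a small injective function
```

The point of the wrapper is that C and R never see the table, only answers to individual queries — which is exactly what "f in a black-box way" means here.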
It says that if we give R access to any oracle Solve, it should invert its challenge with non-negligible probability. More precisely, there exists a polynomial p such that for any injective function f and any oracle Solve solving the TFNP problem — meaning that on any instance i, Solve returns some string s such that the verifier C, when running on i and s, returns 1 — for infinitely many lengths n, if we choose x uniformly at random from strings of length n and run the security reduction on input f(x), it should return the preimage of f(x) with probability at least 1/p(n). Note that this is actually a definition of a black-box construction of a worst-case hard TFNP problem from injective one-way functions.

As I have said, we have been thinking only about simple reductions, so now let me define what simple means. It means that the reduction satisfies three additional conditions. First, it is many-one: the algorithm R sends at most one query to the oracle Solve. Second, it is deterministic. Last, it is f-oblivious: the queries R makes to the algorithm Solve are independent of the function f. You can imagine it as a two-stage algorithm: in the first stage, R just submits its query to Solve, and once it gets the result, it can start querying the oracle f, but it makes no more queries to the oracle Solve.

We can actually slightly weaken these conditions and still consider the construction to be simple. We can allow R to make more queries to the oracle Solve, but they have to be non-adaptive, which means that each query is independent of the answers to the previous queries; and the reduction should still be f-oblivious, so it should still hold that R first makes its queries to Solve and only after them starts querying f. We can also allow R to be randomized.

So what are our results? They say that there is no simple reduction.
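The many-one, deterministic, f-oblivious shape of a simple reduction can be pictured as a two-stage skeleton: stage one maps the challenge to the single Solve query without ever touching f, and stage two may query f but never Solve. This is a hypothetical sketch of the syntax only, with a toy instantiation:

```python
def simple_reduction(stage1, stage2):
    """stage1: y -> i         (no access to f: f-obliviousness)
       stage2: (y, s, f) -> x (may query f; never queries Solve)"""
    def R(y, f, solve):
        i = stage1(y)          # the one Solve query depends only on y
        s = solve(i)           # many-one: exactly one query to Solve
        return stage2(y, s, f)
    return R

# Toy instantiation for a permutation f, where "invert y" is itself a
# total problem: the instance is y itself and the solution is a preimage.
f_table = {0: 2, 1: 3, 2: 1, 3: 0}
f = lambda x: f_table[x]
solve = lambda i: next(x for x in range(4) if f(x) == i)
R = simple_reduction(lambda y: y, lambda y, s, f: s)
```

The toy `solve` here just brute-forces the tiny table; the structure of `R` is what matters.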
More precisely, our main theorem is that there is no randomized fully black-box non-adaptive f-oblivious construction of an average-case hard TFNP problem from injective one-way functions. Let me note once more that we can actually show it even for worst-case hard TFNP problems. A special case of our main theorem is that there is no deterministic fully black-box many-one f-oblivious construction of an average-case hard TFNP problem from injective one-way functions. And this is what I will be talking about.

So how do we prove that there is no black-box construction? We use a proof technique for black-box separations called the two-oracle technique, introduced by Hsiao and Reyzin in 2004. In this technique, you define an oracle O consisting of two oracles: f, which is an implementation of an injective one-way function, and Solve, which is a solver for the TFNP problem. And you prove that injective one-way functions exist with respect to this oracle O, but TFNP is easy with respect to it. We slightly differ from this technique, and I will mention how later in the talk.

Now let's look at the difference between a one-way permutation and an injective one-way function. For one-way permutations we know that given any one-way permutation P, we can construct a hard-on-average TFNP problem: simply ask for a preimage of some y under P. And this is a simple reduction. But for injective one-way functions we have shown that such a simple construction is impossible. So what is the difference? For a one-way permutation, any y of the correct length, n bits, has a preimage under P. But for injective one-way functions this is not true: if the injective one-way function maps n bits to n+1 bits and we choose a string y of n+1 bits, then it has a preimage under f if and only if y is in the image of f. So we need to somehow use this to prove the non-existence of the construction.
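This contrast can be checked mechanically on tiny examples (toy tables, not from the paper): a permutation makes "find a preimage of y" total, while a length-increasing injective function leaves challenges outside its image without any solution:

```python
def is_total(verifier, instances, candidates):
    """Totality check: every instance must have some valid solution."""
    return all(any(verifier(i, s) for s in candidates) for i in instances)

# A permutation on {0,...,3}: every challenge y has a preimage.
p = {0: 2, 1: 3, 2: 1, 3: 0}
invert_p = lambda y, x: p[x] == y

# An injective function from a 1-bit domain into {0,...,3}:
# challenges outside the image have no preimage at all.
g = {0: 3, 1: 1}
invert_g = lambda y, x: g[x] == y
```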
Now let's think about how a possible construction would look. It consists of this algorithm R, which, given an input y, computes something and at some point queries the oracle Solve on an instance i_y. Now let me recall correctness, which says that for any injective one-way function f there should exist a solution s. This means that this solution s must exist even for a function g such that y is not in the image of g. Now imagine that we have such a function g, and let's look at the computation of C on input i and s with oracle access to g. It queries some preimages, say a and b. Imagine that we have another function f which gives the same answers on the preimages a and b, but on some x different from a and b returns y. Now, as both computations of C on input i and s — whether given oracle access to f or to g — query only the points a and b, they both return the same answer. This means that s is a solution for both functions f and g, but this solution s carries no information about x, except that x is different from a and from b. So it is useless to the reduction for inverting the challenge y.

The idea is to create a Solve which returns this useless solution, but the problem is that Solve does not know the challenge y on which the reduction is running. So how can we identify a useless solution here? This is where the definition of security comes into play. It says that the reduction succeeds in inverting given access to any algorithm Solve solving the TFNP problem. Note that, as it must invert for any algorithm Solve, we can actually make the algorithm Solve depend on the reduction, and this is the point where we slightly differ from the previous use of the two-oracle technique. So what we do is try to identify the challenge y on which R is running, simply by simulating R on all possible challenges y and looking at which challenges cause it to query the instance i.
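The useless-solution observation can be replayed in code (a toy illustration with made-up functions, not the paper's construction): if C only queries the points in some set, then any f agreeing with g on those points accepts the same solution s, so s cannot reveal where f hides the preimage of y:

```python
def run_with_queries(C, i, s, table):
    """Run verifier C on (i, s) with oracle access to `table`,
    recording the points it queries."""
    queries = []
    def oracle(x):
        queries.append(x)
        return table[x]
    return C(i, s, oracle), queries

# Toy verifier: s is a solution for instance i iff neither f(s) nor
# f(s+1) equals i, so any solution "avoids" the instance value.
def C(i, s, f):
    return 1 if f(s) != i and f(s + 1) != i else 0

g = {0: 5, 1: 6, 2: 7}   # y = 9 is NOT in the image of g
f = {0: 5, 1: 6, 2: 9}   # f agrees with g on {0, 1} but hides y at x = 2
```

Running C on (i=9, s=0) with either oracle makes the same queries and returns the same answer, so accepting s tells the reduction nothing about the hidden preimage x = 2.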
So now let me define what the algorithm Solve does more precisely. Imagine it is running on an instance i. First, it computes the set Y of all ys on which the reduction might be running — that is, all ys such that the reduction, when running on y, queries the instance i — and we will try to protect these ys from having their preimages revealed. Second, it computes the set of all solutions: all strings s such that the algorithm C, when running on instance i and string s with oracle access to f, returns 1. This is simply the set of all strings we may return. If there exists a solution s such that no preimage of any y from the set of protected ys is queried, then we can return this s, simply because it is a useless solution for any challenge y on which the algorithm R might be running. But it might be the case that all solutions query something from the protected set. In that case, we carefully remove some ys from the set Y — we stop protecting some ys, but we do it carefully so that we do not give out too much information about the preimages — and we repeat this until we are able to return some solution.

So now we need to prove that, given access to the oracle f and this oracle Solve, the TFNP problem is easy. This is the easy part of the proof, because Solve always returns a correct solution, as you might already see from the algorithm. The harder part of the proof is that the reduction R does not invert f. For this we use the incompressibility argument of Gennaro and Trevisan from 2000: we show that if R were able to invert f with non-negligible probability, then we would be able to compress a random function more than is information-theoretically possible.

So what are the conclusions? If it is possible to construct a hard TFNP problem from injective one-way functions, then the reduction must be quite involved.
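The Solve oracle described above can be sketched at a very high level as follows. This is heavily simplified, with hypothetical helper interfaces; in particular, the real algorithm removes ys from the protected set far more carefully than the arbitrary `pop` here:

```python
def solve(i, all_ys, instance_of, solutions_of, ys_touched_by):
    """instance_of(y):      instance R queries Solve on, given challenge y
       solutions_of(i):     all strings s with C(i, s, f) = 1
       ys_touched_by(i, s): ys whose preimage C queries when run on (i, s)"""
    protected = {y for y in all_ys if instance_of(y) == i}
    while True:
        for s in solutions_of(i):
            if not (ys_touched_by(i, s) & protected):
                return s           # useless for every protected challenge
        protected.pop()            # stop protecting some y (simplified!)

# Toy run: every challenge leads to instance 0; solution 11 touches no
# protected y, so it is returned as the "useless" solution.
answer = solve(
    0,
    range(3),
    lambda y: 0,
    lambda i: [10, 11],
    lambda i, s: {0, 1} if s == 10 else set(),
)
```

Note that `protected` is computed by simulating R on all challenges, which is exactly where f-obliviousness is used: the set does not depend on f.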
Let me just note here that most of the known reductions between different cryptographic primitives we have right now satisfy our definition of being simple, so we would really need to come up with something new. What questions remain open? Can we get the same impossibility result even without the f-obliviousness requirement? Let me note that there are two main places in the proof where we use this requirement. First, during the incompressibility argument we need to be able to simulate the algorithm Solve, which means we need to be able to compute the set of protected ys; f-obliviousness gives us that this set is independent of the function f. Second, again during the incompressibility argument, when we are bounding the size of the encoding, we need f-obliviousness for technical reasons when bounding the probabilities. The second open question is whether we can allow adaptive queries to Solve. But note that here the set of protected ys would depend on the answers returned from Solve for different instances, so we get a circular dependency. That is all from me. Thank you for your attention. You can find our paper on ePrint; our ePrint number is 2020/1162.