Hello, everyone. Today I would like to talk about my work on the possibility of basing cryptography on EXP ≠ BPP. This is joint work with Professor Rafael Pass at Cornell Tech. Today we're going to talk about the notion of one-way functions, which was proposed by Diffie and Hellman in '76. The one-way function is unequivocally the most essential notion in cryptography. It is well known that one-way functions are both necessary and sufficient for a wide range of private-key crypto primitives, such as private-key encryption, pseudorandom generators, digital signatures, and so on. We also know that public-key encryption and oblivious transfer would imply the existence of one-way functions. So without one-way functions, there is really no non-trivial computational cryptography, and we really need one-way functions to exist. However, proving the existence of one-way functions is a very hard problem and would imply NP ≠ P. Therefore, in the absence of a formal proof, people have come up with candidate one-way function constructions based on different computational assumptions, including the hardness of the factoring problem, the hardness of the discrete logarithm problem, and the hardness of lattice problems. However, we know that if we have quantum computers, then the factoring assumption and the discrete logarithm assumption are simply broken. So we really need to prove the existence of one-way functions. In this work, we ask: can we prove the existence of one-way functions based on very weak assumptions about computation? Perhaps the most believable, and most embarrassingly open, conjecture is that EXP ≠ BPP. So let us look into this minimal conjecture. Recall that EXP denotes the class of exponential-time decidable languages. By the time hierarchy theorem, we know that EXP ≠ SUBEXP: even a subexponential-time algorithm cannot emulate exponential-time computation.
On the other hand, BPP denotes the class of randomized polynomial-time decidable languages, and it is believed that BPP = P; in other words, that randomized polynomial-time algorithms are only as powerful as deterministic polynomial-time algorithms. Our unproven minimal conjecture is that EXP ≠ BPP: informally, randomness does not exponentially speed up computation. Note that this is a really weak conjecture, in the sense that many other assumptions imply it. If NP is not contained in BPP, then EXP ≠ BPP (since NP ⊆ EXP). If BPP = P, then EXP ≠ BPP (by the time hierarchy theorem). Furthermore, even if randomized polynomial-time algorithms were much more powerful and could emulate subexponential-time computation, we would still have EXP ≠ BPP. So it would be really crazy to assume that randomness can speed up algorithms exponentially, and it is really an embarrassment that such a conjecture has not been proven. This is a really weak conjecture. So in this work, we ask: can we base the existence of one-way functions on such a weak conjecture? We show that there exists a standard, natural computational problem, called MKtP, such that this problem being hard on average with respect to two-sided-error heuristics is equivalent to the existence of one-way functions, while the same problem being hard on average with respect to errorless heuristics is equivalent to EXP ≠ BPP. As we shall see later on, MKtP is really a standard problem related to Kolmogorov complexity and has been studied since the 1960s. The notions of two-sided-error and errorless average-case hardness are the two standard notions used in the complexity literature. Therefore, the only gap between the existence of one-way functions and EXP ≠ BPP is a seemingly minor technical distinction between two standard notions of average-case hardness for a specific problem.
To introduce our main theorem, let us first recall the notion of a one-way function. A function f is one-way if it is easy to compute, meaning f can be computed in polynomial time, and hard to invert, meaning no PPT machine can invert f. Informally, given the input x, it is easy to go from x to f(x), but it is hard to go back to x from the output f(x). More formally, we say that f is a one-way function if, when we sample a random string x of n bits and let y = f(x), then for any PPT algorithm A, the probability that A on input y outputs a preimage of y is at most negligible. For standard one-way functions, we require that all attackers fail on all input lengths n. Here we also consider the notion of an infinitely-often one-way function, where the inversion requirement is relaxed and we only require that all attackers fail on infinitely many input lengths n. Our main theorem says that the existence of infinitely-often one-way functions is equivalent to the problem MKtP being hard on average with respect to two-sided-error heuristics, and that EXP ≠ BPP holds if and only if MKtP is hard on average with respect to errorless heuristics. Here MKtP is the language of pairs (x, k) having the property that x has Levin-Kolmogorov complexity at most k. We remark that we can also characterize standard one-way functions by considering an almost-everywhere notion of average-case hardness. Now let us introduce the notion of Kolmogorov complexity. Consider the following two strings: the first looks like 123123123..., repeating, and the second starts 173... and looks random. The question is: which of the above strings is more random? The notion of Kolmogorov complexity, proposed by Kolmogorov in the 1960s, is used to measure the amount of randomness in a fixed string. For any string x, we let K(x) denote the length of the shortest program that outputs x.
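To make the inversion experiment concrete, here is a minimal Python sketch of my own (not from the talk). The candidate function f below is a toy stand-in and is certainly not one-way; it only serves to illustrate what "inverting with some probability" means.

```python
import random

def f(x: int, n: int) -> int:
    """Toy candidate function on n-bit inputs: multiply the two halves.
    This is NOT one-way; it only illustrates the inversion experiment."""
    hi, lo = x >> (n // 2), x & ((1 << (n // 2)) - 1)
    return hi * lo

def inversion_success_rate(attacker, n: int, trials: int = 200) -> float:
    """Estimate, over a uniformly random n-bit x, the probability that the
    attacker outputs SOME preimage of y = f(x), not necessarily x itself."""
    wins = 0
    for _ in range(trials):
        x = random.getrandbits(n)
        y = f(x, n)
        if f(attacker(y, n), n) == y:
            wins += 1
    return wins / trials
```

A brute-force attacker succeeds with probability 1 on small n, confirming this toy f is not one-way; one-wayness would require every PPT attacker's success rate to be negligible, on all input lengths for standard one-way functions or on infinitely many input lengths for the infinitely-often variant.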
More formally, we fix a universal Turing machine U, and we look for the length of the shortest program Π, which consists of a pair (M, w), such that U on input (M, w) outputs the string x. The notion of Kolmogorov complexity has a lot of amazing applications, such as proving Gödel's incompleteness theorem, but unfortunately it is uncomputable. We instead look at Levin's notion of Kolmogorov complexity, proposed by Levin in '73. We let Kt(x) be the minimum, over all programs Π that output x, of the sum of, first, the length of the program Π and, second, the logarithm of its running time. For example, if the machine Π outputs x within time 2^(n/10) and the length of the machine is at most n/10, then Kt(x) is at most n/10 + n/10, which is n/5. The intuition is that we charge for the size of the program and for the running time simultaneously, but we only charge logarithmically for the running time, to capture the intuition that polynomial-time computation is relatively cheap. The key observation here is that Kt(x) is at most |x| + O(1) for any string x: just consider a Turing machine that has the string x written on its tape and halts immediately. If we run this machine, it terminates immediately with the string x on its tape, and it takes only constant time. Therefore the Kt complexity of x is at most |x| + O(1). Now, MKtP denotes the language of pairs (x, k) such that Kt(x) ≤ k. Note that the language MKtP no longer seems to be in NP: in the example above, the program Π outputs x in time 2^(n/10), and we do not know how to verify this in polynomial time. Next, let us introduce the notions of average-case hardness used in this work.
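To get some feel for "amount of randomness in a fixed string", here is a small Python illustration of my own (not from the talk). It uses zlib-compressed length as a crude, computable stand-in for Kolmogorov/Kt complexity: a short description exists whenever the compressed size is small, though this proxy is only an upper-bound-style surrogate, not K(x) or Kt(x) itself.

```python
import os
import zlib

def complexity_proxy(s: bytes) -> int:
    """Length in bytes of the zlib-compressed string: a crude, computable
    stand-in for Kolmogorov complexity (small compressed size witnesses a
    short description of s; the converse need not hold)."""
    return len(zlib.compress(s, 9))

# The periodic string "123123..." has a very short description,
# while a uniformly random string of the same length does not.
periodic = b"123" * 100
random_like = os.urandom(300)
```

On these two 300-byte strings, the proxy for the periodic string comes out tiny, while the proxy for the random one is about 300 or more, matching the intuition that the periodic string is far less random.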
We first introduce the notion of two-sided-error average-case hardness. We say that a language L is in HeurP if for every polynomial p there exists a deterministic polynomial-time algorithm H such that H(x) = L(x) with probability at least 1 - 1/p(n) over a random x. We also consider another notion of average-case hardness, called errorless average-case hardness, and we note that this is a really standard notion in the complexity-theory literature. We say that a language L is in AvgP if for every polynomial p there exists a deterministic polynomial-time algorithm H such that, first, for all inputs x, H(x) outputs either L(x), the correct answer, or ⊥ ("bot"); and second, the probability that H(x) outputs ⊥ is at most 1/p(n). Both notions of heuristics succeed with very high probability, namely 1 - 1/p(n). The difference between the two notions is that a two-sided-error heuristic is unaware of its mistakes: if H(x) outputs 0, it is unclear whether L(x) is also 0. An errorless heuristic, however, knows when it is making a mistake: whenever H(x) does not output ⊥, it is guaranteed that L(x) = H(x). In this work we consider randomized heuristics, that is, the BPP analogues HeurBPP and AvgBPP of HeurP and AvgP. We have to modify the definitions accordingly, but the difference between the two notions stays the same as before. So we are ready to present our main theorem formally. First, we show that the existence of infinitely-often one-way functions is equivalent to MKtP not being in HeurBPP. Second, EXP ≠ BPP holds if and only if MKtP is not in AvgBPP. So basing one-way functions on EXP ≠ BPP boils down to a seemingly minor technical gap between two standard notions of average-case hardness.
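One direction of the comparison is trivial and worth making explicit: any errorless heuristic yields a two-sided-error heuristic with the same failure probability, simply by answering arbitrarily instead of ⊥. (The converse direction for MKtP is exactly the gap discussed in this talk.) A minimal Python sketch, with names of my own choosing:

```python
from typing import Callable, Optional

# An errorless heuristic returns True/False (guaranteed correct) or None ("bot").
ErrorlessHeuristic = Callable[[str], Optional[bool]]

def to_two_sided(h: ErrorlessHeuristic) -> Callable[[str], bool]:
    """Wrap an errorless heuristic into a two-sided-error one: wherever h
    admits ignorance (None), answer arbitrarily (here: False). The wrapper
    can only err where h output None, so it fails with the same probability,
    but it no longer knows where its mistakes are."""
    def h2(x: str) -> bool:
        ans = h(x)
        return False if ans is None else ans
    return h2

# Toy errorless heuristic for the language "x has even length": it answers
# correctly except on strings starting with '?', where it abstains.
def toy_errorless(x: str) -> Optional[bool]:
    return None if x.startswith("?") else (len(x) % 2 == 0)
```

This is why errorless average-case easiness (AvgP/AvgBPP) implies two-sided-error easiness (HeurP/HeurBPP), and hence why two-sided-error hardness is the a priori stronger hardness assumption.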
Proving that MKtP ∉ AvgBPP implies MKtP ∉ HeurBPP is exactly what we need to base infinitely-often one-way functions on EXP ≠ BPP. One may wonder: what are the consequences of this implication? The next theorem tells us that if we can show that MKtP ∉ AvgBPP implies MKtP ∉ HeurBPP, then we prove NP ≠ P. There are two ways to interpret this theorem. The pessimistic interpretation is that closing this minor gap will be very hard. The optimistic interpretation, on the other hand, is that this gives a new algorithmic approach to proving NP ≠ P. To prove the implication, it suffices to decide the language MKtP errorlessly on average, given access to a two-sided-error heuristic that decides the same language. So if we can come up with such an algorithm, then we prove NP ≠ P. In the paper we also show some other characterizations of one-way functions with additional properties, for instance computability in logspace or in uniform NC0, using different notions of resource-bounded Kolmogorov complexity. We will not focus on these results in this talk, and we refer you to the paper for more details. Let us mention some related work. Allender et al. showed the EXP-completeness of MKtP with respect to P/poly reductions: they showed that if EXP is not contained in P/poly, then MKtP is not in P/poly. This will be the starting point for our Theorem 2. Liu and Pass [LP20] showed that the existence of one-way functions is equivalent to the average-case hardness of a time-bounded Kolmogorov complexity problem. That is a different Kolmogorov complexity problem, and it is not known to be EXP-complete; the problem MKtP that we study today, however, is EXP-complete. The techniques of [LP20] will be the starting point for our Theorem 1.
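One way to see why proving this implication would yield NP ≠ P, reconstructed here from the two main theorems as stated above (the paper's own argument may differ in its details), is the following case analysis:

```latex
% Assume the implication: MKtP \notin \mathsf{AvgBPP} \Rightarrow
% MKtP \notin \mathsf{HeurBPP}; contrapositively,
% MKtP \in \mathsf{HeurBPP} \Rightarrow MKtP \in \mathsf{AvgBPP}.
%
% Case 1: MKtP \notin \mathsf{HeurBPP}. By Theorem 1, infinitely-often
% one-way functions exist. If P = NP, every efficiently computable function
% could be inverted on every input length, so no io-OWFs exist; hence
% NP \neq P.
%
% Case 2: MKtP \in \mathsf{HeurBPP}. By the assumed implication,
% MKtP \in \mathsf{AvgBPP}, so by Theorem 2, EXP = BPP. If additionally
% P = NP, then PH = P, and by Sipser--G\'acs--Lautemann
% BPP \subseteq \Sigma_2^p \cap \Pi_2^p = P, giving EXP = P and
% contradicting the time hierarchy theorem; hence NP \neq P.
```

In both cases NP ≠ P follows, which is the sense in which closing the gap is at least as hard as separating NP from P.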
Concurrently and independently, Ren and Santhanam also showed an equivalence between one-way functions and mild average-case hardness of MKtP. More recently, Liu and Pass [LP21] showed that the existence of one-way functions is equivalent to the average-case hardness of a conditional time-bounded Kolmogorov complexity problem, and they also showed that this problem is NP-complete. Taken together, one-way functions can now be characterized through the average-case hardness of an EXP-complete language (this work) and of an NP-complete language [LP21]. So let us look into our proofs. Let us first present the proof of Theorem 1, which states that infinitely-often one-way functions exist if and only if MKtP is not in HeurBPP. We start with the first direction: we want to show that if MKtP is not in HeurBPP, then infinitely-often one-way functions exist. One may wonder why we only obtain infinitely-often one-way functions. It is because the assumption that MKtP is not in HeurBPP is only an infinitely-often notion of average-case hardness; if we instead start with an almost-everywhere notion of hardness, namely that MKtP is not in infinitely-often-HeurBPP, then we obtain standard one-way functions. So we want to show that one-way functions exist, assuming MKtP is not in HeurBPP. It is well known that it suffices to construct a weak infinitely-often one-way function: an efficiently computable function f such that, for some polynomial p, no PPT machine can invert f with probability better than 1 - 1/p(n) on infinitely many input lengths n. The famous hardness amplification lemma says that if weak infinitely-often one-way functions exist, then (strong) infinitely-often one-way functions exist. So we just need to construct a weak one-way function. Let us give some intuition for how to construct a weak one-way function assuming the average-case hardness of MKtP; we will ignore many important steps of the proof, which make the full proof really complicated.
Our aim here is just to highlight the central idea. So let us look at our construction. We first let c be the constant such that Kt(x) ≤ |x| + c for every string x; as mentioned before, Kt(x) ≤ |x| + O(1), and here c is just that O(1) constant. We define a function f on inputs (Π′, ℓ), where Π′ is an (n + c)-bit string and ℓ is a (log n + c)-bit string interpreted as a number, as follows. First, f lets Π be the first ℓ bits of Π′; that is, we truncate the string Π′ to its first ℓ bits. We then interpret Π as a machine, let x be the output of Π, and let t be the running time of the machine. Finally, the function f simply outputs (ℓ + log t, x). To give some intuition, assume for simplicity that we can invert the function f with probability 1. Then we can easily decide MKtP: on input (x, k), check whether there exists some k′ ≤ k such that the inversion succeeds on (k′, x). If the inversion succeeds, the inverter gives us a machine Π of length ℓ that outputs x within t steps such that ℓ + log t = k′, so we know that we should output yes on this instance. Therefore, this function seems to be weakly one-way. However, there is an issue: f may require exponential time to run. As mentioned before, it is possible that the program witnessing the Kt complexity of x takes exponential time, so computing the output of Π here may require exponential time. The solution is to cut off the machine's running time after polynomially many steps: we let x be the output of Π after n^c steps, and if Π does not halt within n^c steps, we simply output ⊥. To see why this solution works, let us first define some notions. We say that a program Π is a Kt-witness for the string x.
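Before elaborating on Kt-witnesses, here is a toy Python sketch of the construction just described, my own illustration rather than the paper's actual construction: the toy "universal machine" below, where a program is a repeat-count byte followed by a pattern, stands in for a real universal Turing machine, and lengths are measured in bytes rather than bits.

```python
import math
from typing import Optional, Tuple

def run_program(pi: bytes, max_steps: int) -> Optional[Tuple[bytes, int]]:
    """Toy universal machine: a program whose first byte is a repeat count r,
    followed by a pattern, means 'print the pattern r times'; its running
    time is the output length. Returns (output, steps), or None if the
    cutoff of max_steps is exceeded."""
    if not pi:
        return (b"", 1)
    reps, pattern = pi[0], pi[1:]
    steps = max(1, reps * len(pattern))
    if steps > max_steps:
        return None  # in the real construction, f outputs bot here
    return (pattern * reps, steps)

def f(pi_prime: bytes, ell: int, cutoff: int) -> Optional[Tuple[int, bytes]]:
    """Sketch of the weak OWF candidate: truncate pi_prime to its first ell
    bytes, run the result for at most `cutoff` steps, and output
    (program length in bits + ceil(log2 t), x)."""
    pi = pi_prime[:ell]
    result = run_program(pi, cutoff)
    if result is None:
        return None
    x, t = result
    return (8 * ell + math.ceil(math.log2(max(t, 2))), x)
```

Inverting f on input (k′, x) hands back a program of length ℓ that outputs x within t steps with (length in bits) + log t = k′, which is exactly an upper bound witness for Kt(x); that is how an inverter for f lets us decide MKtP.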
That is, Π generates x in t steps while minimizing |Π| + log t over all programs that output x. Our key observation is that for any ε between 0 and 1, except for an O(ε) fraction of the n-bit strings x, x has a Kt-witness Π with running time at most 1/ε. In other words, most strings have a Kt-witness with a small running time. To see why this observation helps, just pick ε to be 1/n^c: then, except for an O(1/n^c) fraction of the n-bit strings x, x has a Kt-witness Π with running time at most n^c. Therefore we can safely cut off the machine's running time. To see why the observation is true, let us see how we prove it. Recall that for any string x, we have Kt(x) ≤ |x| + O(1). So for any string x whose Kt-witness Π runs for at least 1/ε steps, it holds that |Π| + log(1/ε) ≤ Kt(x) ≤ |x| + O(1), and therefore |Π| ≤ n + O(1) + log ε; note that ε is smaller than 1, so log ε is negative. It follows that there are at most O(ε · 2^n) such programs Π, which concludes the proof of the observation. In the paper, we use this observation to show that f is indeed a weak one-way function. The full proof is much more complicated, especially when dealing with the fact that the inverter only inverts with probability 1 - 1/p(n) instead of probability 1, but we can leverage our previous work [LP20] to overcome this barrier. So we have seen the proof of the first direction; next let us talk about the proof of the other direction. We assume infinitely-often one-way functions exist, and we want to show that MKtP is not in HeurBPP. Our high-level idea is as follows. First, we use the one-way function to construct a so-called conditionally entropy-preserving PRG (cond-EP-PRG); a cond-EP-PRG is a PRG whose output has high Shannon entropy conditioned on some event.
Then we can use the MKtP heuristic as an oracle to distinguish the output of the PRG from a random string, which gives a contradiction. On a uniform string, we know that uniform strings have high Kolmogorov complexity with high probability. On a pseudorandom string, we know that pseudorandom strings have small Kt complexity: to output the string y, we only need to hardwire the seed, of length n, and the program of the PRG, which costs a constant number of bits; the running time of the PRG is polynomial, and therefore the Kt complexity of y is at most n + O(log n). We remark that G has to have high entropy to ensure that the heuristic, which only works on average, still works on pseudorandom strings. So we have seen the proof of Theorem 1; let us move on to the proof of Theorem 2. We want to show that EXP ≠ BPP if and only if MKtP is not in AvgBPP. The trivial direction is easy to see, so let us focus on the non-trivial direction, which says that EXP ≠ BPP implies MKtP is not in AvgBPP. This is a non-constructive proof, and our proof outline is as follows. We first use the result of Impagliazzo and Wigderson '98 to show that if EXP ≠ BPP, then there exists an inefficient infinitely-often PRG G. We then prove (Lemma 4) that if an inefficient io-PRG exists, then MKtP is not in AvgBPP. The way we prove this lemma is similar to how the second direction of Theorem 1 was proven, but since this PRG is not entropy-preserving, we can only get errorless average-case hardness. In conclusion, today we have seen that the existence of infinitely-often one-way functions is equivalent to MKtP being hard on average with respect to two-sided-error heuristics, and that EXP ≠ BPP is equivalent to MKtP being hard on average with respect to errorless heuristics.
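The distinguishing step in the proof of Theorem 1 can be illustrated with a toy Python sketch of my own. Two loud caveats: the zlib-compressed length stands in for a Kt heuristic, and the repetition-based stretcher stands in for the PRG; a real PRG output is of course not zlib-compressible, so this only mirrors the complexity-based argument, not actual pseudorandomness.

```python
import os
import zlib

def kt_proxy_bits(y: bytes) -> int:
    """Crude computable stand-in for Kt: size in bits of the zlib-compressed
    string (small whenever a short, fast description of y exists)."""
    return 8 * len(zlib.compress(y, 9))

def stretch(seed: bytes, out_len: int) -> bytes:
    """Toy 'PRG' stand-in: stretch the seed by repetition, so the output
    provably has a short, fast description (seed plus a constant-size
    program), just like a real PRG output does."""
    reps = -(-out_len // len(seed))
    return (seed * reps)[:out_len]

def looks_pseudorandom(y: bytes) -> bool:
    """The shape of Theorem 1's distinguisher: a uniform string needs about
    8*len(y) description bits, a PRG output only about the seed length plus
    O(log n); threshold halfway between the two."""
    return kt_proxy_bits(y) <= 4 * len(y)
```

With an 8-byte seed stretched to 256 bytes, the proxy lands far below the threshold, while a uniform 256-byte string lands far above it, which is exactly the gap the real distinguisher exploits.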
So the only gap for basing infinitely-often one-way functions on EXP ≠ BPP is a seemingly minor technical distinction between two standard notions of average-case hardness with respect to this problem MKtP, which has been studied since the '70s. Moreover, proving this implication would show that NP ≠ P, and this gives us a new algorithmic approach to proving NP ≠ P: we just need to solve MKtP errorlessly on average, given access to a two-sided-error heuristic. Thank you for listening.