annoying missing arrows in this picture. So there were things that seemed like they should be possible to construct from one-way functions, and most of the picture had been completed in the late 80s and early 90s, but this was something that stood out as not known. We had a very complicated construction of these things from one-way functions, and we were so unsatisfied with it that it really forced us to look for the right abstraction and a simpler construction. That led to the notion of inaccessible entropy that I'll talk about. What we were trying to do there was make this construction at least as simple as the Håstad et al. work, but it turned out that the ideas from there also inspired simplifications on the other side of the picture.

Anyway, let me say on one slide, by example, what inaccessible entropy is. Suppose I have what's called a collision-resistant hash function. That's a function h that is shrinking: it takes n bits to, say, n minus k bits. Since it's shrinking, there must be lots of inputs that collide, and collision resistance means that even though there are lots of inputs that collide with each other, it's hard to find any collision. Technically, h should be chosen from a family of shrinking functions, and given a random function from the family, it's infeasible to produce a collision in it. But let's just imagine it's a fixed collision-resistant hash function. I choose an input x uniformly at random, and now let's look at the entropy of x given h of x. Well, that must be at least k, because x had n bits of entropy (it was n random bits), and I've conditioned on something that reveals only n minus k bits of information about it, namely the output of the hash function. On the other hand, if I consider any efficient algorithm that generates h of x, some output value of the hash function, then once it generates h of x, its hands are tied to a particular value of x: it cannot produce any value other than a single x whose image is h of x, even though there are many of them. So intuitively, given h of x, x has a lot of entropy in the Shannon sense, but that entropy is somehow inaccessible: no efficient algorithm can generate many different x's that map to h of x.

This can be formalized into another computational notion of entropy, which we call inaccessible entropy. The interesting thing here is that it seems dual to the notion of pseudoentropy that I talked about. In particular, in this example, and this is when the notion is interesting, the computational form of entropy, the accessible entropy, is much smaller than the real entropy. One can come up with a general notion, which we should maybe now call next-bit accessible entropy, and what we can prove is that if f is a one-way function, then a construction very similar to the one before, f of x followed by x, with the difference that I break f of x into bits and I don't need to break x into bits, has next-bit accessible entropy smaller than its real entropy: at most n minus something super-logarithmic in n. So not only does the notion seem dual, the construction seems very dual to the other one that we had. Yes, we're thinking of x as picked at random.
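To make the entropy accounting in this example explicit, here is a short worked calculation, written as a small compilable LaTeX fragment. It is only a sketch of the informal statements above: the precise definitions of accessible and next-bit accessible entropy are not given in the talk, so the "accessible" side is stated informally.

    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    % Real vs. accessible entropy in the collision-resistance example (sketch).
    Let $h\colon\{0,1\}^n\to\{0,1\}^{n-k}$ be a (fixed) collision-resistant hash
    function and let $X$ be uniform on $\{0,1\}^n$. Conditioning on the
    $(n-k)$-bit value $h(X)$ removes at most $n-k$ bits of entropy:
    \[
      H\bigl(X \mid h(X)\bigr) \;\ge\; H(X) - H\bigl(h(X)\bigr) \;\ge\; n-(n-k) \;=\; k .
    \]
    In contrast, an efficient algorithm that outputs some value $y=h(x)$ cannot
    exhibit two distinct preimages of $y$ without finding a collision, so the
    entropy in $x$ that it can actually \emph{access}, given $y$, is essentially $0$.
    % The analogous claim from the talk for a one-way function $f$: for the
    % generator that outputs the bits of $f(x)$ one at a time, followed by $x$
    % (with $x$ uniform on $\{0,1\}^n$), the real entropy of the $x$ part is $n$,
    % while the next-bit accessible entropy is at most $n-\omega(\log n)$.
    \end{document}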
Now, when you think of an adversary here: that is weaker than the definition of collision resistance, because an adversary need not generate things at random, and that somehow has to come into the definition, which I'm not giving formally.

Okay. So to conclude, the overall message that seems to be emerging is that complexity-based cryptography is possible because of gaps between real, that is information-theoretic, entropy and computational entropy. Depending on what type of cryptographic primitive you're interested in, the notion of computational entropy may be different, and its relationship to the real entropy may be different. In particular, secrecy in cryptography roughly seems to correspond to having pseudoentropy larger than the Shannon entropy, and unforgeability-type primitives seem to correspond to having accessible entropy smaller than the real entropy.

In terms of things left to do in the future: I mentioned that there seems to be a duality and analogy between inaccessible entropy and pseudoentropy, but we don't know a formal way to say that yet; right now it's more intuition. So, can one formally connect the two? Then there's the question I mentioned earlier: can one get a really efficient construction of pseudorandom generators from one-way functions, one that has hope of being practical? We don't know right now; my current guess is negative, but it changes from time to time. And while pseudoentropy has found connections with, and uses in, other places in complexity theory, and can even be related to notions used in additive combinatorics and other areas of pure mathematics, one may wonder whether this inaccessible entropy notion has useful applications or analogues in other areas as well. I'm not at the point of having a conjecture; I don't even know how to formulate a conjecture about it. Maybe one can look at what we call black-box reductions in cryptography, that is, certain limited ways of constructing things: if we talk about constructing pseudoentropy generators and inaccessible entropy generators in some black-box way, maybe there's some automatic way to move from one to the other. I'm not sure, maybe that's the problem, but it's been eerie how much the two have paralleled each other; there should be some explanation.

[Question, partly inaudible:] How would you try to prove that, since you don't know it yet? Well, we're working on it, just starting; it's something we've been thinking about. So there is a new result of Thomas Holenstein and others, which talks not about the seed length but about the efficiency of the construction: from a general one-way function, or even a regular one-way function where you don't know how many preimages there are, you must make a lot of invocations. So that would be a starting point: try to improve those lower bounds on efficiency and then somehow translate them to say that not only do you need to invoke the one-way function many times, you need to evaluate it on independent inputs, so you need lots of random bits, or something like that.

The results seem very asymptotic to me, and since you want to be protected against all polynomial-time algorithms, is there any effort at trying to get something more manageable, restricted to some class of algorithms, and to get some finite version of what you need to do?
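Since the talk leaves the formal definitions of pseudoentropy and accessible entropy implicit, here is a schematic way to write the secrecy/unforgeability correspondence just described, as a small compilable LaTeX fragment; it is a sketch of the intuition, not the precise definitions from the papers.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Schematic statement of the two entropy gaps described in the talk (informal).
    \textbf{Secrecy / pseudoentropy:} $X$ has pseudoentropy at least $k$ if there is
    a random variable $Y$ with $H(Y)\ge k$ that is computationally indistinguishable
    from $X$; the cryptographically useful regime is a gap
    \[
      \text{pseudoentropy}(X) \;>\; H(X).
    \]
    \textbf{Unforgeability / accessible entropy:} the accessible entropy of a
    generator bounds how much entropy any \emph{efficient} algorithm can actually
    realize in its outputs; the useful regime is a gap
    \[
      \text{accessible entropy} \;<\; H(X) \quad (\text{the real entropy}),
    \]
    and the difference, real minus accessible, is the inaccessible entropy.
    \end{document}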
Okay, so that's a good question. It is possible to do things less asymptotically than I have done; it's called concrete security in cryptography. There you fix a model of computation, Boolean circuits or something, and you really reason about concrete numbers; you can even get rid of the big-O's and everything and say, for a fixed value of n, what you get. It's just ugly. It is more painful. The things I'm talking about are, I think, still very far from having reasonable statements in a concrete sense, but other things in cryptography have been optimized to give quite reasonable concrete-security statements. I think even for constructing pseudorandom generators from one-way permutations, that is, one-way functions that are one-to-one and onto, I want to say Håstad and someone else did a concrete security analysis of that which was not unreasonable.

Does it even make sense for a fixed n? It does make sense for non-uniform security; there is a way of making sense of the definitions for concrete numbers, so it should be possible to formulate. But as for the other part of the question: what I was talking about still refers to general models of computation, like Boolean circuits, not to some restricted class of statistical tests that you can really get your hands on.

Okay, one more follow-up question. Do you think one-way functions are really the minimal assumption for these constructions, or could you base them on something weaker? So, it is known that all of these things imply one-way functions, and it's sometimes said that that means one-way functions are minimal. But that's not a completely satisfactory answer, because all these things are equivalent, so why not take one of the more complicated ones and claim that that is the minimal assumption? Oh, no, is there an easier assumption? Oh, I see, I see. Okay, so something below one-way functions, yes. There's this very interesting question of how you relate the assumption of one-way functions to things like the P versus NP question, and to things in between P versus NP and one-way functions, like whether NP has problems that are hard on average. People have certainly thought a lot about that. There are mostly negative results saying that if you restrict yourself to certain kinds of black-box reductions, black-box constructions of various kinds, you can't do it, like Luca's work with Andrej Bogdanov. But certainly not everything is ruled out there, so, I don't know. There is a reception question.
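To illustrate the kind of "concrete security" statement mentioned in the answer above, here is a sketch of the usual template, as a small compilable LaTeX fragment; the parameters t, epsilon, t', epsilon' are placeholders, not values from the talk.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Template for a concrete-security statement (illustrative only; the parameters
    % $t$, $\varepsilon$, $t'$, $\varepsilon'$ are placeholders, not values from the talk).
    A generator $G\colon\{0,1\}^n\to\{0,1\}^m$ is $(t,\varepsilon)$-pseudorandom if for
    every Boolean circuit $D$ of size at most $t$,
    \[
      \Bigl|\,\Pr_{x\leftarrow\{0,1\}^n}\bigl[D(G(x))=1\bigr]
            -\Pr_{y\leftarrow\{0,1\}^m}\bigl[D(y)=1\bigr]\Bigr| \;\le\; \varepsilon .
    \]
    A concrete-security analysis of a construction then says: if $f$ is
    $(t,\varepsilon)$-one-way, then $G$ built from $f$ is $(t',\varepsilon')$-pseudorandom
    for explicitly computed $t'$ and $\varepsilon'$, with no hidden big-$O$'s.
    \end{document}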