Great, thanks Ron, and thanks everyone for coming. So the context of this talk is the landscape of cryptography from one-way functions. Famously, Impagliazzo and Luby showed that one-way functions are necessary for essentially all of complexity-based cryptography. And on the other hand, a beautiful series of works has shown that they in fact suffice for a large number of cryptographic protocols and applications, including, for example, most of private-key cryptography and also some basic public-key primitives like digital signatures. The way these constructions of more useful primitives and protocols work is that they don't actually work directly with one-way functions, which are very unstructured and hard to manipulate in constructions. They instead deal with elements of this bottom layer here, which are more structured primitives that are easy to use in constructions and easy to reason about, and they thereby isolate the need to deal with one-way functions and their hardness to this bottom layer of reductions. It's this bottom layer that we're going to focus on today.

A recent line of work has been dedicated to getting simpler, more efficient constructions of these basic, more structured primitives. Along the way, it has shown surprising similarities behind the constructions and the analyses of these seemingly very different primitives, and it has raised the very intriguing possibility that there may be a way to formally unify the constructions or the analyses. But so far, this dream is still far out of reach. So in this work, we make a little bit of progress in this direction by introducing a new framework to reason about and manipulate the hardness of one-way functions in the way that's needed to get both statistically hiding commitments and pseudo-random generators from one-way functions.

To say a little more about what this means, we first need to understand a bit about how these constructions work, and they go via manipulating different computational analogs of entropy. So first, a brief reminder of what entropy is in the information-theoretic setting. The Shannon entropy of a random variable A is just the expected surprise, the expected log of one over the probability. And really the only thing we need to know is that for a random variable A over n-bit strings, the entropy is between 0, in the case that A is a point mass with all its weight on a single point, and n, when A is the uniform distribution over all n-bit strings.

So now let's understand the primitives of pseudo-random generators and statistically hiding commitments from this entropy-theoretic view. Most basically, a pseudo-random generator, let's just say, is a function from n-bit strings to 2n-bit strings. Now, the Shannon entropy of the output of the pseudo-random generator is at most n, and this is just for trivial reasons: the input is an n-bit string, so it has entropy at most n, and you can't increase the entropy by applying a function. But on the other hand, from the point of view of a computationally bounded adversary, the whole point of a pseudo-random generator is that the output looks uniform. So in some sense, the computational pseudo-entropy of the output is as if G(U_n) had full entropy. What this means formally is that there is some distribution X, computationally indistinguishable from the output of the pseudo-random generator, that does have true Shannon entropy 2n.
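To put what was just described in symbols, here is a rough rendering in standard notation (this is my transcription, not the slide itself; U_n denotes the uniform distribution on n-bit strings and \(\approx_c\) denotes computational indistinguishability):

\[
H(A) \;=\; \mathop{\mathbb{E}}_{a \sim A}\!\left[\log \frac{1}{\Pr[A=a]}\right],
\qquad
0 \;\le\; H(A) \;\le\; n \quad \text{for } A \text{ over } \{0,1\}^n,
\]
\[
G : \{0,1\}^n \to \{0,1\}^{2n}:
\qquad
H(G(U_n)) \;\le\; n,
\quad\text{yet}\quad
\exists\, X \approx_c G(U_n) \text{ with } H(X) = 2n \ (\text{namely } X = U_{2n}).
\]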
And this, of course, is the famous notion of pseudo-entropy introduced in the seminal work of Håstad, Impagliazzo, Levin, and Luby. Something to note here is that in this case, the distribution X is simply the uniform distribution on 2n-bit strings, and so a pseudo-random generator is sort of a maximal form of pseudo-entropy generation: not only is the pseudo-entropy more than the true entropy n, it's actually as large as it possibly could be.

It turns out there's a dual way to think about this in the context of, say, statistically hiding commitments. To say what this means, let's think of a statistically hiding commitment to a message drawn uniformly at random from n-bit strings, where you execute the commitment protocol and get some transcript T. Now, by the statistical hiding property of this commitment, the message m given the transcript T is supposed to be completely uniform or statistically close to uniform, because the commitment is statistically hiding. In particular, the true entropy of the message given the transcript is essentially n. But on the other hand, the binding property of the commitment says that a computationally bounded sender should not be able to decommit to more than one message. In particular, this adversary should not be able to access any entropy in the message given the transcript. So the computational entropy, the entropy left to a computationally bounded adversary, is much less than the true entropy and is actually negligible. And importantly, this holds even if the sender, the committer, is malicious and does not just sample a message uniformly at random and execute the protocol, but instead executes the protocol and generates the message simultaneously, in a way that tries to access entropy in m given T. The point is that you still can't do that. This is the recent notion of inaccessible entropy due to Haitner, Reingold, Vadhan, and Wee. And again, much like pseudo-entropy, we see that not only is the computational entropy much less than the true entropy, it is in fact zero or negligible. So again, this is sort of an extreme form of inaccessible entropy.

Given this, it's perhaps not surprising that the way the best constructions work now is that they first produce some form of pseudo-entropy or inaccessible entropy that's not necessarily maximal, but has some gap between the computational entropy and the true entropy. So these are pseudo-entropy generators and inaccessible entropy generators, and then you convert that into the full primitive that you want. It turns out that in these constructions, once you have your base pseudo-entropy or inaccessible entropy generator with this gap between computational and real entropy, the steps to turn it into the full primitive are actually quite similar. You do some simple things like repetition to amplify these gaps, to turn them from Shannon entropy to min-entropy, things like this, and then some sort of hashing or extraction step to get true randomness or a true commitment. But on the other hand, the steps to actually get this first computational entropy generator have been mostly ad hoc. So the question behind this work and this series of works is to try and get a better understanding of whether these things can be unified. And if we zoom in even further, before getting the eventual primitives, let's look at the constructions of these computational entropy generators.
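In the same spirit, the commitment example can be summarized as follows (again a sketch in my own notation: M is the committed message, T the transcript, and "accessible entropy" informally refers to the entropy in M given T that a polynomial-time committer can actually achieve):

\[
\text{statistical hiding:}\quad H(M \mid T) \;\approx\; n,
\qquad\qquad
\text{computational binding:}\quad \text{accessible entropy of } M \text{ given } T \;\le\; \mathrm{negl}(n),
\]

where the second bound holds even for a malicious polynomial-time committer that generates \((\tilde{M}, \tilde{T})\) jointly rather than committing to a uniformly random message.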
If we go to 2010, the best constructions are due to Haitner, Reingold, and Vadhan, for the pseudo-entropy generator, and essentially the same for the inaccessible entropy generator. You can see here that the constructions are somewhat similar, the actual generators look somewhat similar, but the techniques are quite different and ad hoc. Now, interestingly, in 2012, Vadhan and Zheng, in the work that has the best-known pseudo-random generator construction from one-way functions, introduced a new notion of relative pseudo-entropy that amazingly allowed them both to simplify the eventual construction, making it essentially the same as the inaccessible entropy construction, and also to give a simpler, tighter proof. The key idea is that this notion of relative pseudo-entropy is a very simple, beautiful information-theoretic notion that is easily derived from one-way functions. It lets you manipulate this hardness in a very information-theoretic way and allows you to isolate the computational part of the proof to just one step. So in particular, you isolate the information-theoretic and computational aspects of the proof, and you get a simpler, tighter proof of this reduction. On the other hand, that work was entirely about pseudo-entropy, so it left the inaccessible entropy leg of this diagram unchanged.

So in this work, what we do is introduce a few new notions, most importantly hardness in relative entropy, which, like relative pseudo-entropy, is a very simple information-theoretic notion that is easily found in one-way functions. It allows us both to recover relative pseudo-entropy in the case of pseudo-random generators, and also to make the proof for inaccessible entropy generators more modular, simpler, and slightly tighter, doing most of the work entirely with information-theoretic manipulations and again isolating the computational aspect to one step. We'll actually see that a lot of the steps on the top and bottom mirror each other, but we first need to understand a bit more about what these notions actually are.

The preliminary talk mentioned KL divergence. The new notion is called hardness in relative entropy, so let's of course define relative entropy, or KL divergence, which is a form of distance between probability distributions. The relative entropy of A with respect to B is the expected relative surprise, where the expectation is over the distribution on the left: the expected log ratio of the probability under A over the probability under B. Just for intuition, some basic properties: the relative entropy is non-negative, with equality if and only if the distributions coincide. The relative entropy is finite if and only if the support of A is contained in the support of B; or, another way to think of this, if A ever puts mass on an element where B has no mass, then the relative entropy is infinite. And for intuition, if B for example is uniform over its support, then the relative entropy of A with respect to B is minimized when, first, the support of A is contained in the support of B, but then also the support of A is as large as possible within that set and A is as close to uniform on that support as possible.

Okay, and also just very briefly, a one-way function, mostly for notational purposes, is just a function F, computable in polynomial time, such that for any polynomial-time adversary, the probability of inverting a uniform image of the one-way function is negligible.
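Written out, the two definitions from this part of the talk are the standard ones (so the notation below is assumed rather than taken verbatim from the slides):

\[
\mathrm{KL}(A \,\|\, B) \;=\; \mathop{\mathbb{E}}_{a \sim A}\!\left[\log \frac{\Pr[A=a]}{\Pr[B=a]}\right] \;\ge\; 0,
\quad \text{with equality iff } A = B,
\qquad
\mathrm{KL}(A \,\|\, B) < \infty \iff \mathrm{Supp}(A) \subseteq \mathrm{Supp}(B),
\]
\[
F \text{ is one-way:}\quad F \text{ is poly-time computable and for all p.p.t. } \mathcal{A}:\;
\Pr_{x \sim U_n}\!\left[\mathcal{A}(F(x)) \in F^{-1}(F(x))\right] \;\le\; \mathrm{negl}(n).
\]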
Okay, so now before we talk about the new notion of hardness in relative entropy, let's first understand the VZ notion of relative pseudo-entropy that they use for pseudo-entropy generators. We say that a distribution X has relative pseudo-entropy delta given F(X) if for all polynomial-time adversaries A, the relative entropy of the joint distribution of the input X and the image F(X), with respect to the joint distribution of the adversary run on F(X) together with F(X) itself, is at least delta. That was a lot of words, so let's try to unpack it briefly. Notice that the second component here, the F(X) component, is the same on both sides. So this divergence is made zero if the adversary A in fact samples exactly from the pre-image distribution of X given F(X). And on the contrary, this relative entropy will be large if the support on the right is much smaller than the support on the left; or in other words, if A puts a lot of mass outside the pre-images of F(X), that is, if A fails to invert F.

Given this intuition, it's not surprising that VZ proved that if F is a one-way function, then the uniform distribution has super-logarithmic relative pseudo-entropy given the image. Now, normally such a clean-looking theorem would have a very ugly proof, because we're dealing with one-way functions, but one beautiful thing about this notion is that in fact I can give the proof on this slide. The key is that the relative entropy is very nice to work with. In particular, we have the data processing inequality, which says essentially that the relative entropy is non-increasing under the application of a function to both sides of the divergence. So what function are we going to apply? We're just going to ask whether the first component of this joint distribution is a pre-image of the second under the one-way function. And when we do this, we see that on the left-hand side, by definition, the first component is always a pre-image. But on the right-hand side, by the one-way property, because an adversary can invert only with negligible probability, it's only negligibly often a pre-image, and thus this relative entropy is super-logarithmic: it's the log of one over the adversary's inversion probability, the log of one over the security of the one-way function.

Okay, so this talk is about unifying the entropies. So how are we going to try and put inaccessible entropy into this framework of relative pseudo-entropy? First, let's remember that relative pseudo-entropy has this adversary A here on the right-hand side of the relative entropy, and its purpose is to invert the one-way function. But on the other hand, for inaccessible entropy, the adversary is really not an inverter; it's a malicious generator or sampler. To explain what I mean by that, let's recall the example of a statistically hiding commitment in which a sender is supposed to choose a uniformly random message, execute the protocol, and get a transcript T. The security notion we want to look at concerns a malicious adversary which is not just doing this process, but is somehow jointly executing the protocol and sampling a T and an M, in such a way that it tries to access more entropy in its adversarial message M tilde given the transcript T tilde. The binding property we want is quantified over all these malicious generators or samplers, and says that no matter what you do, if you're still poly-time bounded, you can't actually access any entropy in the message given the transcript, even if you maliciously try to execute the sampling process.
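As a sketch, the Vadhan-Zheng definition and the one-line proof just described can be written as follows (my rendering; both arguments of the KL divergence are joint distributions). X has relative pseudo-entropy at least \(\delta\) given F(X) if, for every p.p.t. adversary \(\mathcal{A}\),

\[
\mathrm{KL}\big( (X, F(X)) \,\big\|\, (\mathcal{A}(F(X)), F(X)) \big) \;\ge\; \delta .
\]

For a one-way F, \(\delta\) can be taken super-logarithmic: apply the data processing inequality with the map \(g(x,y) = \mathbf{1}[F(x) = y]\), which is always 1 on the left and is 1 on the right only when \(\mathcal{A}\) inverts, giving

\[
\mathrm{KL}\big( (X, F(X)) \,\big\|\, (\mathcal{A}(F(X)), F(X)) \big)
\;\ge\;
\mathrm{KL}\big( \mathbf{1} \,\big\|\, \mathbf{1}[F(\mathcal{A}(F(X))) = F(X)] \big)
\;=\; \log \frac{1}{\Pr[\mathcal{A} \text{ inverts } F]}
\;=\; \omega(\log n).
\]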
So our goal is now to capture the hardness of maliciously sampling this pair (X tilde, Y tilde), supported on (X, Y), going back to the one-way function example; we're trying to get inaccessible entropy out of the one-way function. The first and most natural attempt, perhaps, is to look at the relative entropy of the output distribution of the maliciously sampled (X tilde, Y tilde) with respect to the true (X, Y), and we'd like to say that this is large for any polynomial-time sampler. Unfortunately, this is not true. In fact, the divergence is zero for the simplest thing you can do, which is just to be honest: you pick a uniformly random X, compute F(X), and by definition this distribution will be the same as the one on the right. The problem here is that this relative entropy isn't capturing the maliciousness of the generator: it's not the output that's necessarily weird, it's the guts of the sampling process.

So the key idea is that we're going to look not just at the output of the sampler, we're going to look at the randomness of the sampler. But to do that, we need to accompany the sampler by some sort of simulator. What that simulator is going to do is, given an arbitrary image of the one-way function, generate compatible coin tosses for the sampler, in the sense that given Y, the simulator is supposed to produce randomness for the sampler or generator such that when you generate forward again, you get back Y. And then we're going to compare the uniform randomness used by the generator in the forward direction with the randomness you get from the simulator.

So let's try to be a little more rigorous and use some pictures. We start with just the one-way function F that maps X to Y, and we're going to introduce two adversaries. The first is this malicious sampler or generator G tilde, which maps randomness to a joint distribution (X tilde, Y tilde), in such a way that it's consistent with the one-way function, meaning that Y tilde is always equal to F(X tilde). And the simulator I talked about we introduce as a second adversary that goes from Y back to R, in such a way that if you follow the diagram from Y to R and back, you should get back to where you started, or perhaps to some special fail symbol if the simulator isn't able to accomplish its task.

Now, with this model in place, we can define our main notion of hardness in relative entropy, and say that the joint distribution (X, F(X)) has hardness Delta in relative entropy if for all polynomial-time adversaries G tilde and S of this form, the relative entropy of the pair consisting of the uniform randomness used by the generator and the generator's output image of the one-way function, with respect to the pair consisting of the simulated randomness given a true one-way function output and the true one-way function output, is at least Delta. Again, this is a lot of notation to unpack. This will be made zero if both the generator G tilde is able to perfectly approximate the distribution of Y, as we've seen, and also the simulator is able to perfectly sample from the randomness used, conditioned on getting the output Y.
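In symbols, and with the caveat that this is my transcription of the definition as stated verbally (R is the generator's uniform coins, \(\tilde{Y}\) is the image output of \(\tilde{G}(R)\), \(\tilde{G}(r)_1\) denotes the first component of \(\tilde{G}(r)\), and Y = F(X) for uniform X), together with the two facts discussed next:

\[
(X, F(X)) \text{ has hardness } \Delta \text{ in relative entropy if, for all p.p.t. } \tilde{G}, S:\quad
\mathrm{KL}\big( (R, \tilde{Y}) \,\big\|\, (S(Y), Y) \big) \;\ge\; \Delta,
\]

where \(\tilde{G}\) is F-consistent, meaning \(\tilde{G}(R) = (\tilde{X}, \tilde{Y})\) with \(\tilde{Y} = F(\tilde{X})\), and S on input Y outputs coins for \(\tilde{G}\) (or a fail symbol). Two facts about this notion:

\[
\text{(1) for the honest sampler } \tilde{G}(x) = (x, F(x)):\quad
\mathrm{KL}\big( (X, F(X)) \,\big\|\, (S(F(X)), F(X)) \big) \;\ge\; \Delta,
\]

which is exactly relative pseudo-entropy with S in the role of the inverter; and (2) if F is one-way, applying data processing with the map \((r, y) \mapsto \mathbf{1}[F(\tilde{G}(r)_1) = y]\) shows the divergence is at least \(\log\big(1/\Pr[\tilde{G}(S(Y))_1 \in F^{-1}(Y)]\big) = \omega(\log n)\), since \(\tilde{G} \circ S\) would otherwise invert F.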
So, a little bit more intuition. First, let's note that if we look at the example that defeated the earlier first attempt, where we just consider the honest sampler that picks X uniformly at random and computes F(X), we recover exactly the notion of relative pseudo-entropy defined by Vadhan and Zheng. Because here the randomness is just X, Y tilde is F(X), and if you replace Y with F(X), this is exactly what we wrote down for relative pseudo-entropy. But on the other hand, we also have not generalized too far; we haven't given the adversary too much power, because if F is a one-way function, then this joint distribution still has hardness in relative entropy. It'll be super-logarithmic in n, again the log of one over the security of the one-way function, and the proof is still one line and essentially the same as the proof for relative pseudo-entropy. The key thing to note is that if you take Y, simulate the randomness, and sample X, then whenever you succeed you have inverted the one-way function.

Okay, so finally let's talk a little bit about this picture. We've gone through what hardness in relative entropy is; we've defined it. Now let's look at this diagram that shows the steps you take to get pseudo-entropy and inaccessible entropy. Unfortunately, I don't have time to go over these steps in detail; I recommend you take a look at the paper if you're interested. But what I want to point out is that these steps on the two branches are actually quite dual to each other. As we mentioned, if you start with hardness in relative entropy and you fix the generator G tilde to be the honest sampler, you immediately recover relative pseudo-entropy. And then, if your goal is to obtain next-block pseudo-entropy, it turns out the next step is that you make this adversary online in some way and split into some sort of block-wise notion, with primarily information-theoretic considerations. And similarly here, you again do most of your manipulation information-theoretically; you just swap the order a bit: you first make your generator online and then fix not the generator but the simulator. Again, I don't think I can convey much more, other than that these steps, if you look at them, are actually extremely natural once you have this hardness in relative entropy notion, and are very dual and similar to each other. And so this gives one step towards unifying these entropies, and perhaps later even the constructions or more of the analysis.

So I'd like to conclude with some future research directions. First, if you recall the bottom layer of primitives in the diagram at the beginning, there were also universal one-way hash functions. It would be nice to be able to use this notion, or something like it, to also get more efficient constructions of UOWHFs. In fact, the best known construction of UOWHFs does go via a slightly different notion of inaccessible entropy, so there's hope that this may be able to unify that one as well. But more broadly, I think the key idea here was using relative entropy, or KL divergence, or information-theoretic notions in general, as a way to capture the hardness in cryptographic primitives and reductions, in a way that's very simple and in particular very much isolates the computational and information-theoretic parts of these reductions.
And with that, I hope this may be useful in cryptography, and I'll conclude the talk. So thank you. Great, thank you. Questions? Okay, thank you again. Thank you.