Thank you for the introduction. I'm going to start off with a bit of motivation for this work. Take the following encryption scheme. This is a very standard way of converting a trapdoor permutation into a public key encryption scheme. Choose some randomness R, apply the trapdoor permutation to it to get the message header. Then you hash the randomness R using some hash function and use the result as a one-time pad to encrypt the message M. Now, suppose that I ask you to prove the CPA security of this scheme. I want you to be able to handle many-bit messages using a single randomness R, and I want your proof to work for arbitrary trapdoor permutations. The one thing that I do allow you to play with is the actual hash function, H. The first idea these days might be to try to use random oracles. In the random oracle model, introduced by Bellare and Rogaway, we basically pretend that the hash function H is actually a truly random function that sits up in the sky, and the only way that we can interact with this function is by querying it on inputs of our choice. So I send an input, I get the output. We can easily rewrite the scheme as a random oracle scheme that obeys this restriction, and the random oracle model says that the adversary is also required to interact with the random oracle in this way. All right, random oracles are actually very interesting objects. They are great extractors, even for computational unpredictability. So for example, if I have some unknown input and I give you some leakage about it, maybe a one-way function applied to the input, and now I also hash the input using the random oracle, the output of the random oracle is pseudorandom. It's also hard to find outputs of the random oracle that have known trapdoors, and this is useful for, say, using hash functions to generate common reference strings where everyone believes that no one knows a trapdoor.
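To make the scheme concrete, here is a minimal Python sketch. The trapdoor permutation is textbook RSA with tiny hard-coded primes, and H is SHA-256; both are purely illustrative stand-ins, not secure parameters.

```python
import hashlib
import secrets

# Toy sketch of the scheme from the talk: Enc(M) = (f(R), H(R) XOR M).
# The "trapdoor permutation" is textbook RSA with tiny primes -- purely
# illustrative, NOT secure.
P, Q = 61, 53
N, E, D = P * Q, 17, 413        # E*D = 1 mod lcm(P-1, Q-1)

def f(r: int) -> int:           # trapdoor permutation: R -> R^E mod N
    return pow(r, E, N)

def f_inv(c: int) -> int:       # inversion using the trapdoor D
    return pow(c, D, N)

def H(r: int, length: int) -> bytes:
    # hash the randomness into a one-time pad (messages up to 32 bytes here)
    return hashlib.sha256(str(r).encode()).digest()[:length]

def encrypt(m: bytes):
    r = secrets.randbelow(N)            # choose some randomness R
    header = f(r)                       # message header: f(R)
    body = bytes(a ^ b for a, b in zip(H(r, len(m)), m))   # H(R) XOR M
    return header, body

def decrypt(header: int, body: bytes) -> bytes:
    r = f_inv(header)                   # recover R with the trapdoor
    return bytes(a ^ b for a, b in zip(H(r, len(body)), body))
```

The question the talk asks is exactly about this shape of scheme: what property of H, short of a random oracle, makes it CPA secure for arbitrary trapdoor permutations?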
Another easy application of random oracles is to convert selective to adaptive security for primitives like signatures and identity-based encryption. In the case of signatures, basically I just hash the message before signing. There are many other uses of random oracles, but this is just a couple. Now unfortunately, random oracles don't actually exist. There's no object in the sky that contains a truly random function that people can query. Instead, everyone really does have the code for a hash function, and they evaluate it for themselves. Therefore, a proof in the random oracle model is really a heuristic argument for security in the real world, and there's a sequence of works showing that this heuristic actually does fail in many settings. All right, so the next approach, if we don't want a heuristic and actually want a security proof, is to look at some standard model definitions for the hash function H. We could start with the standard definitions: one-wayness, pseudorandom generator, collision-resistant hash function, and so forth. These definitions have the advantage of being very simple and easy to state. They are among the weakest assumptions used in cryptography, and we can base them on a variety of number-theoretic assumptions. Unfortunately, they have limited usefulness for actually instantiating random oracles, and for the scheme I described, even if you assume H satisfies one of these standard definitions, we don't know how to prove security. So then we might turn to more exotic definitions, such as universal computational extractors, or UCEs, or certain variants of very strong one-wayness and pseudorandomness. These definitions are actually very useful for instantiating some random oracle constructions. For example, by picking the right one of these definitions, we can prove security for the scheme on the first slide.
Unfortunately, though, for most of these, the only way we know how to instantiate the security properties is just to assume that the security property holds for a given hash function. We don't know how to reduce most of these to standard cryptographic assumptions. Okay, and why is this a problem for these exotic security definitions? I'm going to highlight the problem with an example. So consider the following notion of a strong pseudorandom generator, which strengthens some other strong one-wayness definitions in the literature. More or less, the requirement is that the function is a pseudorandom generator even if I give you arbitrary leakage on the seed. The only condition I ask of the leakage is that, given the leakage, you can't actually compute the seed. So as long as the seed remains computationally unpredictable given the leakage, the output of the hash function is pseudorandom. Now, how do we gain confidence in this assumption? Suppose I want to assume that SHA-256 is a strong PRG. Well, I might post some challenges. I might go on the web, post an output along with some leakage, and say, if you can solve this, I'll give you $1,000. Now, if after several months no one breaks this, it might give me confidence that the hash function is a strong PRG for that particular leakage I gave you. But what about other leakages? If I want another leakage, I have to post another challenge for that leakage function, and so forth. And it's not clear that this approach will work for general arbitrary leakages, because there are infinitely many possible leakage functions. It might even be that for some really pathological leakage functions, maybe those involving obfuscated programs and things like that, this definition is actually unattainable. All right, and this issue is not just limited to strong PRGs.
It's really an issue inherent to a lot of these exotic security definitions that take the form of assumption families. Really, this definition is a family of assumptions, one assumption for every possible leakage function. Okay. Now, these definition families are actually very useful as security properties; in the literature defining them, the whole point was to show how useful they are. But the point I want to make is that they're highly undesirable as starting points, as assumptions to build on for achieving tasks. The ideal scenario is instead to use a single, simple, well-studied assumption, one that I can post challenges for on the web and gain confidence in because no one can break it, and then use that assumption to derive these strong security properties. And that's where this work comes in. Using a new tool called extremely lossy functions, or ELFs, we show how to take a single, simple assumption and derive interesting consequences for instantiating random oracles. All right, so what is a standard lossy function? A standard lossy function is a function that comes in one of two flavors. There is an injective mode, where the function is actually one-to-one and there are no collisions. And there's a lossy mode, where the image size is much, much smaller than the domain size, so there are many, many collisions; it's called lossy because the output of the function necessarily loses information about the input. The security requirement for standard lossy functions is that the injective mode and the lossy mode are indistinguishable to any efficient adversary. Just a couple of notes on standard lossy functions. Even though the functions in the lossy mode are very lossy and have a small image, the image is still typically of exponential size, just a much smaller exponential than the domain. And generally, standard lossy functions also include trapdoors, which are required for many of the applications.
For this work, though, we actually don't need any of the trapdoors. So an extremely lossy function is a standard lossy function on steroids. Basically, there is an injective mode, like before, that I'll represent with the big, human-sized elf, and there is a lossy mode, which I'll represent with the little, Dobby-sized elf, that is so incredibly lossy that the image size is actually a polynomial. The total number of image points is just some polynomial. Okay, now, as I've described this, this is not an attainable security notion. I can't possibly have computational indistinguishability between these two modes, because there's a sort of trivial attack. Basically, I can just try to evaluate the function I'm given on a bunch of inputs, looking for a collision, and this attack runs in time roughly equal to the size of the image in the lossy mode. And since our image size is polynomial, we can't possibly get security against this adversary. Okay, so instead, rather than having this one lossy mode, there are a bunch of lossy modes of varying image sizes. And the security property is that for any adversary, I can choose a lossy mode that fools that adversary. Again, because every lossy mode has a polynomial image size, I can't have one lossy mode fool all possible adversaries, but I can reverse the order of quantifiers and say that for any adversary, I can choose a lossy mode that fools that adversary. Okay, so how do we construct these objects? The first step is to build something that I'll call a bounded-adversary ELF, and this is one where we go back to actually just having a single lossy mode. As we said before, we can't possibly have security against all adversaries, but instead I'll insist on security against a priori bounded adversaries.
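The trivial attack is easy to see in code. Here is a sketch with toy stand-ins for the two modes: the adversary just evaluates on random inputs and looks for two distinct inputs with the same output.

```python
import random

# Sketch of the generic attack from the talk: if the lossy mode has a
# polynomial-size image, an adversary distinguishes it from the injective
# mode by evaluating on random inputs until two distinct inputs collide.
# Both "modes" below are toy stand-ins.
DOMAIN = 2 ** 32

def lossy_f(x: int) -> int:
    return x % 1000                  # toy lossy mode: image size 1000

def injective_f(x: int) -> int:
    return x                         # toy injective mode: no collisions

def finds_collision(f, tries: int, rng: random.Random) -> bool:
    seen = {}
    for _ in range(tries):
        x = rng.randrange(DOMAIN)
        y = f(x)
        if y in seen and seen[y] != x:
            return True              # two distinct inputs, same output
        seen[y] = x
    return False
```

With a few thousand queries, the lossy mode yields a collision almost surely (a birthday argument on a 1000-point image), while the injective mode never does. The number of queries needed scales with the image size, which is why a polynomial-size lossy mode can only fool adversaries whose running time is bounded in advance.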
All right, and the details of this don't matter for the purposes of this talk, but basically, if you take the DDH-based lossy functions in the literature and shrink the group size down to be a polynomial, you get a lossy mode that has a polynomial image size, and by making the assumption that the DDH problem actually takes exponential time to solve, we get security against bounded-time adversaries, exactly as we needed. Okay, so I need to make this exponential DDH assumption, that DDH takes exponential time to solve. It's definitely a non-standard assumption in cryptography; we usually like polynomial hardness assumptions, or maybe subexponential hardness assumptions. But I want to argue that this assumption is still very reasonable. It's a nice assumption in the complexity assumption framework of Goldwasser and Kalai, and it also appears to be true. On elliptic curves, for example, the best known attacks take exponential time, and in fact, even in practice, the parameters for elliptic curve schemes are set assuming that it takes exponential time to solve. So if exponential DDH is false, we actually have much bigger problems to worry about, because a lot of the crypto on the web would be insecure at the current key sizes. All right, from bounded ELFs, we can get to unbounded ELFs. Basically, we just iterate at a bunch of security levels. I string together a bunch of bounded-adversary ELFs whose lossy image sizes are growing polynomials. When I want to invoke lossiness against a particular t-time adversary, I just invoke the lossiness at the i-th ELF, where i is large enough to fool that adversary but still have a polynomial image size. As I've drawn it, this picture actually doesn't quite work, because the output size might grow too fast, but using pairwise independent hashing between the iterations, we can keep the output size small. Okay, so now let's move on to how we actually use ELFs to accomplish interesting things.
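The chaining idea can be sketched schematically. Here the bounded ELFs are toy stand-ins (reducing mod a small bound when the level is lossy), and the interleaved pairwise independent hashes are the classic ax+b family; only the shape of the construction is meant to be faithful, not the DDH-based instantiation.

```python
import random

# Schematic sketch: an "unbounded" ELF as a chain of bounded-adversary ELFs,
# interleaved with pairwise independent hashing (ax+b mod a prime) so the
# intermediate outputs stay small. All primitives are toy stand-ins.
PRIME = (1 << 61) - 1                # modulus for the ax+b hash family

def make_chain(levels: int, lossy_level=None, lossy_bound=None, seed=0):
    """All levels injective, except lossy_level (if set) has image <= lossy_bound."""
    rng = random.Random(seed)        # public coins: the pairwise hash keys
    coeffs = [(rng.randrange(1, PRIME), rng.randrange(PRIME))
              for _ in range(levels)]

    def f(x: int) -> int:
        for i, (a, b) in enumerate(coeffs):
            x = (a * x + b) % PRIME          # pairwise independent hash layer
            if i == lossy_level:
                x %= lossy_bound             # bounded ELF i in lossy mode
        return x
    return f
```

Switching level i to lossy collapses the whole chain's image to at most that level's bound, since everything downstream of it is injective, which mirrors how lossiness is invoked at the right level for a given t-time adversary.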
So using ELFs, I can actually give you a construction of a PRG satisfying the strong notion of leakage resilience that I defined before. Basically, what's going on here is that from the input X, we derive a bunch of different Goldreich-Levin hardcore bits, all independent. Now, these bits are hardcore if I give you just one of them; they're not hardcore anymore if I give you all of them. So instead, what I do is feed the hardcore bits into this alternating sequence of ELFs and pairwise independent hashing. Now I'm going to prove that this actually satisfies the definition of strong pseudorandomness. Recall the setup: we have X and some leakage about X. X gets fed through the hash function to produce some output Y, and the leakage and Y are given to the adversary. The guarantee that I'm given is that X is computationally unpredictable given the leakage, and I want to prove to you that Y is indistinguishable from random to this adversary. I'm going to do this using a handful of hybrids. In the first hybrid, I'm going to invoke ELF magic on the last ELF in the sequence and replace it with a lossy-mode ELF. As we've discussed before, this necessarily requires knowing the adversary's running time, but using the guarantee of the ELF, I can do this. The second step is to invoke Goldreich-Levin. To simplify the picture, I'm going to take this part of the construction and just call it L. So L takes the input X and produces some output. The only property I care about for now is that the last step of L is to apply a lossy-mode ELF, which means that L has a polynomial image size. And because L has a polynomial image size, it's very easy to prove the following lemma: if I give you the original leakage and additionally give you the output of L as leakage, X still remains computationally unpredictable. This follows from a simple argument just using the fact that L has a polynomial image size.
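The shape of the construction can be sketched as follows. The Goldreich-Levin bits are genuine inner products of X with public random strings; the ELF layer is a toy injective stand-in and the pairwise hashes are the ax+b family, so this shows only the alternating structure, not a secure instantiation.

```python
import random

# Sketch of the strong-PRG construction: derive independent Goldreich-Levin
# hardcore bits <X, R_i> from the seed X, and absorb them one at a time
# through alternating ELF / pairwise-hash layers.
PRIME = (1 << 61) - 1
N_LAYERS = 32

def gl_bit(x: int, r: int) -> int:          # Goldreich-Levin bit: <x, r> mod 2
    return bin(x & r).count("1") % 2

def strong_prg(x: int, seed: int = 0) -> int:
    rng = random.Random(seed)               # public coins: the R_i and hash keys
    state = 0
    for _ in range(N_LAYERS):
        r = rng.randrange(1 << 64)
        a, b = rng.randrange(1, PRIME), rng.randrange(PRIME)
        state = 2 * state + gl_bit(x, r)    # append the next hardcore bit
        state = state % PRIME               # ELF layer: injective stand-in
        state = (a * state + b) % PRIME     # pairwise independent hash layer
    return state
```

The point of this layering is exactly what the hybrid argument exploits: switching one ELF layer to lossy mode makes everything before it have a polynomial image, so the single hardcore bit added at that layer can be replaced by a truly random bit via Goldreich-Levin.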
So what this means is that even now that I give you this additional leakage, I can apply Goldreich-Levin to this last hardcore bit and replace it with truly random. Okay, so let's open things up again. The next step is to undo the ELF magic, replacing the little lossy ELF with an injective ELF. And now we see that I actually have the exact same picture I showed you before, except that I've replaced the last bit with true randomness. So now what I do is just repeat this. I move down the chain, take the second-to-last ELF, replace it with lossy. Because it's lossy now, I can apply Goldreich-Levin to the second-to-last bit, replace it with true randomness, undo the ELF magic, and now I'm back where I started, except the last two bits are truly random. And I just repeat, going down the chain. All right, so at the end of the day, all of the Goldreich-Levin bits have been replaced with truly random bits. What this means is that the Y the adversary sees is actually completely independent of the seed. And moreover, it is straightforward to show that Y is actually statistically close to random when all these bits are random. So putting things together, what I have is that for any computationally unpredictable seed, the output of the hash function is pseudorandom, as desired. And also, as is useful for certain applications, this function H can be shown to be injective. All right, so what are the applications? Here are just a few immediate applications. One is an instantiation of an injective one-way function satisfying the definition of Bitansky and Paneth. Previously, basically, we had to make these annoying assumption families to get any sort of injective one-way function. We also get a very simple point function obfuscator, secure against auxiliary input. And again, this more or less just required the injective one-way function from before, so the previous constructions had the same limitations.
One way of looking at the strong PRG is that what it actually gives is a way to extract polynomially many hardcore bits from any computationally unpredictable source. This was previously known from universal computational extractors, though really UCEs are more of a generalization of this definition, or from constructions based on very strong forms of obfuscation. And these have several limitations, including being, again, annoying assumption families; they only worked for one-way functions; and they're incredibly inefficient. And then lastly, the CPA scheme that I gave you on the first slide is actually secure using this hash function H. This is really just a special case of the hardcore bit extraction in the case of injective one-way functions: you see that H of R is just extracting hardcore bits from the trapdoor permutation. Okay, so some other results that we get from ELFs. Recall the selective to adaptive security conversion using random oracles. We can actually just use an ELF instead, and the proof is very simple. The construction is to just hash the message that you're going to sign using the ELF, and then sign with whatever selectively secure signature scheme you had. In the security proof, first I switch to a lossy-mode ELF; this is indistinguishable to the adversary. And now, because the lossy-mode ELF has only a polynomial image size, I can guess the output of the hash, the value that actually gets signed by the selectively secure signature scheme, incurring only a polynomial loss in security. We also give a definition of output intractability for hash functions. This captures the case where I want to use the hash function to generate common reference strings and ensure that no one knows a trapdoor for the output. And the construction is extremely simple: it's just an ELF composed with a pairwise independent hash function.
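The selective-to-adaptive conversion is short enough to sketch directly. Here `toy_sign` is a hypothetical MAC-style placeholder for the underlying selectively secure scheme, and the ELF is an injective-mode stand-in built from SHA-256; neither is the real instantiation.

```python
import hashlib

# Sketch of the selective-to-adaptive conversion: hash the message through
# the ELF, then sign the digest with the underlying selectively secure
# scheme. Both primitives below are toy placeholders.
def elf(message: bytes) -> bytes:        # injective-mode ELF stand-in
    return hashlib.sha256(b"elf|" + message).digest()

def toy_sign(sk: bytes, digest: bytes) -> bytes:
    # placeholder "signature" (really a keyed hash); a real instantiation
    # would be any selectively secure signature scheme
    return hashlib.sha256(sk + digest).digest()

def sign_adaptive(sk: bytes, message: bytes) -> bytes:
    return toy_sign(sk, elf(message))    # sign the ELF output, not the message

def verify_adaptive(sk: bytes, message: bytes, sig: bytes) -> bool:
    return sig == toy_sign(sk, elf(message))
```

The proof idea rides on the same quantifier trick as before: against a bounded adversary, the ELF can be switched to a lossy mode with polynomially many outputs, so the reduction can guess the signed digest in advance with polynomial loss.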
All right, so to conclude: in this work, we show that the exponential hardness of DDH is useful for constructing this object called an ELF, and ELFs are useful for a variety of interesting applications involving instantiating random oracles. There are a handful of open questions that we leave. The biggest in my mind is: can we build ELFs from other assumptions? Right now we need the exponential hardness of DDH. Could we perhaps base them on the exponential hardness of learning with errors? Unfortunately, this doesn't seem to work; the current lossy functions based on learning with errors just don't seem quite lossy enough for our purposes. A related question is: can we get post-quantum ELFs? It's known that quantum computers can break DDH, so can we get an ELF that is secure against quantum adversaries? This is very related to the previous question; if we could do it from LWE, that would be fantastic. And then lastly, can we get more applications of ELFs, more interesting applications? All right, that's all I have for you. Thank you very much.