Hi, thanks for tuning in. I'm Willy and I'll talk about targeted lossy functions and applications. This is joint work with Brent Waters and Daniel Wichs.

So what do we do in this paper? We define a particular form of lossy functions. What's a lossy function? It's a function such that, given an output of the function, it's hard to recover which particular input you started from, and that should be hard information-theoretically. We define a variant of lossy functions that we call targeted lossy functions, we build such targeted lossy functions from Minicrypt assumptions, and we also show that this primitive is useful by presenting some applications.

Okay, so what are targeted lossy functions? Let's go a bit back in time and start with the notion of lossy trapdoor functions, introduced by Peikert and Waters. It turns out that lossy trapdoor functions have been an incredibly useful primitive, in the sense that they have led to many, many applications. One notable one was in the original paper: lossy trapdoor functions gave the first construction of CCA-secure encryption from lattice assumptions. But they have also led to many other kinds of applications.

So what's a lossy trapdoor function? It's a family of functions, with a domain and a range, indexed by a function key. This little fk over there is a function key, and it defines a function of the family. There is a way to sample function keys in a so-called injective mode, so that the resulting function is injective. And there is also an alternate way of generating function keys, which we call a lossy mode, such that the function is lossy. Again, that means that applying the function should lose, information-theoretically, some information about its input. In particular, if I give you the output of the function on some x, it should be hard to tell which x you started from, because there are many possible ones; in particular, it's hard to guess the exact pre-image you started from.

As the name suggests, a lossy trapdoor function also features a trapdoor. The trapdoor is only defined in injective mode, and it allows you to invert the application of the function. And the last property, which ties everything together and makes it a very beautiful primitive, is computational indistinguishability: if I sample a key in injective mode or in lossy mode and I give you the key, you cannot tell the difference. In particular, you cannot tell whether a key is injective or lossy, and that's extremely crucial for applications.

Given that lossy trapdoor functions have so many applications, the natural question to ask is: how do we build them? It turns out that we know how to build them from a wide variety of public-key assumptions. Essentially, from almost all public-key assumptions we can think of, we know how to build lossy trapdoor functions, with some notable counterexamples; it is actually an amazing open problem to build lossy trapdoor functions from either LPN or CDH. But it's not extremely hard to see that lossy trapdoor functions imply public-key encryption. So the question we ask in this work is whether we can relax lossy trapdoor functions in a way that allows us to instantiate them from substantially weaker assumptions.

So let's do that: let's relax the notion of lossy trapdoor functions. As mentioned before, one of the reasons lossy trapdoor functions imply public-key encryption is the presence of a trapdoor.
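To make the syntax concrete, here is a minimal sketch of the lossy trapdoor function interface in Python. All names here (gen_injective, gen_lossy, evaluate, invert) are hypothetical, chosen just to illustrate the shape of the primitive, not taken from the paper:

```python
# Hypothetical interface for a lossy trapdoor function family.
# Names and types are illustrative, not from the paper.

from typing import Protocol, Tuple


class LossyTrapdoorFunction(Protocol):
    def gen_injective(self) -> Tuple[bytes, bytes]:
        """Sample a function key fk in injective mode, together with a trapdoor td."""
        ...

    def gen_lossy(self) -> bytes:
        """Sample a function key fk in lossy mode (no trapdoor exists here)."""
        ...

    def evaluate(self, fk: bytes, x: bytes) -> bytes:
        """Apply the function indexed by fk to input x."""
        ...

    def invert(self, td: bytes, y: bytes) -> bytes:
        """In injective mode, recover the unique x with evaluate(fk, x) == y."""
        ...


# Security, informally:
# 1. Lossiness: in lossy mode, evaluate(fk, .) has a much smaller image,
#    so y = evaluate(fk, x) information-theoretically hides most of x.
# 2. Mode indistinguishability: an fk sampled in injective mode is
#    computationally indistinguishable from one sampled in lossy mode.
```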
So that will be our first relaxation: we no longer require the existence of a trapdoor in injective mode.

We also introduce a second relaxation, which is what gives our relaxation its name, targeted lossy functions: the lossy mode will be with respect to a particular target. Here I'll call the target x*, and the lossiness will only be with regard to this particular target x*. So what does that mean? If I give you the output of the function, in lossy mode with respect to x*, applied to x*, then it's hard, information-theoretically hard, to come up with the exact input I started from. Furthermore, because the function key is now tied to x*, we require this hardness to hold even given the function key fk. To be a little more precise, this is defined over an experiment where x* is sampled uniformly at random from the domain. We could be more formal and rephrase what I just said in terms of the conditional average min-entropy of x*, but that's the intuition.

Note that the main conceptual difference with lossy trapdoor functions is that in lossy mode, the lossiness might only constrain a very small set of input points. In particular, it is possible for a targeted lossy function to be injective on almost all of its domain, except on a very small set defined with respect to x*. And last, we also strengthen the computational indistinguishability of the modes, by requiring that function keys generated either in injective mode or in targeted lossy mode be indistinguishable even given the target. So that's targeted lossy functions.

Furthermore, for applications, we consider some strengthenings of targeted lossy functions, as follows. The first add-on that we may consider is the presence of tags, or branches. To define functions with respect to branches, we add a new input, a tag, which in some sense defines an execution branch of the function. And we define several variants of this. In a targeted all-lossy-but-one function, or TALBO, most of the branches are lossy and there's one special branch that is injective; that's the lossy mode. In injective mode, everything is injective. What this means is that if I give you an output on any lossy branch, then it should be hard to recover x*. But we actually want something slightly stronger: it should be hard to recover x* even given the images of the function, applied to x*, on all the lossy branches. Namely, x* should be hard to predict given all those evaluations, on the tags that are lossy, together with the function key that depends on x*. We also define a dual of this notion, which we call targeted all-injective-but-one, where again in injective mode everything is injective, but in lossy mode there's a single special branch that is lossy, with all the rest being injective. Throughout this talk we'll mainly focus on the notion of targeted all-lossy-but-one functions, so that's what you should keep in mind.
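Here is a rough Python sketch of the TALBO lossiness experiment I just described. The talbo object and its methods are hypothetical names, just to make the quantifiers concrete:

```python
# Illustrative sketch of the TALBO lossiness experiment.
# `talbo` is a hypothetical object with gen_lossy / evaluate methods.

import secrets


def talbo_lossiness_experiment(talbo, adversary, domain_bits, tags):
    # Sample the target uniformly at random from the domain.
    x_star = secrets.randbits(domain_bits)

    # Sample a function key in lossy mode: one special injective branch
    # tag_star, all other branches lossy with respect to x_star.
    tag_star = secrets.choice(tags)
    fk = talbo.gen_lossy(x_star, tag_star)

    # The adversary sees the function key AND the evaluations of the
    # function on x_star on every lossy branch...
    lossy_evals = {t: talbo.evaluate(fk, t, x_star) for t in tags if t != tag_star}

    # ...and should still fail to predict x_star: information-theoretically,
    # x_star retains high conditional min-entropy given this whole view.
    guess = adversary(fk, lossy_evals)
    return guess == x_star  # should hold with only small probability
```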
Another variant that we consider in this paper are targeted lossy functions where we relax the injectivity requirement. The intuition here is that in some applications we don't actually need the full power of injectivity. It's enough to consider an injective mode that only preserves some kind of special information about the input, such that if we switch the function key to lossy mode, then this information is lost. In particular, this is a strict relaxation as long as this information does not fully characterize the input; if it did, it would be equivalent to standard injectivity.

Here's the main example that we'll use throughout the applications, in the context of targeted all-lossy-but-one functions: the special information is the output of the function on the injective branch. If you combine this relaxed notion of injectivity with the lossy mode described here, what you end up with is what's written below: given the evaluations on all the lossy branches and the function key, the output of the function on the injective branch should be hard to predict, as opposed to the input. So that's a relaxation. And it turns out that relaxing injectivity actually comes for free given this different formulation of lossiness.

So now that we have defined targeted lossy functions and some variants, I can quickly describe our results: first, some applications of targeted lossy functions, and then some constructions of targeted lossy functions from Minicrypt primitives. I won't talk here about one particular application, a construction of CCA-secure encryption from trapdoor functions; I refer to the paper for more details.

Okay, so let's move on to applications. Targeted lossy functions look like a pretty nice primitive, but I still have to convince you that they're actually useful. The first application that we describe is in the realm of leakage resilience. It turns out that targeted all-lossy-but-one functions can be seen as a form of what's called a pseudo-entropy function, as defined by Braverman, Hassidim, and Kalai, and previous constructions notably required quite strong public-key assumptions. Because, in the end, we obtain TALBOs from one-way functions, what we get as a direct application is deterministic leakage-resilient MACs from one-way functions. This is pretty notable because previous constructions of leakage-resilient MACs from one-way functions were randomized. We also get leakage-resilient symmetric-key encryption where the ciphertext is somewhat small, in the sense that its size doesn't depend on the leakage bound that we want to support for security.

Another application that we have is in the realm of randomness extraction from extractor-dependent sources. That's a mouthful; what is it about? The goal is to extract randomness from a source, where the source might be weakly correlated with the seed and the extractor that you're using. For instance, if you're using a machine that gathers randomness through the timing of interrupts, these timings may depend on the algorithms run by the machine, and those algorithms may be randomized, using randomness previously output by the extractor. So the source fed through the extractor may depend on outputs of the extractor at some previous point in time. This is modeled by giving the source some kind of oracle access to the extractor, and that's how we model this weak dependence.
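Here is a rough Python sketch of that model, as I understand it; the names (ext, source, and the experiment itself) are mine, not the paper's, and the security condition is stated only informally in the comments:

```python
# Illustrative sketch of the extractor-dependent source model.
# The source gets oracle access to Ext(., seed) and may craft its
# samples depending on past extractor outputs. Names are hypothetical.

import secrets


def ed_extraction_experiment(ext, source, seed_bits):
    seed = secrets.randbits(seed_bits)

    # Oracle access: the source may query the extractor on inputs of its
    # choice (modeling extractor outputs that fed back into the system),
    # but it never sees the seed itself.
    oracle = lambda sample: ext(sample, seed)

    # The source outputs a sample (which must still carry some entropy),
    # possibly correlated with the past oracle answers.
    sample = source(oracle)

    # Security asks, roughly, that the extracted string still looks
    # uniform despite the source's dependence on the extractor.
    return ext(sample, seed), seed
```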
This type of source was originally defined by Dodis, Vaikuntanathan, and Wichs, who also built extractors for extractor-dependent sources assuming public-key assumptions. They actually gave some arguments indicating that building such extractors from one-way functions might be hard. And it turns out that, again using the link with pseudo-entropy functions, we show that we can build this type of ED extractor from one-way functions.

Next, we'll change gears and focus on selective opening security. So what's selective opening security? It's a security notion where the concern is so-called selective opening attacks. What's the setup of a selective opening attack? An adversary gets many, many ciphertexts, under different keys, where the ciphertexts encrypt various different messages. The adversary, given these ciphertexts, can choose to open any of them, and opening a ciphertext lets him learn all the secrets used to either encrypt or decrypt it. In particular, opening a ciphertext makes him learn both the randomness used to encrypt and the underlying secret key used to decrypt the ciphertext. And the question is: if we allow an adversary to open some ciphertexts, do the other ones, the unopened ones, still provide any meaningful security guarantee? It turns out that this question is surprisingly subtle, and security against selective opening attacks is not that easy to achieve. As far as we know, all previous work on selective opening security has mainly focused on the public-key setting. In this work, we show, as an application of targeted lossy functions, that assuming one-way functions, we can build symmetric-key encryption that is secure against selective opening attacks. Furthermore, for those who are familiar with the area, the security is actually simulation-based, which is a rather strong form of security.

Okay, so that was a quick glimpse of the applications that we get. Now let me describe how we actually construct targeted lossy functions. The theorems that we get in the end are stated as follows. First, assuming injective PRGs, we can build targeted lossy functions, as well as the branch variants that I described earlier. Alternatively, if we're willing to relax injectivity, then we can build targeted lossy functions assuming only one-way functions. In particular, using this theorem, we build the main tool that allows us to get all the applications I described earlier from one-way functions.

Okay, so how do we build targeted lossy functions? Let me focus on the plain version, without tags, for now. The construction is actually extremely simple: we simply chain a PRG and a pairwise-independent hash function. The function key is the pairwise-independent hash function, and evaluating the function corresponds to applying the pairwise-independent hash function to the image of the PRG. In injective mode, the pairwise-independent hash function is sampled honestly. As long as the composition is sufficiently expanding, even though the pairwise-independent hash function is compressing, so it has many collisions, we can argue that none of them land in the image of the PRG; in particular, the composition is injective with high probability.
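To see why honest sampling gives injectivity with high probability, here is the standard union-bound calculation, under my choice of parameter names: an injective PRG $G : \{0,1\}^n \to \{0,1\}^m$ composed with a pairwise-independent hash $h : \{0,1\}^m \to \{0,1\}^k$, with $2n < k < m$ (so $h$ is compressing but the composition is expanding):

```latex
% For any fixed pair x \neq x', injectivity of G gives G(x) \neq G(x'),
% and pairwise independence of h gives a collision probability of 2^{-k}.
% A union bound over all pairs of inputs then yields:
\begin{align*}
  \Pr_h\bigl[\exists\, x \neq x' : h(G(x)) = h(G(x'))\bigr]
    &\le \sum_{x \neq x'} \Pr_h\bigl[h(G(x)) = h(G(x'))\bigr] \\
    &= \binom{2^n}{2} \cdot 2^{-k} \;<\; 2^{2n-k},
\end{align*}
% which is negligible once k \ge 2n + \lambda. Since k < m, the hash h
% itself still has many collisions; they just avoid the image of G.
```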
In lossy mode, we'll program a collision into the pairwise-independent hash function. We'll need a special kind of hash function for that, but it's not hard to come by: a second, randomly chosen input will collide with the target x* we started from. In particular, because there's a collision, x* will have one bit of entropy given the image: given the image, you have no idea whether it comes from x* or from x'. And we show that, by PRG security, these modes are actually indistinguishable. So that's the construction. What I just described is a version where the lossiness is just one bit, but it turns out that we can generically amplify it by repeating this construction on separate blocks of the input.

Let me move on to branches, and I'll focus on the harder case, targeted all-lossy-but-one functions. First, we need to embed the semantics of tags into the evaluation. How do we do that? We'll consider, for every tag position, two different function keys; so we sample a whole bunch of function keys like so. Now, to evaluate, we take a tag and read it bit by bit. Say the first bit is zero: we pick the top function key, the one associated with index zero. The second bit is one: we pick the bottom one, and so on and so forth, until the last bit. The function evaluation is then defined as the composition of everything that has been highlighted here (see the sketch below).

Okay, so how do we define our injective and lossy modes? The injective mode is pretty easy: we just sample all the function keys in injective mode. Then every evaluation is a composition of injective functions, so it is injective; that's done. What about the lossy mode? Here, again, we want a special tag, say tag*, to be injective, while all the other tags are lossy. As before, because we want the evaluation on tag* to be injective, we define all the function keys on the path defined by tag* to be injective, and we set all the other components to be lossy. What this achieves is that every branch different from tag* contains at least one lossy function. The intuition is that, because the evaluation is then a composition involving a lossy function, the presence of that lossy function is enough to lose information about the input. It turns out to be technically slightly more subtle, first because we can only deal with targeted lossiness, and second because for applications we also crucially need cumulative lossiness; to handle that, we need to carefully balance the lossiness targets across all these functions. I refer to the paper for more details.

As a quick aside, we notice that TALBOs are actually very similar to distributed point functions, even though distributed point functions were originally introduced, and are still used, in extremely different contexts than lossy functions, originally in the context of PIR. It turns out that TALBOs, looked at the right way, are essentially equivalent to distributed point functions. That's a pretty surprising conceptual connection. And it turns out that the chaining construction I just described is actually extremely similar to the constructions of distributed point functions from one-way functions, even though they might look different visually.
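Here is the sketch of the tag-based evaluation referenced above, in Python. The helper tlf_eval is a toy stand-in for the plain targeted-lossy-function evaluation from before; everything here is illustrative, not the paper's notation:

```python
# Sketch of the TALBO-style evaluation: for a t-bit tag, we hold plain
# function keys fks[i][b] for every position i and bit b, and compose
# the functions selected by the tag's bits.

import hashlib
from typing import List, Sequence


def tlf_eval(fk: bytes, y: bytes) -> bytes:
    # Toy stand-in for the plain targeted-lossy-function evaluation;
    # purely a placeholder so this sketch runs.
    return hashlib.sha256(fk + y).digest()


def evaluate_with_tag(fks: Sequence[Sequence[bytes]], tag_bits: List[int], x: bytes) -> bytes:
    # fks[i][b] is a plain-TLF key for position i and tag bit b.
    y = x
    for i, b in enumerate(tag_bits):
        y = tlf_eval(fks[i][b], y)  # compose one function per tag bit
    return y


# Lossy mode for a special tag tag*: sample the on-path keys fks[i][tag*_i]
# in injective mode (so the branch tag* composes only injective functions)
# and all off-path keys fks[i][1 - tag*_i] in lossy mode. Any tag != tag*
# differs from tag* in some position, so its path hits a lossy key.
```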
Okay, so let me wrap up. What do we do in this paper? We define a new cryptographic primitive called targeted lossy functions. We build targeted lossy functions from either injective PRGs, for the standard version, or from just one-way functions, if we're willing to relax injectivity. And we show many applications that place various primitives in Minicrypt.

As for open questions: I quickly sketched earlier how to amplify the absolute lossiness of our functions, but our lossiness rate is actually pretty bad. So it's a pretty natural question to ask whether it's possible to get a better rate without using public-key assumptions. And because this is a new primitive, I expect that there will be more interesting applications of it, so feel free to stare at our primitive and find new applications. And that's it. Thanks for listening. That's the link for the ePrint paper.