OK, thanks a lot. Let me start with a bit of motivation for this primitive. In a regular encryption setting in the real world, or what we call the real world, we usually have many parties and many ciphertexts: a bunch of people, each with an individual public key, who may send a bunch of ciphertexts to any other person in the system. So an actual adversary, still an abstraction of course, would get a bunch of public keys and also a bunch of ciphertexts.

But what we usually do when we construct public-key encryption schemes is simplify things: we say that we really only have one user, one sender, one public key, and one challenge ciphertext. This is the case, for instance, with IND-CPA or IND-CCA security. In this setting we have a security experiment in which the adversary gets just one ciphertext, has to derive some information about the plaintext underlying this ciphertext, and wins or loses depending on whether he finds out something about this particular ciphertext. The justification for looking at this simplified setting, rather than the whole deal with a bunch of public keys and a bunch of ciphertexts, is that usually a hybrid argument works. What this means is: if the scheme is secure in the one-challenge, one-user setting, then usually we can argue that it enjoys the same security property in the larger setting. So even an adversary that gets many ciphertexts won't derive any reasonable information about any encrypted message.

This view, or this simplification, has several drawbacks, and I want to highlight two. First, the connection to the real world is not tight: if we have a scheme which satisfies the simplified security notion, then in translating this experiment to the real-world experiment we lose a factor in the security reduction. That means our scheme may get less secure if we use it for more users. Of course, this is only a defect of the proof technique, not necessarily of the scheme. The second problem, and this is what I really want to focus on, is that in certain settings where we have encryption schemes with specific properties, such as selective opening security, and I'm going to be a bit more specific about this in a second, we simply cannot restrict ourselves to a one-challenge, one-user setting, because the usual way hybrid arguments work fails in settings such as selective openings. So for specific settings this simplification is problematic, and in any case there is the factor we lose in the security reduction when we simplify the analysis of, for instance, encryption schemes.

As a brief example: the security notion for selective openings, which should capture adaptive corruptions of senders in an encryption setting, is that the adversary gets a bunch of ciphertexts, a whole vector of ciphertexts in fact. Then, in this simplified notion, he selects a subset of these ciphertexts: the adversary tells you a set of indices from 1 to n which he wants to corrupt, corresponding to users he wants to corrupt.
And then he gets openings of all the ciphertexts he selected, where opening means the adversary does not only get the message that was encrypted, but also the randomness that was used to encrypt. So this really corresponds to the opening of a commitment, for all the indices he selected. In the end, the adversary's goal is, of course, to find out something about the unopened ciphertexts; the opened ciphertexts he knows all about already. So in this situation the adversary gets the public key, all of the ciphertexts, and the openings of some of them. And it turns out that the hybrid argument simply fails in this setting. We cannot argue that the scheme is secure in the one-challenge case and then conclude security in this multi-challenge setting, in the real world if you want. The reason is that we wouldn't even know how to formulate a one-challenge version of this particular game.

OK, so far the motivation why, at least in certain cases, we should look at the multi-challenge, multi-user setting directly and not deal with simplifications. In this talk I want to present a technical tool that is specifically designed for situations in which you have multiple challenges and multiple users, multiple public keys, and you want to tackle security proofs in this setting directly. It is called all-but-many lossy trapdoor functions. I also want to look at a construction of all-but-many lossy trapdoor functions which is technically very close to Waters signatures, in fact, but with a twist.

So let me start with the definition of this technical tool. First of all, a recap: what is a lossy trapdoor function in the first place? From this we will then generalize. A lossy trapdoor function is, first of all, a keyed function, so actually a family of functions, where you have an input x and a key ek, an evaluation key I call it, and you can evaluate the function on x and get some output. The whole point, the interesting property of lossy trapdoor functions, is that there are evaluation keys that lead to invertible functions and evaluation keys that lead to lossy functions; what this means, I'm going to tell you in a second. The properties are: if you operate the function in invertible mode, meaning with a key drawn from the set of invertible keys, then you get an injective function, and in fact you can invert that function using a suitable trapdoor, an inversion key that was sampled initially together with the evaluation key. At the same time, you cannot efficiently distinguish invertible evaluation keys from lossy keys ek'. And the specific property these lossy keys have is that they lead to a lossy function, which means the image of F_ek', the function operated in lossy mode, is much smaller than the pre-image set, so we really lose some information.

We have efficient constructions of these creatures from LWE, from DDH, from DCR, and I want to highlight one particular construction from DCR; this is really a nice construction because it's efficient, and it's the basis for what comes next.
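Before the concrete DCR construction, here is a deliberately insecure toy sketch, just to pin down the syntax: injective keys come with a trapdoor and can be inverted, lossy keys collapse the image. Everything here (the modulus P, the key format, all function names) is made up for illustration; a real construction additionally makes the two key types computationally indistinguishable, which this toy obviously does not.

```python
import random

P = 10007  # a small prime; the toy function family maps Z_P to Z_P

def gen_injective():
    a = random.randrange(1, P)   # nonzero a: x -> a*x is a bijection on Z_P
    ik = pow(a, -1, P)           # trapdoor: the inverse of a modulo P
    return a, ik                 # (evaluation key, inversion key)

def gen_lossy():
    return 0                     # x -> 0*x: the whole image collapses to {0}

def evaluate(ek, x):
    return ek * x % P

def invert(ik, y):
    return ik * y % P

ek, ik = gen_injective()
x = 4242
assert invert(ik, evaluate(ek, x)) == x   # injective mode: inversion works
assert evaluate(gen_lossy(), x) == 0      # lossy mode: x is lost entirely
```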
So the evaluation key in this DCR-based lossy trapdoor function is the public key of an additively homomorphic encryption scheme. This can be Damgård–Jurik encryption, for example, or Paillier encryption; well, really it has to be Damgård–Jurik, the generalization of Paillier encryption. The evaluation key also contains a ciphertext which is the encryption of a bit b, either b = 1 or b = 0. If b = 1, then the evaluation, which simply consists of deterministically using the homomorphic properties to derive an encryption of b·x, produces an encryption of x, and we can invert the function simply by using the decryption key. In lossy mode, on the other hand, we have b = 0, so if we operate the function in lossy mode and evaluate it, we get an encryption of 0 all the time. Of course we still leak some information through the encryption randomness, since the evaluation is deterministic and there is no re-randomization, but we can upper-bound the amount of information leaked that way. So essentially, in lossy mode we get an encryption of zero all the time. This fulfills all the properties here: invertibility, indistinguishability, and lossiness. Indistinguishability simply follows from the fact that this is a secure encryption scheme under the DCR assumption.

How do we use this primitive to argue the security of an encryption scheme? Essentially, we encrypt a message by evaluating the lossy trapdoor function. Because we leak some information about the pre-image in lossy mode, we have to do some universal hashing or randomness extraction to condense the uncertainty the adversary has, but essentially we can use this function directly to encrypt messages. The security proof then simply consists in switching the lossy trapdoor function to lossy mode, after which the adversary gets almost no information about the message from a ciphertext; the ciphertexts become almost completely lossy.

The problem is: as soon as we have a decryption oracle, we cannot simulate it when we are in lossy mode. Either all ciphertexts are lossy and contain no information about the message, or all ciphertexts are invertible, and in the invertible case we cannot argue that the adversary learns nothing about the message. So to prove CCA security with such a technique, Peikert and Waters introduced something called all-but-one lossy trapdoor functions, and now we're already quite close to the main point of the talk. An all-but-one lossy trapdoor function simply takes, along with the evaluation key, a tag t. They called it a branch because they had a tree-based intuition about it, but I want to call it more generally a tag. The tag additionally selects a function out of the pool of functions selected by the evaluation key. So we simply evaluate the function F_{ek,t} on some pre-image, and this function is lossy if and only if the tag is one specific tag t*. That means the adversary in the IND-CCA security experiment gets a lossy challenge ciphertext, but all the other ciphertexts the adversary could produce as decryption queries are non-lossy, and we can invert them.
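To make the homomorphic evaluation concrete, here is a minimal sketch of this DCR-based lossy trapdoor function, using textbook Paillier with toy primes. This is insecure and purely illustrative (the actual construction uses Damgård–Jurik for a larger message space; all names and parameters are my own); the point is only that raising Enc(b) to the power x homomorphically yields Enc(b·x).

```python
import math, random

p, q = 1061, 1063                  # toy primes; real parameters are huge
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael function of N

def enc(m):                        # textbook Paillier encryption
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            return pow(1 + N, m % N, N2) * pow(r, N, N2) % N2

def dec(c):                        # textbook Paillier decryption
    L = lambda u: (u - 1) // N
    return L(pow(c, lam, N2)) * pow(L(pow(1 + N, lam, N2)), -1, N) % N

def eval_ltdf(ek, x):
    # homomorphic scalar multiplication: Enc(b)^x mod N^2 = Enc(b*x)
    return pow(ek, x, N2)

ek_inj  = enc(1)   # injective mode: F(x) = Enc(x), invert with the secret key
ek_loss = enc(0)   # lossy mode:     F(x) = Enc(0) for every x

x = 123456
assert dec(eval_ltdf(ek_inj, x)) == x
assert dec(eval_ltdf(ek_loss, x)) == 0
```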
So in particular, the lossy tag corresponds to a single challenge ciphertext that the adversary gets, and in particular this does not work with many challenge ciphertexts. If we take this route to achieve IND-CCA security, we get into trouble when we have many challenges or many users.

To cope with settings such as selective opening security, where we have more than one lossy tag and in particular more than one challenge ciphertext, there is already a notion called all-but-n lossy trapdoor functions. Essentially it's the same thing as an all-but-one lossy trapdoor function, only there's a bunch of lossy tags. I want to highlight one construction that Hemenway, Libert, Ostrovsky, and Vergnaud gave last year at Asiacrypt, I believe. It's a generalization of the lossy trapdoor function from DCR. The evaluation key contains a Damgård–Jurik public key along with the encrypted coefficients of a polynomial f that has a bunch of zeros t_1*, ..., t_n*. When we evaluate the function, we first compute an inner value, which turns out to be an encryption of f(t), the evaluation of the polynomial at the tag t, and then we multiply it by the pre-image. So we end up with an encryption of f(t)·x. In particular, if t is one of t_1*, ..., t_n*, then f has a zero there and we encrypt a zero value, in which case we have a lossy tag: the function loses information for t in {t_1*, ..., t_n*}.

The problem with this construction is that the space complexity, the description of the evaluation key, is linear in the number of challenges. If you think about this for a second, this is kind of inherent, because the evaluation key is, information-theoretically, an encoding of all the tags t_1*, ..., t_n*: if you just find out, with an unbounded algorithm, on which points the function is lossy and on which it is not, you get a very complicated way of writing down t_1*, ..., t_n*. So if we follow this route to multi-challenge IND-CCA security or multi-challenge selective opening security, you can do that, but you end up with a large public key, essentially. What they proved is that you do get selective opening CCA security, but you have to pay a price, and the price is a large public key.

Our goal in this talk is lossy trapdoor functions with many lossy tags. The intuition, or the sketch of a definition, is that there are many, in fact superpolynomially many, lossy tags, there are also many invertible tags, and they are computationally indistinguishable. If you want some intuition: an all-but-one lossy trapdoor function has exactly one lossy tag, so there's a bunch of functions and one of them is lossy; with all-but-n lossy trapdoor functions there's a whole lot of functions and exactly n of them are lossy; and with all-but-many lossy trapdoor functions, lossy functions are all over the place, but it's hard to find them. That's the point. With a special trapdoor you can of course sample lossy tags, but without that trapdoor it's hard to find them.

OK, now I'm going to be a bit brief about the construction, how to derive these creatures. The idea is to start with the observation that there is some correspondence to blinded signatures.
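Here is a sketch of the all-but-n evaluation just described, again with toy Paillier parameters rather than the Damgård–Jurik scheme the real construction uses; the tags, primes, and function names are illustrative only. The evaluation key holds encryptions of the coefficients of f, and homomorphic evaluation yields an encryption of f(t)·x, which is an encryption of 0 exactly on the lossy tags.

```python
import math, random

p, q = 1061, 1063
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)

def enc(m):
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            return pow(1 + N, m % N, N2) * pow(r, N, N2) % N2

def dec(c):
    L = lambda u: (u - 1) // N
    return L(pow(c, lam, N2)) * pow(L(pow(1 + N, lam, N2)), -1, N) % N

def eval_abn(coeff_cts, t, x):
    """From c_i = Enc(a_i), compute Enc(f(t) * x) for f(T) = sum_i a_i T^i:
    Enc(f(t)) = prod_i c_i^(t^i), then raise the result to x."""
    acc = 1
    for i, ci in enumerate(coeff_cts):
        acc = acc * pow(ci, pow(t, i, N), N2) % N2
    return pow(acc, x, N2)

t1, t2 = 17, 99                      # the two lossy tags t_1*, t_2*
coeffs = [t1 * t2, -(t1 + t2), 1]    # f(T) = (T - t1)(T - t2), coefficients a_0, a_1, a_2
cts = [enc(a) for a in coeffs]       # evaluation key: encrypted coefficients

assert dec(eval_abn(cts, t1, 5)) == 0   # lossy tag: f(t1*) = 0, so Enc(0)
assert dec(eval_abn(cts, 23, 5)) != 0   # other tag: invertible given the secret key
```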
So this is really not blind signatures, but blinded, or hidden, or encrypted signatures. A valid signature corresponds to a lossy tag, and even if you have already seen many lossy tags, you should be unable to produce another one; that is the property we want to achieve, and it already smells like signatures, it just doesn't have to have an efficient verification. So let's simply encrypt signatures. We sign some unique value, this can be a hash value or something, this is really not important; the point is that we have a tag which contains an encryption of a signature, and the evaluation of the lossy trapdoor function somehow magically verifies the signature and should end up with an encryption of zero if and only if the signature is valid. Then we use the trick from before: the image is the encryption of x times whatever we started with. If the signature is valid, we end up with an encryption of zero and get a lossy function. If the signature is invalid, we end up with an encryption of something presumably nonzero, and we get a function which we can invert by decrypting.

The problem is: how does this magical signature verification really work? We only have an additively homomorphic encryption scheme, so how do we verify a signature in Z_N or Z_N* if we only have addition? We use two tricks.

The first idea is inspired by Peikert and Waters, the original lossy trapdoor function construction, and it is to use matrices instead of single values. The tag contains an encrypted matrix instead of an encrypted value, and when we evaluate the function we use matrix-vector multiplication: we take the encrypted matrix, take a plaintext vector, which is the pre-image, and compute an encryption of M·x, which can be done using the additively homomorphic properties of the encryption scheme. The point is that this mapping becomes lossy if and only if the matrix is not invertible. If the matrix is invertible, then by decrypting with the decryption key we can recover M·x and hence x from an encryption of M·x. And why did we do this? What's the payoff? The payoff is that the matrix map is lossy if and only if the determinant is not invertible. But the determinant is a value which can depend, say, in a cubic way on the encrypted values; in particular, we get a bunch of multiplications all of a sudden. So if we want a tag which magically evaluates a signature verification, this now corresponds to computing a determinant, which can be used to embed some multiplications.

The second idea, once we have in mind that we can afford a small number of multiplications, is to use a variant of Waters signatures, only in a different domain. Waters signatures are a CDH-based signature scheme in pairing groups, and we want to translate this to Z_N. We simply replace exponentiation by encryption. So again with Paillier or Damgård–Jurik encryption, additively homomorphic, we replace g^a by an encryption of a, and the pairing essentially becomes a multiplication of exponents, a multiplication of the plaintext values encrypted in ciphertexts.
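Going back to the first trick for a moment, here is a sketch of the homomorphic matrix-vector multiplication, once more with toy Paillier parameters and made-up names. It shows how an entrywise-encrypted matrix M and a plaintext pre-image x yield an entrywise encryption of M·x, and why a singular M loses information about x.

```python
import math, random

p, q = 1061, 1063
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)

def enc(m):
    while True:
        r = random.randrange(2, N)
        if math.gcd(r, N) == 1:
            return pow(1 + N, m % N, N2) * pow(r, N, N2) % N2

def dec(c):
    L = lambda u: (u - 1) // N
    return L(pow(c, lam, N2)) * pow(L(pow(1 + N, lam, N2)), -1, N) % N

def enc_matvec(ct_rows, x):
    """Entrywise Enc(M) and plaintext vector x  ->  entrywise Enc(M x):
    each output entry is prod_j Enc(M[i][j])^(x[j]) = Enc(sum_j M[i][j]*x[j])."""
    out = []
    for row in ct_rows:
        acc = 1
        for ct, xj in zip(row, x):
            acc = acc * pow(ct, xj, N2) % N2
        out.append(acc)
    return out

M_inv  = [[1, 2], [3, 5]]   # det = -1, invertible: injective behaviour
M_sing = [[1, 2], [2, 4]]   # det =  0, singular:   lossy behaviour

x = [10, 20]
enc_M = [[enc(a) for a in row] for row in M_inv]
assert [dec(c) for c in enc_matvec(enc_M, x)] == [50, 130]   # decrypts to M_inv times x
# With M_sing every image point has the form (s, 2s): a whole line of
# pre-images maps to each image point, so information about x is lost.
```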
In particular, CDH, the computational Diffie-Hellman assumption, becomes the assumption that you cannot multiply Paillier ciphertexts, and this is in fact the computational assumption the construction is based on: the assumption that Paillier is not multiplicatively homomorphic, or not fully homomorphic if you want. If we translate things this way, what we end up with is an all-but-many lossy trapdoor function scheme in which each tag leads to a matrix whose determinant exactly expresses the verification equation of Waters signatures. This is a nice thing, because evaluations like this can be viewed as an implicit Waters verification, an implicit verification of the signature scheme. In particular, we have a lossy function if and only if the signature was valid, so we can essentially take the security proof of Waters signatures, transport it to a different setting, and what we get is an all-but-many lossy trapdoor function.

OK, the last slide is about what we can do with this primitive, now that we have such a multi-challenge, multi-user primitive. First, we can derive efficient CCA-secure selective opening schemes where, in particular, neither the ciphertext nor the public key depends on the number of challenges. It's compact, although you have to work with Damgård–Jurik encryption, so it's not exactly efficient, but it's still a small number of group elements, and this is the first scheme with these properties. Second, we can get tight IND-CCA security. This is a funny thing, because the security reduction is actually tight, but the scheme itself is not really efficient: each ciphertext contains a number of group elements quadratic in the security parameter, but it gives a tightly IND-CCA-secure encryption scheme. If we modify these techniques a little, there are also applications to key-dependent message security, which is likewise a setting in which there are inherently many challenge ciphertexts and you cannot easily get rid of them or apply a hybrid argument. Again the concepts are similar, but we need a few tweaks, and this is upcoming, not in the paper. And finally, I don't know, but maybe there's also an application to leakage resilience, because there too we inherently have a bunch of challenge ciphertexts you cannot easily get rid of.

OK, that was all, thanks. Thank you.