So, yeah, I'm going to talk about "Kurosawa-Desmedt Meets Tight Security" today. This is joint work with Romain Gay and Dennis Hofheinz, and I'm Lisa, so good morning everyone. As the session title already says, we're going to talk about public key encryption today. You can see this grinning cat here on the right, and she wants to communicate with Alice; this is just to set up the notation. How can this be done? Alice publishes a public key, and then the cat can encrypt messages under this public key and send them to Alice. And of course we want to be able to talk about security, so we consider an adversary here, and we don't only want to consider passive adversaries that can listen to the communication, but also adversaries actively interfering with it. So the security model we have here is IND-CCA security. You can see the experiment corresponding to this model here: the adversary gets a public key, he can choose two messages M0 and M1 of the same length and send them to the challenger, he gets back an encryption of one of the two messages, and in the end he has to find out which of the messages was encrypted. As we want to capture active security, we additionally provide the adversary with a decryption oracle, that is, an oracle where he can query anything but the challenge ciphertext and gets the respective decryption.

But this is actually not quite the security model we want to use in this talk. Why is that? In the real world, as you can see on the slide, we don't have this isolated setting of one person sending one message to Alice; we have many, many more parties sending many, many more messages. So if we think of Alice not as a person but as a server, this can be something like a billion messages a day, or around two to the 30 messages a day. So what we really want is multi-ciphertext IND-CCA security.
For multi-ciphertext IND-CCA security, as you can see, it's the same game, but now the adversary can query not only two messages but many, many pairs of messages, and each time gets the respective encrypted message back. And I didn't say this before, but of course we say a scheme is secure if the adversary cannot find the bit corresponding to the encrypted message with probability non-negligibly better than guessing.

So why do we usually talk about IND-CCA security and not about multi-ciphertext IND-CCA security? Well, if you just care about asymptotic security, we're fine, because IND-CCA security implies multi-ciphertext IND-CCA security. So why, in this talk, do we want to work with this notion directly? It's because we care about the quality of the reduction. What do we mean by that? How do we usually prove security? We have some adversary breaking the encryption scheme with some advantage epsilon, and to prove security we want to reduce this to some kind of assumption. So what we do is construct an adversary, this rabbit here, breaking the underlying assumption. But this adversary will usually not have the same advantage; there will be some loss. This L here is the security loss.

Why do we have this loss in reductions? Well, for example, because we have to guess something during the reduction. Going from IND-CCA security to multi-ciphertext IND-CCA security, we basically have to guess in which challenge ciphertext to embed the challenge from the underlying assumption, so the loss will be in Omega of the number of encryption queries. And now there are already concrete parameters in there: if we want 128-bit security, we have to take a security parameter of 158 for the underlying assumption to actually get this guarantee. So what I mean by quality of the reduction: it's desirable to have a tight reduction.
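The parameter arithmetic behind this is simple enough to sketch. The following back-of-the-envelope Python snippet (illustrative only; the function name is mine, not from the talk) shows why a loss of two to the 30 pushes a 128-bit security target up to a 158-bit assumption:

```python
import math

def required_assumption_bits(target_bits: int, loss: int) -> int:
    """A reduction with multiplicative loss L forces the underlying
    assumption to provide roughly target + log2(L) bits of security."""
    return target_bits + math.ceil(math.log2(loss))

# Guessing among Q = 2^30 challenge ciphertexts gives loss L = 2^30,
# so a 128-bit target needs a 158-bit assumption:
print(required_assumption_bits(128, 2**30))  # 158

# A tight reduction with a small constant loss, say L = 8, barely costs anything:
print(required_assumption_bits(128, 8))  # 131
```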
In particular, L should be independent of the number of encryption queries. What we call a tight reduction here, often referred to as an almost tight reduction, should have L linear in the security parameter, ideally some small constant times the security parameter, because then L will not be two to the 30 but something in the order of two to the 8, two to the 9, two to the 10, which is significantly smaller. And this then yields shorter concrete parameters. So generally, we care about tight security reductions because we care about the concrete efficiency of instantiating the scheme.

Okay, now I want to give you a short walk through some CCA-secure encryption schemes in the line of our work; of course, this is not at all complete. Starting with Cramer-Shoup 1998 and Kurosawa-Desmedt 2004, you can see two very efficient schemes in terms of ciphertext size, and also in terms of public key size, but with a security loss in Omega of the number of encryption queries, for the reasons I just gave. Then, starting with the work of Hofheinz and Jager 2012, the aim was CCA-secure encryption with a tight security reduction, but as you can see here, this first scheme is not where we want to be, because it's really inefficient: the ciphertext size is linear in the security parameter, instead of the two group elements before. We're counting group elements here. And a lot of progress was made very recently. In 2016, Gay, Hofheinz, Kiltz, and Wee improved greatly on this bound: they have a very short ciphertext, only three elements, but still suffer from a large public key, around 200 group elements if you plug in 128. And then, just from this year, there is a work by Hofheinz with compact ciphertexts and a compact public key, but it requires pairings, so that is also a source of inefficiency. So the question starting this work was: can we do better? Can we have it all green? And the answer is yes.
That's why I'm standing here. We get a ciphertext size of three group elements and a public key of six elements, we have a tight security reduction to DDH, and we don't require pairings.

So how does our scheme look? I can tell you in one line: our scheme is Kurosawa-Desmedt, the scheme you saw from 2004, plus an OR-proof pi, where pi is some new kind of proof. You might have a number of questions now. Maybe the first, in case you're not familiar with it: how does Kurosawa-Desmedt look? Or, if you know it: what is pi good for? Why do we need pi, and why does it suddenly enable us to get this tight security reduction? And the second question would be: how does pi look? In the following I want to answer all of these questions, stepping back a bit to the foundations first, then explaining Kurosawa-Desmedt and why it is not tight.

A very short recap of the decisional Diffie-Hellman assumption; you already saw DDH in the table on the earlier slides. I want to start by defining the Diffie-Hellman language, because that is what we'll be working with later. We have a group G, and we have a vector of group elements a in G squared, and we say the Diffie-Hellman language with respect to this vector a consists of all elements that are linearly dependent on this vector. By this scalar multiplication we actually mean taking the group elements to the power of the scalar: x1 is a1 to the power of w, and x2 is a2 to the power of w. For the following, just remember that by bold notation we always denote these group vectors from G squared. The DDH assumption then basically allows us to switch between choosing a random element from this linear language L_A and choosing an element uniformly at random from all of G squared; these two are computationally indistinguishable. And this is a very nice assumption.
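To make the notation concrete, here is a minimal sketch of the Diffie-Hellman language in a toy group (a tiny subgroup of Z_23^* of prime order 11; real schemes of course use cryptographically large groups, and the function names are mine):

```python
import random

# Toy prime-order group: the subgroup of order q = 11 generated by g = 2 in Z_23^*.
p, q, g = 23, 11, 2

def sample_vector():
    """Sample the public vector a = (g^{a1}, g^{a2}) in G^2."""
    a1, a2 = random.randrange(1, q), random.randrange(1, q)
    return (pow(g, a1, p), pow(g, a2, p))

def sample_from_LA(a):
    """Sample x = a^w = (a1^w, a2^w) from the language L_A, with witness w."""
    w = random.randrange(1, q)
    return (pow(a[0], w, p), pow(a[1], w, p)), w

def sample_from_G2():
    """Sample x uniformly from G^2; with probability 1 - 1/q it lies outside L_A."""
    r1, r2 = random.randrange(q), random.randrange(q)
    return (pow(g, r1, p), pow(g, r2, p))
```

The DDH assumption says that, in a suitable group, the outputs of `sample_from_LA` and `sample_from_G2` are computationally indistinguishable.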
It's nice whether or not you care about tightness, but especially for us, because you have this re-randomizability: if you get just one DDH tuple, you can re-randomize it yourself to obtain many, many tuples.

The next ingredient we need is a hash proof system. What is a hash proof system good for? We will use it to prove that an x is indeed in the language. How can we do that? We know the witness, and we want to prove membership to a designated verifier. A hash proof system enables that by having two different mechanisms to compute a proof: you can publicly compute a proof knowing the public key and the witness, and you can privately compute a proof knowing only the secret key and the element, without needing the witness. And of course we want some correctness, or completeness, and security guarantees for that. Completeness: whenever x is indeed in the language, those two proofs should be equal. So a designated verifier can compute the proof privately, check whether the two are equal, and accept if so, and otherwise reject. Security: whenever x is not in the language, then even knowing the public key, no one should be able to compute this proof. Even stronger, for a hash proof system this proof should look completely random; that is, the private evaluation on such an x should look completely random. So what is the one here? The one is supposed to mean that this only holds as long as we don't give out any proof computed with the private evaluation; this is 1-universality. But this will actually not be good enough for us. Why not? Because we want 2-universality: we want to be able to give out the public key and one proof for an element outside the language, and still have universality hold.
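The two evaluation mechanisms and the completeness property can be sketched in the same toy group; this is the textbook Cramer-Shoup-style hash proof system for the Diffie-Hellman language (again a toy instantiation for illustration, with names of my choosing):

```python
import random

# Toy prime-order group: subgroup of order q = 11 generated by g = 2 in Z_23^*.
p, q, g = 23, 11, 2
a = (pow(g, 3, p), pow(g, 7, p))   # public vector a = (g^3, g^7); exponents arbitrary

def keygen():
    """HPS keys for L_A: secret key sk = (k1, k2), public key pk = a1^{k1} * a2^{k2}."""
    k1, k2 = random.randrange(q), random.randrange(q)
    return (k1, k2), (pow(a[0], k1, p) * pow(a[1], k2, p)) % p

def pub_eval(pk, w):
    """Public evaluation: needs only pk and the witness w of x = a^w."""
    return pow(pk, w, p)

def priv_eval(sk, x):
    """Private evaluation: needs only sk and the element x, no witness."""
    k1, k2 = sk
    return (pow(x[0], k1, p) * pow(x[1], k2, p)) % p
```

Completeness is immediate: for x = (a1^w, a2^w), the private evaluation equals a1^{w*k1} * a2^{w*k2} = pk^w, which is exactly the public evaluation.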
So what we do is basically what Kurosawa and Desmedt did in 2004: their approach was to linearly combine two hash proof systems, and that gives them 2-universality. More precisely, a kind of computational 2-universality, because the linear combination will be fresh each time due to the collision resistance of the hash function used.

Now I can tell you Kurosawa-Desmedt; you know everything you need to know. What do you do? You first choose an x from the language with corresponding witness w. You compute the proof using the public evaluation with the public key and the witness; the public key and secret key of the scheme are just the public key and the secret key of the hash proof system. And then you use this proof, which is why it's called K here, as a key to encrypt the message with a symmetric encryption scheme. For decryption, the decryptor has the secret key, so he can recover this K and decrypt symmetrically. Correctness follows directly from the completeness of the hash proof system.

Why is it secure? It's secure because, again, we can switch. We first want to forget the witness, so we switch from the public evaluation to the private evaluation by completeness; then we don't need to know the witness anymore. Then by DDH we can switch how we choose x: we can choose x uniformly from all of G squared. And now we're basically done, because by the computational 2-universality we have made the decryption oracle useless: everything outside L_A will have a K which looks uniformly random to the adversary. In particular, the K used for the challenge query is random too, and we get IND-CCA security.

So this is great, but what's the problem? Of course there's no problem with the scheme itself, but if we care about tight security, the problem is that the entropy in the secret key is limited, so this reduction cannot be tight. Why can it not be tight? Because we rely on giving out information about only one of those keys: only the key of this one challenge ciphertext.
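The Kurosawa-Desmedt blueprint just described can be sketched end to end. This is a deliberately simplified toy version (small group, a plain XOR stream derived via SHAKE as the "symmetric scheme", and a 1-universal hash proof system instead of the hashed linear combination of two that the real scheme uses), so it illustrates only the structure, not the actual construction:

```python
import hashlib
import random

# Toy prime-order group: subgroup of order q = 11 generated by g = 2 in Z_23^*.
p, q, g = 23, 11, 2
a = (pow(g, 3, p), pow(g, 7, p))   # public vector defining L_A

def keygen():
    """pk/sk of the scheme are just the pk/sk of the hash proof system."""
    k1, k2 = random.randrange(q), random.randrange(q)
    pk = (pow(a[0], k1, p) * pow(a[1], k2, p)) % p
    return pk, (k1, k2)

def kdf(K, n):
    """Derive n pseudorandom bytes from the group element K."""
    return hashlib.shake_128(str(K).encode()).digest(n)

def encrypt(pk, msg: bytes):
    w = random.randrange(1, q)
    x = (pow(a[0], w, p), pow(a[1], w, p))   # x <- L_A with witness w
    K = pow(pk, w, p)                        # K via public evaluation
    ct = bytes(m ^ k for m, k in zip(msg, kdf(K, len(msg))))
    return x, ct

def decrypt(sk, x, ct):
    k1, k2 = sk
    K = (pow(x[0], k1, p) * pow(x[1], k2, p)) % p   # K via private evaluation
    return bytes(c ^ k for c, k in zip(ct, kdf(K, len(ct))))
```

Correctness is visible here as the completeness of the hash proof system: for x in L_A, both evaluations yield the same K, so decryption inverts encryption.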
But if we do that many, many times, we don't have security guarantees anymore; we just have this 2-universality, and not more.

So how does this proof pi help? Just a short recap: our scheme was Kurosawa-Desmedt plus this new OR-proof pi. How does pi save us here? The idea is: we don't have enough entropy, so we need more entropy, and we get more entropy by generating it. We use not one secret key, but a freshly randomized secret key for each new ciphertext. By doing so, we actually have enough entropy. And then we can use the re-randomizability of DDH and use just one DDH tuple to randomize all ciphertexts at once. This is actually a technique of Gay et al. 2016, and it works out very nicely.

But the difficulty is: how can we answer decryption queries if we don't know the secret key anymore? Now we use many, many secret keys, not just one, so which one do we use for a decryption query? So before even thinking about re-randomizing the secret keys, we have to do something else: we have to randomize the secret key differently, step by step. How do we do that? In each step, we partition the ciphertext space into two parts; you see here a blue part and a green part. For one part, we generate the ciphertext as before: we choose x from L_A and do everything as before. But now we also take another linear language, L_B, and for the other part we choose x from L_B. For the blue part we use one secret key, and for the green part we use the other secret key. And now, how can we answer decryption queries? Well, for decryption queries that have x in L_A, we just use the secret key we used for the blue part, and for decryption queries that have x in L_B, we use the secret key we used for the green part. But what do we do with decryption queries that are in neither one part nor the other?
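The routing of decryption queries between the two secret keys can be sketched in the same toy group. The key observation is that the reduction knows the discrete logs defining L_A and L_B, so it can test membership without pairings (the group, the trapdoor exponents, and the function names below are all illustrative assumptions of mine):

```python
# Toy prime-order group: subgroup of order q = 11 generated by g = 2 in Z_23^*.
p, q, g = 23, 11, 2

def in_span(x, exps):
    """Trapdoor membership test: x = (g^{r1}, g^{r2}) lies in the span of
    (g^{e1}, g^{e2}) iff r1*e2 = r2*e1 (mod q), i.e. x1^{e2} == x2^{e1}.
    Only the reduction, which knows the exponents e1, e2, can run this."""
    e1, e2 = exps
    return pow(x[0], e2, p) == pow(x[1], e1, p)

def route(x, sk_A, sk_B, exps_A, exps_B):
    """Pick the secret key matching the part of the partition x falls into."""
    if in_span(x, exps_A):
        return sk_A
    if in_span(x, exps_B):
        return sk_B
    return None  # in neither L_A nor L_B: must be rejected,
                 # which is exactly what the OR-proof will enforce
```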
Well, we have to ensure that they are in one part or the other. And how do we do that? We could use the explicit proof pi of Hofheinz 2017, but the novelty of this work is that we do it without pairings. And why does this help? In short, this randomization helps because it enables us to reject all decryption queries which are outside of L_A and L_B, and then we can do the same as for Kurosawa-Desmedt before. So this is how the proof helps us.

So now I can show you our scheme again; maybe now it's a bit clearer how it looks. Actually, you've seen everything you need: we just have Kurosawa-Desmedt, and then we additionally prove that x is either in L_A or in L_B, and we only decrypt if this proof pi is valid.

The main challenge of this work was: how can we construct a pairing-free non-interactive OR-proof? Why is that hard? The problem is that this is a disjunction of languages, so it's not as nice as if you just had a linear language; usually you require pairings to handle that. How did we solve this issue? Well, if you go back to the encryption algorithm, what you do in the first line is choose x from L_A with a witness, and then you prove that x is in the disjunction. But actually, you always choose x from L_A; you never honestly choose x from L_B. This is great: for honest proof generation, we only need a proof for a linear language, so we can employ a hash proof system, and with a hash proof system we can do this pairing-free. But of course it's not as easy as that, because during the randomization we also sometimes have to choose x from L_B and then give out simulated proofs of membership in the disjunction. And with a hash proof system we have the same problem as before: by giving out a proof for something outside the span of a, we give away all security guarantees. So the difficulty is: how can we ensure that forging a proof for x outside the disjunction is hard?
So the answer is: just hide it, whenever x is in L_B. How can we hide it? There's a kind of weird encryption scheme that we can use here. The encryption is indexed by x, and we encrypt the hash proof system evaluation with it. This encryption scheme is constructed in such a way that it is lossy whenever x is in L_B. What do I mean by lossy? For all x in L_B, for any fixed K, whenever we encrypt this K, the result just looks like a random encryption, so it doesn't leak anything about the hash proof system evaluation. So the security guarantees remain, and we're fine.

Now I can show you a very simplified version of our OR-proof. If you want to prove that x is indeed a to the w for some scalar w, then you just publicly evaluate the hash proof system using w and the public key (actually, a 1-universal hash proof system is now good enough), and then, using the encryption indexed by x, you encrypt this hash proof system evaluation; that is the proof. The security notion you get is quite weird, to be honest, but it's exactly what we need, and it gives us exactly what we want.

To conclude this talk, what's the take-home message? The new thing in our work is this new efficient, pairing-free, non-interactive, designated-verifier OR-proof. And what we achieved with it is reducing the cost of tight security to just one more group element compared to the best scheme in this line before: just one more element in the ciphertext and less than a handful more elements in the public key, and we get this tight security reduction. So yeah, that's all I want to say for today. Thanks very much for coming, thanks for your attention, and I'm happy to take questions. Thank you.