And all three papers will also be invited for publication in the Journal of Cryptology. So the first paper, which got the best paper award, is "Tightly CCA-Secure Encryption Without Pairings" by Romain Gay, Dennis Hofheinz, Eike Kiltz, and Hoeteck Wee, and Romain will give the presentation.

Thank you for the introduction. First, I will recall the definition of CCA security. Suppose Alice wants to send a message to Bob through an insecure channel, and suppose that Alice and Bob do not share any secret key. So they use public-key encryption, where Bob generates a secret key that he keeps for himself and a public key that he gives to everyone, with which Alice can encrypt the message. The basic security notion we want is that if an adversary eavesdrops on a ciphertext, it should not learn any information about the plaintext. Formally, this is captured by the chosen-plaintext attack (CPA) security game, where the adversary gets a public key, then chooses a pair of messages (M0, M1), and gets back the encryption of one of these two messages, picked at random according to a bit b. Finally, the adversary has to guess which message was encrypted. If the ciphertext doesn't reveal anything about the plaintext, then in particular it doesn't reveal any information about the bit b, and the adversary has only a small advantage of winning this game.

So this covers eavesdropping, that is, passive attacks. If we also want to capture active attacks, such as Bleichenbacher's attack on TLS, we add a decryption oracle that the adversary can query before and after the challenge ciphertext, adaptively and many times. The only thing the adversary cannot do is ask for the decryption of the challenge ciphertext itself. We usually consider only one challenge ciphertext, and this is without loss of generality, because it implies security for many challenge ciphertexts via a hybrid argument. But for the purpose of this talk, I want to make explicit the fact that the adversary can actually get many challenge ciphertexts: it can send many pairs (M0, M1) and get back the corresponding encryptions of Mb. In fact, I am going to count the number of such challenge ciphertexts and decryption-oracle queries. This is the de facto security notion for encryption.

Now, how do we prove that a scheme is CCA secure? We do a reduction. Suppose we have an adversary that can win the security game with advantage epsilon. Then we use it to build an efficient algorithm that has roughly the same running time as the adversary and that breaks a hard problem, such as DDH, with a smaller advantage. The ratio between these two advantages is called the security loss. Most schemes use a hybrid argument to get security for many challenge ciphertexts, and therefore the security loss is proportional to the number of challenge ciphertexts. This is what we call a non-tight security reduction. And this can be a problem if you want to use the reduction as a tool to choose concrete parameters for your scheme, because then you have to take this loss into account. For example, if you want 128 bits of security and your loss, because of this hybrid argument, is exactly the number of challenge ciphertexts, which can be as large as 2^30 in a large, widely deployed system, then you have to pick a group where DDH cannot be broken with advantage more than 2^-158. So a large loss implies large parameters and less efficiency. To avoid that, we would like to build reductions with tight security, which means that the security loss L is small. And by small I mean, in particular, independent of the number of challenge ciphertexts; typically, it is a small constant times the security parameter lambda. Think of lambda as 128, which is much smaller than the number of challenge ciphertexts.
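To make this arithmetic concrete, here is the calculation behind those numbers (a back-of-the-envelope sketch, where Q_enc denotes the number of challenge ciphertexts):

```latex
\mathrm{Adv}^{\mathrm{cca}}(\mathcal{A}) \;\le\; L \cdot \mathrm{Adv}^{\mathrm{ddh}}(\mathcal{B}),
\qquad L = Q_{\mathrm{enc}} \approx 2^{30}.
% To guarantee Adv^cca <= 2^{-128} (128-bit security),
% the group must therefore satisfy
\mathrm{Adv}^{\mathrm{ddh}}(\mathcal{B}) \;\le\; 2^{-128} \cdot 2^{-30} \;=\; 2^{-158}.
```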
So finding tight security reductions in the context of encryption and signatures has been extensively studied before. Now let's look at prior CCA-secure encryption schemes. First, we have very efficient schemes, starting with the Cramer-Shoup encryption scheme, where the ciphertext overhead is three group elements, improved by Kurosawa-Desmedt to two group elements. They are based on DDH, which is good, but because they use this hybrid argument to get security for many challenge ciphertexts, the security loss is large: they are non-tight. Then there is a series of tight constructions, starting with Hofheinz-Jager at Crypto 2012. But as you can see, looking at this column, these schemes have a larger ciphertext overhead: the number of group elements is larger, even though in the latest works from last year the efficiency has been significantly improved. More importantly, all these constructions use a qualitatively stronger assumption: they use pairings, which is not the case for the non-tight schemes.

So a natural question to ask is: does tightness intrinsically require a pairing? And the answer is no. We build an efficient CCA-secure encryption scheme which tightly reduces to DDH, with no pairing. This is a quantitative improvement over prior tight constructions, because the ciphertext overhead of our scheme is shorter: it is three group elements, which is only one group element more than the most efficient (non-tight) CCA-secure encryption scheme, Kurosawa-Desmedt. But it is also a qualitative improvement, because we use a weaker assumption: DDH.

Prior tight constructions use two different techniques. There is a group of constructions that use signatures and non-interactive zero-knowledge (NIZK) proofs, which are primitives that admit public verification, and for which we don't know any efficient construction without pairings. So we cannot use this technique. Then there is a group of works that build identity-based encryption, or IBE for short, which is stronger than CCA encryption. To build these, they use a methodology called dual-system encryption, introduced by Waters, which requires computational assumptions on both the ciphertext and the secret-key space, and therefore also crucially needs a pairing; it is inherent to the construction. So we cannot use this technique either. Instead, to overcome these barriers, we consider the designated-verifier NIZK setting, which is like a NIZK except that verification is not public: it requires a secret key, as in Cramer-Shoup. I won't formally define what a designated-verifier NIZK is. Instead, I am going to describe the technique in the context of the Cramer-Shoup encryption scheme. So this is our starting point. Along the way, we encounter new technical difficulties, which I will come back to later. So this is our result.

Now, the overview of the construction. To build a CCA-secure encryption scheme, we build a much simpler primitive, which is called a tag-based encryption. I will define in more detail what it is later. This tag-based encryption is simpler than CCA encryption in two ways. First, using tags simplifies the proof, and these tags are easy to instantiate using standard techniques, such as one-time signatures; in our case, we use a collision-resistant hash function for efficiency (see the sketch below). The second thing is that this tag-based encryption scheme is not actually CCA secure; it satisfies a weaker notion of security, which, for those who know, is called plaintext-checkable attack security. And this, again, is easier to prove, so it makes the scheme simpler to design. Then, using a standard framework, you can upgrade this weaker-than-CCA primitive to full CCA security using authenticated symmetric encryption.
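As a small illustration of the hash-based tags just mentioned, here is a minimal sketch; the function name and the choice of SHA-256 as the collision-resistant hash are illustrative assumptions of mine, not details from the paper:

```python
import hashlib

LAMBDA = 128  # security parameter: tags are LAMBDA-bit strings

def derive_tag(ephemeral_part: bytes) -> int:
    """Derive a LAMBDA-bit tag from (an encoding of) the ciphertext's
    ephemeral part. Collision resistance of the hash ensures that
    distinct ciphertexts carry distinct tags (except with negligible
    probability), which is exactly what the tag-based security game
    needs: every decryption query then uses a tag different from the
    challenge tags."""
    digest = hashlib.sha256(ephemeral_part).digest()
    return int.from_bytes(digest, "big") >> (256 - LAMBDA)

# Toy usage: in a real scheme the input would be the encoding of the
# group-element part [a*r] of the ciphertext.
tau = derive_tag(b"encoding of [a*r]")
assert 0 <= tau < 2**LAMBDA
```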
So roughly, we instantiate a standard framework with a new tag-based encryption. This is our contribution: building this tag-based encryption. We also do non-trivial optimizations when we combine these three pieces together, but I am not going to talk about that in this talk. I am only going to focus on the tag-based encryption, because it is the core component and it captures most of the technical novelties.

So what is a tag-based encryption? It is an encryption scheme where the encryption algorithm takes an additional input tau, called the tag, and the decryption algorithm also takes an additional input tau, and it decrypts a ciphertext for tau* when tau is equal to tau*. This is correctness for tag-based encryption. The security notion is similar to the previous CCA security for tag-free encryption; the difference is that we now require that the adversary's decryption-oracle queries use a tag tau that is different from the tag tau* of every challenge ciphertext. As I said, we can enforce this using, for instance, a collision-resistant hash function, and the security proof crucially relies on this property.

I will give the construction in three steps. First, our starting point is a simple CPA-secure encryption scheme, known as Damgård ElGamal. Then I will show how to modify it slightly to get a simplified version of the Cramer-Shoup encryption scheme, which is non-tight. And finally, I will show how to modify it again to get our construction, which is tight. This will be the outline of the rest of the talk.

So first, this simple Damgård ElGamal encryption. We use a prime-order group. The secret key is simply a random vector k of exponents, of dimension two. Throughout this talk, I am going to use white boxes to denote vectors of exponents over Z_p, and blue boxes (and later red boxes) to denote vectors of group elements; this will be the convention. So the secret key k is a vector of exponents of dimension two. The public key contains a random vector [a] of group elements of dimension two, together with the inner product of the vector k with the vector a, where the inner product is computed in the exponent, so it is a single group element. This is the public key. To encrypt a message, one picks a random exponent r and computes [a*r], and the corresponding [k*a*r], which serves as an encapsulation key for the message m, which is a group element. To decrypt, one uses the secret key k, which we combine with the first part of the ciphertext to recompute the encapsulation key and recover the message. So it is very simple, and correctness is immediate.
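To make the notation concrete, here is a toy sketch of this scheme in Python (requires Python 3.8+ for modular inverses via pow). The tiny Schnorr group and all names are illustrative assumptions of mine, and the parameters are of course far too small to be secure:

```python
import random

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the
# subgroup of order q. [x] denotes g^x mod p.
q, p, g = 1019, 2039, 4

def keygen():
    k = [random.randrange(q) for _ in range(2)]       # secret key: k in Z_q^2
    a = [random.randrange(1, q) for _ in range(2)]    # exponents of [a]
    ga = [pow(g, ai, p) for ai in a]                  # public vector [a]
    gka = pow(g, (k[0] * a[0] + k[1] * a[1]) % q, p)  # public element [k . a]
    return k, (ga, gka)

def encrypt(pk, m):
    ga, gka = pk
    r = random.randrange(1, q)
    c0 = [pow(gi, r, p) for gi in ga]   # [a * r]
    c1 = (pow(gka, r, p) * m) % p       # encapsulation key [k . a * r] masks m
    return c0, c1

def decrypt(k, ct):
    c0, c1 = ct
    key = (pow(c0[0], k[0], p) * pow(c0[1], k[1], p)) % p  # recompute [k . a * r]
    return (c1 * pow(key, -1, p)) % p

sk, pk = keygen()
m = pow(g, 42, p)  # the message is a group element
assert decrypt(sk, encrypt(pk, m)) == m
```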
For security, we will need two properties. The first property says that the vector [a*r] is computationally indistinguishable from a uniformly random vector of group elements [u], and this is true even when the vector [a] is given; this is implied by DDH. So we can switch [a*r] to a uniformly random vector, here and here. That is the first property. The second property says that the group element [k . a] is statistically independent of the group element [k . u]. This is because a and u are independent random vectors, so most of the time they are linearly independent, and therefore these values are independent as well. Finally, we can argue that this value is a uniformly random group element that completely masks the message m. This concludes the proof.

So this is our starting point. It is a simple encryption scheme, but it is not CCA secure, in particular because it is multiplicatively homomorphic. But we can modify it slightly to get a simplified version of the Cramer-Shoup encryption scheme, like so. Instead of using a single secret key k, we use two random vectors k_0 and k_1, and we build a tag-based encryption where tags are in Z_p and are mapped to a vector k_tau defined by k_tau = k_0 + tau * k_1. This is what Cramer and Shoup did. This map is a pairwise-independent hash function, which means that k_tau is independent of k_tau* whenever tau is different from tau* (for tau != tau*, the matrix with rows (1, tau) and (1, tau*) is invertible, so the pair (k_tau, k_tau*) is uniformly distributed). The security proof crucially relies on this property, as I will show. So this is what we obtain when we replace k by k_0 and k_1: a simplified version of Cramer-Shoup, which, as I said, is actually not CCA secure; I am simplifying a bit here. So this is our Cramer-Shoup encryption, and to compute a ciphertext for tau, you compute this encapsulation key, as I described.

Now let me show you where the pairwise-independence property comes up in the proof, in a simplified setting where the adversary gets only one challenge ciphertext, for tau*, and makes one decryption-oracle query, for tau, which must be different from tau* by definition of the security game. By pairwise independence, we can argue that k_tau is independent of k_tau*, because tau is different from tau*. Therefore the decryption-oracle query does not leak any information about the encapsulation key, so we can simply ignore it and run the same proof as for the CPA-secure scheme. That is the idea.

Finally, to handle many challenge ciphertexts for many different tags tau, as is the case in the real security game (this is the adversary's view), and many decryption-oracle queries for many different tau as well, we do, as Cramer and Shoup did, a hybrid argument. If you do it in a clever way, you can prove that the advantage of the adversary breaking the scheme is less than this quantity here, which corresponds to the computational argument, used once per challenge ciphertext, plus this quantity here, which corresponds to the statistical argument, used once per challenge-ciphertext/decryption-query pair. Because of this hybrid argument, the reduction is not tight: the security loss is proportional to the number of challenge ciphertexts.
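In symbols, the bound on the slide has roughly the following shape; this is my reconstruction from the spoken description, and in particular the 1/p statistical term is an assumption:

```latex
\mathrm{Adv}^{\mathrm{cs}}(\mathcal{A})
  \;\lesssim\;
  \underbrace{Q_{\mathrm{enc}} \cdot \mathrm{Adv}^{\mathrm{ddh}}(\mathcal{B})}_{\text{computational: once per challenge ciphertext}}
  \;+\;
  \underbrace{\frac{Q_{\mathrm{enc}} \cdot Q_{\mathrm{dec}}}{p}}_{\text{statistical: once per challenge/decryption pair}}
```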
So our idea was to avoid this hybrid argument by using a stronger property than pairwise independence, because pairwise independence forces you to enumerate over all possible challenge-ciphertext/decryption-query pairs. To avoid the hybrid argument, we use a stronger property that can talk about all the challenge ciphertexts at the same time: instead of pairwise independence, we would like "number-of-ciphertexts-wise" independence, basically, again to avoid this hybrid argument and get a tight proof. So we would like to design k_tau so that it behaves in a number-of-ciphertexts-wise independent way; but the number of ciphertexts is unbounded, and number-of-ciphertexts-wise independence is really a random function of tau. We cannot set k_tau to be a random function, though, because that would be too large: the public key and the secret key would be too large. So what we do is use a k_tau that behaves as a random function computationally, in the ciphertext space. It is a sort of randomized PRF, if you want: it means that across all challenge ciphertexts, we can argue that k_tau applied to [a*r] is computationally indistinguishable from a random function of tau. And this randomized PRF has to be tightly secure. This is what we want to build.

Now let's see how we implement this idea. We do so by modifying the simplified version of the Cramer-Shoup encryption scheme, replacing k_0 and k_1 with a set of 2*lambda vectors k_{i,b}, for indices i from 1 to lambda and b a bit, 0 or 1. So the secret key is going to be huge in our setting. And we replace the pairwise-independent hash function with a map that takes tags, which are now lambda-bit strings, and maps them to a sum of lambda vectors: k_tau = k_{1,tau_1} + ... + k_{lambda,tau_lambda}. This map is used in the Chen-Wee tightly secure identity-based encryption, and it is also reminiscent of the Naor-Reingold PRF, although quite different. So this is our idea. Finally, for technical reasons, we also have to increase the size of all the vectors: instead of dimension-two vectors, we use dimension-three vectors. This is a bit technical.
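Here is a small illustrative sketch of this tag-to-key map; the toy parameters and names are my own, and in the real scheme these operations take place in the exponent of the group:

```python
import random

q = 1019     # toy group order (matching the toy group above)
LAMBDA = 8   # toy security parameter; tags are LAMBDA-bit strings
DIM = 3      # vector dimension (three rather than two, for the proof)

# Secret key: 2 * LAMBDA random vectors k[i][b] in Z_q^DIM.
k = [[[random.randrange(q) for _ in range(DIM)] for b in range(2)]
     for i in range(LAMBDA)]

def k_tau(tau: int):
    """Map a LAMBDA-bit tag to k_tau = sum over i of k[i][tau_i]."""
    acc = [0] * DIM
    for i in range(LAMBDA):
        bit = (tau >> i) & 1
        for j in range(DIM):
            acc[j] = (acc[j] + k[i][bit][j]) % q
    return acc

# Distinct tags select different subsets of the k[i][b] vectors, which
# is what lets the proof argue about all challenge ciphertexts at once
# (the randomized-PRF view), rather than pair by pair.
print(k_tau(0b10110101))
```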
So finally, this is the construction we get. As I said, the secret key and the public key are large. For the proof sketch, I already gave the intuition: if this is the adversary's view, with many challenge ciphertexts and many decryption queries, we have to prove, simultaneously for all challenge ciphertexts, that this part here behaves as a random function, as I said, using this randomized-PRF paradigm. The main technical challenge is to carry out the Chen-Wee proof in a pairing-free setting, because Chen-Wee used dual-system encryption, which uses a pairing. In our case, we replace a computational argument by a statistical argument in the secret-key space to get rid of the pairing; that is the main technical difficulty we solve. And finally, we show that the adversary cannot break the scheme with advantage more than this quantity here, and you can see that the security loss is much smaller.

To sum up, we build an efficient scheme that tightly reduces to DDH. The only drawback of our scheme is the very large public key, which is also the case for most tight constructions, but not for non-tight constructions. So a natural open problem is: can we reduce the size of this public key to a constant number of group elements? Some partial progress has been made by Dennis Hofheinz, who built a tightly secure signature scheme with a constant-size verification key, but it crucially relies on pairings. And more broadly, can we build tightly secure CCA encryption from minimal assumptions, such as the hardness of factoring or CDH? So this concludes my talk. Thank you very much. Are there any questions?

Question: A small suboptimality in your scheme is that you increase the number of group elements from two to three; I assume this comes from one of the last few slides, where the vector size increased from two dimensions to three. You said you have to do it for technical reasons. Can you elaborate a little more? Why is it necessary to increase from two; what is wrong with two?

Answer: Okay, thank you for the question. So we need to increase the dimension from two to three, and the reason is in the proof. If we use the Chen-Wee IBE proof, there is a condition that must be satisfied: there are many hybrids, and at one given hybrid, all the tags in the challenge ciphertexts should have the same value for the i-th bit; they should be all zero or all one, which is not realistic. To get around this condition, we basically need two copies of the scheme, more or less: one used for the bit zero and one used for the bit one. In fact, this technique is also used in this paper, and we had to adapt it. These two copies mean you have to increase the dimension of the vector a: of the three dimensions, one is for correctness and the other two are for the proof. Instead of one extra dimension, you need two, basically.

Question: Any other questions? So yes, in practice, if you have a non-tight scheme, like Cramer-Shoup or Kurosawa-Desmedt, you have this factor-Q loss, right? And Q in practice is, I don't know, maybe 2^30, 2^40? You are saying that translates into a bigger group. But in your case, even ignoring the secret key, the ciphertext is now three elements versus two. So did you do any realistic parameter computation? Even ignoring the secret- and public-key inefficiency, do you actually save, or do you usually lose?

Answer: Yes, that's a good question. We computed it: for, say, 128 bits of security, using elliptic curves and taking the security loss into account, tightness becomes more important than saving one group element when the number of challenge ciphertexts is larger than 2^74. So it is quite a large number of ciphertexts for lambda equal to 128.

Okay, any other questions? No questions, so let's thank the speaker again. Thank you.