Thanks for the introduction. As you said, this is joint work with Kristiyan, Eike, and Victor Shoup. We will construct public-key encryption schemes which are secure under the computational Diffie-Hellman assumption. Actually, to be precise, we are not constructing public-key encryption schemes directly; we construct key encapsulation mechanisms, which can then be used in a straightforward way to build public-key encryption schemes.

So what is a key encapsulation mechanism? Very briefly, it consists of three algorithms: a key generation algorithm, which outputs a public and a secret key; an encapsulation algorithm, which takes as input the public key and outputs a ciphertext and a key; and a decapsulation algorithm, which takes as input the secret key and a ciphertext, and returns either the key encapsulated in the ciphertext or a special rejection symbol indicating that the ciphertext is invalid.

This key encapsulation mechanism, or KEM for short, can then be used to construct a hybrid encryption scheme: the KEM is used to encapsulate a key, and this key is then used to encrypt the data with a symmetric encryption scheme, a block cipher for instance. CCA-secure public-key encryption can now be obtained by combining a CCA-secure KEM with a CCA-secure DEM, a data encapsulation mechanism. This is one possibility to obtain CCA-secure public-key encryption, and it is the one I will use in this talk, but there are other sufficient notions; for instance, constrained chosen-ciphertext secure KEMs also suffice to construct CCA-secure public-key encryption, but I will not talk about this.

Okay, so what does it mean for a KEM to be CCA secure? This is formalized as a game between two parties, an adversary and a challenger, and it proceeds as follows. The challenger generates a public and a secret key. Then it generates a ciphertext which encapsulates a key K0, samples a uniformly random key K1 independently of this procedure, and finally tosses a coin b. The adversary receives as input the public key, the ciphertext, and one of the two keys, so either the real or the random key, and its task is to decide whether the key it received is the real key encapsulated in the ciphertext or a random key. The adversary may issue chosen-ciphertext queries to the challenger: it submits some ciphertext C', which of course must not be equal to the challenge ciphertext, and receives back the decapsulation of that ciphertext. We say that a KEM is CCA secure if, for all adversaries running in time polynomial in the security parameter k, the probability of winning this game is at most 1/2 plus some negligible term.

Okay, so which instantiations of CCA-secure KEMs are known? First of all, it is of course possible to construct CCA-secure KEMs in the random oracle model. Then it is known that there exist plenty of constructions from decisional assumptions, which are all based on the techniques due to Cramer and Shoup; Kurosawa-Desmedt, for instance, is strongly related to Cramer-Shoup, and as I said there are many variants of it. But these constructions rely on decisional assumptions, which may be criticized because they are quite strong, and which sometimes do not hold, in bilinear groups for instance.
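To make the KEM interface above concrete, here is a minimal sketch in Python, instantiated as a toy hashed-ElGamal KEM over a small prime-order subgroup of Z_p^*. The parameters (p, q, g), the function names, and the use of SHA-256 for key derivation are illustrative assumptions, not the constructions from this talk, and the group is of course far too small to be secure.

    import hashlib
    import secrets

    # Toy group: g = 4 generates the subgroup of prime order q = 11 in Z_23^*.
    p, q, g = 23, 11, 4

    def kem_gen():
        """Key generation: output a public key and a secret key."""
        z = secrets.randbelow(q - 1) + 1
        return pow(g, z, p), z

    def kem_encaps(pk):
        """Encapsulation: output a ciphertext and the key it encapsulates."""
        r = secrets.randbelow(q - 1) + 1
        c = pow(g, r, p)                       # ciphertext g^r
        k = hashlib.sha256(str(pow(pk, r, p)).encode()).digest()
        return c, k

    def kem_decaps(sk, c):
        """Decapsulation: return the key, or None as the rejection symbol."""
        if not 0 < c < p:                      # trivial validity check
            return None
        return hashlib.sha256(str(pow(c, sk, p)).encode()).digest()

    pk, sk = kem_gen()
    c, k = kem_encaps(pk)
    assert kem_decaps(sk, c) == k              # correctness of the KEM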
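And the CCA game described above can be written down as a small experiment. This is only a sketch under my own naming: the adversary is any callable that receives the public key, the challenge ciphertext, one of the two keys, and a decapsulation oracle that refuses the challenge ciphertext; it can be wired to KEM algorithms like the ones sketched above.

    import secrets

    def cca_game(kem_gen, kem_encaps, kem_decaps, adversary):
        """One run of the IND-CCA experiment; returns True iff the adversary wins."""
        pk, sk = kem_gen()
        c_star, k0 = kem_encaps(pk)           # K0 is the key encapsulated in c_star
        k1 = secrets.token_bytes(len(k0))     # K1 is uniform and independent
        b = secrets.randbits(1)               # the challenger's coin

        def decaps_oracle(c):
            if c == c_star:                   # queries on the challenge are forbidden
                raise ValueError("query on the challenge ciphertext")
            return kem_decaps(sk, c)

        b_guess = adversary(pk, c_star, (k0, k1)[b], decaps_oracle)
        return b_guess == b                   # CCA security: win prob. <= 1/2 + negl.

    # Sanity check: a blindly guessing adversary (usable with the toy KEM above)
    # wins with probability exactly 1/2.
    guesser = lambda pk, c, k, oracle: secrets.randbits(1)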
So of course it is desirable to have constructions from weaker assumptions, from computational assumptions. Strictly speaking, decisional assumptions are also computational hardness assumptions; "computational" here means assumptions about search problems. The decisional Diffie-Hellman problem is a decisional assumption, for example, while the computational Diffie-Hellman problem is a computational one. And here various constructions are known. For instance, Cash, Kiltz, and Shoup presented a scheme at Eurocrypt 2008 which has one drawback, namely that the ciphertext size is linear in the length of the encapsulated key: if one would like to encapsulate n key bits, the ciphertext consists of n group elements plus some constant overhead. Then there is the scheme due to Hanaoka and Kurosawa, which achieves constant-size ciphertexts, but the construction itself is complicated, and the security proof is quite involved as well. And there are simpler and more elegant constructions due to Hofheinz and Kiltz, but their scheme is based on the factoring assumption, so ciphertexts consist of two elements of Z_N for some integer N which is hard to factor, and therefore the representation of the group elements is pretty large; N has to be 2,000 bits or so to ensure that the factoring assumption holds in this group.

Our goals are to construct key encapsulation mechanisms which, on the one hand, provide sufficient security to obtain CCA-secure public-key encryption; in this talk this means CCA security for KEMs. They should be based on a mild hardness assumption, the computational Diffie-Hellman assumption, and the proof should be in the standard model, so without idealizing a hash function or anything else to make the proof go through. And finally, we are looking for constructions which are as simple as possible, of course as efficient as possible, and which allow for a simple proof of security.

Okay, before I can start explaining our schemes, we need some definitions and some tools. First of all, very briefly, the computational Diffie-Hellman problem. We have a generator g generating a group whose order p is prime, and the computational Diffie-Hellman (CDH) problem is to compute g^{ab} on input (g, g^a, g^b). The decisional Diffie-Hellman (DDH) problem is basically to decide whether a given four-tuple (g, g^a, g^b, g^c) is a solution to a computational Diffie-Hellman instance, so the question asked here is whether c = a·b. And we define a gap-like CDH problem, which is the problem of solving the CDH problem given a decisional Diffie-Hellman oracle; but here we consider a gap CDH assumption where this DDH oracle does not answer arbitrary DDH queries, only queries where the first two elements are fixed. It is just a weaker variant of the gap CDH assumption.

Okay, and we need hardcore predicates. What is a hardcore predicate? In short, a hardcore predicate for the Diffie-Hellman function is a function mapping group elements to bits with the following property: if there exists a distinguisher which receives as input (g, g^a, g^b) and some bit K, and which decides whether K is the hardcore bit of g^{ab} or a uniformly random bit, then this implies an algorithm which solves the computational Diffie-Hellman problem on this input.
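As a small illustration of these definitions, here is the CDH experiment together with a stand-in hardcore predicate in Python. Deriving a bit by hashing the group element is my own toy choice, not the predicate from the talk, and the tiny group is again purely illustrative.

    import hashlib
    import secrets

    p, q, g = 23, 11, 4    # toy subgroup of prime order q in Z_23^*

    def hc(x):
        """Stand-in 'hardcore' predicate: one bit derived from a group element."""
        return hashlib.sha256(str(x).encode()).digest()[0] & 1

    def cdh_experiment(solver):
        """The solver gets (g, g^a, g^b) and wins if it returns g^(ab)."""
        a = secrets.randbelow(q)
        b = secrets.randbelow(q)
        return solver(g, pow(g, a, p), pow(g, b, p)) == pow(g, a * b, p)

    # The hardcore property says: any distinguisher telling hc(g^(ab)) apart from
    # a random bit, given (g, g^a, g^b), yields a solver for cdh_experiment.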
Of course, this definition generalizes to hardcore functions, but maybe I will drop a few words on this on the last slide. Okay, using these tools, we can now construct a key encapsulation mechanism which is a simple extension of the ElGamal scheme. We start with the ElGamal scheme, which looks like this: the public key contains a generator g and a group element g^{z1}, and the secret key is the discrete log of g^{z1} to base g. The encapsulation algorithm samples a random integer r, computes c0 as g^r, and returns a key K which here equals K1, where K1 is computed as the hardcore predicate of g^{r·z1}. The encapsulation algorithm returns this tuple, and the decapsulation algorithm recomputes the key bit K1 using knowledge of the exponent z1.

So with this scheme we obtain a key encapsulation mechanism which encapsulates a single bit, which is quite short for use as a key later, but it is straightforward to extend it to a multi-bit scheme: just add n−1 elements to the public key, add their discrete logs to the secret key, and proceed as before for each element of the public key. This way we obtain n bits per ciphertext, and the scheme can be proven secure against chosen-plaintext attacks under the computational Diffie-Hellman assumption.

Okay, but we would like to have constructions which are secure against chosen-ciphertext attacks, and therefore we need a way to answer chosen-ciphertext queries in the simulation game. We do not know how to do this for the scheme as it stands, so we have to add some additional values to the public key which introduce some redundancy. These additional elements are g^x and g^{x'}, and their discrete logs go into the secret key. The encapsulation algorithm now computes a tag t as a collision-resistant, or target collision-resistant, hash of the ciphertext element c0, and c1 is computed as (g^{x·t} · g^{x'})^r using this tag t. This additional element is then added to the ciphertext. The decapsulation algorithm checks for consistency and returns the error symbol if the ciphertext is not consistent; otherwise it returns the key. This is a technique from Boneh and Boyen from the world of selective-ID secure identity-based encryption, and it gives us the handle to construct an all-but-one simulation, which means that in the simulation we are able to decrypt all chosen-ciphertext queries except for one: exactly the one ciphertext where we plug in our CDH challenge.

This scheme can now be proven secure against chosen-ciphertext attacks, but not under the CDH assumption, only under the gap CDH assumption that I explained two slides ago. The reason is that in the simulation we need a way to check consistency of ciphertexts, and therefore we need the DDH oracle. It is unclear whether this scheme is secure under the ordinary CDH assumption, so I will describe some modifications to it in the remainder of the talk, where it will be called scheme 0.

Okay, so we have this problem with the gap CDH assumption, but luckily there is a tool that we can apply here to base security on the ordinary CDH assumption. The technique that we use is called twinning. What does twinning mean? First of all, we need to define two more computational Diffie-Hellman problems.
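Here is a minimal sketch of scheme 0 as just described: the multi-bit ElGamal-style KEM plus the Boneh-Boyen-style consistency element c1 = (g^{x·t} · g^{x'})^r, where t is a hash of c0. The toy group, SHA-256 standing in both for the target-collision-resistant hash and for the hardcore predicate, and all names are my own illustrative assumptions.

    import hashlib
    import secrets

    p, q, g = 23, 11, 4
    n = 8                  # number of key bits to encapsulate

    def hc(x):             # stand-in hardcore predicate
        return hashlib.sha256(str(x).encode()).digest()[0] & 1

    def tcr(c0):           # stand-in target-collision-resistant hash, output in Z_q
        return int.from_bytes(hashlib.sha256(str(c0).encode()).digest(), "big") % q

    def gen():
        zs = [secrets.randbelow(q) for _ in range(n)]
        x, xp = secrets.randbelow(q), secrets.randbelow(q)
        pk = ([pow(g, z, p) for z in zs], pow(g, x, p), pow(g, xp, p))
        return pk, (zs, x, xp)

    def encaps(pk):
        Zs, u, v = pk
        r = secrets.randbelow(q - 1) + 1
        c0 = pow(g, r, p)
        t = tcr(c0)
        c1 = pow(pow(u, t, p) * v % p, r, p)     # (g^(x*t) * g^x')^r
        key = [hc(pow(Z, r, p)) for Z in Zs]     # one hardcore bit per Z_i
        return (c0, c1), key

    def decaps(sk, ct):
        zs, x, xp = sk
        c0, c1 = ct
        t = tcr(c0)
        if pow(c0, (x * t + xp) % q, p) != c1:   # consistency check
            return None                          # reject inconsistent ciphertexts
        return [hc(pow(c0, z, p)) for z in zs]

    pk, sk = gen()
    ct, key = encaps(pk)
    assert decaps(sk, ct) == key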
The first is the twin CDH problem, a variant of the CDH problem where an algorithm, given the vector of group elements (g, g^{a1}, g^{a2}, g^b), has to solve the CDH problem for (g, g^{a1}, g^b) and simultaneously compute the Diffie-Hellman element for (g, g^{a2}, g^b), both at the same time. Likewise, we can define the twin DDH problem, which is the problem, given such a vector of group elements together with candidates g^{c1} and g^{c2}, to decide whether c1 = a1·b, so whether (g, g^{a1}, g^b, g^{c1}) forms a DDH tuple, and similarly whether c2 = a2·b. And again we can define a gap problem, which is the problem of solving the twin CDH problem given a twin DDH oracle; this oracle also has three fixed input parameters, namely exactly the first three input parameters of the twin CDH instance.

Now, the result that we can apply is due to Cash, Kiltz, and Shoup: they have shown in their Eurocrypt 2008 paper that this gap twin CDH problem is equivalent to the CDH problem (the core of their proof is a small trapdoor test, sketched below). So if we would like to construct a KEM based on the CDH assumption, we can just as well construct a KEM based on the gap twin CDH assumption. This gives us the handle to construct scheme 1, the first scheme from the paper, which is simply a twin version of scheme 0 as I have explained it. The scheme is as follows: the black characters on the slide show scheme 0, and we extend the scheme just by adding two more elements to the public key and to the secret key, by computing a second consistency element and adding it to the ciphertext, and finally by adding one more consistency check to test whether c2 is consistent. This scheme can now be proven secure under the gap twin CDH assumption, which is equivalent to the CDH assumption, and we can use the twin DDH oracle to test ciphertexts for consistency.

Okay. Now the scheme suffers from pretty large public keys, because when we use hardcore predicates we need to add one element to the public key for each key bit which is encapsulated. The question, of course, is: can we do better? One idea is to compress the public key somehow, for instance using bilinear maps; a similar idea is, I believe, going to be presented at ACNS this year.

Okay, so also very quickly: what is a bilinear map, or pairing? It is a function taking as input two group elements from G and outputting an element of the group G_T, with two properties: first, it is bilinear, and second, it is non-degenerate. The interesting property of such a pairing is that we can multiply exponents: we evaluate the pairing on two elements g^a and g^b and get e(g, g)^{ab}, a multiplication of the two exponents, without knowing the discrete logarithms of these two values. And we can again define a computational assumption here, known as the bilinear Diffie-Hellman (BDH) assumption, which is the problem of computing e(g, g)^{abc} on input the vector of elements (g, g^a, g^b, g^c).

Okay, so how can we use pairings in our construction? The first idea is to move most elements from the public key into system parameters. This is useful if we have a multi-user setting: when many parties are using the same scheme, these parameters can be used by all users simultaneously, and then a public key consists only of a short vector, one group element and one integer.
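The heart of the Cash-Kiltz-Shoup equivalence is their trapdoor test, which one can sketch as follows: without knowing a1, the reduction sets up the twin element as Y2 = g^s / Y1^r for random r and s, and can then check candidate twin Diffie-Hellman pairs; a wrong pair passes only with probability 1/q (which is of course not negligible in this toy group). The variable names are mine.

    import secrets

    p, q, g = 23, 11, 4

    a1 = secrets.randbelow(q - 1) + 1
    Y1 = pow(g, a1, p)                 # in the reduction, a1 is the unknown CDH exponent

    r = secrets.randbelow(q - 1) + 1   # the trapdoor
    s = secrets.randbelow(q - 1) + 1
    Y2 = pow(g, s, p) * pow(Y1, q - r, p) % p    # Y2 = g^s / Y1^r, i.e. a2 = s - r*a1

    def twin_ddh_check(X, Z1, Z2):
        """Accepts iff Z1 = X^a1 and Z2 = X^a2, except with probability 1/q."""
        return pow(Z1, r, p) * Z2 % p == pow(X, s, p)

    b = secrets.randbelow(q - 1) + 1
    X = pow(g, b, p)
    assert twin_ddh_check(X, pow(Y1, b, p), pow(Y2, b, p))              # correct pair
    assert not twin_ddh_check(X, pow(Y1, b, p) * g % p, pow(Y2, b, p))  # wrong pair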
Yes, and the scheme now works as follows. The vector of group elements forms the system parameters; the public key consists of g^y, and the secret key is the discrete log of this group element. The encapsulation algorithm basically proceeds as before, but evaluates the hardcore predicate on the output of the bilinear map applied to the appropriate inputs, and the decapsulation algorithm basically recomputes this element from the ciphertext element g^r; one can then check that the exponents are equal, so correctness holds. This scheme can be proven secure against chosen-ciphertext attacks under the BDH assumption, and it has another interesting property: it is publicly verifiable, which means that any party can verify that a given ciphertext is valid without having the secret key.

The second trick that one can apply in this bilinear-map setting is to reduce the size of the public key to the order of the square root of n; a small sketch of this trick follows at the end. It works basically as follows. We let n be the square of some integer η, so η is the square root of n, and we add 2η group elements to the public key. We add the discrete logs of every second group element, so z_{1,0}, ..., z_{η,0} and so on, to the secret key; the other discrete logs are not required for the scheme. We proceed as before in the encapsulation algorithm, but at some point we compute n group elements, and this is done by applying the bilinear map to each possible pair of elements Z_{i,0} and Z_{j,1} and raising the result to the power r, the randomness used for encapsulation. These values are then used as inputs for the hardcore predicate. The decapsulation algorithm recomputes these elements by making use of knowledge of the discrete logs z_{i,0}. And again the scheme can be proven secure under the BDH assumption, and it is also publicly verifiable.

Okay, so to conclude: I have now described three schemes, which are scheme 1, scheme 2, and scheme 3 in the paper. In the paper we have one more scheme, which is constrained-CCA secure; this is a weaker notion, but it suffices to construct CCA-secure public-key encryption if one adds a MAC to the ciphertext. We can see here that our schemes improve over the Cash-Kiltz-Shoup scheme from Eurocrypt, and improve slightly over the Hanaoka-Kurosawa scheme; but recall that that scheme was quite complicated, and I would say the schemes I presented here are a bit simpler. Yes, and that is the constrained-CCA-secure construction I mentioned: what we do there is save one group element in the ciphertext at the cost of adding a MAC.

Okay, so far I have only talked about hardcore predicates. If one uses hardcore functions, then of course one can obtain more key bits per ciphertext: with a hardcore function outputting ν bits, it is possible to encapsulate n·ν key bits with the scheme. And an open problem: one can see that everywhere in this table there is an n, either in the public key, the secret key, or the system parameters, and of course it would be nice to get rid of this n. So the open problem is: is it possible to construct a KEM with constant-size ciphertexts, constant-size keys, and constant-size system parameters? Okay, yes, that is the end of my talk. Thank you.
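To illustrate the square-root compression trick described above, here is a sketch of how n = η² key bits arise from only 2η public-key elements: the η² pairing values e(Z_{i,0}, Z_{j,1})^r supply one hardcore bit each. A real implementation needs a pairing-friendly curve library; here a toy "pairing" over Z_p^* is simulated via brute-force discrete logs, purely to show the indexing, so treat the whole block as an assumption-laden illustration.

    import hashlib
    import secrets

    p, q, g = 23, 11, 4
    eta = 3                                   # n = eta^2 = 9 key bits

    def dlog(h):
        """Brute-force discrete log; only feasible in this toy group."""
        return next(e for e in range(q) if pow(g, e, p) == h)

    def pairing(u, v):
        """Toy stand-in for a pairing: e(g^a, g^b) := g^(a*b) in the same group."""
        return pow(g, dlog(u) * dlog(v) % q, p)

    def hc(x):
        return hashlib.sha256(str(x).encode()).digest()[0] & 1

    zs0 = [secrets.randbelow(q) for _ in range(eta)]   # secret key: only these eta logs
    zs1 = [secrets.randbelow(q) for _ in range(eta)]   # not needed after key generation
    Z0 = [pow(g, z, p) for z in zs0]                   # 2*eta public-key elements total
    Z1 = [pow(g, z, p) for z in zs1]

    r = secrets.randbelow(q - 1) + 1
    c0 = pow(g, r, p)
    # Encapsulation: eta^2 hardcore bits, one per pair (i, j).
    key = [hc(pow(pairing(Z0[i], Z1[j]), r, p))
           for i in range(eta) for j in range(eta)]
    # Decapsulation: recompute the same values from c0 and the z_{i,0} exponents.
    key2 = [hc(pairing(pow(c0, z0, p), Z1[j]))
            for z0 in zs0 for j in range(eta)]
    assert key == key2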