Today I'm going to talk about all-but-many lossy trapdoor functions, from now on called ABM-LTFs, and selective opening secure public-key encryption from standard lattice assumptions. As the session chair mentioned, this is a soft merge of two papers: the first one, by Boyen and Li, from now on referred to as BL17, and the second one referred to as LSSS17, so you can distinguish between the results during the talk. So what are the motivations here? It's about constructing ABM-LTFs from lattice assumptions, and we are going to build selective opening and CCA secure public-key encryption from these primitives. So what are the contributions? The constructions provided in these two papers: the first one, BL17, uses a weak PRF, and the second one, LSSS17, uses a PRF. Although BL17 only uses a weak PRF, it achieves a weaker security notion, namely indistinguishability-based security, in contrast to what LSSS17 achieves, namely simulation-based security, which is the stronger selective opening notion. There are also other contributions in these two papers that I will not discuss in this talk, because it would take longer than my time: LSSS17 also obtains a tighter security proof for the lattice-based key-homomorphic PRFs introduced in BLMR13, and it achieves tightly secure public-key encryption from lattices in the multi-challenge setting. So let's start with the definition of lossy trapdoor functions, first introduced by Peikert and Waters in 2008. These functions have two modes, an injective mode and a lossy mode. In the injective mode, you are given an evaluation key and an inversion key. Using the evaluation key, you can compute the function F at any x in the domain, and using the inversion key you can recover x: the value y = F(x) can be inverted back to x.
But in the lossy mode, although you are given another evaluation key, one that is computationally indistinguishable from the evaluation key of the injective mode, you cannot recover x from the output of the function. So what about all-but-many lossy trapdoor functions? ABM-LTFs were introduced by Hofheinz in 2012, and in that paper the modes are parametrized by tags: the whole set of tags is partitioned into two sets, the lossy tags and the injective tags. If a tag belongs to the injective set, it makes the function injective, so, as I explained, you can invert using the inversion key; if the tag belongs to the lossy set, it makes the function lossy. The tags come in pairs: there are two parts associated with each tag, a primary one and an auxiliary one. One of the properties needed for this construction is a large number of lossy tags; in fact there should be exponentially many in the security parameter. The other difference is a special private tag key, TK for short, which is responsible for generating these lossy tags out of the auxiliary part and the tag key. So what properties do we need from ABM-LTFs? First of all, we would like tag indistinguishability, as I explained: tags from the lossy mode and tags from the injective mode should be computationally indistinguishable. We would also like evasiveness, which means that generating new lossy tags should be hard. And here I want to note that this security model is in contrast to all-but-one LTFs, where all tags except a single lossy one are injective, and to the all-but-N LTFs. So what do ABM-LTFs mean exactly, what's the intuition? In fact, the lossy tags behave like encrypted signatures.
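Before moving on, the injective/lossy distinction above can be made concrete with a toy example. This is a completely insecure, purely illustrative "LTF" over Z_q where the evaluation key is a 2x2 matrix: invertible mod q in the injective mode (its inverse is the inversion key), rank 1 in the lossy mode (so the image is tiny and the input is information-theoretically hidden). All parameters and matrices are made up for illustration and have nothing to do with the papers' constructions.

```python
# Toy "lossy trapdoor function": y = M * x mod q for a 2x2 matrix M.
q = 97  # small prime modulus, illustration only

def evaluate(M, x):
    # y = M * x mod q
    return ((M[0][0]*x[0] + M[0][1]*x[1]) % q,
            (M[1][0]*x[0] + M[1][1]*x[1]) % q)

def invert(M, y):
    # inversion key: M^{-1} mod q (exists only in the injective mode)
    det = (M[0][0]*M[1][1] - M[0][1]*M[1][0]) % q
    d = pow(det, -1, q)                       # det^{-1} mod q
    inv = ((d*M[1][1] % q, -d*M[0][1] % q),
           (-d*M[1][0] % q, d*M[0][0] % q))
    return evaluate(inv, y)

M_inj   = ((3, 1), (5, 2))    # det = 1, invertible mod 97 -> injective mode
M_lossy = ((2, 4), (3, 6))    # det = 0, rank 1            -> lossy mode

x = (10, 20)
assert invert(M_inj, evaluate(M_inj, x)) == x   # injective: x is recovered

# Lossy: the whole domain (97^2 inputs) maps into at most 97 outputs,
# so the output cannot determine x.
image = {evaluate(M_lossy, (a, b)) for a in range(q) for b in range(q)}
assert len(image) <= q
```

The real constructions achieve the crucial extra property that the two kinds of evaluation keys are computationally indistinguishable, which this toy of course does not model.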
And you can see the lossy tags as valid signatures. In fact, it's not only signatures here: you can also replace the signatures with MACs, and then the lossy tags mean you have valid signatures or valid MACs. Evasiveness corresponds to signature unforgeability, or, in the MAC case, the MAC should look like a random value. And tag indistinguishability comes from the indistinguishability of the encryption scheme. The idea here is to replace the signatures with PRFs. So in these two papers we are making these replacements, switching between signatures and PRFs. In fact, we build weak PRFs or PRFs which are homomorphic-evaluation friendly and have long outputs. The work of BL17 is inspired by the Boyen–Li IBE, and there they use encrypted PRFs with single-bit outputs, in contrast to the long outputs that we have. So what about lattice trapdoors? A little bit of lattices. This looks a bit ugly, because there are too many parameters here, but let me tell you: the function F introduced in the second bullet is the trapdoor function with inputs x and e. The idea is to multiply x by the concatenation of a random matrix A with A·R + H·G, then add the additive noise e and reduce mod q. It should be noted that if H is full rank, for example the identity matrix, then there is a matrix you can multiply on the right of that concatenated matrix which makes inverting the function easy. But if you take H to be the zero matrix, then the function is hard to invert. So what about the ingredients of this concatenation [A | A·R + H·G]? G is the typical gadget matrix, as you probably all know. But what about A? A comes from LWE, from learning with errors. And as I mentioned, if H is zero, that makes F lossy.
And the A that you can use here looks like this one: A can be replaced by C·B + F. As a special case, you can see this A as a set of LWE samples, in fact A and A·S + E concatenated together; that serves as the A itself. So, as I said, we need constructions that are homomorphic-evaluation friendly. For that reason we use the typical GSW13 construction, where for any matrix A the gadget matrix has a small preimage, which we call G inverse of A. And as you can see, you can compute additions, multiplications, and NAND functions of two ciphertexts C1 and C2, corresponding to u and v, easily: the multiplication is just C1 times G^{-1}(C2), where G^{-1}(C2) is a small matrix. OK, so where do we plug these PRFs or weak PRFs into ABM-LTFs? The idea from LSSS17 is to use PRFs, which are conceptually simpler than weak PRFs, and the construction itself is also simpler than the construction in BL17. We use the equality circuit, which checks if x equals y, and we plug in the equality of the PRF evaluated at the auxiliary part of the tag with the primary part of the tag. We check if these two are equal, and that plays the role of H: the whole underlined expression is H. That's for the injective mode. For the lossy mode, to generate a lossy tag, what we do is take a lossy tag generation key, which we call the tag key, and use it as the key of the PRF that generates H. Given the auxiliary part of the tag, we can then easily compute the primary part of the tag by evaluating the PRF at that auxiliary part. Just note that if the PRF evaluated at t_a equals t_p, then the equality is satisfied, and we get zero instead of H. So the lossy mode is equivalent to having H equal to zero.
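Here is a minimal sketch of the lossy-tag mechanism just described. The PRF is instantiated with HMAC-SHA256 purely for illustration (the papers use lattice-based PRFs), and the equality circuit is evaluated in the clear rather than homomorphically as in the actual scheme; the point is only that H = 0 exactly when t_p = PRF_K(t_a), which is how the tag key produces lossy tags.

```python
# Toy model of LSSS17-style tags: H stands in for the homomorphically
# evaluated equality circuit Eq(PRF_K(t_a), t_p).
import hmac, hashlib, secrets

def prf(key: bytes, data: bytes) -> bytes:
    # illustrative PRF instantiation, not the paper's lattice PRF
    return hmac.new(key, data, hashlib.sha256).digest()

def H_of_tag(K: bytes, t_p: bytes, t_a: bytes) -> int:
    # 0 -> lossy tag; nonzero (think: full-rank H) -> injective tag
    return 0 if prf(K, t_a) == t_p else 1

K = secrets.token_bytes(32)            # the tag key TK

# Lossy tag: pick t_a, set the primary part t_p = PRF_K(t_a).
t_a = secrets.token_bytes(16)
t_p = prf(K, t_a)
assert H_of_tag(K, t_p, t_a) == 0      # lossy mode: H = 0

# A random tag is injective with overwhelming probability.
t_p_rand = secrets.token_bytes(32)
assert H_of_tag(K, t_p_rand, t_a) == 1
```

Note that without K, the lossy t_p above is just a PRF output, so it looks like a random primary part; that is exactly the tag indistinguishability discussed next.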
What about BL17? They use weak PRFs instead of PRFs. Note that weak PRFs are only pseudorandom given random input-output samples, and the output of this weak PRF is just a single bit, not a string of bits. They also use chameleon hash functions. The idea is again to construct one H to be embedded in that concatenation of matrices, and what they do is insert a linear combination of parts of the tag key with coefficients coming from the weak PRF. So the weak PRF outputs bits, and these serve as the coefficients, while parts of the tag key are embedded into the matrix that serves as H. To generate a lossy tag, they use the output of the chameleon hash function to find the auxiliary part of the lossy tag, because they again want the matrix H being equal to 0 to be equivalent to the lossy mode. OK, so what is achieved using these two constructions? Tag indistinguishability is achieved. In the injective mode, a random tag t has a random auxiliary part and a random primary part. In the lossy mode, the auxiliary part is again random, but by the two methods I described on the previous two slides, the primary part only looks random: it is pseudorandom. Evasiveness is achieved by the fact that given t_a, the auxiliary part of the tag, it is hard to predict t_p, the primary part of the tag. And what both papers achieve is security against selective opening attacks. But what is a selective opening attack? In this kind of attack, there are n ciphertexts given to the adversary by the challenger. The adversary selects a subset of these ciphertexts, and the challenger opens them for the adversary: the challenger reveals both the messages and the randomness used to generate those ciphertexts.
But the rest of them, the complement of those n, are kept unopened. In this example, there are three ciphertexts given; the challenger opens the first and the third one, revealing the messages and the randomness, those red ones. And now the question we ask is: does m2 remain secure? In fact, can we show indistinguishability of c2 from a freshly generated encryption? That is the notion of indistinguishability-based selective opening security: we would like the unopened ciphertexts to be computationally indistinguishable from ciphertexts of messages freshly resampled from the message space. It's also written down here in mathematical terms: all three of m1, m2, m3 are distributed according to the distribution D. Given that m1 and m3 are opened for the adversary by the challenger, I resample an m2' conditioned on the opened messages, and we would like these two encryptions to be computationally indistinguishable: the one on the left is the ciphertext that was not opened, and the one on the right is the one freshly sampled. This resampling might not be efficient, but that is what the notion asks for. OK. Both schemes, BL17 and LSSS17, achieve indistinguishability-based selective opening CCA security for the public-key encryption described on this slide. The idea of the construction is the same for both schemes: both use the double-LTF-evaluation construction described in those two papers, the Peikert–Waters paper and the Hofheinz paper. The idea is to have a ciphertext with three components. The first is a universal hash, used to hide the message; the second is a plain LTF, just a kind of trapdoor function without any ABM; and the last component is the ABM-LTF that we have described so far, evaluated at the tag.
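The selective opening experiment described above can be sketched structurally as follows. The "encryption" here is a throwaway placeholder, and the message distribution D is i.i.d. uniform bytes, so conditional resampling is trivial in this toy; in general, as noted, resampling need not be efficient.

```python
# Structural sketch of the IND-SO experiment (toy, not a secure scheme).
import secrets

def enc(m: int, r: bytes) -> bytes:
    # placeholder "encryption": XOR the message byte into the randomness
    return bytes([m ^ r[0]]) + r[1:]

def sample_messages(n):
    # the message distribution D: i.i.d. uniform bytes, illustration only
    return [secrets.randbelow(256) for _ in range(n)]

n = 3
msgs = sample_messages(n)
rand = [secrets.token_bytes(16) for _ in range(n)]
cts  = [enc(m, r) for m, r in zip(msgs, rand)]

opened = {0, 2}                      # the adversary opens c1 and c3
view = {i: (msgs[i], rand[i]) for i in opened}   # messages AND randomness

# Challenge for the unopened index 1: the real unopened ciphertext ...
real = cts[1]
# ... versus a fresh encryption of a conditionally resampled message
# (D is i.i.d. here, so resampling ignores the opened messages).
m_resampled = sample_messages(n)[1]
fresh = enc(m_resampled, secrets.token_bytes(16))
# IND-SO security asks that `real` and `fresh` be indistinguishable.
```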
So the lossy tags are used to respond to challenge ciphertexts; there, tag indistinguishability comes in. Decryption queries are responded to using the injective tags, and there we use the evasiveness property. There are some tweaks in the constructions: for example, LSSS17 also uses a MAC, because that is needed later on to achieve the security, and BL17 uses its own tricks. But the general idea, the high-level view of the construction, looks like this. OK, so what is achieved? Let's first look at simulation-based selective opening security. That asks for a simulator that outputs exactly what is produced by an adversary which has seen the public key, the ciphertexts, and the opened ciphertexts. So in simulation-based selective opening, the adversary sees the public key, the ciphertexts, and the opened ciphertexts, and then outputs something; we would like a simulator which sees only the opened messages and produces the same thing. If such a simulator can compute its output from the opened messages alone, then SIM-SO security is achieved. This is a stronger security notion than indistinguishability-based selective opening, and there are only a few simulation-based selective opening CCA secure public-key encryption schemes. For example, the first two constructions are based on standard assumptions, but the encryption is bit by bit; the last one is based on the random oracle model; and the second one is based on non-standard assumptions, but it does have smaller ciphertexts. So LSSS17 achieves simulation-based selective opening CCA security, and that is based on standard lattice assumptions.
And we make use of a well-known result of Bellare et al. from 2009, which says that if you are given an indistinguishability-based selective opening secure public-key encryption scheme, and you couple it with an efficient opener, then that gives you a simulation-based selective opening secure public-key encryption scheme. So the only thing we need to provide here is an efficient opener. I'm not going to give the details of constructing the opener in LSSS17, just the definition. What does the opener do? Given a ciphertext C of a message M under randomness R, then for any other message M' in the message space, the opener should give you new randomness R' such that the ciphertext computed from M' and R' is exactly equal to the ciphertext C computed from M and R. And if you construct such an efficient opener, then you achieve simulation-based selective opening security; and indeed LSSS17 achieves simulation-based selective opening CCA security. OK, finally, time to conclude. Both schemes construct ABM-LTFs from lattices, one using a weak PRF and one using a PRF. Both schemes are indistinguishability-based selective opening CCA secure. The one given in LSSS17 is simulation-based SO-CCA secure, which is stronger than the second bullet. And as I mentioned, we also achieve tightly CCA secure public-key encryption in the multi-challenge setting, which is one of the contributions I promised not to discuss here. So that's the end of the talk. Thank you.
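As a small addendum, the opener definition above can be made concrete with a toy scheme where efficient opening is trivial: with enc(m, r) = m XOR r, the opener for any target message m' just returns r' = c XOR m', and the same ciphertext then "explains" the new message. The actual lattice-based opener in LSSS17 is, of course, far more involved; this only illustrates the interface.

```python
# Toy illustration of an efficient opener: enc(m, r) = m XOR r.
def enc(m: bytes, r: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(m, r))

def opener(c: bytes, m_prime: bytes) -> bytes:
    # find r' with enc(m', r') == c; for XOR this is just r' = c XOR m'
    return bytes(a ^ b for a, b in zip(c, m_prime))

m, r = b"lattices", b"\x01\x02\x03\x04\x05\x06\x07\x08"
c = enc(m, r)

m2 = b"opening!"
r2 = opener(c, m2)
assert enc(m2, r2) == c    # same ciphertext now explains a different message
```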