from the University of Limoges, and this is joint work with Céline Chevalier from Paris 2. As the title says, I'm going to speak about structure-preserving smooth projective hashing. To give an overview of the talk, I'll first present the general context of protocols in the UC framework, why we want to work there and what we are trying to improve. Then I'll recall the two main tools that we are going to use in our construction. Then I'll explain what I mean by structure-preserving smooth projective hashing, give some examples of the construction, and then present applications to show that this new primitive is indeed useful and improves the state of the art. So in cryptography we have many protocols with a conditional action. For example, as we saw earlier, in oblivious transfer a user wants to retrieve a line in a database: he commits in some way to his line and sends it to a server, the server provides an answer, and if everything goes correctly the user is able to obtain the line he requested. We want two properties: on one hand the user learns only one line, the line he requested, and on the other hand the server learns nothing. That's a conditional action in the sense that if the user indeed requested a single line, he is able to retrieve that line. Another family of conditional actions is authenticated key exchange. For example, in a password-authenticated key exchange, Alice sends a function of her password (with a broad definition of "function"), and Bob provides an answer using his password and possibly the first flow. Once again it's a conditional exchange: if both users used the correct password, they obtain a shared key at the end; otherwise they do not obtain the key, and nobody learns which password was tried, neither someone outside the protocol nor the other user.
So in a broad sense, we have many protocols where two participants exchange information and there should be a result at the end if everything went correctly. We want to do that in the UC framework. To recall a little of what was said earlier, in the UC framework you describe a functionality that says: in a perfect world, my protocol should do this. And then you show that your protocol is, in some sense, indistinguishable from this functionality. So let's consider the previous example, with only two flows. First, if the adversary sends the first flow, we should be able to extract its content; that sets a hard requirement: the first flow should be extractable in some way. Then let's assume the first user is honest: the simulator is playing for him, it sends something, and only at a later step does the functionality tell it what it should have sent. So we need the first flow to be equivocable in some way, so that it can be adapted to what we were supposed to send and not what we really sent. And as we allow adaptive corruptions, the adversary can come in the middle of the protocol and say: now I'm corrupting this user, give me everything he has in memory, and that should be consistent with what was sent before. So in some way, we should be able to adapt the memory. We allow erasures, but what remains in memory should still be consistent with what the adversary has seen. The classical approach is to keep scalars in memory, namely the randomness used in the ciphertexts, and more generally the randomness used everywhere. The problem with scalars, especially on elliptic curves, is that you no longer have any possible trapdoor. So the main way (though not the only one) is to do partial erasures: you keep some of the randomness, but not everything, and that's partly inefficient.
So what we propose in this talk is to keep in memory only group elements. The nice thing about group elements is that you can have extra trapdoors: your simulator, as we'll see when considering the proofs, might work using the discrete logarithm, but only has to provide a group element. This gives you a little more freedom, and when we proposed this idea, we were hoping it would allow us to provide more interesting features. For our construction, we are going to use mostly two tools. First, an encryption scheme; that's the basis of the first flow. You have the four classical algorithms: the setup; the key generation, which gives you a public and a secret key; the encryption algorithm, which uses the public key, the message, and some randomness (the randomness is important here because it's going to be your witness later); and the decryption algorithm, which takes the ciphertext and the decryption key and returns the plaintext. In terms of security, we consider the IND-CCA2 game: the ciphertext should remain secure even if the adversary has access to a decryption oracle. We are also going to use smooth projective hash functions. To recall the definitions a little: these were introduced in 2002 by Cramer and Shoup. You consider a family of functions defined over a domain X, and more precisely you consider a subset of this domain, which is what you call a language. That's a set of words that satisfy some property, and for which there exists a witness w that proves the word satisfies this property. You want the function to be evaluated in two ways: either through a private evaluation using a secret hashing key hk, where you compute a hash directly using hk on the word, or through a public evaluation that uses a projection key hp, which is defined from hk.
And here, if you use the public projection key, the word, and the witness, you should be able to compute a value H'. If everything goes well, H' should be equal to H, the hash value. In terms of correctness, the function using the secret hashing key can be computed on any word in the domain, while the projective evaluation can only be done on words in the language, because you need a witness, a proof that the word belongs to the language. In terms of security, you have smoothness: if your word is not in the language, then someone who does not know the secret key should not be able in any way to compute the hash value; it should be independent in an information-theoretic sense. And if the word is in the language but you don't know the witness, then, assuming some kind of subset membership assumption, the hash value is pseudo-random: you should not be able to compute it without breaking the underlying assumption. So here we are going to consider what we call structure-preserving SPHF, in the same spirit as structure-preserving signatures: we force lots of values to be group elements. The domain is a collection of group elements. The language, again, is a collection of group elements; you can think of solutions to a pairing-product equation. Now your words are group elements, and the really new part is that the witnesses are also group elements. This means that the hash value you compute is an element of the target group, and the projected hash is also a value in the target group. You might also want the hashing key to be a group element; that's not really needed for our applications because, as far as I know, there is no smooth projective hash function that uses an extra trapdoor on the secret hashing key, at least there is no reason to do so for now.
That could be an interesting feature to develop later. So why do we want to do that? Really, we want witnesses to be group elements. This gives us so much freedom that simulation will be much easier later on. The important part is that if witnesses are group elements, then we can use Groth-Sahai proofs: we can use witnesses coming from zero-knowledge proofs, so we can simulate witnesses to solve our problems. This also means that we are compatible with quasi-adaptive NIZK and many new features around that. What you have to remember from this talk is that now witnesses can have trapdoors. So let's give some abstraction; it's the only ugly slide in this talk, I promise. What we had before is a smooth projective hash function, and most of them can be adapted directly into a structure-preserving smooth projective hash function. The main point is the witness, which was a scalar before and is now an element of G2. To explain the interpretation: the word is an element of G1, a base group element raised to the witness. The witness, which was a scalar before, is now an element of G2, a generator raised to that scalar. For hk, I took the scalar definition here. hp is a public group element raised to the hashing key, so that doesn't change between the two worlds. The hash is the public word raised to the hashing key, while for a structure-preserving SPHF you do a pairing computation instead; and the projected hash is the same thing but using the witness and hp. What you can see is that here, on the hash line, you use the hashing key as a scalar, but in fact you could define hk itself to be a group element, and that would also work. So let's give a basic example so that the notation is easier to follow. We consider the classical language of valid Diffie-Hellman tuples. You have h and g, which are public elements, and your word is going to be a pair of elements of the form (h^r, g^r).
So before, your witness was the random scalar r; now it's going to be g2^r. You pick lambda and mu, which are two scalars. hp is the base elements raised to hk, so it's h^lambda times g^mu; you do exactly the same thing in the structure-preserving setting. The hash is the word, so (h^r, g^r), raised to hk. For the structure-preserving SPHF you do the same but with a pairing, and, as I said earlier, you could in fact take g2^lambda and g2^mu for hk and compute exactly the same value. And you have the projected hash, which is once again computed using hp and r, or hp and the new witness g2^r. So why are we doing that? Simply because this means that all the SPHFs, or at least a tremendous part of them, can be transformed into structure-preserving SPHFs, so the new primitive is not nonsensical. But we can also see that the reverse transformation may be hard: sometimes when you compute a value (think of a zero-knowledge proof), you get a group element whose discrete logarithm you don't necessarily know, so you cannot always move from a structure-preserving SPHF back to a plain SPHF. OK, so now we can instantiate that: we know that all the old SPHFs satisfy these new properties, and what we show in the paper is how to build new structure-preserving SPHFs. So what are we going to do with such a primitive? There are many generic constructions used to instantiate protocols in the UC framework. For example, oblivious transfer: I'm not going to repeat what I said earlier, but you have a user and a server, the user should learn only one line, and the server should learn nothing. I'm simplifying the construction a little, because there is one extra flow that's here just for the proof. The user is going to pick a line and do a UC commitment to this line; we just expect the UC commitment to be structure-preserving-SPHF-friendly.
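The scalar-witness version of this Diffie-Hellman example can be sketched in a few lines. This is a toy illustration only, assuming arithmetic in Z_p* with made-up parameters (p, h, g) instead of a pairing-friendly elliptic-curve group; all function names (hashkg, projkg, and so on) are mine, not the paper's.

```python
import random

# Toy parameters: Z_p* with a Mersenne prime standing in for a proper group.
p = 2**127 - 1
h, g = 3, 7  # public base elements

def hashkg():
    """Hashing key hk: two random scalars (lambda, mu)."""
    return (random.randrange(1, p - 1), random.randrange(1, p - 1))

def projkg(hk):
    """Projection key hp = h^lambda * g^mu."""
    lam, mu = hk
    return (pow(h, lam, p) * pow(g, mu, p)) % p

def word(r):
    """A word in the language of DH tuples: (h^r, g^r); the witness is r."""
    return (pow(h, r, p), pow(g, r, p))

def hash_priv(hk, w):
    """Private evaluation, works on any word: Hash = u^lambda * v^mu."""
    lam, mu = hk
    u, v = w
    return (pow(u, lam, p) * pow(v, mu, p)) % p

def hash_proj(hp, r):
    """Public evaluation, needs the witness: ProjHash = hp^r."""
    return pow(hp, r, p)

hk = hashkg()
hp = projkg(hk)
r = random.randrange(1, p - 1)
w = word(r)
assert hash_priv(hk, w) == hash_proj(hp, r)   # correctness on the language

bad = (pow(h, r, p), pow(g, r + 1, p))        # not a DH tuple
# Smoothness (illustrated): on a word outside the language the private hash
# differs from anything computable from hp, with overwhelming probability.
assert hash_priv(hk, bad) != hash_proj(hp, r)
```

The structure-preserving variant of the talk replaces r by g2^r and the exponentiations by pairings, which this scalar toy cannot show.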
So he does this UC commitment, keeps the decommitment information D, sends C, the commitment, to the server, keeps only the decommitment information, and erases everything else. For each line i, the server computes a hashing key and a projection key for the language "the commitment is a commitment to the line i", masks the line i with the hash corresponding to this language, and sends that together with hp_i to the user. For the line that was committed, the user is able to compute the projected hash because he possesses the witness, the decommitment information D, so he can compute the same value as the server and recover the value of the line. For the other lines, the hash is completely random thanks to the smoothness, so he is not able to compute the correct value. Another example is PAKE. Both users compute an encryption, an SPHF-friendly UC commitment, of their password, using some randomness, and keep the decommitment information. Now, some SPHFs have a nice property: to compute the projection key, you don't need to know the word beforehand. What we show in the paper is that those KV SPHFs (in the Katz-Vaikuntanathan sense) can be transformed into KV structure-preserving SPHFs; the property is directly inherited, so you can compute hp directly in the first flow. Both users send C_i and hp_i, the projection key for the language "the other user is sending a commitment to the password I am expecting". They keep the decommitment information and the hashing key, and erase everything else. And now they do an evaluation of the SPHF on the word they committed and on the language they are expecting, and if everything goes well, they once again obtain the same value.
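The PAKE flow just described (two commitments, two projection keys, each side multiplying a private hash of the peer's commitment by a projected hash of its own) can be sketched as follows. Same caveats as before: a toy Z_p* group, an ElGamal-style commitment standing in for a real SPHF-friendly UC commitment, and hypothetical names throughout.

```python
import random

p = 2**127 - 1
h, g = 3, 7  # public base elements

def commit(pw, r):
    """Toy ElGamal-style commitment to a password pw (a group element)."""
    return (pow(h, r, p), (pow(g, r, p) * pw) % p)

def hashkg():
    return (random.randrange(1, p - 1), random.randrange(1, p - 1))

def projkg(hk):
    lam, mu = hk
    return (pow(h, lam, p) * pow(g, mu, p)) % p

def hash_priv(hk, c, pw):
    """Hash of 'c commits to pw': treat (u, e/pw) as a DH tuple."""
    lam, mu = hk
    u, e = c
    v = (e * pow(pw, -1, p)) % p
    return (pow(u, lam, p) * pow(v, mu, p)) % p

def hash_proj(hp, r):
    return pow(hp, r, p)

pw = 12345  # shared password, encoded as a group element

# Alice's first flow: commitment plus projection key.
rA = random.randrange(1, p - 1); cA = commit(pw, rA)
hkA = hashkg(); hpA = projkg(hkA)
# Bob's first flow.
rB = random.randrange(1, p - 1); cB = commit(pw, rB)
hkB = hashkg(); hpB = projkg(hkB)

# Each side combines a private hash of the peer's commitment with a
# projective hash of its own commitment under the peer's projection key.
keyA = (hash_priv(hkA, cB, pw) * hash_proj(hpB, rA)) % p
keyB = (hash_proj(hpA, rB) * hash_priv(hkB, cA, pw)) % p
assert keyA == keyB  # same password on both sides: same session key
```

If the passwords differ, smoothness makes the two key values independent, which is exactly the conditional behavior described above.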
The last example of a generic transformation using SPHF in the UC framework: let's assume you want to send a message to someone who holds some credential. You want the property that this person can only obtain the message if he indeed has this credential, but you also want some kind of anonymity in the other direction, where the server does not learn whether the person in front of him possesses this credential or not. The user does a UC commitment to his credential and, once again, keeps only the decommitment information. The server does an evaluation of the smooth projective hash function on the language "this credential satisfies the property I am expecting", recovers the hash value, and hides the message using this hash value. The server then sends the projection key associated with the language together with the masked value. The user, possessing the witness, computes the projected hash, normally obtains the hash value that was used for masking, and then recovers the message, provided his credential indeed satisfies the property. All these examples have one strong requirement: we need SPHF-friendly UC commitments. There are many examples of UC commitments with decommitment information, but only one of them has the nice property that the decommitment information is a group element. So we are going to focus on the FLM 11 UC commitment. From a high level, this paper does a linear Cramer-Shoup encryption of some word M, and then, the important part, they do a Groth-Sahai proof of knowledge of the exponents of the randomness chosen in the encryption to achieve UC security. Thanks to the Groth-Sahai proof, they are able to equivocate their commitment if needed. So why are we interested in that?
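The masking step of this last construction can be sketched the same way: the server multiplies the message by the hash value, and the user divides the masked value by his projected hash. Again a toy Z_p* sketch under the same assumptions, with a plain DH tuple standing in for "a commitment to a valid credential".

```python
import random

p = 2**127 - 1
h, g = 3, 7  # public base elements

# User: a word in the language, with witness r (here a DH tuple standing
# in for "UC commitment to a credential satisfying the property").
r = random.randrange(1, p - 1)
w = (pow(h, r, p), pow(g, r, p))

# Server: draw a hashing key, derive the projection key, and mask the
# message with the private hash of the user's word.
lam, mu = random.randrange(1, p - 1), random.randrange(1, p - 1)
hp = (pow(h, lam, p) * pow(g, mu, p)) % p
H = (pow(w[0], lam, p) * pow(w[1], mu, p)) % p
msg = 424242
masked = (msg * H) % p          # server sends (hp, masked)

# User: recompute the hash from hp and the witness, then unmask.
Hp = pow(hp, r, p)
recovered = (masked * pow(Hp, -1, p)) % p
assert recovered == msg
```

If the word is outside the language (wrong credential), the mask is information-theoretically hidden by smoothness, and the server never learns which case occurred.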
So on one hand, the nice thing is that here the decommitment information is a group element; on the other hand, contrary to a classical UC commitment where you keep scalars in the end, here your commitment is not linear in the size of what you are committing to: you commit with one group element and you don't grow with the size of the database or the size of the password. So what do we achieve? For oblivious transfer, we obtain something that's linear in the size of the database. That may seem a little less efficient than the scheme from Asiacrypt 13, but in fact, in this case, the user's first flow is constant-size: it's only four elements in G1 and four elements in G2, and only the server has a flow that's linear in the size of the database. So in situations where the user has little computing power, this might provide an interesting trade-off. And we can also see that with a non-tailored example, just applying our framework directly to an existing scheme, we obtain something that's roughly equivalent to what already exists. Another interesting comparison is for PAKE. We compare to some existing schemes; here you have schemes that are UC-secure with adaptive corruptions, or one-round schemes. What we can see is that, except for the JR15 scheme, which is somewhat special, our scheme ends up being more efficient than what already exists. One has to know that JR15 does not really follow the same construction as the others: they use only quasi-adaptive NIZK proofs, not an SPHF approach. So one might expect our scheme to be improved a little, in the sense that here we use plain old Groth-Sahai NIZK proofs; if you use quasi-adaptive techniques, you might reduce a factor two to a factor one. And the main difference is that they manage to do the two evaluations in one; that's the extra element in G2 that we lose.
In the paper, we also show how to instantiate our scheme not only under SXDH but under any kind of matrix Diffie-Hellman assumption; that's mostly linear algebra. The idea of a matrix assumption is that you can express a DDH assumption, a DLIN assumption, or anything of that kind, as "I am in the span of a matrix or not". This allows us to give a framework for everything: we show a CCA2 commitment compatible with MDDH, we show how to instantiate FLM under MDDH, and then we can build a structure-preserving smooth projective hash function under MDDH. So let's try to sum up. First, we provide a generic transformation that keeps the security: this means that if the SPHF was secure under the DLIN assumption, then the corresponding structure-preserving SPHF is also secure under the DLIN assumption. We keep all the extra properties: if the SPHF was such that the projection key could be computed without knowing the word, then the associated structure-preserving SPHF can be computed the same way. The main point of this construction is that now we can use NIZK proofs as witnesses. This is really important because it means we can do more: our simulator is more powerful than before, and so we avoid some problems with the UC framework. The idea is that if we naively plug this framework into existing building blocks, we obtain efficient protocols. They are not always better than what already exists, but we manage to get roughly the same efficiency without tailoring the approach in any way. And the nice thing, because everything relies on linear properties, is that the construction can be transposed to matrix Diffie-Hellman assumptions, so we can adjust the security of our scheme to rely on whichever assumptions we want. Thank you. Okay, questions or comments? No questions, so let's thank the speaker again. Thank you.