All right, thank you for the introduction. Hi everyone, welcome to the last talk of the conference, and thanks for sticking around. My name is Mohammad Hajiabadi, and I want to tell you about a joint piece of work with Sanjam Garg on a construction of trapdoor functions from the computational Diffie-Hellman assumption.

OK, so trapdoor functions: you have probably heard about them a lot during this conference, and they are a fundamental primitive in crypto. They were first introduced in a landmark paper by Diffie and Hellman in 1976, which developed the foundations of what we now call public-key cryptography. Another fundamental primitive in cryptography is public-key encryption, which was first realized in the famous RSA paper in the late 1970s, and was later rigorously defined by Goldwasser and Micali in 1982.

In order to set up the stage and notation for this talk, let me quickly go over these two notions. I'm sure all of you are familiar with them, but just to fix notation. Public-key encryption, as you all know, is given by three algorithms G, E, and D: the key generation, encryption, and decryption algorithms. The key generation algorithm gives us a public key and a secret key. You can use the encryption algorithm to encrypt a plaintext message m under a public key pk using some randomness r to get a ciphertext c, and you can use the decryption algorithm to decrypt the ciphertext if you have the right secret key. In terms of security, we have the basic notion of semantic security, which says that encryptions of any two plaintexts should be computationally indistinguishable.

Now, trapdoor functions are defined exactly like public-key encryption, with the only difference that the encryption algorithm, which we now call the evaluation algorithm, does not take as input any randomness. This means that the decryption algorithm, which we now call the inversion algorithm, can recover the entire input to the trapdoor function's evaluation algorithm.
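To make the interface difference concrete, here is a minimal sketch using textbook RSA as the trapdoor function, with tiny, completely insecure toy parameters chosen only for illustration. The point it demonstrates is that since evaluation takes no randomness, inversion recovers the entire input.

```python
# Toy TDF sketch: textbook RSA with insecure demo parameters.
# Evaluation is deterministic, so the trapdoor recovers ALL of x.

def tdf_gen():
    p, q = 61, 53                      # toy primes (never use in practice)
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))  # trapdoor: RSA inverse exponent
    return (n, e), d                   # (index key, trapdoor key)

def tdf_eval(ik, x):
    n, e = ik
    return pow(x, e, n)                # deterministic: no randomness used

def tdf_invert(ik, tk, y):
    n, _ = ik
    return pow(y, tk, n)               # trapdoor recovers the full input

ik, tk = tdf_gen()
y = tdf_eval(ik, 42)
assert tdf_invert(ik, tk, y) == 42     # the whole input x comes back
```

Note that `pow(e, -1, m)` (modular inverse via three-argument `pow`) requires Python 3.8 or later.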
And since we make use of no randomness, we cannot achieve semantic security, so we have to settle for something weaker. In particular, the one-wayness notion says that a randomly chosen function from the family should be one-way. OK, so as I mentioned earlier, a main distinctive feature of trapdoor functions is that no randomness is used in the evaluation algorithm.

In terms of the relationship between these two notions, we know by classical results that trapdoor functions imply the existence of public-key encryption. As for the other direction, we know that it's impossible with respect to black-box reductions. In particular, what this impossibility result says is that techniques that generically give us public-key encryption may not be sufficient for trapdoor functions if we are working with black-box reductions, which cover most of the techniques in crypto. I want you to remember this point, because I'll come back to it later.

All right. So you might ask: we have public-key encryption, and it's pretty useful, so why do we care about trapdoor functions? Are they a theoretical object that people want to build out of curiosity, or do we have applications of this notion? Let me motivate it a little bit. Suppose we have a trapdoor function, and imagine two users, Alice and Bob, where Alice has two index keys, IK1 and IK2, and Bob has both of these keys plus the trapdoor key for IK1. Alice sends two image points Y1 and Y2 to Bob, computed under the index keys IK1 and IK2 respectively, and she wants to convince Bob that these two image points correspond to the same input. The question is how Alice can do this. Does she need to provide some kind of proof for Bob to check this, or can Bob do it by himself? It turns out that since we are dealing with trapdoor functions, which make use of no randomness, this is easy for Bob to check himself. Namely, Bob can use his knowledge of TK1 to recover X1.
Then he can check whether F(IK2, X1) equals Y2 or not. This is not a toy example. In fact, this is the main reason behind the success of building CCA2-secure public-key encryption schemes in a black-box way from various forms of trapdoor functions. In contrast, when you think about the same situation with public-key encryption, this is difficult to do, because in general it is difficult to check whether two ciphertexts encrypt the same plaintext. You need non-interactive zero-knowledge tools to do this, and when you use NIZKs in a protocol, the protocol becomes non-black-box and will not be efficient. So this is one important property of trapdoor functions.

So far, we knew how to build trapdoor functions only from a very small set of assumptions, limited to factoring, DDH, and LWE. In fact, there is a very big gap from the set of assumptions that give us public-key encryption. In our work, we show how to build TDFs from the computational Diffie-Hellman assumption, a question that had been open for more than 30 years.

OK, so let me review the notion of CDH and the related notion of DDH. Both of these assumptions are defined with respect to a group G. The CDH assumption says that given g, g^x, and g^y, it is hard to compute g^(xy), where g is a random generator of the group and x and y are random exponents. The DDH assumption says that the joint distribution of g^x, g^y, and g^(xy) is pseudorandom.

You might ask why we care about building TDFs from the computational Diffie-Hellman assumption. The answer is very simple: because CDH is a weaker and more trustworthy assumption. In fact, we have examples of groups which are plausibly CDH-hard but are provably not DDH-hard. OK, so let me tell you a little bit about the challenges one faces when trying to build trapdoor functions from Diffie-Hellman-related assumptions.
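To make the two assumptions concrete, here is a small numeric illustration using the multiplicative group modulo a tiny prime (insecure toy parameters, for exposition only):

```python
# CDH vs DDH, illustrated in a toy group Z_101^* with generator 2 (order 100).
import secrets

p, g = 101, 2

x = secrets.randbelow(p - 1)
y = secrets.randbelow(p - 1)
gx, gy = pow(g, x, p), pow(g, y, p)

# CDH challenge: given (g, g^x, g^y), compute g^(xy).
# With the exponent x in hand this is easy; without it, it requires
# solving discrete log.
gxy = pow(gy, x, p)
assert gxy == pow(g, (x * y) % (p - 1), p)

# DDH instead asks to DISTINGUISH (g^x, g^y, g^(xy)) from
# (g^x, g^y, g^z) for an independent random z.
z = secrets.randbelow(p - 1)
real_tuple = (gx, gy, gxy)
fake_tuple = (gx, gy, pow(g, z, p))
```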
In general, this is not a trivial task, and the short answer for why is that all these Diffie-Hellman-related assumptions rely on the discrete-log problem, and we don't have a generic trapdoor for the discrete-log function. To see how this makes things more difficult, let's try to solve an easier problem: let's try to build a trapdoor function from DDH. From DDH, we know how to build public-key encryption, so let's try to de-randomize ElGamal encryption, which is a CPA-secure public-key encryption scheme based on DDH.

Remember that ElGamal works as follows. The secret key is a random exponent alpha, and the public key is g^alpha, where g is a generator. If you want to encrypt a group element m, you output g^r, together with the public key raised to the power r, multiplied by m. Now you can see that if you have the secret key alpha, you can easily recover the plaintext m, but you cannot recover r. And the reason we cannot recover r is not that we are not smart enough; we have a very good reason: recovering r is as hard as solving the discrete-log problem.

So the take-home message I want you to take from this slide is that if you want to design a trapdoor function based on Diffie-Hellman-related assumptions, you should not perform exponentiations in the evaluation algorithm, because for the exponentiation function we don't know a trapdoor.

With this intuition in mind, let me review how prior work managed to build a trapdoor function from the DDH assumption. This was first built by Peikert and Waters, and it was simplified by a later paper. I just realized they don't have the most updated version of my slides. OK. The trapdoor function works as follows. The injective index key of the TDF is g^M, where M is a random invertible matrix of exponents, and g^M means raising g to each entry of M: g to the first entry, g to the second entry, and so on.
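Here is a toy ElGamal sketch (tiny, insecure parameters, for illustration only) that makes the de-randomization obstacle visible: decryption recovers m, but r stays hidden behind a discrete log.

```python
# Toy ElGamal over Z_101^*: the secret key recovers m, but not r.
import secrets

p, g = 101, 2                      # toy group (insecure demo values)

alpha = secrets.randbelow(p - 1)   # secret key
pk = pow(g, alpha, p)              # public key g^alpha

def encrypt(pk, m, r):
    # Ciphertext: (g^r, pk^r * m)
    return pow(g, r, p), (pow(pk, r, p) * m) % p

def decrypt(sk, c):
    c1, c2 = c
    # Strip off pk^r = (g^r)^alpha to recover m. Note r itself is never
    # recovered: extracting r from c1 = g^r is exactly discrete log.
    return (c2 * pow(c1, -sk, p)) % p

c = encrypt(pk, 5, secrets.randbelow(p - 1))
assert decrypt(alpha, c) == 5
```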
And the trapdoor key is M^(-1). Now you can see that for the evaluation algorithm, if you have g^M and an input x, which is given to you as a bit vector, you can perform linear algebra in the exponent to compute g^(M x^T). If you have the trapdoor key M^(-1), then from the image point y you can very easily recover x bit by bit, where the i-th bit is given as g^(x_i).

When we want to prove that this trapdoor function is one-way, we need to rely on a rank indistinguishability property, which is implied by DDH. It says that the distribution of g^M is computationally indistinguishable from g^(M'), where M is a random invertible matrix and M' is a random low-rank matrix. This property can be proved based on DDH, but it cannot be proved based on CDH. The main reason is that to prove this property, one way or another we have to reason about a homomorphic property of the underlying hardcore function, and we don't have such a property based on CDH. So we really did not have any techniques that give us trapdoor functions based on CDH.

This is what we do in our work: we give the first construction of trapdoor functions from the computational Diffie-Hellman assumption. At a very high level, our methodology involves de-randomizing a specific class of public-key encryption schemes. So what I'm going to do is first tell you what this class of public-key encryption schemes is. We call them recyclable targeted key encapsulation schemes. We actually called it something else in the paper; I'm calling it this way in the talk. This notion combines properties from previous work: from a piece of work of Sanjam and Döttling last year, and from a paper of Bellare et al. from 2003. So I will tell you what this notion is, and then I will show you how we can build trapdoor functions from it.
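A toy sketch of this Peikert-Waters style TDF may help: evaluation is linear algebra "in the exponent", and the trapdoor is the inverse matrix. Tiny insecure parameters, a 2x2 matrix, and a bit-vector input, purely for illustration.

```python
# Toy DDH-style matrix TDF: ik = g^M, tk = M^{-1}, input x in {0,1}^2.
import secrets

p, g = 101, 2
q = p - 1                       # exponent arithmetic is mod the group order

def gen():
    # Sample a random invertible 2x2 matrix M over Z_q (retry until the
    # determinant is invertible mod q).
    while True:
        M = [[secrets.randbelow(q) for _ in range(2)] for _ in range(2)]
        det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % q
        try:
            di = pow(det, -1, q)
            break
        except ValueError:
            continue
    Minv = [[ M[1][1] * di % q, -M[0][1] * di % q],
            [-M[1][0] * di % q,  M[0][0] * di % q]]
    ik = [[pow(g, M[i][j], p) for j in range(2)] for i in range(2)]  # g^M
    return ik, Minv

def evaluate(ik, x):
    # y_i = prod_j (g^{M[i][j]})^{x_j} = g^{(Mx)_i}; no randomness used.
    y = []
    for row in ik:
        acc = 1
        for gij, xj in zip(row, x):
            acc = acc * pow(gij, xj, p) % p
        y.append(acc)
    return y

def invert(tk, y):
    # Apply M^{-1} in the exponent: prod_j y_j^{Minv[i][j]} = g^{x_i},
    # then read each bit off as g^0 = 1 or g^1 = g.
    bits = []
    for i in range(2):
        acc = 1
        for j in range(2):
            acc = acc * pow(y[j], tk[i][j], p) % p
        bits.append(0 if acc == 1 else 1)
    return bits

ik, tk = gen()
assert invert(tk, evaluate(ik, [1, 0])) == [1, 0]
```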
And I will refer you to the paper for how we build this notion from CDH.

OK. So let me tell you what a recyclable targeted KEM is. It's a kind of key encapsulation mechanism enhanced with some properties, so let's first review the notion of a KEM. A KEM scheme is defined exactly like a public-key encryption scheme, with the only difference that the encapsulation algorithm does not take as input any plaintext message m. It takes as input a public key pk and some randomness r, and it gives us a ciphertext c which is, in some sense, encapsulating a key e: if you have the right secret key, you can derive the value of e from c. In my talk, I'm always going to assume that e is a single bit. So this is the notion of a KEM.

Now let me tell you what the two extra properties mean. Let's start with the targeting property. It says that the input to the encapsulation algorithm contains two new values: a target index i and a target bit b. The output is the same as before, a ciphertext and a key value e. What happens is that in order to decrypt, not only should you come up with a valid secret key for pk, but that secret key should have the property that its i-th bit is b, where i and b were specified in the encapsulation phase. You might ask what happens if you have a secret key that is valid relative to pk, but whose i-th bit is not b. This brings me to the security notion: even if you have the secret key, if you are given such an inconsistently formed ciphertext ct, you cannot distinguish the true value of e from a totally random bit.

OK. Now, the recyclability property, at a very high level, says that the ciphertext output of the encapsulation algorithm does not depend on the given public key. It only depends on a public parameter, which is the same across all public keys.
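To pin down the syntax, here is a correctness-only toy instantiation of the recyclable targeted KEM interface. Everything in it (the hash-based pk derivation, the XOR-free masking) is made up for illustration and provides no security whatsoever; the actual CDH-based construction is in the paper.

```python
# Toy recyclable targeted KEM: correctness only, NO security.
import hashlib, secrets

def H(*parts):
    h = hashlib.sha256()
    for s in parts:
        h.update(s)
    return h.digest()

def gen(pp):
    sk = [secrets.randbelow(2) for _ in range(8)]  # secret key: a bit string
    pk = H(pp, bytes(sk))                          # pk derived from sk (toy)
    return pk, sk

def encap(pp, pk, i, b, r):
    # Recyclability: the ciphertext ct depends only on (pp, i, b, r),
    # NOT on pk. Only the encapsulated key e depends on pk.
    ct = (i, b, H(pp, r))
    e = H(pk, H(pp, r))[0] & 1                     # single-bit key
    return ct, e

def decap(pp, sk, ct):
    i, b, mask = ct
    if sk[i] != b:                                 # targeting: sk's i-th bit must be b
        return None
    pk = H(pp, bytes(sk))                          # recompute pk from sk
    return H(pk, mask)[0] & 1

pp = b"public-parameters"
pk, sk = gen(pp)
r = secrets.token_bytes(16)
ct, e = encap(pp, pk, 3, sk[3], r)
assert decap(pp, sk, ct) == e
```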
Now you can see that I have specified the outputs of E as E1 and E2, and I didn't give pk as an input to E1, because E1's output is independent of pk. OK, so these are the two properties, and this concludes the notion of a recyclable targeted KEM.

This picture summarizes the two properties I just told you about. Whenever we want to build a trapdoor function from any assumption, one way or another we should exhibit two different deterministic ways of reaching the same output. The way this is enabled in our setting is that you can derive the value of e in two different ways: either by applying the algorithm E2 to a public key pk and a randomness value r, or by applying the algorithm D, with a consistent secret key, to a ciphertext that was formed using that same randomness value r.

With this intuition in mind, let me give you a version of our trapdoor function which allows us to recover the first bit of the input. The index key of my trapdoor function consists of two KEM ciphertexts, and the trapdoor key contains the underlying randomness values. The input to the trapdoor function is a secret key sk of the KEM scheme. We first compute the corresponding public key, and then, depending on the value of the first bit of sk, we apply the algorithm D to either ct1 or ct'1. Remember that I told you we can apply the algorithm D to a ciphertext that is consistent with the secret key. The part I've marked in orange comes from the property I told you about in the previous slide. Now, if you want to invert, you have all the randomness values: you can form both E2 values and check for a match. This allows us to recover the first bit of sk with probability one half, but that is not a problem, because you can repeat this process in parallel to amplify correctness.
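The basic first-bit construction above can be sketched as follows. The toy KEM inside it is made up (hash-based, correctness only, no security); and where the talk uses a single-bit e amplified by parallel repetition, this sketch uses a 16-byte e so that a single match identifies sk's first bit directly.

```python
# Self-contained toy sketch of the basic first-bit TDF construction:
# ik = two targeted ciphertexts (one per value of sk[0]), tk = the
# randomness; the inverter recomputes both E2 values and matches.
import hashlib, secrets

def H(*parts):
    h = hashlib.sha256()
    for s in parts:
        h.update(s)
    return h.digest()[:16]

def kem_pk(pp, sk):      return H(pp, bytes(sk))
def E1(pp, i, b, r):     return (i, b, H(pp, r))   # ciphertext: pk-independent
def E2(pp, pk, i, b, r): return H(pk, H(pp, r))    # the encapsulated key e
def D(pp, sk, ct):
    i, b, mask = ct
    return H(kem_pk(pp, sk), mask) if sk[i] == b else None

def tdf_gen(pp):
    r0, r1 = secrets.token_bytes(16), secrets.token_bytes(16)
    ik = (E1(pp, 0, 0, r0), E1(pp, 0, 1, r1))  # ct1 targets sk[0]=0, ct'1 targets sk[0]=1
    tk = (r0, r1)
    return ik, tk

def tdf_eval(pp, ik, sk):
    # Apply D to whichever ciphertext is consistent with sk[0].
    return kem_pk(pp, sk), D(pp, sk, ik[sk[0]])

def tdf_invert(pp, tk, y):
    pk, e = y
    r0, r1 = tk
    # Recompute both candidate E2 values and check for a match.
    if e == E2(pp, pk, 0, 0, r0):
        return 0
    if e == E2(pp, pk, 0, 1, r1):
        return 1

pp = b"pp"
sk = [secrets.randbelow(2) for _ in range(8)]
ik, tk = tdf_gen(pp)
assert tdf_invert(pp, tk, tdf_eval(pp, ik, sk)) == sk[0]
```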
So that is not a problem, but we do have a problem: when we try to prove security, we will not be able to. The main reason is that the security property of the KEM that I told you about does not guarantee that we can hide the value of the target bit b. This is the main challenge we face. The fix we give for this problem is very easy, and it's based on the idea of putting a random bit in the input of the trapdoor function. That random bit is placed in the output, in the position where we cannot apply the algorithm D.

In a little more detail: the key generation algorithm is exactly as before. The only difference is that in the evaluation algorithm I have a new input bit b1, and the way I form the output is that I apply the algorithm D to one of the two ciphertexts (I can apply it to only one of them), and in the position where I cannot apply D, I put the bit b1, which comes from the input. This magically solves the problem, and if you want to know how it solves it, I will refer you to the paper.

OK, so I gave you a construction that allows us to recover the first bit of the input. You can use the same idea to recover all the bits of the input in a secure manner.

So let me conclude. We gave a construction of trapdoor functions from the computational Diffie-Hellman assumption. As a couple of open problems: we would like to know whether we can build more advanced forms of TDFs, like lossy trapdoor functions, from the CDH assumption. And a second open problem, which I think is very interesting, is whether we can build or separate trapdoor permutations from CDH or DDH. Thank you very much.