Hello, my name is Satya. Today I'm going to discuss our paper on new constructions of hinting PRGs, one-way functions with encryption, and more. This is joint work with Rishab Goyal and Brent Waters. Let's start with a brief history. In 2017, Cho and others introduced a concept called laconic oblivious transfer, to make oblivious transfer more communication efficient. They used really cool techniques in that paper to construct laconic OT. Later on, Döttling and Garg abstracted out these techniques into a beautiful primitive called chameleon hash with encryption. This primitive and its variants then found tons of applications, like identity-based encryption, registration-based encryption, trapdoor functions, designated-verifier zero knowledge, CPA-to-CCA transformations, trapdoor hash functions, and so on. That's a ton of applications. As it turns out, all these applications can be constructed based on a very few related primitives, like one-way function with encryption, hash with encryption, signatures with encryption, and so on. These primitives are syntactically pretty similar, and some of them are even known to imply each other. Now we may ask: why are these primitives so powerful that they have so many applications? The reason is that they have a witness-encryption flavor to them. We know witness encryption is quite powerful, so these primitives are powerful too. Since these primitives are so powerful, we certainly need more efficient constructions for them, and that's precisely what we do in this paper. We concentrate on two primitives: one-way function with encryption, and hinting PRGs. Even though we concentrate only on these two, we believe our techniques would be helpful in constructing other primitives as well. I would also like to mention that there are other papers that also work towards improving the efficiency of this framework.
For example, Garg and Hajiabadi proved that one-way function with encryption implies trapdoor functions, and later on three more papers worked towards improving the efficiency of trapdoor functions. In particular, these works try to improve the public key size and the output size of the trapdoor functions. As I mentioned earlier, we work towards improving the efficiency of one-way function with encryption, and our efficiency improvements lead to trapdoor functions with shorter public keys. And that's it for the history part. Now let's see what a one-way function with encryption actually is. As the name says, there is a one-way function component: a function F that takes an input X and outputs a value Y. There is an encryption component as well: an encryption algorithm that encrypts a message, and a decryption algorithm that decrypts a ciphertext. But what are the keys to these algorithms? The keys actually come from the one-way function: Y acts as the encryption key and X acts as the decryption key. Correctness says that if X is a preimage of Y, then decryption works. Well, that looks pretty similar to public-key encryption. What makes it different is that the encryption algorithm also takes an index i and a bit b as input, and decryption works only if X is a preimage of Y and the ith bit of X equals b. So not every preimage of Y works; only specific preimages of Y work. For security, first, F has to be one-way, and second, F has to be smooth, which means that if X is sampled from the uniform distribution, then the corresponding Y should also resemble the uniform distribution. For the security of encryption: if the adversary has a value X, and the ciphertext is encrypted with respect to the image of X, which is Y, but with respect to the opposite of the ith bit of X, then the ciphertext should look uniform to the adversary.
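To make the syntax concrete, here is a toy sketch of the OWFE interface. It is completely insecure and purely illustrative (a hash-based mask stands in for the real encryption component; it is not any construction from the paper), but it shows the shape of the algorithms and the conditional correctness: decryption succeeds exactly when x is a preimage and its ith bit equals b.

```python
import hashlib

def f(x_bits):
    """One-way function component (toy: just a hash of the bits)."""
    return hashlib.sha256(bytes(x_bits)).digest()

def enc(y, i, b, msg):
    """Encrypt msg under image y, index i, bit b (toy: hash-based mask)."""
    pad = hashlib.sha256(y + bytes([i, b])).digest()[:len(msg)]
    return (i, b, bytes(m ^ p for m, p in zip(msg, pad)))

def dec(x_bits, ct):
    """Decrypt with preimage x; succeeds only when x_i equals the bit b."""
    i, b, masked = ct
    if x_bits[i] != b:
        return None                     # not the right kind of preimage
    pad = hashlib.sha256(f(x_bits) + bytes([i, b])).digest()[:len(masked)]
    return bytes(m ^ p for m, p in zip(masked, pad))

x = [1, 0, 1, 1]
y = f(x)
assert dec(x, enc(y, 2, x[2], b"hi")) == b"hi"      # x_2 matches b: works
assert dec(x, enc(y, 2, 1 - x[2], b"hi")) is None   # x_2 != b: fails
```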
It essentially states that if the adversary doesn't have the right decryption key, then the ciphertext should look uniform. Now let me discuss a cool construction of this primitive based on the DDH assumption. This construction was given by Döttling and Garg. Here, the public parameters consist of a group generator g and 2n group generators arranged in the form of a matrix, where n represents the input length of the one-way function. Suppose we have an input X. To compute the one-way function, we pick one generator from each column of the matrix. How do we pick it? From the jth column, we pick the generator that corresponds to the jth bit of X; that's going to be g_{j,x_j}. We then take the product of all these generators to compute the output value Y. Now let's see how to encrypt a message. Here the encryption key is the image Y, and we encrypt the message with respect to an index i, a bit b, and randomness ρ. To encrypt, you raise every generator in the public parameters to the randomness ρ, and you give out all these entries as part of the ciphertext, except for the (i, 1−b) element. So you have 2n−1 of these entries in the ciphertext. You also give out Y^ρ times the message as part of the ciphertext. Now let's see how to decrypt. Here the decryption key is a preimage X such that x_i = b. The decryptor picks one generator from each column of the ciphertext, depending on the bits of X, and multiplies all these generators. Since X is a preimage of Y, what you get is Y^ρ, and given Y^ρ, you can easily obtain the message from Y^ρ times the message. Now suppose X is a preimage of Y, but the ith bit of X equals 1−b. What happens then? In that case, the decryptor can't multiply all the generators needed, because one block of the ciphertext is missing. And that's the basis for security.
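The DDH-based construction just described can be sketched as follows, over a tiny toy group (the parameters p = 1019, n = 8 are made up purely so the code runs; real parameters would be cryptographically sized). Note how encryption deliberately leaves the (i, 1−b) slot empty, the "missing block," and how decryption recombines the remaining slots into Y^ρ.

```python
import random

p, q = 1019, 509               # toy safe prime p = 2q + 1; subgroup order q

def rand_elem():               # random element of the order-q subgroup
    return pow(random.randrange(2, p - 1), 2, p)

n = 8                          # input length of the one-way function
gens = [[rand_elem(), rand_elem()] for _ in range(n)]   # g_{j,0}, g_{j,1}

def f(x):
    """f(x) = prod_j g_{j, x_j}  (mod p)."""
    y = 1
    for j in range(n):
        y = y * gens[j][x[j]] % p
    return y

def enc(y, i, b, m):           # m encoded as a group element
    rho = random.randrange(1, q)
    # raise every generator to rho, EXCEPT the (i, 1-b) slot -- the
    # missing block that the security proof hinges on
    c = [[pow(gens[j][t], rho, p) if (j, t) != (i, 1 - b) else None
          for t in (0, 1)] for j in range(n)]
    return c, pow(y, rho, p) * m % p

def dec(x, ct):                # needs a preimage x of y with x_i = b
    c, e = ct                  # (if x_i = 1-b, c[i][x[i]] is the missing None)
    y_rho = 1
    for j in range(n):
        y_rho = y_rho * c[j][x[j]] % p      # recombine to y^rho
    return e * pow(y_rho, -1, p) % p        # m = e / y^rho

x = [random.randrange(2) for _ in range(n)]
y, m, i = f(x), rand_elem(), 3
assert dec(x, enc(y, i, x[i], m)) == m      # right bit: decryption succeeds
```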
The security proof crucially depends on the fact that the ciphertext is missing this block. As it turns out, the constructions of many other related primitives proceed in a very similar fashion, and even their security proofs rely on the fact that the ciphertext is missing a block. That's what we call the missing block framework. Recall this picture from a few slides ago: we had tons of applications from the literature, all relying on a very few related primitives. As it turns out, the constructions of all these primitives rely on the missing block framework. So the technique is quite powerful, but there's a problem with it: because of this technique, the ciphertext becomes quite long. That also means that if we use this one-way function with encryption to construct trapdoor functions, the resulting trapdoor functions would have a long public key. So it's better to look for other frameworks and techniques to build these primitives, and that's exactly what we do in this work. We develop some new techniques to build one-way function with encryption, and we also show that our techniques are helpful in getting better constructions for hinting PRGs as well. Now let's compare the efficiency of our techniques with that of the missing block framework. Here are the time complexities of the DDH construction. Moving on to our constructions, our first construction is based on the factoring assumption. As you know, we want to deviate from the missing block framework because that framework gives a long ciphertext, and that's precisely what we optimize. With the rest of the complexities remaining the same, our encryption algorithm and ciphertext size are much better than in the DDH construction. But since we are relying on the factoring assumption, our group sizes are bigger, so there is some trade-off here. We also extend the same techniques to pairing-based groups under the DBDH assumption, which is a pairing-based assumption.
There, too, we optimize encryption time and ciphertext size, but this time the complexities of the other algorithms are actually higher, so there is some trade-off here as well. We also have a construction without pairings, but its efficiency is slightly weaker than that of the pairing-based construction, so I'm not discussing it here. There's no clear winner; it all depends on what you want to optimize. All right, so let's finally move on to the construction part. For the rest of the talk, we will see how to construct one-way function with encryption based on the factoring assumption. The assumption states that, given the product of two large primes, you can't easily factor it. In this construction, the public parameters consist of an RSA modulus N, which is a product of two large primes, and a group generator G for the group Z_N^*. So basically we are dealing with the group Z_N^*, where N acts as the group description and G acts as the group generator. The public parameters also consist of 2n random large primes arranged as a matrix; again, n represents the input length of the one-way function. Remember, earlier we had 2n random generators in the DDH construction; here we have 2n random large primes instead. That's the difference. To compute the one-way function on input X, from each column of the matrix we pick one element, depending on the bits of X, and multiply those. So that's the product of the e_{j,x_j} values, and we exponentiate G with this product to get the output value Y. Remember, earlier we multiplied a few generators in the DDH construction; here we multiply a few exponents instead. That's the difference here. Now suppose you want to encrypt a message with respect to encryption key Y, index i, bit b, and randomness ρ. The ciphertext looks something like this.
Remember, in the DDH construction we exponentiated every generator in the public parameters with the randomness ρ and gave out 2n−1 elements, missing one block. Here it's exactly the opposite: we only give out the (i, b) entry and omit the rest of the blocks. That's where our efficiency kicks in: our ciphertext contains only two elements, and our encryption time is also much lower. For decryption, we get a preimage X such that x_i = b. The decryptor exponentiates the ciphertext with the e_{j,x_j} values, just like when evaluating the one-way function, but ignores the (i, b) entry because that's already part of the ciphertext. Clearly this is going to equal Y^ρ, and given Y^ρ, you can decrypt the message from Y^ρ times the message. Now let's move on to the security part. As we know, we have to prove three security properties: one-wayness, smoothness, and security of encryption. Let's proceed with one-wayness first. Suppose the adversary gets the public parameters and some image Y. We need to prove that the adversary cannot compute any inverse of Y. What we prove is that if the adversary can compute an inverse of Y, then it can also break the RSA assumption. Let's prove it. Suppose Y is sampled by first sampling X and setting Y = F(X). Now say the adversary returns some value Z such that F(Z) = Y. We set up the parameters in such a way that each image Y has a lot of inverses, so with high probability Z is not going to be equal to X. Say the ith bit of X is not equal to the ith bit of Z. Given that both F(X) and F(Z) equal Y, we have the first equation here. By moving e_{i,z_i} to the opposite side, we get the second equation. Now we use something called Shamir's trick. I'm not going into the details of the trick.
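For the curious, here is a minimal sketch of Shamir's trick with toy numbers (the modulus N = 3233 and exponents are made up for illustration): from a value z satisfying z^e = g^f mod N with gcd(e, f) = 1, extended-gcd coefficients a·e + b·f = 1 let you assemble g^a · z^b, which is an e-th root of g, exactly the step that turns two distinct preimages into an RSA solution.

```python
from math import gcd

def shamir_trick(z, g, e, f_exp, N):
    """Given z^e = g^f (mod N) with gcd(e, f) = 1, return an e-th root of g."""
    assert gcd(e, f_exp) == 1
    # extended gcd: track a with old_r = a*e + (...)*f until old_r = 1
    old_r, r, old_a, a = e, f_exp, 1, 0
    while r:
        qt = old_r // r
        old_r, r = r, old_r - qt * r
        old_a, a = a, old_a - qt * a
    A = old_a                          # coefficient of e
    B = (1 - A * e) // f_exp           # coefficient of f (exact division)
    # (g^A * z^B)^e = g^(A*e) * g^(B*f) = g^(A*e + B*f) = g
    return pow(g, A, N) * pow(z, B, N) % N

# toy check: N = 3233 = 61 * 53, so phi(N) = 3120; we plant a genuine
# e-th root of g and derive z from it, then recover a root via the trick
N, e, f_exp, g = 3233, 7, 20, 5
d = pow(e, -1, 3120)                   # only used to MANUFACTURE the instance
z = pow(pow(g, d, N), f_exp, N)        # z^e = g^f (mod N)
assert pow(shamir_trick(z, g, e, f_exp, N), e, N) == g
```

Negative coefficients are fine here: three-argument `pow` with a negative exponent (Python 3.8+) computes the modular inverse, which exists since g and z are units mod N.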
All you need to know is that, using the trick, from an equation of the second kind you can compute G^{1/e_{i,z_i}}. That means we can solve an RSA challenge for generator G and prime e_{i,z_i}. So if the adversary breaks the one-wayness property, it can also break the RSA assumption. Now let's move on to the security-of-encryption part. This property says that if the adversary does not have the right decryption key, then the ciphertext looks random. Here's the structure of a ciphertext. The RSA assumption states that, given a generator H and some exponent E, computing H^{1/E} is hard. If we apply the same assumption with H = G^{ρ·e_{i,b}} and E = e_{i,b}, then the RSA assumption states that computing G^ρ is hard. Suppose we assume a slightly stronger assumption, the phi-hiding assumption; we can do that. Then not only is computing G^ρ hard, G^ρ also has high entropy. If G^ρ has high entropy, then certainly Y^ρ also has high entropy, which means the second part of the ciphertext looks random. That's our security. Now let's move on to the smoothness property. The property says that if X is sampled from the uniform distribution, then the resulting Y also resembles the uniform distribution. For these types of functions, generally in cryptography, we prove this property by first proving that the function is 2-universal and then invoking the leftover hash lemma, which implies that the function is smooth. Unfortunately, this kind of proof doesn't work here. Why doesn't the traditional proof work? It's because the good old statement says something like this: the function "product of the e_{j,x_j} values mod T" is 2-universal if, one, T is prime, and two, the e_{j,b} values are sampled uniformly. Let's see if this statement applies here. Here we have the product of the e_{j,x_j} values modulo φ(N), that is, the order of the exponent group. So the modulus φ(N) is certainly not prime.
The first condition is not satisfied. Also, we sample the e_{j,b} values as random large primes, and primes are certainly not uniform over T, so even the second condition fails. So we have two issues with the proof; let's see how to solve them one by one. The first issue is that we are dealing with a modulus T = φ(N), which is a composite number. Since we want a prime modulus, let's prime-factorize T into r_1, r_2, and so on up to r_k, where each r_i is a prime. Then let's break the original function into k components: the product mod r_1, the product mod r_2, and so on up to the product mod r_k. Obviously, you can combine these k components back into the original function by using the Chinese remainder theorem, so these k components are just a different way of representing the original function. If we can prove that each of these k functions is smooth, then obviously the original function is smooth. Since each of the r_i values is prime, for each of these functions the first condition is satisfied. So the first issue is solved. Now let's move on to the second issue: the e_{j,b} values are not sampled uniformly; they are random primes. You know, primes have a bizarre distribution; we can't really prove much about primes directly. So what to do? Fortunately for us, there is a cool theorem which says that if you sample a large random prime and reduce it modulo r, you get a distribution close to uniform over Z_r^*. Once we have this theorem, both conditions are satisfied: the modulus is prime, and each of the values is close to uniform. So that's cool. This beautiful theorem is called the prime number theorem for arithmetic progressions. It was proved way back in the 1800s and is famous in the math literature. To the best of our knowledge, something like this has not been used in cryptography before. I also have to note that this theorem only applies when the modulus r is actually large.
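The phenomenon behind the theorem is easy to observe empirically. This toy sketch (range and sample count chosen arbitrarily, far below cryptographic sizes) draws random primes and checks that their residues modulo a small r land only on the units Z_r^*, with every unit residue class showing up:

```python
import math
import random
from collections import Counter

def is_prime(m):
    """Deterministic Miller-Rabin for odd m (these bases suffice well
    beyond the range we sample from)."""
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2; s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = x * x % m
            if x == m - 1:
                break
        else:
            return False                        # witness found: composite
    return True

def random_prime(lo, hi):
    while True:
        c = random.randrange(lo, hi) | 1        # random odd candidate
        if is_prime(c):
            return c

r = 12                                          # small modulus; units: 1,5,7,11
counts = Counter(random_prime(10**6, 10**7) % r for _ in range(2000))
units = {t for t in range(r) if math.gcd(t, r) == 1}
assert set(counts) <= units                     # primes only hit unit residues
assert all(counts[u] > 0 for u in units)        # and they hit every unit class
```

The counts come out roughly equal across the unit classes, which is the "close to uniform over Z_r^*" behavior the proof needs for large r.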
So we have to deal with the functions with small modulus somewhat differently, and that's what we're going to do now. Let's break the proof into two cases: the first case, where the modulus r_i is large, and the second case, where the modulus r_i is small. In the first case, the prime number theorem applies, and the e_{j,x_j} values are close to uniformly random modulo r_i, so you can prove that the function mod r_i is close to 2-universal. Actually, we prove something slightly different: we prove that a modified function is close to 2-universal, and then we apply the leftover hash lemma and prove that the function is smooth. In the second case, the value r_i is small, which means that this entire function value, which is at most r_i, is also going to be small. We were able to prove that this function value is so small that it does not affect the distribution of the original value Y by much: when you want to prove something about the distribution of Y, this small function value only acts as a small noise relative to the overall distribution of Y. For the proof of the first case, we relied on statistical arguments and the leftover hash lemma, whereas for the second case, we relied on computational arguments; we used the factoring assumption for this proof. The overall proof works by combining the statistical and computational arguments coherently. Now let's finally move on to the one-way function with encryption construction based on pairing assumptions. The idea of this construction is pretty similar to the factoring-based construction. Here, too, the public parameters consist of a group generator G and a bunch of e_{i,b} values, where n again represents the input length of the one-way function. Remember, in the factoring-based construction these e_{i,b} values were random primes. That's not the case here: these e_{i,b} values are actually correlated. In fact, we choose them by sampling some random value α and setting the e_{i,b} values to be consecutive, like this.
So e_{1,0} is going to be α+2, e_{1,1} is going to be α+3, and so on; in general, e_{i,b} = α + 2i + b. Now if you want to evaluate the one-way function on input X, the evaluation proceeds the same way: from each column of the matrix we pick one entry depending on the bits of X, multiply these values to get the product of the e_{j,x_j} values, and exponentiate G with this value. If you expand this, you get G to the power of the product of the (α + 2j + x_j) terms. Actually, I want to mention something here. Remember what the security assumption was in the factoring-based construction: given H and E, it's hard to compute H^{1/E}. Since we're using similar techniques here, we also need a similar assumption: given H and E, we require that it's hard to compute H^{1/E}. But that assumption is not true here, because these are prime-order groups, where 1/E modulo the group order is easy to compute. So what we do is keep these E values secret, so that computing H^{1/E} might be hard. That means we don't give out α in the public parameters; we don't give out any of the matrix entries as part of the public parameters. But then how do you compute Y? You need α to compute Y, right? So we include G^α, G^{α²}, and so on up to G^{α^n} as part of the public parameters, and now the assumption states that, given these values, it's hard to compute G^{1/α}. Given these values, you can expand the polynomial, the product of the (α + 2j + x_j) terms, in the exponent, and compute Y from the public parameters. Now let's see how to encrypt a message. Say you want to encrypt with respect to encryption key Y, index i, bit b, and randomness ρ. The ciphertext format again looks pretty similar to the factoring-based construction: it's G^{ρ·e_{i,b}} and Y^ρ times the message.
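The expand-in-the-exponent step of evaluation can be sketched as follows, over a toy group with made-up parameters (and indices starting at zero for simplicity): the evaluator expands the coefficients of the polynomial ∏_j (t + 2j + x_j) modulo the subgroup order, then recombines the published powers G^{α^k}, never touching α itself.

```python
import random

p, q = 1019, 509            # toy safe prime p = 2q + 1; subgroup order q
G = 4                       # 4 = 2^2 generates the order-q subgroup
alpha = random.randrange(1, q)   # the secret; the evaluator never sees it
n = 6
powers = [pow(G, pow(alpha, k, q), p) for k in range(n + 1)]  # G^(alpha^k)

def evaluate(x):
    """Compute Y = G^(prod_j (alpha + 2j + x_j)) using only G^(alpha^k)."""
    coeffs = [1]                              # polynomial in t (t = alpha)
    for j, xj in enumerate(x):
        c = (2 * j + xj) % q
        coeffs = [0] + coeffs                 # multiply by t ...
        for k in range(len(coeffs) - 1):
            coeffs[k] = (coeffs[k] + c * coeffs[k + 1]) % q   # ... plus c_j
    y = 1
    for k, ck in enumerate(coeffs):
        y = y * pow(powers[k], ck, p) % p     # recombine in the exponent
    return y

x = [random.randrange(2) for _ in range(n)]
prod = 1
for j in range(n):                            # direct check, knowing alpha
    prod = prod * (alpha + 2 * j + x[j]) % q
assert evaluate(x) == pow(G, prod, p)
```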
Remember, in the missing block framework we gave out 2n−1 elements as part of the ciphertext and missed one block. Here we do the opposite: we only give out that one would-be-missing block and omit the rest. That's where we get our efficiency: we have only two elements in the ciphertext. Now suppose you want to decrypt the ciphertext with the decryption key X, where X is a preimage of Y. Decryption again works pretty similarly to the factoring-based construction, but we have a problem here: the decryptor doesn't know the e_{j,x_j} values, because those are not given as part of the public parameters, so it can't exponentiate like before. Here we use a pairing trick. The ciphertext, instead of containing Y^ρ, contains the pairing value e(G, Y)^ρ, and the decryptor works by pairing the first element of the ciphertext with G raised to the product of the remaining e_{j,x_j} values, which it can compute from the public parameters just like in evaluation. Since F(X) = Y, this is equal to e(G, Y)^ρ, and once the decryptor has this element, it can easily recover the message from the second element of the ciphertext. In our paper we also give a way to solve this problem without pairings, but I'm not going into those details here. As you can see, the techniques for the factoring construction and the pairing construction are pretty similar; we just need to make a few modifications here and there to make them work. And that's all I want to discuss about the constructions. Let's move on to the results part. We implemented our constructions and measured the runtimes, and here's a comparison at 128-bit security. We optimized for encryption time and ciphertext size, and as you can see, on the parameters we tried to optimize, our constructions work pretty well. The DDH construction's encryption time is 0.14 seconds, whereas for our pairing-based construction it's 0.002 seconds; that's 70 times faster.
And the DDH ciphertext size is 32.7 kilobytes, whereas our ciphertext size is 0.67 kilobytes, so that's roughly a 50-times improvement. That's pretty good. The time taken by the other algorithms is slightly higher than in the DDH construction, I agree, so there's a trade-off here, but it's a reasonable one: you can choose which construction to use based on the application. Let me finally conclude the talk. In this talk, we discussed one-way function with encryption. We saw that previous papers use the missing block framework, and we proposed a framework different from the missing block framework, which gave us efficient ciphertexts. In the paper, we also extended the same techniques to hinting PRGs, which led to hinting PRGs with shorter public parameters. We finally evaluated the performance, and we believe the techniques can be extended to other related primitives as well. It would be cool to have more techniques and frameworks for these primitives. And with that, let me conclude the talk. Thanks for listening until the end. Here's the eprint version of the paper.