Thanks, Neer, for the introduction. I'm going to talk about breaking the sub-exponential barrier in Obfustopia, and this is joint work with Sanjam Garg, Omkant Pandey, and Mark Zhandry. So, let me start with the notion of program obfuscation. Program obfuscation was introduced in the seminal work of Barak et al., and it is modeled as a compiler that takes a program P as input and outputs another program O(P) that preserves the functionality of the original program but hides all the implementation details. This security notion of hiding all the implementation details has been formalized in different ways, and in this talk I'll be focusing on the indistinguishability-based definition. Indistinguishability obfuscation, or IO for short, guarantees that for any two circuits C0 and C1 that compute the same functionality, the obfuscation of C0 is computationally indistinguishable from the obfuscation of C1. Starting from the first candidate construction given by Garg et al. in 2013, we now have several candidates of IO from assumptions on multilinear maps. Okay, the security guarantee provided by IO seems extremely weak, but somewhat surprisingly, IO plus other standard assumptions, such as one-way functions, has been used to construct several cryptographic primitives. To give you a few examples, IO has been used to construct functional encryption, deniable encryption, and non-interactive key exchange. It's been used to construct two-round multi-party computation protocols. It's been used to prove hardness of the complexity class PPAD. And it's also been used to construct trapdoor permutations. These are just a handful of examples, and there are several other cool applications of IO. But if we were to use IO to build cryptographic primitives or in certain applications, we would run into what we call the sub-exponential barrier. So let me explain what I mean by the sub-exponential barrier.
So intuitively, this sub-exponential barrier refers to the sub-exponential loss in the security reduction that is incurred in the construction of IO. Let me start with an observation: all known constructions of IO either require an exponential number of assumptions, which is essentially one assumption per pair of functionally equivalent circuits, or incur a sub-exponential loss in security if we base them on a small number of assumptions. And this is no coincidence; we strongly believe that this sub-exponential loss is inherent to the construction of IO, and it carries over to any application of IO as well. Furthermore, several applications of IO, including some of those I mentioned in the previous slide, require sub-exponentially hard indistinguishability obfuscation to prove their security. So for such applications, there are two sub-exponential losses: one in the construction of IO, and another from IO to that particular application. So the question that we ask in this work is this: is this sub-exponential loss inherent to constructing applications of IO as well? Or can we somehow circumvent this sub-exponential barrier for certain applications of IO? In this work, we show that certain applications of IO can in fact be based on a polynomially hard falsifiable assumption. The assumption that we use in this work is the existence of compact public-key functional encryption. And we know constructions of compact public-key functional encryption from polynomial hardness assumptions on multilinear maps, whereas all known constructions of IO require sub-exponential hardness assumptions on multilinear maps. The applications that we get in this work are, first, a construction of trapdoor permutations, and second, a construction of non-interactive key exchange for an unbounded number of parties without a trusted setup.
Prior to our work, the construction of trapdoor permutations required sub-exponentially hard indistinguishability obfuscation, and the construction of non-interactive key exchange with these two specific properties was known only under polynomial hardness of indistinguishability obfuscation. At this point, I would like to remark that it was shown in independent works by Ananth and Jain and by Bitansky and Vaikuntanathan that sub-exponentially hard compact public-key functional encryption already implies full-fledged indistinguishability obfuscation. Whereas in this work, we just rely on polynomially hard compact public-key functional encryption, which seems to be a quantitatively weaker security assumption when compared to IO. In this talk, I'll be focusing on just the second result, that is, how to construct a non-interactive key exchange protocol. For the sake of simplicity, I will assume a bound on the number of parties, and I will also assume that there exists a trusted setup. I encourage you to look into our paper for the construction of trapdoor permutations and for how to remove these restrictions of bounding the number of parties and assuming the existence of a trusted setup. Okay, so the outline of the rest of the talk is as follows. I'll start with the notion of functional encryption and give some intuition for its definition and security. I'll then give you a brief outline of the FE-to-IO transformation of Ananth and Jain and of Bitansky and Vaikuntanathan, and I'll explain why this approach incurs a sub-exponential loss in security. I'll then give you the key technique that we use to break this so-called sub-exponential barrier for certain applications of IO by modifying this FE-to-IO transformation approach. And then I will explain how to use this technique to construct a non-interactive key exchange protocol, okay? So let me start with functional encryption.
Functional encryption is just a generalization of public-key encryption that provides fine-grained access to data. In a functional encryption scheme, the ciphertext is generated using some public parameters, and there is a master secret key that is associated with this set of public parameters. The master secret key allows you to derive function keys for various functionalities. The correctness guarantee requires that decrypting a ciphertext encrypting some data D using a functional secret key for a functionality f allows you to learn f(D), the output of the functionality on the underlying data. And the security guarantee is that nothing apart from the output is leaked, okay? So now let me give you a brief outline of the FE-to-IO transformation of Ananth and Jain and of Bitansky and Vaikuntanathan. Let's say we have a circuit C that takes n bits of input and outputs n bits, and we want to give out an obfuscation of this circuit using functional encryption. The first step is to view this circuit as a full binary tree of depth n, where n is the number of bits of input that this circuit takes. The leaves of this binary tree are labeled with all strings of length n, starting from the all-zero string and ending at the all-one string, and the root is denoted by the empty string; let me call it epsilon. To evaluate the circuit on a particular input, you just traverse along the root-to-leaf path, where the leaf is given by the input that you want to evaluate the circuit on, compute the circuit at the leaf, and then output the value. So with this view of a circuit, let's see what the obfuscation of the circuit looks like. The obfuscation of the circuit consists of a bunch of functional secret keys, sk1 to skn, along with a final functional secret key skC and an initial ciphertext that encrypts the root epsilon. The final functional secret key skC implements the circuit that we want to obfuscate.
And the intermediate functional secret keys, of which there is one per level of this binary tree, implement the bit-extension functionality. Let me explain what this bit-extension functionality means. Suppose I have an encryption of an (i-1)-bit string, x1 to x(i-1), and I decrypt this using the secret key ski. Now I get two ciphertexts, one containing the extension of this input by the bit zero, and the other containing the extension of this input by the bit one. All of these intermediate secret keys implement this bit-extension functionality. So let us see how to evaluate this obfuscation on a particular input, let's say x. The first step is to take the ciphertext that encrypts the root and decrypt it using the first secret key sk1, and you will get two encryptions, one encrypting the bit zero and the other encrypting the bit one. Depending on the first bit of your input, you choose either the zero encryption or the one encryption, and you then recurse on this procedure using the second secret key. At the end of n decryptions, you get a functional-encryption ciphertext that encrypts your actual input x. Now you can use the final functional secret key, which just implements the circuit, and obtain the output of the circuit on this input x. But the way that I just described this construction does not give full-fledged IO, because the final functional secret key skC is not guaranteed to hide the circuit C. In order to hide the circuit, the approach encrypts the circuit using a symmetric key and then adds the symmetric key to the initial ciphertext. So let's try to get some intuition on why this approach incurs a sub-exponential loss in security if it has to give full-fledged obfuscation. Let's say we have two circuits, C0 and C1, that are functionally equivalent, and we want to prove that the obfuscation of C0 is computationally indistinguishable from the obfuscation of C1.
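To make this evaluation procedure concrete, here is a minimal Python sketch of the traversal, in which plain bit strings stand in for ciphertexts and the functional secret keys are ordinary Python functions. All names are illustrative and no actual encryption takes place:

```python
# Toy sketch of evaluating the FE-based "obfuscation" on an input x.
# A prefix string stands in for a ciphertext encrypting that prefix.

def make_obfuscation(circuit, n):
    """Return n bit-extension 'keys', a final 'key' implementing the
    circuit, and an initial 'ciphertext' for the root epsilon."""
    def bit_extend(prefix):
        # sk_i functionality: two ciphertexts, one per extension bit
        return (prefix + "0", prefix + "1")
    sks = [bit_extend] * n   # sk_1 ... sk_n, one per level of the tree
    sk_c = circuit           # sk_C implements the circuit itself
    initial_ct = ""          # "encryption" of the empty string epsilon
    return sks, sk_c, initial_ct

def evaluate(obf, x):
    """Walk the root-to-leaf path selected by the bits of x, then apply
    the final key to the leaf ciphertext, which 'encrypts' x itself."""
    sks, sk_c, ct = obf
    for i, bit in enumerate(x):
        ct0, ct1 = sks[i](ct)          # "decrypt" with sk_{i+1}
        ct = ct1 if bit == "1" else ct0
    return sk_c(ct)

# Example: a 3-bit circuit computing the parity of its input
circuit = lambda x: str(x.count("1") % 2)
obf = make_obfuscation(circuit, 3)
print(evaluate(obf, "101"))  # prints "0"
```

The real construction replaces `bit_extend` with an FE secret key for the bit-extension functionality, so an evaluator only ever sees ciphertexts along the single root-to-leaf path it follows.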
The obfuscation of C0 can be thought of as a full binary tree where C0 is being evaluated at every leaf node, and we want to change to an obfuscation of C1, where C1 is being evaluated at every leaf node. This is just a high-level overview. The approach that these works take is via a hybrid argument, where in the first hybrid they change the obfuscation of C0 to an intermediate circuit that evaluates C1 on the first leaf node, which is the first input, and C0 on the rest of the inputs. This change is possible by using the security of functional encryption; in fact, they just need a weaker notion of security, namely the indistinguishability-based security of functional encryption, to make this change. The next hybrid is to change to an intermediate circuit that evaluates C1 on the first two leaf nodes and C0 on the rest of the leaves. And this is done for every leaf, one by one, and at the end you get an obfuscation of C1, where C1 is being evaluated at every leaf node. So the number of hybrids required in this approach is equal to the total number of inputs, which is 2 to the n, and that is why this approach incurs a sub-exponential loss in security. Okay, so now we have understood why the previous approach incurs a sub-exponential loss. Let us look at the key technique that we use to break the so-called sub-exponential barrier for certain applications of IO. The main observation that we use is that the circuits that are usually encountered in IO proofs have a similar structure. Let me explain what I mean by this similar structure. In a typical IO proof, I have an obfuscation of C0 in one hybrid, and I want to change to an obfuscation of C1 in the next hybrid, such that C0 and C1 are functionally equivalent. By the security of indistinguishability obfuscation, this hybrid change is indistinguishable to an adversary.
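The leaf-by-leaf hybrid sequence just described can be sketched as follows. The circuits and names here are stand-ins of my own; the point is only that the number of hybrids equals the number of inputs:

```python
from itertools import product

# Hybrid argument sketch: switch from C0 to C1 one leaf at a time.
# Hybrid k evaluates C1 on the first k inputs (in lexicographic order)
# and C0 on the rest; k runs from 0 (pure C0) to 2^n (pure C1).

def hybrid(c0, c1, n, k):
    inputs = ["".join(bits) for bits in product("01", repeat=n)]
    switched = set(inputs[:k])
    return lambda x: c1(x) if x in switched else c0(x)

n = 3
c0 = lambda x: x.count("1") % 2
c1 = lambda x: (len(x) - x.count("0")) % 2  # functionally equivalent

# One FE-security step per adjacent pair of hybrids, hence the loss:
num_hybrids = 2 ** n
```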
Now if we look into these circuits C0 and C1 more closely, we realize that they are not only functionally equivalent, but have lots of similarities in their structure. To give you a concrete example, consider C1. C1 evaluates a new circuit C0-prime on one special input, and on the rest of the inputs it evaluates the same circuit C0. This C0-prime is constructed such that its output on the special input is the same as C0's output on the special input; this is just to ensure that C1 and C0 are functionally equivalent. But notice that C1 is evaluating the same circuit C0 on all but one input, and it just evaluates a new circuit C0-prime on the one special input. So if we look into the binary tree structure of this particular C1, we realize that it is evaluating C0-prime at one leaf node and C0 at all the other leaf nodes. So in order to change from an obfuscation of C0, where C0 is being evaluated at every leaf node, to an obfuscation of C1, it is sufficient to change the distribution at only one leaf, and this step can be realized just by using the polynomial hardness of functional encryption. And if you are wondering where we encounter such C0 and C1, the answer is in the punctured programming approach of Sahai and Waters. Usually in the punctured programming approach, we change from one circuit to another circuit that has an additional if statement: if the input is some hard-coded value, then you output some hard-wired value in the circuit, and on the rest of the inputs, you just perform the same computation as in the previous circuit. And for such hybrid changes, we can in fact realize the hybrid change by using polynomial hardness of functional encryption. So let us see how to use this technique to build non-interactive key exchange. In a non-interactive key exchange protocol, we have several parties and they wish to derive a shared key.
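The "additional if statement" shape from the punctured programming approach can be sketched like this (the names are hypothetical):

```python
# Punctured-programming shape of C1: identical to C0 everywhere except
# one hard-coded special input, where it outputs a hard-wired value.

def make_c1(c0, special_input):
    hardwired = c0(special_input)  # chosen so C0 and C1 stay equivalent
    def c1(x):
        if x == special_input:     # the extra "if" introduced in the hybrid
            return hardwired
        return c0(x)               # same computation as C0 on all other inputs
    return c1

# C1 and C0 compute the same function, but in the binary tree view they
# differ at just one leaf, so only one FE-security step is needed.
c0 = lambda x: x.count("1") % 2
c1 = make_c1(c0, "101")
```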
There is a public bulletin board, and the parties publish some public information to this bulletin board and retain some secret information with themselves. The key derivation algorithm takes all the published information, uses the secret information of a party, and derives a shared key. The correctness guarantee is that the key derived by every party is the same, and the security guarantee is that given just the public information, the shared key is indistinguishable from a random string. So let us take a look at the Boneh-Zhandry non-interactive key exchange protocol from indistinguishability obfuscation, which will serve as the basis for constructing it from polynomially hard functional encryption. In the Boneh-Zhandry non-interactive key exchange protocol, the public information is just a public key of a semantically secure encryption scheme, and the secret information is the corresponding secret key. The shared key is given by a pseudorandom function evaluated on the set of public parameters. In order to evaluate the pseudorandom function in a secure manner, so that the adversary does not learn this value, the parties take the help of a trusted party. This trusted party samples a pseudorandom function key, let me call it S, and constructs a program P. This program P takes as input the public parameters; first it computes the shared key by evaluating the pseudorandom function on the set of public parameters, and then it outputs the encryption of the shared key under each one of those input public keys. The trusted party obfuscates this program P and publishes it on the bulletin board. Now the actual parties involved in the scheme take this program, run it on the set of public parameters, and obtain the ciphertexts. Using their secret keys, they can each decrypt one of the ciphertexts, because party i knows the secret key ski for the public key pki, and thereby derive the shared key.
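To illustrate the data flow of this protocol, here is a toy Python sketch. Everything in it is a placeholder of my own: the "public-key encryption" is deliberately trivial (the public key equals the secret key, which is of course insecure), HMAC-SHA256 stands in for the puncturable PRF, and the program P is just a closure rather than an obfuscated circuit:

```python
import hashlib, hmac, os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keygen():
    sk = os.urandom(32)
    return sk, sk  # placeholder "PKE": pk = sk, purely to show the flow

def make_program(prf_key):
    """The trusted party's program P: derive the shared key as a PRF of
    all published public keys, then 'encrypt' it under each of them."""
    def P(pks):
        shared = hmac.new(prf_key, b"".join(pks), hashlib.sha256).digest()
        return [xor(shared, pk) for pk in pks]
    return P  # the real scheme publishes an obfuscation of P

# Setup: trusted party samples the PRF key S and publishes "O(P)"
P = make_program(os.urandom(32))

# Three parties each publish pk_i and keep sk_i
keys = [keygen() for _ in range(3)]
pks = [pk for _, pk in keys]

# Each party runs P on all public keys and decrypts its own ciphertext
cts = P(pks)
derived = [xor(cts[i], sk) for i, (sk, _) in enumerate(keys)]
assert derived[0] == derived[1] == derived[2]  # all parties agree
```

In the actual scheme, publishing an obfuscation of P is what lets the parties evaluate the PRF on their public keys without learning the PRF key S.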
The security of this construction is proved using the punctured programming approach of Sahai and Waters. At a high level, if you look into the security proof, the place where we use the indistinguishability obfuscation guarantee is in a hybrid where we change from a program P that is constructed honestly by the trusted party to a program P-prime that, on a special input X, that is, if the set of public keys pk1 to pkn is equal to the special input X, outputs some hard-coded value Z. On all other inputs, it computes the PRF value on the set of inputs and outputs the ciphertexts just as in the previous program P. So this fits well into our paradigm of basing security on a polynomial hardness assumption: if you look into the binary tree structure of this new program P-prime, we would just be changing the distribution at one leaf, and this can be realized by using polynomial hardness of functional encryption. To conclude, we identify a property that is shared by many applications of indistinguishability obfuscation that enables us to base security on polynomial hardness assumptions. But we haven't been able to obtain all applications of IO, not even all the applications that use the punctured programming approach. One such application is the non-interactive zero-knowledge protocol of Sahai and Waters; we haven't been able to base this on polynomially hard functional encryption. Another example is the deniable encryption scheme from the same paper; we haven't been able to do that either. It would be a nice open problem to base deniable encryption on polynomially hard functional encryption. In a follow-up work by Liu and Zhandry, they provide a simple interface for constructing applications from polynomial hardness assumptions by using the techniques that we developed in this work as well as a previous work. And that's it. Thank you for your attention. Thank you, Akshay. We have time for some questions.
Yeah, Brent? Yeah, just one. Do you have any kind of conjecture about, let's say, deniable encryption? Are there some applications which you think this won't apply to? Do you think it... So the... Yeah, let's just conjecture. About why NIZK doesn't work? That would be simple. Because to prove soundness, we need to show that for every X that is not in the language, you cannot generate a proof. The Sahai and Waters NIZK proof actually takes as input the statement X and a witness W for the relation, checks if (X, W) belongs to the relation, and if so generates a signature on the statement X. The signature is generated using a punctured PRF key. So in order to use our techniques, we would have to puncture the PRF key for every witness, and since the number of witnesses is actually exponential, we would incur an exponential loss in that setting. Is deniable encryption a similar problem, or is that more... Yes, deniable encryption, I think it's a similar problem. The problem is that the current proof that uses indistinguishability obfuscation does not seem to be amenable to our techniques. Maybe there's a smarter reduction which could be done. More questions? Do you think there is hope to base some of those applications you mentioned, like NIZK and deniable encryption, not directly on only polynomial hardness assumptions, but to do as was advocated in this paper by Mark Zhandry: to push all the sub-exponential hardness to some primitive which is safer than IO, and to only rely on polynomial hardness for the more involved primitive, such as functional encryption, using, for example, extremely lossy functions? Sorry, so the paper by Liu and Zhandry actually constructs a primitive called exploding obfuscation, which is based on polynomially hard functional encryption, and you can obtain the applications that we discussed in this work by just using this exploding obfuscation primitive.
So... Yes, but so far, for the other applications like NIZK, do you think there is hope to base them on a polynomial hardness assumption for functional encryption plus a sub-exponential or exponential hardness assumption? So for example, we can base NIZK on polynomially hard functional encryption plus witness encryption. But witness encryption requires a sub-exponential hardness assumption, in a similar manner to indistinguishability obfuscation. Okay. Yeah.