Thanks a lot, Mariana. I'm going to talk about our work revisiting the cryptographic hardness of finding a Nash equilibrium. This is based on joint work with Sanjam Garg and Omkant Pandey.

Let's start with the notion of Nash equilibrium. Nash equilibrium, as you may all be well aware, is an extremely popular solution concept in game theory. Informally speaking, a set of strategies is said to be a Nash equilibrium if no individual player has an incentive to deviate unilaterally from their current strategy to get a higher payoff. To illustrate, let's consider a toy example of a game between two players with the following payoff matrix. From the informal description I just gave, it's easy to see that (A, A) is a Nash equilibrium for this game. If we allow the players to choose a probability distribution over the set of strategies available to them, then Nash in fact proved that every finite game has such an equilibrium. A question that has intrigued complexity theorists for a very long time is: given a game, can a Nash equilibrium be efficiently computed?

Before moving on, let's review what we know about the complexity of finding a Nash equilibrium. Nash's theorem places the problem in the complexity class TFNP, the set of all NP search problems where every instance of the problem is guaranteed to have at least one solution. Since Nash's theorem shows that every finite game has a Nash equilibrium, this puts finding a Nash equilibrium in TFNP. A consequence of this is that it cannot be NP-hard unless NP equals coNP.

An important step in understanding the complexity of finding a Nash equilibrium was taken by Papadimitriou in 1994, who showed that it belongs to the complexity class PPAD. PPAD is the subclass of TFNP of problems that reduce to a special problem called End-of-Line. So let's see what End-of-Line means. We have a large directed graph where every node has in-degree as well as out-degree at most one, and basic graph theory tells us that such a graph is a disjoint union of cycles and lines. This large graph is succinctly described by a function f. What does this function do? Given a node y and a special symbol "next", it outputs the next node in the graph, which is the out-neighbor of y; this can be thought of as the successor of the node. Similarly, given y and a special symbol "previous", it outputs the previous node x. We are also given a source node, which has in-degree 0 and out-degree 1. The goal, given this source and the succinctly described function f, is to find an end of the line, which is either another source or a sink. Such a node is guaranteed to exist by a parity argument on this graph.

Through a series of subsequent works, it was established that finding a Nash equilibrium is not only in PPAD but is in fact PPAD-complete. So understanding the complexity of PPAD is extremely important for algorithmic game theorists. Problems in PPAD are believed to be hard to solve in polynomial time, but proving unconditional hardness results is extremely challenging. This is similar to the state of affairs with assumptions in cryptography, and it was in fact suggested by Papadimitriou that cryptographic assumptions could perhaps be used to prove hardness of PPAD.
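To make the End-of-Line interface just described concrete, here is a minimal Python sketch (my own illustration, not from the talk): a toy instance whose graph is a single line, together with the brute-force walk that the succinct representation is meant to make infeasible on a 2^k-node graph.

```python
# Minimal sketch of an End-of-Line instance (toy, exponentially small).
# The graph here is the single line 0 -> 1 -> ... -> N-1, so the unique
# solution is the sink N-1; real instances describe a 2^k-node graph
# via a small circuit f, which is what makes the search hard.

N = 8  # toy universe of nodes

def f(y: int, direction: str):
    """Succinct description of the graph: successor/predecessor of node y."""
    if direction == "next":
        return y + 1 if y < N - 1 else None   # out-neighbor (None at a sink)
    if direction == "previous":
        return y - 1 if y > 0 else None       # in-neighbor (None at a source)
    raise ValueError("direction must be 'next' or 'previous'")

def find_end_of_line(source: int):
    """Brute-force walk from the source; infeasible when the line has 2^k nodes."""
    y = source
    while f(y, "next") is not None:
        y = f(y, "next")
    return y  # a sink, i.e., an end of the line

assert find_end_of_line(0) == N - 1
```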
And in fact, for some superclasses of PPAD this turned out to be true, because cryptographic assumptions like factoring and collision-resistant hashing were used to prove their hardness. But such a result was not known for the most interesting class, which is PPAD itself, until very recently, when Bitansky, Paneth, and Rosen showed that quasi-polynomially hard IO and sub-exponentially hard one-way functions imply PPAD hardness. This is proved via two lemmas. The first lemma shows that quasi-polynomially hard IO and sub-exponentially hard one-way functions can be used to construct hard instances of a problem called Sink-of-Verifiable-Line, which I'll describe very soon. The second lemma shows that Sink-of-Verifiable-Line reduces in polynomial time to End-of-Line, which is the complete problem for PPAD. These two lemmas together imply the theorem.

Before moving on, let's review what Sink-of-Verifiable-Line is, because it will be important for our work as well. It is similar to End-of-Line in the sense that we again have a large directed graph, but instead of a disjoint union of cycles and lines we have a single directed line, and we are given the source node. Recall that an End-of-Line instance was succinctly described by a function f that acted as both a predecessor and a successor circuit. Here we just have a successor circuit that gives the next node in the graph. In addition to the successor circuit, we also have something called a verification circuit. What does it do? Given a node label x and an integer n, it outputs 1 if and only if x is the n-th node from the source. Given the source, the successor circuit, and the verification circuit, the goal is to find the end of the line. It was shown by Bitansky et al., using ideas from an earlier work of Abbott et al., that Sink-of-Verifiable-Line reduces in polynomial time to End-of-Line.

Before moving on, let's review the notion of program obfuscation. An obfuscator is a compiler that takes a program P as input and outputs another program that preserves functionality but hides all implementation details. This is formalized through the notion of virtual black-box (VBB) obfuscation, which says that access to the obfuscated code is equivalent to black-box access to the same program. This notion seemed too strong to be achievable, and Barak et al. in their work in 2001 showed that there exist certain functionalities that are not VBB-obfuscatable. So they considered a natural relaxation called indistinguishability obfuscation. Indistinguishability obfuscation requires that for any two circuits or programs that are functionally equivalent, an obfuscation of one is indistinguishable from an obfuscation of the other. The first candidate construction was given by Garg, Gentry, Halevi, Raykova, Sahai, and Waters in 2013.

Before moving on, let's try to get some intuition for how obfuscation and one-way functions could be used to construct a hard instance of Sink-of-Verifiable-Line, a problem that seems to have nothing to do with any crypto that we know. To get the main intuition, let's assume for now that VBB obfuscation exists, and let's ignore the verification circuit. We need to construct a hard instance of Sink-of-Verifiable-Line, which defines a directed line. The instance is as follows: the i-th node from the source is given by the node label i followed by a PRF computation on i, that is, (i, PRF_s(i)). You can think of this PRF value as some sort of signature on the message i.
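Here is a minimal Python sketch (my own, hypothetical) of the Sink-of-Verifiable-Line interface, instantiated with a trivially insecure line over the integers, just to pin down the syntax of the successor circuit S and the verifier V.

```python
# Sketch of the Sink-of-Verifiable-Line (SVL) interface.
# Trivial, insecure instantiation: the line is 0 -> 1 -> ... -> 2^k - 1
# and labels are the integers themselves. In a hard instance the labels
# are unguessable and S, V are obfuscated circuits.

k = 4
L = 2 ** k  # number of nodes on the line

source = 0

def S(x: int) -> int:
    """Successor circuit: next node on the line (the sink maps to itself)."""
    return min(x + 1, L - 1)

def V(x: int, n: int) -> bool:
    """Verification circuit: 1 iff x is the n-th node from the source."""
    return x == n  # trivial here; in general V checks a succinct certificate

# The goal: find the sink, i.e., a label x with V(x, L - 1) == 1.
x = source
for _ in range(L - 1):
    x = S(x)
assert V(x, L - 1)
```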
The source is given by (0, PRF_s(0)) and the last node is given by (2^k - 1, PRF_s(2^k - 1)). Here k can be thought of as a security parameter, so there are 2^k nodes in total. Since we are ignoring the verification circuit, we now need to define the successor circuit. Let's see what the successor circuit does. It has the PRF seed s hardwired in it, and it takes as input (x, sigma), a candidate node label. It first checks whether sigma is a valid signature on the message x, that is, whether sigma = PRF_s(x). If that is not the case, it outputs the special symbol "invalid". If the signature is valid and x < 2^k - 1, that is, x is not the last node, it outputs the next node, which is (x + 1, PRF_s(x + 1)). And if x is the last node, then you have found the end of the line, and it outputs a special symbol indicating this. The hard instance consists of the source node together with an obfuscation of this successor circuit.

Let's see why, if the obfuscation satisfies the VBB security guarantee, this constitutes a hard instance. What does the successor circuit do? It just computes some PRF values, checks validity, and outputs another PRF value. Recall that the VBB guarantee says that giving access to the obfuscated circuit is equivalent to giving black-box access to the same program, and we all know that giving black-box access to a PRF is equivalent to giving black-box access to a random function. In particular, the signature on the last node is a random value, and no adversary can guess this random value except with negligible probability. But we have made a very strong assumption, namely that VBB obfuscation exists, and VBB obfuscation of pseudorandom functions has strong limitations. The main contribution of Bitansky et al. is that even if we only assume sub-exponentially hard IO, this still constitutes a hard instance of SVL.

A natural question to ask is: can we base PPAD hardness on simpler assumptions? Let's view things in a larger perspective. IO is now a central tool in cryptography, leading to new and improved constructions of almost all known primitives. A drawback is that all known constructions of IO either require an exponential number of assumptions or incur an exponential loss in security, and this seems to be inherent to IO; in fact, several applications of IO require sub-exponentially hard IO to base their security on. So a natural larger goal would be: can we base applications of IO on simpler assumptions, or, more generally, do applications of IO require the full power of IO?

For Nash equilibrium, which is the focus of this talk, we first improve Bitansky et al.'s result by showing that polynomially hard IO and polynomially hard one-way permutations suffice for PPAD hardness. And towards the larger goal of basing applications of IO on simpler assumptions, we show that we can use polynomially hard compact public-key functional encryption and one-way permutations to base PPAD hardness on. Notice that polynomially hard compact public-key functional encryption is a falsifiable, polynomially hard assumption. Using a recent work of Bitansky et al., we can in fact weaken our assumption to just single-key secret-key functional encryption and public-key encryption.

So let's concentrate first on the first theorem, where we show that polynomially hard IO can be used to base PPAD hardness. Before moving on, let's try to see why Bitansky et al.'s construction incurs an exponential loss in security.
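Before that, here is a minimal Python sketch (my own illustration, with HMAC standing in for the hardwired PRF; the seed and symbols are hypothetical) of the successor circuit just described.

```python
import hmac, hashlib

K = 16                          # security parameter: the line has 2^K nodes
LAST = 2 ** K - 1
SEED = b"hardwired-prf-seed"    # hypothetical hardwired PRF seed s

def prf(x: int) -> bytes:
    """Stand-in for PRF_s(x); HMAC-SHA256 used purely for illustration."""
    return hmac.new(SEED, x.to_bytes(8, "big"), hashlib.sha256).digest()

def successor(x: int, sigma: bytes):
    """The successor circuit: verify the 'signature', then step along the line."""
    if not hmac.compare_digest(sigma, prf(x)):
        return "invalid"              # sigma is not a valid signature on x
    if x < LAST:
        return (x + 1, prf(x + 1))    # next node on the line
    return "end-of-line"              # the sink has been found

source = (0, prf(0))    # the hard instance: source + (obfuscated) successor
x, sigma = source
print(successor(x, sigma))            # -> (1, PRF_s(1))
```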
Recall that the hard instance is as follows: the i-th node is (i, PRF_s(i)), and the successor circuit is as before. The main crux of the proof using IO is as follows. Suppose we could indistinguishably change the successor circuit so that it has an extra if-condition: on x = 2^k - 1, the last node, irrespective of whether the signature sigma is valid or not, it always outputs "invalid". If we are able to change the successor circuit to this one in an indistinguishable manner, then we are essentially done, because no matter what, an adversary who is given the modified successor circuit cannot win this game. This change involves several steps.

The first step is to change the successor circuit so that, on some random u, where u is an integer in 1 to 2^k - 1, on input x = u, irrespective of whether the signature sigma is valid or not, it always outputs "invalid". This can be thought of as cutting the directed line at u, because given this circuit you cannot go from u to u + 1; in other words, we puncture the successor circuit at u. This step is a straightforward application of the punctured-programming approach of Sahai and Waters.

Let's see the next step. Now we have the successor circuit punctured at u, and the next step is to increase the punctured interval to include u + 1 as well. For this step, we use the fact that since the circuit is punctured at u, it never outputs the PRF computation on u + 1. But recall that this PRF value is used to check the validity of the signature at that node: on input (x, sigma) with x = u + 1, the circuit has to check whether sigma equals PRF_s(u + 1). The key observation of Bitansky et al. is that this check can be done in a sort of encrypted form, and this encrypted form can be hybridized away to some random junk value so that no signature sigma passes the test. This is a very high-level overview, and it can be formalized using the punctured-programming approach; I won't go into the details of how this is done. So, using the fact that the circuit does not output PRF_s(u + 1), we can increase the punctured interval to include u + 1 as well. This is repeated for u + 2, u + 3, and so on, until we have the guarantee that the circuit does not output the PRF computation on the last node, and hence we can increase the punctured interval to include 2^k - 1 as well. Then we are essentially done, because on x = 2^k - 1, whether sigma is valid or not, the circuit always outputs "invalid".

But this argument needs exponentially many hybrids, because at every step we are constrained to increase the punctured interval by only one, and this seems inherent to the approach. So let's see if we can do better, and it turns out that we can: our main idea is to increase the punctured interval by an exponential number of points at every step. The key insight that enables this is to consider a richer structure for the node labels. Let's see how. We have this SVL instance, and now let's consider what the i-th node label is. Consider i and its binary representation i_1, i_2, ..., i_k, so i can be thought of as a k-bit binary string. The i-th node label is i followed by a PRF computation on i_1, the first bit, then a PRF computation on the first two bits i_1 i_2, and so on, up to a PRF computation on the entire string i.
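Here is a minimal Python sketch of these richer node labels (my own illustration, again with HMAC as a stand-in PRF and a hypothetical seed); the prefix-checking successor anticipates what is described next.

```python
import hmac, hashlib

K = 3                           # toy security parameter: 2^3 = 8 nodes
SEED = b"hardwired-prf-seed"    # hypothetical hardwired PRF seed s

def prf(prefix: str) -> bytes:
    """Stand-in for PRF_s evaluated on a bit-string prefix."""
    return hmac.new(SEED, prefix.encode(), hashlib.sha256).digest()

def label(i: int):
    """Node label: i together with a signature on every prefix of its bits."""
    bits = format(i, f"0{K}b")                    # i as a k-bit string i_1 ... i_k
    return bits, [prf(bits[: j + 1]) for j in range(K)]

def successor(bits: str, sigs):
    """Successor circuit: check the signature on every prefix, then step."""
    if any(not hmac.compare_digest(s, prf(bits[: j + 1]))
           for j, s in enumerate(sigs)):
        return "invalid"
    i = int(bits, 2)
    return label(i + 1) if i < 2 ** K - 1 else "end-of-line"

print(successor(*label(3)))   # node 011 -> node 100 with fresh prefix signatures
```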
Notice that if we just had the PRF computation on the entire string i, this would be exactly equivalent to Bitansky et al.'s approach. Instead of a single signature, we are considering k signatures, one on every prefix of i. And now the successor circuit checks the signature on every prefix, and if every prefix signature is valid, it outputs the next node. Let's see how this richer structure enables us to base the construction on polynomial hardness.

For this, let's consider the case k = 3, so we have 8 nodes, and let's go through the proof steps. The first step is the same as in Bitansky et al.: we puncture at some random point, say 011. We now extend the punctured interval, but this time by a larger amount. Let's see how. Assume we have the circuit punctured at 011, and we have the guarantee that it does not output the PRF computation on 100. Notice that this PRF value is used to check the validity of the input signature at the node 100, and using the check-the-signature-in-encrypted-form approach, we can extend the punctured interval to include 100 as well. But notice that we haven't gained anything yet, because this is exactly what Bitansky et al.'s approach also does.

But that's not the end of it. We also have the guarantee that, given the successor circuit punctured at u, it does not output the PRF computation on the prefix 10. This PRF value is used to check the validity of the second prefix at both 100 and 101, because they share 10 as a common prefix. Using the same ideas as I described earlier, we can increase the punctured interval to include both of these nodes. We have gained something now, because we can grow the punctured interval by two nodes instead of just one. And that's still not the end of it. We also have the guarantee that the circuit does not output the PRF value on the prefix 1, and notice that all the subsequent nodes share the same first bit, which is 1; this PRF value is used to check the validity of the input signature on all of these nodes. So we can in fact use similar techniques to increase the punctured interval to include all these nodes in just one shot. More generally, we can extend the punctured interval at all the prefixes that are, in some sense, never output by the punctured circuit. This technique of increasing the punctured interval at an exponential number of points in every step later also led us to construct trapdoor permutations from the polynomial hardness of IO.

In the remaining time, let's see how our hard instance follows from polynomially hard compact functional encryption. Let's review the notion of functional encryption. Functional encryption generalizes public-key encryption in the following sense. We have some data in encrypted form, and the secret keys now correspond to certain functionalities; say the secret keys correspond to F1, F2, F3, and F4. The correctness guarantee is that given the secret key for a functionality and a ciphertext encrypting some data, you can learn the output of the functionality on the underlying data. The security guarantee is that nothing else about the data is leaked. So let's see the main ideas behind our second theorem.
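Here is a minimal Python sketch of the functional-encryption syntax just reviewed (my own illustration with placeholder, insecure "encryption"; all names are hypothetical, and a real scheme would guarantee that a key for F reveals only F(data)).

```python
# Sketch of the functional-encryption (FE) syntax: Setup, KeyGen, Enc, Dec.
# Toy, insecure placeholders, purely to illustrate the interface.

import json

def setup():
    mpk, msk = "mpk", "msk"        # placeholder master public/secret keys
    return mpk, msk

def keygen(msk, F):
    return ("sk", F)               # secret key tied to the functionality F

def encrypt(mpk, data):
    return json.dumps(data)        # placeholder "ciphertext"

def decrypt(sk_F, ct):
    _, F = sk_F
    return F(json.loads(ct))       # correctness: Dec(sk_F, Enc(x)) = F(x)

mpk, msk = setup()
sk_sum = keygen(msk, lambda xs: sum(xs))   # key for the functionality F1 = sum
ct = encrypt(mpk, [1, 2, 3])
assert decrypt(sk_sum, ct) == 6            # learn F1(data), and nothing else
```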
So the main idea is to simulate the effect of obfuscation using compact functional encryption. One might think that we could use the FE-to-IO transformations of Ananth and Jain and of Bitansky and Vaikuntanathan, but these incur an exponential loss in security. An intuitive reason is the following. Recall that for IO we need to argue that for any two functionally equivalent circuits C0 and C1, an obfuscation of C0 is indistinguishable from an obfuscation of C1. The FE-to-IO transformation changes from evaluating C0 at every input to evaluating C1 at every input, and this has to be done one input at a time; since the number of inputs is exponential, this leads to an exponential loss in security.

This seems like a non-starter, because we wanted to base PPAD hardness on the polynomial hardness of compact functional encryption. But the question we ask again is: does this application of IO require the full power of IO, or can we make do with some weakened notion of IO? It turns out that a weakened notion suffices for this application. The key insight is that, instead of changing the distribution of the obfuscation on every input, if we can change the distribution at only a polynomial number of input points, then we can show hardness of PPAD. And at a high level, the polynomial number of input points at which we want to change the obfuscation is given by theorem one: recall that theorem one shows that we can base PPAD hardness on polynomially hard IO, and the proof technique we used was to change the distribution at certain points, where the number of points was guaranteed to be polynomial. This gives exactly the polynomial number of points at which we need to change the distribution. Realizing this idea incurs several challenges, which we solve in the paper; I won't have time to cover all of them.

The techniques that we developed seem generic enough that we can base further applications of IO on polynomially hard compact FE. Let me tell you some of them. In a joint work with Mark Zhandry, we construct trapdoor permutations from polynomially hard compact functional encryption; prior results were known only from sub-exponentially hard IO. We construct non-interactive key exchange for an unbounded number of parties from polynomially hard compact public-key functional encryption, where the prior result was known from polynomially hard IO. And we can construct adaptively secure FE against unbounded collusions from single-key selectively secure compact FE. I'm sure that there will be many more applications. Thank you.