So, hi, I am Akshay Degwekar. This is Structure versus Hardness through the Obfuscation Lens. It is joint work with Nir Bitansky and Vinod Vaikuntanathan. So, very structured algebraic problems like discrete log are the bread and butter of cryptography. Most of our modern cryptography, especially the public-key kind, is based on these problems. And in this talk, we would like to understand: is this inherent? So, first of all, we know that this algebraic structure is great for functionality. Moreover, we could say that this structure has been pivotal in the development of cryptography itself. For example, public-key encryption really became possible when we assumed the hardness of number-theoretic problems like factoring. Similarly, the first zero-knowledge proofs we had were for quadratic residuosity. And more recently, as we have assumed hardness of problems on lattices and so on, we have a host of fantastic new applications such as fully homomorphic encryption. Okay. So, it is evident that this structure is great for functionality, but it does not come for free, because algebraically structured problems are a bit easier to solve. And this gives us a trade-off between hardness and algebraic structure. On one end, we have SAT. SAT is extremely hard; at the same time, it is very unstructured. Unfortunately, we don't know how to base any crypto on the hardness of SAT alone. Then we have assumptions in Minicrypt. These have somewhat more structure. And finally, we come to Cryptomania, which is what we will be interested in for this talk. These consist of primitives of the public-key kind. Most of these are based on a handful of assumptions such as QR, DDH, and so on. All these assumptions are extremely algebraic. It turns out that we know some surprising quantum and subexponential-time algorithms for these problems.
So, these problems are easier in a different sense as well: they are unlikely to be NP-complete. The reason is that they lie in these lower, more structured complexity classes like NP ∩ coNP and SZK. SZK is the class of problems that have statistical zero-knowledge proofs. For now, think of it as a small complexity class; I'll define it later. So, the first natural question here is, how do we go about capturing this algebraic structure? In this talk, we make the choice that structure is equated with hardness in low complexity classes. Okay. So, why is this a good idea? First of all, we know that most of our assumptions actually lie in these two classes. Moreover, they lie in these classes because they have some deep global properties. For example, factoring lies in NP ∩ coNP because the factorization is unique. And quadratic residuosity lies in SZK because of its random self-reducibility. So, we do actually want to capture these kinds of properties. And finally, we know that some of this structure actually implies crypto. Ostrovsky showed that any average-case hard language in SZK actually implies one-way functions. So, we do want to understand these connections better; maybe they are tighter. Okay. So, this brings us to the main question of this talk: does crypto actually require these structured assumptions? Sometimes it turns out that the answer is yes. In some cases, the cryptographic primitives themselves imply hard problems in these classes, in which case the fact that the assumptions have to be structured is very natural. For example, fully homomorphic encryption implies hard problems in SZK. And some very special kinds of public-key encryption schemes and one-way functions also imply hard problems in NP ∩ coNP. But this is not understood very well. Let's take the case of public-key encryption. We don't actually know if public-key encryption implies hard problems in these classes.
Given that public-key encryption is usually based on assumptions which imply hard problems in these classes, it would seem to be the case, but we don't actually know. Similarly for other primitives like functional encryption, oblivious transfer, IO, and so on. Okay. So, this is the main question of this talk: which cryptographic primitives require complexity-theoretic structure? Basing cryptography on minimal assumptions is a major goal in cryptography, and understanding the complexity-theoretic implications of these primitives helps shed light on what the minimal assumption has to be. Again, for FHE, we know that it implies hard problems in SZK. So, it has to be based on problems, like lattice problems, which also lie in SZK. On the other hand, with one-way functions, we can dream: we could hope to base them on NP-hardness alone. Great. So, in this work, we try to collect evidence that structure may not actually be that necessary. How do we go about doing it? Let's think of the simple case of showing that one-way functions do not imply hard structured problems. The best way to do this would be to do it unconditionally, which would be showing something like: even if P were actually equal to SZK, one-way functions would still exist. Unfortunately, this is too much to ask for, because this would in particular imply proving P not equal to NP. So, we go for the next best thing, which is to prove limitations on our techniques, an approach which was pioneered by Impagliazzo and Rudich. Okay. So, let me tell you about commonly used techniques in crypto. Black-box constructions are very pervasive in all of crypto. In this case, let's think about how we would construct a hard language in NP ∩ coNP using one-way functions in a black-box way. A black-box construction has two components.
The first is a construction, which is a language in NP ∩ coNP defined using the one-way function, and which only uses input-output access to the one-way function. And additionally, we have a security proof, which says that any adversary which decides the language can be used, again only with input-output access, to invert the one-way function. So, it turns out that we can rule out such reductions. In particular, Rudich showed that one-way functions do not imply hard problems in NP ∩ coNP in such a black-box way. Right. So, what do we know about these black-box separations? When it comes to understanding the relationship between two cryptographic primitives, we know quite a bit. The landscape is quite well understood about how different primitives are related to each other in a black-box way. On the other hand, when it comes to the complexity-theoretic implications of cryptographic primitives, less is known. Basically, we know that one-way functions do not imply hard problems in NP ∩ coNP, and they do not imply average-case hard problems in SZK. So, we would like to understand these better. For example, does public-key encryption imply hard problems in NP ∩ coNP? How about IO? Does it imply hard problems in SZK? With that, we come to our results. So, what do we show? At a high level, this is how the world looks right now. There are some primitives which actually imply hard problems in NP ∩ coNP and SZK, and there are one-way functions, which don't. We show that public-key encryption does not imply hard problems in NP ∩ coNP and SZK, and the same for OT. What we actually show is that IO along with one-way functions does not imply hard problems in either of these two classes. Proving separations for IO is great, because it lets us infer the same for all the primitives which are implied by IO in a black-box way. Okay.
And that list keeps going. Some remarks. First of all, isn't IO very non-black-box? I guess thanks, Amir, for giving away the punchline. It turns out that while most constructions do use IO in a non-black-box way, they do so in a fairly restricted manner. Most constructions first construct some primitive using the one-way function in a black-box way, and then they obfuscate it. Asharov and Segev give a framework of oracle-aided circuits, circuits which have one-way function gates, which captures such constructions. And so, we also work in this model. And we show that in this model, IO cannot construct hard problems in SZK or in NP ∩ coNP. Second, does IO exist? The nice thing for this work is that it doesn't matter. Even if IO did not exist, we have still learned something. For example, this lets us know that public-key encryption does not imply hard problems in SZK, and this holds regardless of whether IO exists or not. So, a bit more formally, what do we show? Most of these black-box separations construct a special oracle world where one of the primitives exists, but the complexity class is easy. And this lets you infer that the cryptographic primitive does not imply hard problems in the complexity class. In our case, what we show is that there is an oracle where one-way functions exist, IO for these oracle-aided circuits also exists, and at the same time, the complexity classes SZK and NP ∩ coNP are easy. Okay. So, for lack of time, I will not be talking about the whole thing. In this talk, we will show that one-way functions do not imply hard problems in SZK, even worst-case hard problems, in a black-box way. In particular, this means we will show an oracle where one-way functions exist and yet SZK is easy. Okay.
So, first let me tell you about statistical zero knowledge. When we think about zero knowledge, we think about proofs, a prover and a verifier, and so on. It turns out that statistical zero knowledge has a nice characterization in terms of complete problems, and in this talk, we will be focusing on that characterization. In particular, the problem Statistical Difference, which was shown to be complete for SZK by Sahai and Vadhan. Okay. So, the input in this problem consists of a pair of circuits which define two distributions. Given a circuit, its output distribution is the distribution of its output when the input is uniformly random. The problem is to determine if the output distributions of these two circuits are close to each other or far from each other. If the statistical distance between these two distributions is small, less than one-third, we have to say no. Otherwise, if it's more than two-thirds, we have to say yes. Note that this is a promise problem. In the case where the statistical distance is between one-third and two-thirds, we don't have to answer correctly. Okay. So, if there is a language in SZK, any instance x gets mapped to a pair of circuits. If x is in the language, it gets mapped to a pair of circuits whose output distributions are far from each other. And if x is not in the language, it gets mapped to circuits whose output distributions are close to one another. So, let's see what a black-box construction of a hard problem in SZK from one-way functions would look like. The first half of it is a construction. The construction starts with a one-way function and outputs a pair of circuits. These circuits, again, make only input-output access to the one-way function.
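To make the Statistical Difference problem concrete, here is a minimal Python sketch, my own illustration rather than anything from the talk: circuits are modeled as plain functions on n-bit inputs, their output distributions are computed by brute-force enumeration, and the promise is decided with the one-third/two-thirds thresholds.

```python
from collections import Counter
from itertools import product

def output_distribution(circuit, n):
    """Exact output distribution of `circuit` on a uniform n-bit input."""
    counts = Counter(circuit(bits) for bits in product((0, 1), repeat=n))
    return {out: c / 2 ** n for out, c in counts.items()}

def statistical_distance(c0, c1, n):
    """Total variation distance between the two output distributions."""
    p, q = output_distribution(c0, n), output_distribution(c1, n)
    return sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in set(p) | set(q)) / 2

def decide_sd(c0, c1, n):
    """Decide the Statistical Difference promise problem by brute force:
    YES if the distance is >= 2/3, NO if <= 1/3, undefined in between."""
    d = statistical_distance(c0, c1, n)
    if d >= 2 / 3:
        return "YES"
    if d <= 1 / 3:
        return "NO"
    return None  # outside the promise, any answer is allowed
```

Of course, this enumeration takes exponential time; the point of the problem is whether it can be decided efficiently.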
And then the second half is a security proof which says that any adversary which breaks SZK, in particular one which tells whether these two circuits are close to each other or far, can be used to invert the one-way function. So, how do we rule this out? There is a very canonical recipe given by Impagliazzo and Rudich, which says: first construct an oracle for the one-way function, then construct an adversary to break SZK, and finally prove that even given this adversary that breaks SZK, you cannot invert the one-way function. Great. So, just to repeat, we have to do three things: design a one-way function, design an adversary to break SZK, and show that the one-way function is secure even given this adversary. The first part, designing a one-way function, is pretty standard. We would pick a random permutation; it's a great one-way function on its own. The second part is that we have to design an adversary to break SZK. Just recall what the goal is: we are given two circuits, and we have to determine if the statistical distance between them is small or large. So, here's the first candidate oracle. Given two circuits as input, it first computes the statistical distance between them. Note that this is not really efficient; it could make exponentially many queries to the one-way function, but that's okay. Then, if the statistical distance is less than half, it says no; if it's more than half, it says yes. So, what's good about this? What's good is that it actually breaks SZK, so at least we are halfway there. Unfortunately, it turns out that this oracle is too powerful: it actually breaks all of NP, and so it doesn't really help. We cannot show that F is one-way even given this oracle. So, just to give you some intuition about why it breaks NP: on any input x, here's the construction. One of the circuits is simply the verifier.
It takes as input a witness, runs the verifier, and sees if it accepts or not. The other circuit is the identically-zero circuit. Now, if x is not in the language, these two circuits are identical. On the other hand, if x is in the language, the statistical distance is more than zero. We can then amplify this distance using padding so that it comes close to one-half, and then the oracle can detect it. Okay, so we need an oracle which is somewhat less powerful than this one. The issue with this oracle is that it is very sensitive to small changes, especially around half. So, our fix is that we add noise. How do we do it? The new adversary to break SZK again first computes the statistical distance between the two circuits. Then it picks a random threshold between one-third and two-thirds. If the statistical distance is less than the threshold, it says no; if it's more than the threshold, it says yes. What's good about this is that it still breaks SZK, because to break SZK, all we need is to be correct when the distance is less than one-third or more than two-thirds; in between, we don't really care. The challenge here is that this oracle is still making exponentially many queries to F, and so we still have to show that it's not powerful enough to actually break the one-way function. So, this is our goal: we have to show that F is one-way even given this breaker oracle. The earlier bug has now been transformed into a feature: this new oracle is insensitive to random local changes. And it turns out that for one-way functions, random local changes are sufficient to change the answer the adversary has to give. To illustrate, consider the following. Suppose there is a one-way permutation, and the inversion challenge is this red box. The adversary has to return the position of the red box to succeed. Now, we can make a random change, which is to pick a random place and swap these two.
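The noisy breaker can be sketched as follows, again an illustrative toy with names of my choosing rather than the construction from the paper, which lives in an oracle world and queries the one-way function. The key observation is visible in the code: on the promise, any threshold in (1/3, 2/3) produces the same answer, so the randomness never hurts correctness, while near one-half the answer becomes noisy.

```python
import random
from collections import Counter
from itertools import product

def statistical_distance(c0, c1, n):
    """Exact total variation distance between the circuits' output
    distributions on a uniform n-bit input (brute-force enumeration)."""
    p, q = ({o: k / 2 ** n
             for o, k in Counter(c(b) for b in product((0, 1), repeat=n)).items()}
            for c in (c0, c1))
    return sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in set(p) | set(q)) / 2

def noisy_sd_breaker(c0, c1, n, rng=random):
    """Toy version of the noisy breaker oracle: compare the statistical
    distance (which in general needs exponentially many queries to the
    one-way function) against a random threshold in (1/3, 2/3)."""
    d = statistical_distance(c0, c1, n)
    threshold = 1 / 3 + rng.random() / 3  # uniform in (1/3, 2/3)
    return d > threshold  # True = "far" (YES instance), False = "close" (NO)
```

If the distance is at most 1/3 the answer is always "close", and if it is at least 2/3 it is always "far", so the oracle still breaks SZK; but the NP-breaking trick above, which pads the distance up to around one-half, no longer produces a reliable answer.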
Now, the answer the adversary has to give has changed completely. And yet, we can show that for every query the adversary makes to the breaker oracle, it would receive the same answer with very high probability. And this lets us show that the adversary cannot invert the one-way function, even given access to this SZK-breaker oracle. So, this was really quick and dirty; look at the paper for more details. Okay. So, with this, let me conclude. First of all, we showed that IO does not imply hard problems in SZK or NP ∩ coNP in a black-box way. The concept of IO has garnered a lot of attention over the last couple of years. It's extremely powerful, and its existence still remains somewhat questionable. This work, interestingly, does not rely on the existence of IO. A few years ago, we would have written a couple of papers showing each of these results separately, but thanks to IO, we can show all of them in one shot. And secondly, this work supports the theoretical possibility that we can construct public-key encryption from very unstructured assumptions. Yet the reality remains quite different: most of our constructions of public-key encryption schemes are based on very structured assumptions. There has been some work which has tried to diversify the assumptions behind public-key encryption, yet our success has been quite limited. Bridging this gap, of constructing public-key encryption from unstructured assumptions, remains a major open question even today. Okay. Thank you.