So, I'll be talking about the complexity of compressing obfuscation. This is joint work with Gilad Asharov, Ilan Komargodski, and Rafael Pass. Over the last few years, indistinguishability obfuscation, or IO, has become one of the most exciting primitives in modern cryptography. At a high level, an obfuscator is a compiler which transforms one circuit into another in such a way that it satisfies the following two properties. First, it should preserve functionality, meaning that both circuits should have the same input-output behavior. Second, the obfuscated circuit should be unintelligible, or hard to reverse engineer. In the case of IO, this is formalized by requiring that for two circuits which compute the same function, their obfuscations are computationally indistinguishable. The main reason why IO has become such an exciting area of research is its power. There's a large body of work showing that IO implies nearly all concepts in cryptography, ranging from those in classical crypto, like one-way functions and public-key encryption, to more modern concepts like fully homomorphic encryption, and even to those beyond the reach of any other assumption, like deniable encryption or multi-input functional encryption. Because of this, the main question we should be asking is: how do we construct it? There have been two approaches towards constructing IO. The first tries to reduce IO to the existence of seemingly weaker building blocks, taking some cryptographic building block and transforming it into IO in some way. The second tries to reduce IO to new concrete assumptions. However, in this case, the assumptions are not standard, so the security of these assumptions, and hence of these constructions, is not well understood, and thus they're vulnerable to attacks. For this reason, let us focus on the first approach of reducing IO to weaker building blocks.
There are quite a few of these building blocks which have been shown to imply IO, such as different types of functional encryption, randomized encodings, and others. However, one drawback of these types of constructions is that for many of these building blocks, not only do we not know how to base them on standard assumptions, we also don't know how to base them on something that is even weaker than IO in some provable way. In particular, the only way we know how to obtain many of these is based on IO itself. So if we want to understand the foundations of IO, it's crucial to know what the minimal building block is upon which we can base IO. Thus, the question that motivates this work is: what is the weakest building block which is known to imply IO? Towards answering this, one thing we can notice is that each of these building blocks requires some sort of compression. In particular, in the case of compact functional encryption, the ciphertexts are short. In the case of collusion-resistant functional encryption, the ciphertexts don't grow with the number of functional keys, and similarly in the case of randomized encodings. So if we want to base IO on something weaker, it seems that compression might be inherent to achieving IO, or at least inherent in known techniques. For this reason, we focus on the weakest primitive which implies IO, namely compressing obfuscation. Throughout this talk, I'll be talking about a circuit C with size s and input length n. With that notation, we can define a (T, L)-compressing obfuscator as one where the time to obfuscate is some function T of s and n, and the resulting size of the obfuscation, that is, the size of the obfuscated circuit, is some function L of s and n. Now, parameterizing IO, or rather obfuscation, in this way already captures some known primitives. First and foremost, IO is a compressing obfuscator, where all we require from IO is to be efficient, namely polynomial in the size of the circuit it obfuscates.
On the other extreme, we could conceive of a trivial obfuscator, which receives some circuit, runs it on all 2^n inputs, and then just outputs the truth table of size 2^n. This obfuscator is not powerful enough for any cryptographic applications, but by strengthening it a little bit, we can get something that is powerful enough. In particular, one such obfuscator is exponentially efficient IO, or XIO. In XIO, the running time is just as in the trivial obfuscator, namely polynomial in the size of the truth table of the circuit, so it's very inefficient, but the output length is slightly smaller than the truth table. In particular, it's 2^(n(1-ε)) for some constant ε, times a polynomial in the circuit size. This can also be strengthened to strong XIO, or SXIO, in which both the running time and the output length are as the output length of XIO, namely slightly smaller than the truth table of the circuit. One thing to note, though, is that in both XIO and SXIO, the output length is exponential, so we can only use these obfuscators on circuits with short inputs, such as logarithmic-size inputs. So these four settings of parameters already give us a sort of compression hierarchy for compressing obfuscation, based on known results. For example, XIO, along with LWE, is already known to imply IO. If we are willing to strengthen the assumption from XIO to SXIO, we can weaken the assumption from LWE down to only one-way functions. However, both of these constructions are in the sub-exponential regime, meaning they require sub-exponential security from the underlying primitives. So one thing we can ask already is: what can we get from these primitives with only polynomial security? For comparison, the holy grail of this area would be to base IO on something like one-way functions.
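To make the trivial obfuscator concrete, here is a minimal Python sketch (the function names are my own, purely illustrative, and a toy circuit stands in for C): obfuscation evaluates the circuit on all 2^n inputs and outputs the truth table, and evaluating the "obfuscated" program is just a table lookup.

```python
def trivial_obfuscate(circuit, n):
    """Run the circuit on every n-bit input and return its 2^n-entry truth table."""
    return [circuit(x) for x in range(2 ** n)]

def evaluate(truth_table, x):
    """Evaluate the 'obfuscated circuit' by simple table lookup."""
    return truth_table[x]

# Toy example: a "circuit" computing the parity of a 3-bit input.
n = 3
parity = lambda x: bin(x).count("1") % 2
table = trivial_obfuscate(parity, n)
assert all(evaluate(table, x) == parity(x) for x in range(2 ** n))
```

Functionality is preserved trivially, but of course nothing is hidden: the truth table reveals everything, which is exactly why this obfuscator has no cryptographic applications on its own.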
But based on known impossibility results, such as those of Mahmoody et al. and Garg et al., this is very unlikely to be the case. Nevertheless, there's still room for improvement between these known results and this holy grail. For example, one such improvement is: why do we need LWE here with XIO? Can we use only one-way functions? I'm gonna come back to this in a little bit. Another thing, though, is that one strong motivation for looking at relaxations of IO is to see if there's some relaxation which retains much of the power of IO but is easier to construct. Towards this end, suppose we're willing to assume sub-exponential one-way functions. Then, assuming SXIO, we can already get IO and all of its applications, like public-key encryption, et cetera. On the other hand, if we're only assuming one-way functions, then over here at the trivial obfuscator, we only have Minicrypt, that is, no public-key encryption. Thus, there's a huge gap in our knowledge surrounding XIO. For example, how powerful is XIO, or how weak is it? In this work, we focus on this setting of parameters and try to answer some of these questions. In particular, we have the following main results. We look at compressing obfuscation as an independent cryptographic primitive, and first, we look at its power. In particular, we're able to show that XIO is weak. That is, with one-way functions, XIO does not imply public-key encryption in a black-box way. Now, this holds even if the XIO and one-way functions are sub-exponentially secure, and even if the public-key encryption is only polynomially secure. Moreover, since IO does imply public-key encryption when assuming one-way functions, this actually separates XIO from IO, thus showing that XIO is the first primitive not known from standard assumptions which is provably weaker than IO, yet implies IO when combined with standard assumptions. Thus, XIO is very weak.
It's so weak, in fact, that we even show that it exists for non-trivial classes of circuits. In particular, we're able to show constructions of XIO with statistical security for classes of circuits like AC0. Now, AC0 is already a non-trivial class; it contains things like weak PRFs. And this is also in contrast to IO. We also match this with a lower bound, showing that if you improve the running time and the output length, it's unlikely that this type of obfuscator exists for AC0. However, I should note that this positive result is for an XIO which has somewhat weaker compression than what I've discussed so far, but I'll get into that a little bit later. Finally, we look at weakening XIO under computational assumptions. We're able to show that if you want XIO, it suffices to only assume a version of XIO which is only approximately correct. This is called correctness amplification, where we take something that's approximately correct and transform it into something that's fully correct. Now, I won't be able to talk about this third result in this talk; I'm only gonna focus on the first two. But what I'll say about it is that it might seem like we could adapt known results for IO to do this correctness amplification simply by scaling the parameters, but that's actually not the case. Our solution uses information-theoretic tools, like different types of error-correcting codes, and also computational tools like non-interactive zero knowledge, to achieve this result. So if you're interested in this third result, please see the paper, but I'm gonna talk about the first two. Before I do that, though, we can look at the impact of our results. This is the picture we had before, where if we assume one-way functions, with SXIO we get Obfustopia, and trivially we get one-way functions, because we're assuming them.
And our first result, that XIO doesn't imply public-key encryption, implies that XIO doesn't imply IO, but also implies that XIO doesn't imply SXIO, because SXIO does imply public-key encryption in a model that fits into our black-box model. Therefore, this result actually shows that assuming XIO and one-way functions, we still only have Minicrypt. Thus, we've simultaneously made the gap in our knowledge smaller and also weakened the assumptions underlying IO. Moreover, we further weaken these with our correctness amplification, showing that you only need a very approximate version of XIO to get, with standard assumptions, all the way to IO. So, with that in mind, I'm gonna start by talking about our first result on the power of XIO. Recall that XIO with LWE does imply IO, so if we wanna understand its power, it's useful to look at it by itself, with maybe the minimal computational assumptions necessary. One good starting point here is one of the original applications of IO, that is, transforming a secret-key encryption scheme into a public-key one. As we've already said, XIO does not suffice for this transformation in a black-box way. To gain some intuition for this result, consider the construction of public-key encryption from IO and one-way functions due to Sahai and Waters. In this construction, the public key is an obfuscation of the encryption circuit for a secret-key encryption scheme, with the secret key hard-coded inside. To encrypt a message m, you simply need to sample some randomness r, and then evaluate this obfuscated circuit on m and r to obtain the ciphertext. Now, consider what happens if we replace IO here with XIO. XIO runs in exponential time in the input length, so if we want to have an efficient public-key encryption scheme, we need the input length to be small.
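As an aside, the data flow of this Sahai-Waters template can be sketched in Python. This is purely illustrative: real IO is not implementable here, so the "obfuscation" below is modeled as handing out the secret-key encryption circuit itself (a closure with the key inside), which of course gives no security; SHA-256 stands in for the PRF, and all names are my own.

```python
import hashlib
import os

def make_keys(seed: bytes):
    """Toy key generation: the 'public key' stands in for an obfuscation
    of the secret-key encryption circuit with the key k hard-coded."""
    k = seed  # secret key of the underlying secret-key scheme

    def enc_circuit(m: int, r: bytes):
        # One-bit secret-key encryption: c = (r, F_k(r) XOR m),
        # with SHA-256 standing in for the PRF F.
        pad = hashlib.sha256(k + r).digest()[0] & 1
        return (r, pad ^ m)

    return enc_circuit, k  # (pk, sk)

def encrypt(pk, m: int):
    r = os.urandom(16)  # sample the randomness r
    return pk(m, r)     # evaluate the "obfuscated" circuit on (m, r)

def decrypt(sk, c):
    r, body = c
    pad = hashlib.sha256(sk + r).digest()[0] & 1
    return body ^ pad
```

The point of the template is only the shape: whoever holds the public key encrypts by evaluating the published circuit on (m, r), which is exactly the evaluation the obfuscator's input length constrains.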
But if the input length is, say, logarithmic, the adversary can trivially, upon receiving the public key, learn all possible ciphertexts of the scheme, which would make such a scheme trivially insecure. Now, this is only one construction, but we show that this is inherent for any construction which starts with XIO. One thing I want to add here is that this intuition is a little bit oversimplified. On the one hand, this intuition does carry through throughout our proof: our adversary for public-key encryption learns the whole truth table of every obfuscated circuit. But on the other hand, this intuition also holds for SXIO, and SXIO does imply public-key encryption, so it can't be that we're ruling out a known construction. The reason that this intuition is incomplete is that while we use the fact that the adversary can learn all the ciphertexts, we also need the obfuscator to know the truth table, which holds in XIO but doesn't hold in SXIO. So, the general model for showing these types of lower bounds is a black-box model. In this context, that means that we'd like to show that there's no construction of public-key encryption which uses XIO and one-way functions in a black-box way. The thing is that obfuscation is inherently non-black-box. For example, take this construction. The secret-key encryption circuit is built from a one-way function f, so it has the concrete implementation of a one-way function f inside of it, and the obfuscator needs the code of the one-way function to obfuscate it. However, using the model of oracle-aided circuits, we can capture non-black-box constructions as follows. We give the obfuscator oracle access to an oracle implementing the one-way function, and then we replace all concrete one-way-function gates in the circuit with oracle gates. This allows us to capture non-black-box constructions, so we do use this model of oracle-aided circuits.
This model was first used for circuits only containing one-way-function gates, as in the construction on the previous slide; in particular, it was first introduced by Brakerski et al. and by Asharov and Segev. But the problem here is that newer constructions went beyond this type of model, because they use obfuscation in a nested way, such as an obfuscator which obfuscates circuits which themselves output obfuscated circuits, and so forth. One such construction is public-key encryption from SXIO. So if we want to capture the right class of constructions, it's important to capture these types of constructions. And this is exactly what was done by Garg, Mahmoody, and Mohammed, who extended this model to work for obfuscators which accept circuits with obfuscation gates and with one-way-function gates. We are exactly in this extended model in our results. So this captures all the techniques that have been used in this context, and in particular, it captures the self-feeding techniques that are used in these IO constructions, where there's an obfuscated circuit for obfuscated circuits, et cetera. So we view this as really ruling out the right class of constructions. One thing I'll add here is that, even if you don't care about IO or about XIO, this is a non-black-box extension of the classical Impagliazzo-Rudich separation ruling out black-box constructions of public-key encryption from one-way functions, so it does shed light on this very long-standing open problem. Now I'm gonna tell you a little bit about the existence of compressing obfuscation with statistical security. If we want to construct obfuscation with statistical security, the main advantage that we have over constructions of, for example, IO with statistical security is that we can take advantage of the running time of XIO: namely, we know the truth table.
So the main point here is to see how we can use the truth table to help us in these types of constructions. We're able to obtain a statistically secure XIO with perfect correctness for AC0, but with output length slightly worse than what I've talked about thus far, namely output length 2^(n(1 - o(1))). So not very strong compression, but still XIO. The main tool we use to get this construction is called circuit compression, which was recently studied by Chen et al. in 2015. In circuit compression, one is given the truth table of some function f, and the goal is to output some circuit which computes f, but the circuit has to have non-trivial size: you can't just output the truth table, it has to be something smaller than 2^n. But that's exactly the situation that we have here with XIO. So if we want to obtain XIO for a class which has a circuit compression algorithm, we can just take the circuit, run it on all of its inputs, and then run the circuit compression algorithm on the truth table. By using these recent results on circuit compression for AC0, we get XIO for AC0. Now, this construction is very simple: all we're doing is computing the truth table and then using known results. But it shows this intrinsic connection between XIO and these concepts in complexity theory, and thus we do view it as one of the central takeaways of this work. One thing I'll add here is that this construction is completely black-box in the circuit, so it even implies compressing virtual black-box obfuscation. So if we want to use these types of techniques to get XIO even in a computational setting, we need to use the circuit in a non-black-box way. We also show a lower bound here: we show that if you improve both the running time and the output length, we get non-trivial speedups for the unsatisfiability problem. But actually, we show something a little bit stronger.
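Going back for a moment, the generic "XIO from circuit compression" reduction just described can be sketched as follows. The real compressor for AC0 is the algorithm of Chen et al.; here a stand-in (run-length encoding of the truth table) shows only the data flow, and all names and the toy circuit are my own.

```python
def xio_obfuscate(circuit, n, compress):
    """Generic reduction: exponential time is allowed for XIO, so compute
    the full truth table, then hand it to the class's compression algorithm."""
    truth_table = [circuit(x) for x in range(2 ** n)]
    return compress(truth_table)

def rle_compress(bits):
    """Stand-in compressor: run-length encoding (NOT the AC0 algorithm)."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_evaluate(runs, x):
    """Evaluate the compressed representation on input x."""
    for bit, length in runs:
        if x < length:
            return bit
        x -= length

# Toy example: a threshold "circuit" on 3-bit inputs.
n = 3
threshold = lambda x: 1 if x >= 4 else 0
obf = xio_obfuscate(threshold, n, rle_compress)
assert all(rle_evaluate(obf, x) == threshold(x) for x in range(2 ** n))
```

Note that the reduction never looks inside the circuit, only at its truth table, which is exactly the "completely black-box in the circuit" property mentioned above.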
We show that if you have an obfuscator with both running time and output length 2^(εn), for any ε, be it constant or sub-constant, this implies that the unsatisfiability problem has an AM protocol in which the verifier runs in time roughly 2^(εn). What I want to point out here is this conclusion. It's one you might not have seen in some of these impossibility results, and actually, it's not always false. By the recent breakthrough results of Williams, it's actually true when ε is one half, so we don't rule out these compressing obfuscators when ε is one half. But for smaller ε, such obfuscators are somewhat believed not to exist, because otherwise UNSAT would have this non-trivial protocol, even though it's just a little bit non-trivial. So this is just another connection between compressing obfuscators and complexity theory. I have two takeaway messages for this work. The first is that compressing obfuscation is really unusual: it doesn't fit into the classical model of primitives that we have so far. By this, I mean the following. In crypto, we really like to classify primitives into bins like Minicrypt or Cryptomania, and these are generally separated. If I take something from a powerful class, like IO, and I add it to Minicrypt, then all of a sudden I'm gonna get basically everything. But this is not the case with XIO. With XIO, if I add it to Minicrypt, this world containing basically one-way functions and all of their applications, I get nothing: I just get Minicrypt. But if I add it to Cryptomania, all of a sudden, in the right environment with the LWE that it needs, it gets us to Obfustopia and all of its applications. So that's XIO: it's very non-standard.
The second takeaway message of this work is that XIO doesn't compress running time, and this type of compression, in running time and in output length, seems inherent for IO. As some evidence for this, we have the following. Recall, as I've said, that XIO and LWE imply IO. Now, XIO here compresses the size of the function that it receives: it doesn't compress in running time, it only compresses in size. LWE in this transformation is used to create a type of functional encryption due to Goldwasser et al., which you could view as compressing only in time but not in size. So it seems that XIO and LWE here are compressing along different axes and somehow working together to give us IO, and both of these types of compression seem inherent, because XIO without LWE gives us nothing. For that reason, it seems that compression is really one of the right approaches towards studying IO. Thank you.