So if you look at the title, it has this funny word, implausibility. It sort of sounds like impossibility, but not quite as confident, which seems to be hedging a little bit. So what's going on here? Well, our main result is showing that differing-inputs obfuscation, a type of obfuscation I'll explain later, cannot exist. So it is an impossibility result. But in order to prove it, we'll need to assume that another form of obfuscation does exist. So it's a funny sort of result. It's not immediately clear what to make of it or how to interpret it, but I'll try to convince you that you should interpret it as giving strong evidence that differing-inputs obfuscation doesn't exist. So the talk will have two components. There'll be some science, real theorems and proofs, but an equal part of the talk will be interpretation, trying to explain what to make of these results and how we should view them. At best you might call it philosophy; at worst it will be some sort of interpretive dance with a lot of hand waving. But I do think this is important, so I'm not going to shy away from it, though I want you to be skeptical. Not everything here is science, and you should be skeptical about the parts that aren't. Great. So let me start with some ancient history of obfuscation, starting with the year zero, 2000, until last year. Obfuscation was first formally studied by Hada and by Barak et al., and especially this work of Barak et al. gave a really comprehensive study of it. They defined a strong notion called virtual black-box obfuscation, or VBB obfuscation. Roughly speaking, this notion says that obfuscated code should only be as good as having black-box access to the program: you can feed inputs to the obfuscated code and get outputs, but you shouldn't learn anything else about the code from the obfuscated code.
Unfortunately, they showed negative results: this notion of VBB obfuscation is impossible in general. In fact, there are several, I'll call them pathological functions, highly contrived functions, things that you probably wouldn't naturally think of obfuscating, that cannot be obfuscated. No obfuscator can achieve the VBB property for these functions. And even though these functions are contrived, this already says that we cannot have a general notion of a VBB obfuscator: any kind of obfuscator will have to fail to be VBB for at least some functions. Also, we don't have a good way of characterizing what these functions are or what they look like, so we don't have a general class of functions that we could assume to be VBB obfuscatable. For any class of functions you might want to obfuscate, say all symmetric-key cryptosystems, there's some contrived symmetric-key cryptosystem that will be one of these pathological functions that is not obfuscatable. So it's hard to figure out what class of functions you could hope to VBB obfuscate. On the other hand, we did have some positive results that certain very simple functions, things like point functions, are obfuscatable. This is already highly non-trivial, but it's a far cry from obfuscating some sort of complicated program; these are really much simpler types of functions. So here's a useful picture to have in mind. This is what our view of obfuscation looked like: there are all of these unobfuscatable red functions scattered throughout, very contrived functions, nothing really natural, but they're all over the place. And then there's some small region of obfuscatable, very simple green functions that we do know how to obfuscate.
The rest of the region here, the white region, is really unknown; we don't know one way or the other. And I think the view that our community had of what this picture might look like, at least before last year, would be something like this: probably most interesting things are unobfuscatable, and yes, there's some small, simple region of VBB-obfuscatable functions, like point functions. The reason I think we had this interpretation, or at least I had it, is that it was very hard to conceive of a general way of obfuscating complicated functions, and there were these functions we knew to be unobfuscatable scattered all over the place, so it seemed reasonable that maybe nothing interesting is obfuscatable and we just don't know how to prove it. This really changed last year with the work of Garg, Gentry, Halevi, Raykova, Sahai and Waters, who gave the first general candidate obfuscator. This is a scheme that, at least syntactically, can be applied to any polynomial-time program; it's really a candidate way of obfuscating any program. Of course it fails to be a virtual black-box obfuscator for some of these pathological functions, which we know to be unobfuscatable, but it doesn't seem to have any other weaknesses. If you apply it to a natural function, you could even assume that it gives VBB obfuscation; that seems reasonable, and we don't really have any concrete attacks on it. So after this work came out, suddenly our view of the world changed, and it became conceivable that the world could look something like this: almost everything is obfuscatable, it's a big field of green, we're all happy, except for these annoying red dots, some very contrived functions that aren't obfuscatable, but they're really just contrived things, nothing natural.
Of course, that's not very satisfactory, because if you want to use obfuscation, you want to use it on some particular function you have in mind, and now you're not sure: is it a green function or a red function? You don't know. So even if this is your view of the world, it's not very usable; you don't know how to use this type of obfuscation to do anything. And so the goal is: can we have a general obfuscation assumption, one that is general, simple to state, useful, and could plausibly hold? Of course, every time you want to use obfuscation on a new function, you could just assume that you have VBB obfuscation for that particular function, that it's one of the green functions. But that wouldn't be a general assumption; you'd need a new assumption every time you wanted to obfuscate a new function. And it wouldn't be simple to state, because the function you're trying to obfuscate might be complicated, so your assumption would need to have this function embedded in it. That's not what we want; we want something general and simple to state. In fact, two such candidate definitions were proposed in this work of Barak et al. One is called indistinguishability obfuscation and the other is called differing-inputs obfuscation. So let me tell you what these are. You already saw indistinguishability obfuscation in the last two talks, but just as a reminder, it says the following: take any two circuits C and C prime which are functionally equivalent, so on every input X, C of X is equal to C prime of X, they produce the same output. If you take any two such circuits, whose descriptions could of course be very different even though they're functionally equivalent, and you obfuscate them, the obfuscated circuits will look the same.
They'll be computationally indistinguishable from each other, okay? That's the notion of indistinguishability obfuscation, iO. At first it doesn't really seem that useful, but it turns out to be surprisingly powerful. We now know how to get all kinds of amazing results from it: functional encryption, witness encryption, deniable encryption, succinct zero knowledge, non-interactive multi-party key agreement, broadcast encryption, all kinds of things that we could have only dreamed of a year ago. On the other hand, there are still many reasonable properties that we could hope an obfuscator to satisfy that we can't prove from iO alone. And even for the amazing results we can prove, iO is often harder to use than seems absolutely necessary: there are often simpler constructions of these primitives using obfuscation that we think could be secure, but we don't know how to prove secure using iO. Differing-inputs obfuscation is a variant of iO that's actually more powerful. It lets us do more, but it's a stronger assumption. It goes as follows. It still says that if you take two circuits C and C prime, their obfuscations should be indistinguishable, but now we're not requiring that C and C prime are functionally equivalent, that they agree on all inputs. We're just requiring that it should be hard to find an input X such that C of X is not equal to C prime of X. It should be hard to find an input on which they differ, even though such inputs may exist. And now we're thinking of these circuits C and C prime as being sampled from some distribution; if the distribution has this property, that it's hard to find an input on which the circuits differ, we'll call it a differing-inputs distribution. The diO assumption says that for all differing-inputs distributions, if you obfuscate C or C prime, you shouldn't be able to distinguish.
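To make the iO premise concrete, here is a minimal sketch of my own (not from the talk): two "circuits" with very different descriptions that compute the same function on every input, which is exactly the situation where iO promises indistinguishable obfuscations. Python functions stand in for Boolean circuits here.

```python
# Two syntactically different descriptions of the same function.
# Illustrative sketch only: iO is defined over Boolean circuits,
# and Python functions merely stand in for them.

def C(x: int) -> int:
    return 2 * x          # one description of the function

def C_prime(x: int) -> int:
    return x + x          # a different-looking description

# iO's precondition: C(x) == C_prime(x) on EVERY input x.
# We can only spot-check this in code, over a finite sample.
assert all(C(x) == C_prime(x) for x in range(10_000))

# iO's guarantee (not checkable in code): Obf(C) and Obf(C_prime)
# are computationally indistinguishable.
```

The guarantee itself is a statement about distributions of obfuscated circuits, so the sketch can only illustrate the precondition, not the conclusion.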
So this is actually the form of the assumption that was stated by Barak et al., but I don't know of any application of this assumption as is. In order to actually use it, people have modified it a little to add a notion of auxiliary input. Now we're going to sample the circuits C and C prime together with some auxiliary input, call it aux, and the differing-inputs property says that if, given C, C prime, and aux, it's still hard to find an input on which the circuits differ, then even given aux, it should be hard to distinguish an obfuscation of C from an obfuscation of C prime. Here's an example of how you would use this notion of differing-inputs obfuscation. Think of C of X as a circuit that always outputs zero, doesn't do anything interesting, and C prime of X as a circuit that has some output Y of a one-way function hard-coded in it; if it gets an input X which is a pre-image of Y, then it outputs one. So these two circuits look very different, and they are functionally different: they act differently on the pre-images of Y. But even given the auxiliary input, which here is just the output Y of the one-way function, it's hard to find an input on which these two circuits differ; that would require breaking the one-way function. So the diO property says that if you obfuscate C or C prime, the results should be indistinguishable, okay? This notion of differing-inputs obfuscation was recently explored in works of Ananth et al. and Boyle et al., who showed many interesting applications of it, such as obfuscating Turing machines, adaptively secure functional encryption, and extractable witness encryption. Maybe even more importantly, many of the results that we know how to get from iO actually become simpler if you get them using this stronger form of obfuscation. So here's our result.
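The one-way-function example above can be sketched in code. This is a toy instantiation for illustration only: SHA-256 stands in for the one-way function, which is my assumption, not part of the talk's formal construction.

```python
import hashlib
import os

# Toy version of the differing-inputs example:
# C always outputs 0; C' outputs 1 only on a pre-image of y = f(x).
# SHA-256 stands in for the one-way function f (an illustrative choice).

def sample_distribution():
    x_secret = os.urandom(32)                   # hidden pre-image
    y = hashlib.sha256(x_secret).digest()       # aux = f(x_secret)

    def C(x: bytes) -> int:
        return 0                                # always outputs zero

    def C_prime(x: bytes) -> int:
        # outputs 1 only if x is a pre-image of the hard-coded y
        return 1 if hashlib.sha256(x).digest() == y else 0

    # x_secret is returned only to demonstrate the differing input;
    # the adversary never sees it.
    return C, C_prime, y, x_secret

C, C_prime, aux, x_secret = sample_distribution()

# The circuits agree on ordinary inputs...
assert C(b"arbitrary input") == C_prime(b"arbitrary input") == 0
# ...and differ exactly on pre-images of aux, which are hard to find
# without inverting the one-way function.
assert C(x_secret) == 0 and C_prime(x_secret) == 1
```

Given only `aux`, finding the differing input means inverting the one-way function, so this is a differing-inputs distribution; diO then asserts the two obfuscations are indistinguishable even given `aux`.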
Unfortunately, our result says that we think this notion of differing-inputs obfuscation is too good to be true. We show that general differing-inputs obfuscation cannot exist, assuming that a certain special-purpose obfuscation assumption holds. I'll tell you exactly what that is in a bit, but roughly speaking, it says that some specific function can be obfuscated in a way that hides some very specific information. So it's a very particular assumption. We also have similar results for extractable witness encryption, but I'm not going to describe what that is or talk about it any further. Here's how we will show our result. We're going to show a distribution on circuits C, C prime and auxiliary information aux, such that it will be very easy to distinguish the obfuscation of C from the obfuscation of C prime given this auxiliary input aux. Okay, so that will break differing-inputs obfuscation, but of course we still have to show that this distribution D is a differing-inputs family, that it's hard to find an input X such that C of X is not equal to C prime of X. And in order to show that, we'll need to make the special-purpose obfuscation assumption. So this is what we need: a distribution on C, C prime and auxiliary input with these two properties. It's easy to distinguish, but it's hard to find an input on which the two circuits differ. Here's the construction. We set the circuit C of X to be the circuit that always outputs zero; it doesn't do anything interesting. On the other hand, we set the circuit C prime of X to output one if X is of the following special form: it consists of a message M and a signature sigma under some verification key VK that's hard-coded in the circuit C prime.
So if you give it a valid message-signature pair, it outputs one; otherwise it also outputs zero. At least on their own, if you just see C and C prime, it's hard to find an input on which they differ: you don't have the signing key, so you can't produce a valid signature. But now we're also going to create some auxiliary information, and here's where things get a little messy. To create the auxiliary information, we define one more circuit called C star. C star gets as input another circuit C, and it does the following. First, it hashes C using some collision-resistant hash function to get some small message M. Then it signs M using the signing key that it has hard-coded inside it, producing a signature sigma. Then it runs the circuit C that it got on the message M and the signature sigma, and just outputs whatever C outputs. I'm assuming the circuit C only outputs one bit; these are Boolean circuits. Okay, so somehow C defines a message, we sign the message, and then we feed the message-signature pair to C itself. And the auxiliary information is going to be an obfuscation of the circuit C star, under some other obfuscation scheme; it doesn't have to be related to the obfuscation scheme we're attacking. So that's my auxiliary input: an obfuscation of C star. The first thing to notice is that given this auxiliary input, property one holds: it's very easy to distinguish an obfuscation of C from an obfuscation of C prime. Why? We just feed these obfuscations into C star and see whether we get one or zero. So that's great; we can easily distinguish. The only thing that remains is to show that this is a differing-inputs family, and here's where we need to make our special-purpose obfuscation assumption on this obfuscator O.
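The construction and the distinguishing attack can be sketched as follows. This is a toy sketch under stated stand-ins: HMAC-SHA256 plays the role of the signature scheme (it's symmetric, not a real public-key signature) and the identity map plays the role of the obfuscator O, so the sketch only illustrates the functionality, not the security.

```python
import hashlib
import hmac
import os

# Toy sketch of the counterexample. HMAC-SHA256 stands in for the
# signature scheme, and "obfuscation" is the identity map; both are
# illustrative placeholders, not the real cryptographic objects.

signing_key = os.urandom(32)

def sign(m: bytes) -> bytes:
    return hmac.new(signing_key, m, hashlib.sha256).digest()

def verify(m: bytes, sigma: bytes) -> bool:
    return hmac.compare_digest(sign(m), sigma)

def C(x) -> int:
    return 0                      # always outputs zero

def C_prime(x) -> int:
    m, sigma = x                  # outputs 1 on a valid (m, sigma) pair
    return 1 if verify(m, sigma) else 0

def C_star(circuit) -> int:
    # 1. hash a description of the input circuit to a short message m
    #    (the bytecode serves as the "description" in this toy)
    m = hashlib.sha256(circuit.__code__.co_code).digest()
    # 2. sign m with the hard-coded signing key
    sigma = sign(m)
    # 3. run the input circuit on (m, sigma) and output its one bit
    return circuit((m, sigma))

# aux = Obf(C_star); with the identity "obfuscator", the distinguisher
# simply feeds the (obfuscated) circuits into C_star:
assert C_star(C) == 0
assert C_star(C_prime) == 1
```

The distinguisher is exactly this last step: run `C_star` on the obfuscated circuit and read off one bit. The hard part, which needs the special-purpose assumption, is that `Obf(C_star)` should not let anyone extract a valid message-signature pair outright.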
And the assumption says the following: if you're given an obfuscation of the circuit C star, and even if you're also given the verification key VK for the signature scheme, it should be hard to come up with any valid message-signature pair. That's the assumption. And it turns out that it's easy to show this holds if you were just given black-box access to C star. The intuition is that even if the attacker got to choose many distinct messages and get one bit of leakage on the signature of each message, it would be hard to come up with any valid message-signature pair: you're just not seeing enough information about any particular signature. That's essentially what's going on; by feeding inputs to C star, you're just getting one bit of information about signatures on many distinct messages. So under this assumption, property two holds, and this is a differing-inputs family. Great. So here is what we showed. There are two assumptions: one is general differing-inputs obfuscation, the other is the special-purpose obfuscation assumption, and at most one of these can survive; possibly they're both false. So which one should you put your money on? The general differing-inputs obfuscation assumption has a fairly complicated format, in some sense more complicated than most cryptographic assumptions. It says that some indistinguishability property holds, which is like many cryptographic assumptions, but there's an additional quantifier which says that for all differing-inputs families, this indistinguishability property holds. And if you think about it, this type of assumption is not falsifiable in the sense of Naor. Normally, for a standard cryptographic assumption like "factoring is hard", if you wanted to show that the assumption is false, you would just give an efficient algorithm for factoring.
On the other hand, if you wanted to show that the differing-inputs obfuscation assumption is false, what would you have to do? You'd have to come up with a differing-inputs distribution, you'd have to show that it really is a differing-inputs distribution, and then you'd have to give a distinguisher. But showing that it's a differing-inputs distribution is not just coming up with an algorithm; it's coming up with a proof. So to give an attack, you actually have to prove something. And that's what we tried to do. We came up with such a candidate differing-inputs distribution, and we easily showed that it's distinguishable, so we gave an algorithm for distinguishing; but in order to prove that it's a differing-inputs distribution, we needed this additional assumption. On the other hand, if you look at the special-purpose obfuscation assumption, it just says that given an obfuscation of a specific circuit C star, it's hard to recover a valid message-signature pair. That's a falsifiable assumption: to break it, you just give me an efficient algorithm that breaks it, and I can check. Here's another way to say it. If you assume that general differing-inputs obfuscation holds, you're assuming that there exists an efficient algorithm that breaks our special-purpose assumption, right? We showed that such an algorithm must exist, but we don't have any candidate for it. So you're assuming some algorithm exists without having any potential candidate for it. To conclude, let me talk about what I think we should make of diO at this point, given this result. I think we give fairly good evidence that differing-inputs obfuscation for all differing-inputs families is impossible, cannot be achieved. But still, diO and even VBB obfuscation, even very strong notions of obfuscation, can plausibly hold for most natural candidates that we'd like to obfuscate, just not in general.
And I think it's still better to rely on differing-inputs obfuscation rather than VBB: it's a more specific assumption, and it clarifies what property you really need in the proof. So if you have a choice between the two, even though neither seems to hold in general, it still seems better to rely on diO. And the search continues for a useful, plausible, general obfuscation assumption. Let me end with one more thought, which is that obfuscation might be the new random oracle model. I want to give an analogy between obfuscation and hash functions. Hash functions are these super practical things; we know a lot about them and we understand them well. But with hash functions, we seem to have candidates that satisfy very strong security properties, and we have a hard time defining exactly what those properties are. We can define specific things like collision resistance, but we think hash functions really satisfy more than collision resistance captures. In the same way, in obfuscation we can define things like indistinguishability obfuscation, but we think obfuscation satisfies more than this property captures. On the other end of the spectrum, we have the random oracle model, which seems to capture too much: we know there are things the random oracle model captures that hash functions cannot satisfy. In the same way, virtual black-box obfuscation seems to capture too much. And the search continues, both for hash functions and now for obfuscation, for some sort of nice assumption that captures all of the properties that we really think should hold in practice, but doesn't capture the properties that we know cannot hold. Great, thank you.