Okay, thanks for the introduction. Hi, everyone. So this is joint work with Amit and Mark. If you've been in this room today, you probably already know what IO is, but let me just say one slide about it anyway. So IO is this notion of compiling a program P in a way that preserves its functionality, but for any program P prime which is equivalent to it, the compilations of these two are indistinguishable from each other. Hence, indistinguishability obfuscation. This was first proposed by Barak et al. in 2001, but the first candidate didn't come until 12 years later in the work of Garg et al. At this point, we have many, many applications of IO, and it has become a central hub of crypto, a holy grail of crypto, whatever you wanna call it. It's become a very important thing, and we have many candidates for it, but the security of these is not very well understood right now. Okay, so you just saw a talk about multilinear maps. Indeed, all of the IO candidates we have rely on multilinear maps. You can think of multilinear maps at a very high level sort of like homomorphic encryption, in the sense that you have a bunch of encoded elements and you can do arithmetic on them. The difference is that in multilinear maps, there's no decryption. Instead, we have this thing called a zero test procedure, and you can't zero test everything; only certain encodings that are well-formed can be zero tested. I'll say what well-formed means when it becomes important. So we have several M-map candidates. Tancrède was just talking about the last of these, and in this talk, I'm gonna focus on the first of these, which was the original candidate due to Garg, Gentry and Halevi. Okay, so I just mentioned the zero testing. Ideally, in these multilinear map candidates, the zero test should just reveal whether an encoding is zero or not, and that's it. But in practice, all of the zero test procedures actually reveal some sort of leakage.
So you can visualize this as follows. You have some initial set of encodings. You do some homomorphic computation on them, and you get a top-level encoding. Here, top-level and well-formed are the same thing. And then you might feed that through the zero test, and indeed, you will get out a bit that says zero or not, but you also get some leakage. And I'll say more about what that leakage looks like. Okay, but let me first tell you what our results in this paper are. So we give the first polynomial time cryptanalysis of several IO candidates, and this is a distinguishing attack, so we produce two equivalent programs and we show how to distinguish their obfuscations. And in this paper, we also have an attack on the order-revealing encryption scheme due to Boneh et al. I'm not gonna go into those details because it's very similar to the attack on IO. Also in this paper, we propose a new security model, which for the first time actually captures all of the known polynomial time attacks on GGH 13. And I'll say more about the model at the very end. But just to give you a sense, here's the high level structure of the attack. We observed that the leakage produced by the zero test can be thought of as an explicit polynomial which is computed over the underlying multilinear map variables. So we take the obfuscation, we evaluate it on many inputs, and zero test the result and collect all the leakage. And then we find what's known as an annihilating polynomial that basically cancels all of this leakage. So you take all of the results of the leakage, you feed it through this annihilating polynomial, and it spits out zero if you had one of the programs. And if instead the obfuscation was of the other program, then the annihilating polynomial will spit out something else. So in this way we get a distinguishing attack. Okay, so this is the outline for the rest of the talk.
First I'm gonna give a little bit of background on IO and multilinear maps. Then I'm gonna give an overview of the attack. And then finally I'll talk about the new security model I mentioned and some subsequent work that's already been done regarding this model. Okay, so first of all, the model of computation I'm gonna care about in this talk is what's called matrix branching programs. There are many candidates for IO starting from matrix branching programs. There's also a growing body of candidates that start from just circuits. I'm not gonna talk about the ones that start from circuits at all; I'm just gonna focus on IO for matrix branching programs. And this is enough to get IO for all polynomial time computation via some bootstrapping theorems, but I'm just gonna focus on branching programs. So a matrix branching program is basically a set of matrices that are sorted into layers, and then you have two bookend vectors. And each layer is associated with some input bit. So that's it. It's just matrices and two vectors on the end. And the way you evaluate the matrix branching program on some input x: you always have the first bookend vector and the last bookend vector there, and then for the middle ones, you just pick the matrix that corresponds to the value of the bit that you're reading. So if x_i1 was zero, then the first one would be this matrix, and if x_i2 was one, then you get this one, and so on. So each input selects a subset of the matrices and you multiply through. Okay, and in particular, the output of this function is zero or one depending on whether this product is zero or not. So we just care if the product is zero, not what the actual value is beyond that. Okay, so what's the standard recipe for IO? At this point, most of the matrix branching program IO candidates follow this recipe with some different bells and whistles on each one, but this is sort of the core technique here.
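The evaluation rule described above can be sketched in a few lines. This is a toy illustration only: the matrix sizes, layer count, and the input-bit schedule `inp` are made-up choices, not parameters from any real candidate.

```python
# Toy matrix branching program evaluator: each layer holds a pair of matrices,
# the input bit read at that layer selects one of them, and the product is
# sandwiched between two bookend vectors. Only zero vs. nonzero matters.

def mat_vec(v, M):
    """Row vector v times matrix M."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def eval_bp(bookend_l, layers, inp, bookend_r, x):
    """layers[j] = (M_j_0, M_j_1); inp[j] says which bit of x layer j reads."""
    v = bookend_l
    for j, (M0, M1) in enumerate(layers):
        M = M1 if x[inp[j]] == 1 else M0   # select the matrix by the bit read
        v = mat_vec(v, M)
    prod = sum(v[i] * bookend_r[i] for i in range(len(v)))
    return 0 if prod == 0 else 1           # output: is the product zero?
```

For instance, with identity matrices in every layer and orthogonal bookends, the program computes the all-zeros function, a fact the attack later exploits.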
So we start with the branching program as I just described. First, we randomize the matrices, and this is a two-step process. The first step is Kilian's technique, where you put these random matrices between the layers and multiply on either side. And then you also multiply with these independent scalars, where there's one per matrix, okay? So you can see that this doesn't change the functionality, right? Because these will cancel, and the alphas will not change something from zero to nonzero. So randomizing that way preserves the functionality. And then the second step is to encode each of these matrices, excuse me, each element of each matrix in an M-map scheme. So again, I'm gonna focus on GGH 13, but in terms of an IO candidate, you can think of the other M-maps as well. And the only important thing here is that this encoding is done in a way such that I can evaluate the branching program. So specifically, each honest evaluation of the encoded branching program gives you an encoding that's well-formed, which means I can test if it's zero or not. So this allows functionality. Okay, so I wanna have two slides on the GGH 13 M-map scheme. So these encodings live in a ring of polynomials, which is, I believe, the same ring as the one Tancrède was talking about. So this is a ring of polynomials mod some cyclotomic polynomial. And GGH 13 has basically two important classes of secret parameters. The most important is this parameter g, and this defines the plaintext space. So in particular, if you take the ideal generated by g and you mod R by that, that's gonna be our plaintext space. And by choosing g appropriately, you can show that this is isomorphic to a prime field, so you can think of the plaintext space as a prime field. And then the other important secret parameters are these z_i, which are gonna define the level of each encoding. Okay, and so what do you do with these?
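The Kilian randomization step can be sketched as follows, working over a prime field with 2x2 matrices. This is a minimal illustration of the cancellation argument, not any candidate's actual randomization; the modulus and matrix size are arbitrary toy choices.

```python
import random

P = 2**31 - 1  # toy prime modulus standing in for the plaintext field

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % P
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2x2(M):
    """Inverse of a 2x2 matrix mod P via the adjugate formula."""
    (a, b), (c, d) = M
    det_inv = pow((a * d - b * c) % P, -1, P)
    return [[d * det_inv % P, -b * det_inv % P],
            [-c * det_inv % P, a * det_inv % P]]

def rand_inv2x2():
    while True:
        M = [[random.randrange(P) for _ in range(2)] for _ in range(2)]
        if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % P != 0:
            return M

def kilian_randomize(layers):
    """layers[j] = (M0, M1). Random R_j's are sandwiched between layers (they
    cancel telescopically in any honest product) and each matrix gets its own
    nonzero scalar alpha, so zero / nonzero outputs are preserved."""
    n = len(layers)
    Rs = [rand_inv2x2() for _ in range(n - 1)]
    out = []
    for j, (M0, M1) in enumerate(layers):
        L = inv2x2(Rs[j - 1]) if j > 0 else [[1, 0], [0, 1]]
        R = Rs[j] if j < n - 1 else [[1, 0], [0, 1]]
        a0, a1 = random.randrange(1, P), random.randrange(1, P)
        out.append((
            [[x * a0 % P for x in row] for row in matmul(matmul(L, M0), R)],
            [[x * a1 % P for x in row] for row in matmul(matmul(L, M1), R)],
        ))
    return out
```

Any honest product of the randomized matrices equals the original product scaled by the product of the alphas, so it is zero exactly when the original was.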
You have a plaintext value a, and you wanna encode it at some level S, and here the levels are gonna be subsets of one through k. And note this k here and this k here are the same. Okay, and to encode a at level S, we put a in the numerator plus a small multiple of g, so we choose some small randomness, multiply it by g, and add a to it. And then in the denominator, we multiply the z_i's corresponding to the level, okay? And we're doing all of this mod q, where q is some big integer, so we're taking the coefficients mod q. And note that without that, this division might not even be well-defined in the ring R. Okay, good, so that was the setup for GGH 13. So how does the arithmetic work? I'll spare you the technical details here, but it's exactly the same as what Tancrède was saying. You can add at the same level, and you can multiply at, well, okay, this part is a little different, you can multiply at disjoint levels. So if your two levels are disjoint sets, then you can multiply; otherwise you're not allowed to. And in both cases, the new level is the union of the operands' levels. So if you're adding at the same level, you just keep that same level; otherwise you take the union of the things that you're multiplying. Okay, and when you do this, the randomness grows a little bit, but you can tolerate up to a certain degree of computation, and specifically, you can tolerate up to degree k. And again, these parameters are all chosen such that functionality is preserved. Okay, so what does zero testing look like? So you have this zero testing parameter, which is this kind of funny-looking thing here, and you can see the reason for it when you look at the zero test procedure. So the procedure just takes an encoding of zero. Note that it's zero because I'm not adding anything in the numerator here. Okay, and also crucially, this encoding of zero is at level k, right, it's at the top level.
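The shape of the encoding, enc(a, S) = (a + r·g) / prod of the z_i in S, mod q, and the level rules for addition and multiplication can be sketched in a toy *integer* analogue. The real scheme lives in a cyclotomic ring Z[x]/(x^n + 1), and every parameter size below is an illustrative guess, not from the actual scheme.

```python
import random

q = 2**127 - 1        # big public modulus (a Mersenne prime, for easy inverses)
g = 1000003           # small secret prime: plaintext space is the integers mod g
K = 3                 # multilinearity degree
zs = [random.randrange(2, q) for _ in range(K)]   # level secrets z_1..z_K

def encode(a, level):
    """Encode a at level S: (a + r*g) * inverse(prod of z_i for i in S) mod q."""
    r = random.randrange(1, 2**20)                # small randomness
    zinv = 1
    for i in level:
        zinv = zinv * pow(zs[i], -1, q) % q
    return (a + r * g) * zinv % q, frozenset(level)

def add(c1, c2):
    (v1, s1), (v2, s2) = c1, c2
    assert s1 == s2, "can only add at the same level"
    return (v1 + v2) % q, s1

def mul(c1, c2):
    (v1, s1), (v2, s2) = c1, c2
    assert not (s1 & s2), "can only multiply at disjoint levels"
    return v1 * v2 % q, s1 | s2                   # new level is the union
```

The numerators stay congruent to the plaintext mod g, so someone holding the secrets could strip the z's and reduce mod g to recover the plaintext, which is how functionality is preserved in this toy.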
And I multiply by the zero test parameter. The z's cancel, the g's cancel, and I'm left with r and then this value h, which is just some small randomness that we put in the zero test parameter. Okay, but the point here is that if you've done this right, these two things are very small relative to the modulus q, right? And if not, so if your encoding was not zero or it was not at the top level, then the output would have size roughly q. So you can tell if something is zero or not by checking if it's small or not. Okay, good. So what are the attacks that we know of on GGH 13? There are only three, including the current work, I believe. The first one is actually due to GGH 13 themselves. They observed that for some distributions on initial encodings, you can actually recover some of the secret parameters or some information that you would hope to be hidden. So even in the first paper, there was already an observation that some of these encodings are attackable. Then there was a work by Hu and Jia, which attacks specifically the key exchange protocol. So Tancrède showed you an attack on the key exchange protocol using GGH 15; Hu and Jia have one using GGH 13. And then in this work, we give an attack on IO. But all of these works have the same high level structure. First, you start with a set of initial encodings and you compute some top level zero encodings using the allowed arithmetic, right? Add at the same level, multiply at disjoint levels. You then zero test each top level zero encoding and you collect the leakage. And then you perform some arithmetic on this leakage to recover an element in the ideal generated by g, right? So at the very beginning, when I was saying we're gonna cancel the leakage, this is really what canceling means. It means you take these ring elements that are produced by the zero test and you do some arithmetic over them to recover something in the ideal generated by g. Okay, so that's the crucial step in each of these attacks.
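The zero test itself, with the parameter p_zt = h · (prod of z's) / g mod q, can be sketched in the same toy integer analogue. Again, the real scheme works in a cyclotomic ring; every size and threshold below is an illustrative guess.

```python
import random

q = 2**127 - 1       # big public modulus (Mersenne prime, for easy inverses)
g = 1009             # small secret prime defining the plaintext space
z1, z2 = [random.randrange(2, q) for _ in range(2)]   # top level is {1, 2}
h = random.randrange(2, 2**10)                        # small secret in p_zt

# p_zt = h * z1 * z2 / g (mod q): against a well-formed top-level encoding of
# zero, the z's and g cancel, leaving only the small value h * r.
pzt = h * z1 * z2 % q * pow(g, -1, q) % q

def encode_top(a):
    r = random.randrange(1, 2**10)
    return (a + r * g) * pow(z1 * z2, -1, q) % q

def zero_test(c):
    w = c * pzt % q
    w = min(w, q - w)        # centered absolute value
    return w < 2**40         # "small" threshold: a generous toy choice
```

Note that for an encoding of zero, the centered zero-test output is not just a bit: it is exactly the ring element h·r, and that is the leakage the attack exploits.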
That element is then used in various application-specific ways to break whatever you're looking at, but recovering an element of the ideal generated by g is the core step there. Okay, so let me say what our attack is. So think of a branching program in which every matrix is the identity matrix, okay? This is probably the simplest branching program you can think of, and if you choose the bookend vectors appropriately, this computes the all zeros function. This is good for us as an attacker because we can get as many top level zeros as we want by just evaluating the branching program on any input. Okay, so recall that in GGH 13, the output of a successful zero test is this element h times the randomness r of the top level encoding. Okay, so in our attack, the first thing that we do is examine the structure of that randomness. And we write it, we stratify it by this variable g, right? So we write the randomness as everything that's divisible by g plus everything else, okay? And so what are these P1 and P2? These are polynomials over the underlying multilinear map variables. So both over the input branching program, but also over all of the randomness that was chosen by the multilinear map. And right, we as the attacker don't know what those random values are, but we can compute explicitly what polynomial over those random values is output by the zero test procedure, okay? So once we compute these polynomials, we then find this polynomial Q, and this is our annihilating polynomial, okay? And what does it do? It has the property that if you evaluate Q on the set of all these P1's times h, it will output the zero polynomial, okay? And given such a Q, it's easy to observe that if I then just evaluate on h times r, this part is gonna go away and I'm gonna be left with something that's divisible by g, which by definition is something in the ideal generated by g, okay?
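The algebra of that last step can be checked numerically in a toy integer analogue. The polynomials P1_i = x², y², xy and the annihilator Q(a,b,c) = ab − c² are stand-ins I chose for illustration (since x²·y² − (xy)² = 0), not the actual polynomials from the attack; because Q is homogeneous, feeding it the leakage h·r_i with r_i = P1_i + g·P2_i leaves only terms divisible by g.

```python
import random

random.seed(0)
g = 1009                                       # toy stand-in for the secret g
x, y, h = [random.randrange(1, 2**16) for _ in range(3)]

P1 = [x * x, y * y, x * y]                     # values of the annihilated polynomials
P2 = [random.randrange(1, 2**16) for _ in range(3)]   # the "divisible by g" part
w = [h * (p1 + g * p2) for p1, p2 in zip(P1, P2)]     # zero-test leakage h * r_i

Q = lambda a, b, c: a * b - c * c              # annihilating polynomial
elem = Q(*w)                                   # the h^2 * Q(P1) part vanishes...
assert elem % g == 0                           # ...so the result lies in (g)
```

The assertion holds identically, for any choice of the random values: expanding Q(w₁,w₂,w₃) gives h²·(P1₁P1₂ − P1₃²) plus terms carrying a factor of g, and the first bracket is zero by the choice of Q.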
So that's the structure of the attack. But saying it this way, I'm actually sweeping all of the hard stuff under the rug, because the core task is actually finding that annihilating polynomial, right? So given a set of polynomials, which I'm calling P1 through Pr, the goal is to compute an annihilating polynomial Q that cancels them all out, right? That evaluates to the zero polynomial, okay? So in general, this actually seems to be an intractable problem: if you could solve it in general, it would imply the collapse of the polynomial hierarchy. This is due to Kayal. It's actually easy to see how to break many crypto applications if you could solve this task, right? So that seems bad for us as the attacker, but it turns out that the polynomials we're working with here are highly structured, and in fact, they are also simplified by the choice of the identity matrix branching program. But in general, these polynomials have roughly the structure of iterated matrix multiplication, which is a very explicit, well-defined polynomial. But still, it has many variables, and we still don't know just at first glance how to find the annihilating polynomial for that. So our technique here is basically to use changes of variables. So we take these polynomials over the underlying multilinear map variables and we apply a series of changes of variables that reduce the task to annihilating a constant number of polynomials over a constant number of variables. And then we can apply brute force search, which is what we did, and we find a polynomial Q which, even though we're using computer search here, provably annihilates the polynomials that arise from the zero test procedure. So I'm leaving out the technical details here. They're a little technical for this short talk, but I encourage you to look at the paper. It's not that complicated when you're sitting down looking at it. Okay, whoops, oh no.
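The brute force search step can be sketched as follows. The three target polynomials here are toy stand-ins (in the real attack you first apply the changes of variables to get down to constantly many polynomials); the search ranges over degree-at-most-2 polynomials Q(a,b,c) with coefficients in {−1, 0, 1} and tests annihilation at random points, Schwartz-Zippel style.

```python
import itertools, random

# Toy targets to annihilate: P1 = x^2, P2 = y^2, P3 = x*y.
targets = [lambda x, y: x * x, lambda x, y: y * y, lambda x, y: x * y]

# Monomials of the candidate annihilator Q in placeholder variables a, b, c.
monos = [lambda a, b, c: a,     lambda a, b, c: b,     lambda a, b, c: c,
         lambda a, b, c: a * a, lambda a, b, c: a * b, lambda a, b, c: a * c,
         lambda a, b, c: b * b, lambda a, b, c: b * c, lambda a, b, c: c * c]

def find_annihilator(trials=20, bound=10**6):
    """Return coefficients of a Q with Q(P1, P2, P3) vanishing at all sampled
    points (hence, with overwhelming probability, identically)."""
    pts = [(random.randrange(bound), random.randrange(bound))
           for _ in range(trials)]
    evals = [[m(*[t(x, y) for t in targets]) for m in monos] for x, y in pts]
    for coeffs in itertools.product((-1, 0, 1), repeat=len(monos)):
        if any(coeffs) and all(
                sum(c * e for c, e in zip(coeffs, row)) == 0 for row in evals):
            return coeffs
    return None
```

For these targets the search recovers a multiple of ab − c², since x²·y² − (xy)² is identically zero. Random-point testing only certifies an annihilator probabilistically; as noted in the talk, the Q found for the real attack is then verified to provably annihilate the zero-test polynomials.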
Oh, I just stepped on my joke. Okay, anyway. Right, so, okay. So I just mentioned how we recover an element of the ideal generated by g, but we actually wanna distinguish the programs, right? So for the second branching program, we choose almost the same thing, except we flip some of the matrices to be this reverse identity instead of the standard identity. This still computes the all zeros function because this matrix is its own inverse, so as long as you're always multiplying an even number of these, it's still the all zeros function, right, which is important because we need to distinguish two equivalent programs. And we show that in the same way that this polynomial Q does annihilate the first branching program, it does not annihilate the second, okay? And so if we repeat this process many times, we get many ring elements that are either all in the ideal generated by g or not all in the ideal, and then we can heuristically test this with Gaussian elimination, I guess, because you can think of these as lattices, basically. Okay, so by doing this, we distinguish the programs, right? So we as the attacker have won, right? But we're not attackers. We're cryptographers, we wanna be secure, and this is really bad because we're not winning anymore, right? So we need to do something about this, and so let's make obfuscation great again. I wish I hadn't stepped on my joke there, but I did. Okay, so I just have a few minutes left. Let me say what we can hope to do in the face of this. Okay, so how have we modeled multilinear maps so far? Actually, before I say that, let me say a quick caveat about the attack. So we don't know that our attack breaks all of the candidate IO schemes. There are many schemes that we can break with it, but there are some that we can't, and in particular the original candidate, we don't know how to break. And similarly, candidates that start by converting a circuit via Barrington's theorem, we also don't know how to break.
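A toy version of that final distinguishing step: in the integer analogue, "lying in the ideal (g)" just means "divisible by g", so the gcd of many collected elements exposes which case we are in. The actual attack works in a ring and tests this heuristically with lattice / Gaussian-elimination techniques; everything below, including checking against g directly (which a real attacker does not know), is an illustrative stand-in with simulated data.

```python
import math, random

random.seed(2)
g = 1009    # toy stand-in for the secret g

def collected_elements(in_ideal, n=8):
    """Simulated outputs of the annihilating polynomial for the two programs."""
    if in_ideal:   # program 1: every collected element is a multiple of g
        return [g * random.randrange(1, 2**40) for _ in range(n)]
    return [random.randrange(1, 2**50) for _ in range(n)]  # program 2: no shared factor

def distinguish(elems):
    d = 0
    for e in elems:
        d = math.gcd(d, e)     # gcd retains the factor g only in the ideal case
    return d % g == 0          # toy check: the real attacker cannot use g itself
```

With the first program, the gcd of the samples keeps the common factor g; with the second, the samples share no structure and the gcd is almost certainly 1, so the two obfuscations are distinguished.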
But the point is that all of these candidates, both the ones that we have broken and the ones that we haven't, are proven secure in this ideal multilinear map model, right? So even though you have some candidates that you can prove secure in this model that we don't know how to break, we know that this model is inadequate. In fact, we already knew this for M-maps in general, but we now know it for IO in particular, right? There are IO candidates that are secure in this ideal model but which are broken in the real world. So that's clearly a problem. So what is actually the problem with the model? The problem is that the model assumes that the zero test just outputs a bit, right? It kind of pretends that this leakage is not there, and so it artificially prohibits this kind of attack that uses the leakage. Okay, so our solution is to propose a new weak multilinear map model that actually allows the adversary to perform these post-zero-test computations. Okay, so what does the model do, roughly? Basically, the model keeps track of the formal polynomials over these underlying variables throughout all of the computation. So the standard model just keeps track of the ring element and basically forgets how it got there, right? But this new model keeps track of exactly how each of these encodings was computed. And the key new feature of this model is that when you do a zero test, you actually get back a pointer to that polynomial and you can do some further arithmetic on it. And in this model, we say that the adversary wins if it finds some post-zero-test polynomial Q whose output is the zero polynomial mod g. So Q is evaluated on all these post-zero-test polynomials, and the adversary wins if it gets zero mod g, which again corresponds to something living in the ideal generated by g, which, as I mentioned, all of the attacks that we know of have to go through, right?
So this model really captures a crucial part of each of the attacks that we know of. Okay, and then in subsequent work, there have been two new IO candidates that are proven secure in this weak multilinear map model. The first one is due to Garg, Mukherjee and Srinivasan, and the second one is due to the same authors as this paper. And again, these are provably secure in this model assuming a pseudorandom function in NC1. In ours, the assumption can be made slightly more general, but again, this is a standard crypto assumption. And so these are thus secure against all of the attacks that we know of against GGH 13. Okay, so I will conclude there, and thank you very much.