So thank you for the introduction. As I said, I will present yet another cryptanalysis of multilinear maps, this time for the GGH15 candidate. It's joint work with Jean-Sébastien Coron, Moon Sung Lee, and Mehdi Tibouchi. So let's start with multilinear map applications. One of the first applications multilinear maps were described for was non-interactive key exchange for more than three players. So you have four players here, and a public board with some public parameters. The idea is that each player generates a secret value, encodes this secret value into a public encoding, and broadcasts this public encoding onto the public board. Then each party can take the three other encodings and its own secret value and apply the key-generation procedure. So for example, party A here takes its secret value a and the encodings of b, c, and d, and applies this KeyGen procedure. And B takes his secret value b and the three public encodings of a, c, and d, and applies the same KeyGen operation. And what you want is that KeyGen in these four cases gives the same result, so that everyone extracts the same key K. If we look at an instantiation of such a key-generation algorithm for two parties, it's essentially Diffie-Hellman, and it goes back to 1976: the secret values are elements of Z_p, and the encodings are g to the secret value. Then in 2000, Joux explained how to use bilinear maps, how to use pairings, to achieve three parties. And in 2003, Boneh and Silverberg said: if we don't have a bilinear map but a multilinear map, then we can achieve n parties, for n as big as the multilinearity degree plus one.
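To make the two-party case concrete, here is a minimal sketch of this Diffie-Hellman instantiation. The prime and generator are illustrative toy values I picked, far too small for security:

```python
# Toy two-party Diffie-Hellman: secrets in Z_p, encodings g^secret mod p.
# The modulus and generator below are illustrative, not secure parameters.
p = 2**13 - 1   # 8191, a small Mersenne prime
g = 17

a, b = 1234, 5678          # each party's secret value
enc_a = pow(g, a, p)       # party A's public encoding
enc_b = pow(g, b, p)       # party B's public encoding

# KeyGen: combine your own secret with the other party's public encoding.
key_a = pow(enc_b, a, p)   # A computes (g^b)^a
key_b = pow(enc_a, b, p)   # B computes (g^a)^b
assert key_a == key_b      # both sides derive g^(a*b) mod p
```

With an n-linear map, the same pattern would extend to n+1 parties: each party pairs the n other public encodings with its own secret.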
But in that same paper, they also say that it might not be possible to use the same techniques we used to obtain pairings in order to obtain multilinear maps. So it remained an open problem, and it's still an open problem, to get the multilinear maps of Boneh and Silverberg. In 2013, approximate multilinear maps were introduced instead, and these approximate multilinear maps also allow one to derive a key-generation procedure for n parties. We essentially have three candidate schemes for approximate multilinear maps: the first one from Garg, Gentry, and Halevi at Eurocrypt 2013; then the CLT scheme at Crypto 2013; and the Gentry-Gorbunov-Halevi scheme at TCC 2015. In this talk, I will describe a cryptanalysis of this third candidate. So why do we care about multilinear maps? Nearly everyone here knows that they are really interesting. One of the first applications described is this non-interactive key exchange; it's a simple application, because it's a generalization of Diffie-Hellman. But of course, the key thing we could build from approximate multilinear maps is iO, indistinguishability obfuscation. And this has in itself a lot of really exciting applications and theoretical consequences, so it's really the big construction from approximate multilinear maps. We can also build a lot of exciting new primitives, like multi-input functional encryption, optimal broadcast encryption, witness encryption, ABE for circuits, and a lot more. So it's really interesting to understand how to construct such multilinear maps. Now, if we look at the candidates we currently have for approximate multilinear maps, they are unfortunately based on new hardness assumptions that are not standard. So you have a lot of assumptions.
Some of them I named myself, but you have this multilinear DDH, which is a generalization of DDH, in a public setting and in a private setting. You have decision linear problems, subgroup membership. You have these problems here that are interesting on their own for some theoretical work towards constructing iO. You have these straddling-set assumptions that are also used to build iO, subgroup elimination, or graph-induced assumptions. And what's happening right now in the community is trying to figure out how hard these assumptions really are. If you look, for example, at the GGH13 scheme, there are some assumptions we know are broken and have attacks against them. Some assumptions are kind of broken; sometimes it depends on how you instantiate them. For example, in this work we show that a graph-induced assumption on GGH13 is also broken, in a way. When you look at CLT13, it's not exactly the same picture: this one was orange before, now it's red; this one I put in green, but it's more that we don't really know whether it's secure or not, we're still trying to figure out what's happening. And these are only a subset of all the attacks and assumptions. For GGH15, it's kind of a different multilinear map: it's mainly this graph-induced assumption, and in this work we show that there are some graphs, for example the graph of the key exchange, that are not secure. OK, so our result is the following: there is a polynomial-time attack against the Diffie-Hellman key-agreement protocol when instantiated with the GGH15 multilinear maps. What we do is look at an ongoing protocol execution and generate an equivalent encoding for one of the users; using this encoding in the key exchange, we recover the secret key. OK, so just a few small comments on the result.
Our attack comes from the fact that the graph used in the graph-induced assumption for the key-exchange protocol is one there is a way to attack. It does not mean that all graph-induced assumptions on GGH15 are broken. In particular, you can build a candidate for indistinguishability obfuscation following a certain graph structure, and we don't know how to break that. Also, there is a paper on ePrint, though I didn't check the details, that says that some graphs give random encodings; so you have security for some of the graphs. Also, Halevi, in a note on ePrint, tried to take this graph-induced approach and port it to GGH13. In that case too, for the key-agreement protocol, we have another attack. It's not really an extension of this one, but it's similar in some of the ideas, and it also breaks the key-agreement protocol in that case. OK, so in the rest of this talk, I will describe multilinear maps, what the candidate is, what the protocol is, and how we can break it. So if we go back to asymmetric multilinear maps: you have an encoding of an element a in a group G_l; that's how it was described by Boneh and Silverberg. You can add and subtract elements in the same group; that's easy, it's just the addition. You can zero-test in any group: the zero test returns true, meaning the encoding encodes a zero, if it's the neutral element of the group; so the element that is encoded is zero. And you can multiply: in the asymmetric multilinear map described by Boneh and Silverberg, you can take d elements, put them all together, and get an encoding of the product in the target group. When you look at approximate multilinear maps, it's nearly the same. Instead of being in the group G_l, you're encoding with respect to a label l. You can still add at the same label. You can zero-test the same way, except you can only zero-test at a specific label.
And the multiplication is slightly different. Here it's graded, which means you can multiply two encodings corresponding to label i and label j if they are in a certain relation. And this relation is what differs between the candidates we have. For the GGH15 candidate that we'll look at, these relations are given by a graph structure. OK, so what is this candidate? First, we're working over a ring R: polynomials mod f, where f is an irreducible polynomial of degree n. We're also working in R_q, which is R divided by qR. And we have some public values: vectors over R that correspond to certain vertices. So what is an encoding? You're encoding something with respect to a label, and this label is a path, a path from a vertex u to a vertex v. An encoding for this label is a matrix D over R such that the vector A_u times this matrix equals the secret value times the vector A_v that corresponds to the vertex v, plus an error vector, mod q. So once again, the encoding here is a matrix: it sends the vector A_u to the vector A_v multiplied by the secret value, and it's a noisy encoding. Obviously, you can add and subtract: if you take two such matrices for the same label, then when you multiply on the left by A_u, you get a times A_v plus b times A_v, so it's (a + b) times A_v, plus a vector of noise. The question is what happens for multiplication, and multiplication is possible when the paths are compatible. That means if you have an encoding for the path u to v and an encoding for the path v to w, then you can get an encoding for the path u to w. If you write it down, it's just matrix multiplication: you multiply these two matrices, and when you multiply on the left by the vector A_u, you get a times b times A_w, plus an error term.
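To see where that error term comes from, here is a small check of the error-propagation identity. This is only the algebra: e1 and e2 are defined by the relations rather than sampled small (producing genuinely small encodings requires the lattice trapdoors of the real scheme), and all matrices below are made-up toy values:

```python
# Check: if A_u*D1 = s1*A_v + e1 and A_v*D2 = s2*A_w + e2, then
# A_u*(D1*D2) = (s1*s2)*A_w + (s1*e2 + e1*D2).
# Toy integer matrices; e1, e2 are *defined* by the relations here,
# whereas real GGH15 trapdoor-samples D1, D2 so the error stays small.

def vecmat(v, M):   # row vector times matrix
    return [sum(v[k] * M[k][j] for k in range(len(v))) for j in range(len(M[0]))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A_u, A_v, A_w = [3, 1], [2, 7], [5, 4]
D1 = [[1, 2], [3, 1]]
D2 = [[2, 1], [1, 3]]
s1, s2 = 2, 3

e1 = [x - s1 * y for x, y in zip(vecmat(A_u, D1), A_v)]  # e1 = A_u*D1 - s1*A_v
e2 = [x - s2 * y for x, y in zip(vecmat(A_v, D2), A_w)]  # e2 = A_v*D2 - s2*A_w

lhs = vecmat(A_u, matmul(D1, D2))
err = [s1 * a + b for a, b in zip(e2, vecmat(e1, D2))]   # s1*e2 + e1*D2
rhs = [s1 * s2 * a + b for a, b in zip(A_w, err)]
assert lhs == rhs
```

The combined error is s1·e2 + e1·D2, which is exactly why both the plaintexts and the encoding matrices need to be small for the product to remain decodable.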
And this reveals a lot. First, it means we can only encode small plaintexts: if you encode large plaintexts and then multiply, the error might become really big; if you keep multiplying, it will be really big, and then you cannot do anything. The other thing is that the encodings themselves must be small. They are actually nearly trapdoors for these vectors here; they are noisy trapdoors, in a way. So to sum up: we can do multiplication when the paths are compatible, and it's just matrix multiplication; and the plaintexts are small and the encodings are small. It doesn't matter that much for the rest of the attack, but I wanted to point it out. Also, how do we zero-test? There is actually an extraction procedure. The extraction procedure is: you have an encoding of a secret s for a path u to v, and if you have the vector A_u, you can multiply, and you get s times A_v plus e. And this e here is small. It means the most significant bits of this value will depend only on the secret exponent s. So if you have another encoding that encodes the same value but is randomized differently, it will still give the same most significant bits. That means you can extract something that depends only on s. In particular, zero-testing is really easy, because if s is equal to zero, then when you multiply these two things, you get something that is small. So you know that the encoded value is a zero: zero-testing means that the extracted value is zero. OK, so to sum up on this slide: in the graph-induced multilinear maps, you encode relative to a path, and it's a noisy encoding. You can add and subtract such encodings for the same path. You can zero-test: the zero test returns one if the encoding encodes zero and u is the source vertex. And you can multiply along paths.
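This extraction procedure can be sketched in a toy scalar model. This is an assumed simplification I made for illustration: integers mod q in place of ring elements and matrices, and encodings obtained by solving the defining relation directly rather than by the trapdoor sampling of the real scheme:

```python
import random

# Toy scalar model: an encoding d of s on edge u -> v satisfies
# a_u * d = s * a_v + e (mod q) for a small error e.
q = 2**32
a_u, a_v = 3, 123456789            # a_u odd, hence invertible mod 2^32

def encode(s, noise_bound=2**8):
    e = random.randrange(-noise_bound, noise_bound)
    # Solve for d directly (the real scheme uses a lattice trapdoor here).
    return ((s * a_v + e) * pow(a_u, -1, q)) % q

def extract(d, msb_bits=8):
    # Most significant bits of a_u * d mod q depend only on s (w.h.p.).
    return ((a_u * d) % q) >> (32 - msb_bits)

def is_zero(d, slack=2**16):
    v = (a_u * d) % q
    v = v - q if v > q // 2 else v  # centered representative in (-q/2, q/2]
    return abs(v) < slack

d1, d2 = encode(5), encode(5)      # same plaintext, fresh noise each time
assert extract(d1) == extract(d2)  # extraction ignores the noise
assert is_zero(encode(0))          # zero-test: an encoding of 0 is small
assert not is_zero(encode(5))
```

The point of the model is just the two properties from the slide: extraction depends only on the plaintext, and an encoding of zero reduces to something small mod q.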
So if you have an encoding for a path u to v1 and an encoding for a path v2 to w, you can multiply them as soon as v1 is equal to v2, and you get an encoding relative to the path u to w. OK, so what is the key-exchange protocol? I will describe the protocol for three parties, but the attack extends to more parties. When you use GGH15 in an application, you have to specify the graph you will be using, and this is the graph for the key exchange in GGH15. The first row corresponds to user 1, the second row to user 2, and the third row to user 3. So what is an encoding? The encodings go from one of these vertices to the next one along a row. An encoding D_ij, for example, is such that A_ij times this encoding is the small plaintext times A_i,j+1 plus a noise vector. So the encodings actually live on the edges: you multiply here and move here, multiply here and move here, and so on along the row. OK, so what is the key-exchange protocol? The users generate encodings of secret values s1, s2, and s3, and they put them in the graph in round-robin fashion. It doesn't really matter that it's round-robin, but that's how it's described. The point is that each user, say user 1, can take the first row, multiply the encodings, and get an encoding of s1 times s2 times s3; and the same for user 2 and user 3. So what they do is publish these encodings, but keep these encodings here secret. And these vectors here are not public: the public parameters contain these three vectors here, and they only reveal these values here. It means that just looking at that, you cannot recover s2 here, because you cannot infer it from here or from here; it's not the same matrices.
But user i himself knows the secret value on the missing edge, so he can compute the resulting encoding and extract the key as the most significant bits of the result. So that's the protocol. And how do the users generate these encodings if they don't have all the small vectors? What happens is that you have a lot of public encodings over these edges, and you compute a subset-sum. So here, user 1, for example, takes a subset S of {1, ..., N}, and generates an encoding of s1. It doesn't know s1, but this encoding it can generate on this path here, on this path here, and on this path here. And user 2 does the same, and user 3 does the same. Questions about the protocol? OK, so let's try to break it. The setting is the following: we have all the public parameters. The public parameters are just what I showed here: these matrices C_ij, and they verify that the vector times C_ij is equal to the secret value t_ij times the next vector, plus noise, et cetera. So you have all these public parameters. And you're actually looking at an ongoing key exchange: you have all these relations from the key exchange, but these ones you don't have, because they are the users' secrets. Otherwise, the relations involve s1, s2, and s3, and those are the secrets of the users. OK, so what is the goal? It's to recover the shared secret key. That means you want to recover K, which is an extraction, a KeyGen of s1, s2, s3, over one of the paths: either the first row, the second row, or the third row. Our attack is as follows. We extract a linear relation over s1 and the t_1j's. It's a variant of the Cheon et al. attack against the CLT13 scheme. It does not break the protocol right away, because the coefficients are large, but then we can compute an equivalent encoding with a small error, and we can use that in the protocol. So the idea, in a nutshell, is that we have public encodings of s1 here.
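Going back to how users generate their encodings: the subset-sum step can be sketched in the same toy scalar model (again an assumed simplification of mine: scalars mod q and encodings solved for directly, whereas the real scheme publishes trapdoor-sampled matrix encodings):

```python
# Toy scalar model of deriving one's own encoding by subset-sum.
q = 2**32
a_u, a_v = 7, 123456789           # 7 is odd, hence invertible mod 2^32
a_u_inv = pow(a_u, -1, q)

def encode(t, e):
    # c satisfies a_u * c = t * a_v + e (mod q); solved for directly here.
    return ((t * a_v + e) * a_u_inv) % q

# Public parameters: encodings c_j of small random t_j, with small noise e_j.
ts = [5, 12, 7, 9, 3]
es = [1, -2, 3, 0, -1]
cs = [encode(t, e) for t, e in zip(ts, es)]

S = [0, 2, 4]                     # the user's secret subset
s = sum(ts[j] for j in S)         # the user's secret value s1
c = sum(cs[j] for j in S) % q     # additive homomorphism: an encoding of s

# Check: a_u * c = s * a_v + (summed noise), mod q.
e_sum = sum(es[j] for j in S)
assert (a_u * c) % q == (s * a_v + e_sum) % q
```

So the user never needs the trapdoors: summing public encodings of the t_j over its secret subset gives a valid encoding of its secret, with the noises adding up.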
So we just forget about the first row, and we try to fix s2 and s3 to the same values, compute along this line here and this line here, and take the difference. It will give an encoding of 0. And we do that for a lot of different values of s2 and s3, and this allows us to construct a matrix, and then we can do something with that matrix. So if we look at the public parameters, these are the public parameters of rows 2 and 3. What I do first is fix t3: I forget about the index j for t3. Then I merge the first two equations. So it means I can get two encodings, this time from the first vertex to a second vertex here: an encoding of t1 times t3, or of t2 times t3, and then an encoding of t2, and of t1. What I do then is fix all of these, and that's the thing I will be varying afterwards. So then I get two encodings here. And the point is, when I look at the difference of these encodings, as I told you, we have two rows that compute the same thing. So I look at the difference, and the difference gives something small; that's what the protocol guarantees. When you compute it exactly and write everything down, you get that this difference is t1 times t3 times an error vector, plus an error vector times an encoding, minus t2 times t3 times an error vector, minus an error vector times an encoding. But all these values are small, which means we get an equation that holds over the ring, and not just mod q. So now, if we look only at the first coefficient of this encoding, we can write this equation as an inner product of two vectors. And I can do that for a lot of different public elements, so I can extend this into a product of two matrices, W equals A times B. And if you do that, but instead of building square matrices you build them with one extra row,
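The linear-algebra core of this step, a matrix product with one more row than columns, so that a kernel vector of the product survives an invertible right factor, can be checked on a toy example (dimensions and entries below are made up; the real W comes from the encoding differences just described):

```python
# If W = A*B with A having one more row than columns and B invertible,
# a left-kernel vector of W is also a left-kernel vector of A:
# w^T W = 0  and  W = A B  =>  w^T A B = 0  =>  w^T A = 0.
from fractions import Fraction

def left_kernel_vector(W):
    """Return a nonzero w with w^T W = 0, for W with more rows than columns."""
    rows, cols = len(W), len(W[0])
    # Augment with the identity to record the row operations performed.
    M = [[Fraction(x) for x in W[i]] + [Fraction(int(i == j)) for j in range(rows)]
         for i in range(rows)]
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    # A row whose W-part is now all zero records a left-kernel combination.
    for i in range(rows):
        if all(M[i][c] == 0 for c in range(cols)):
            return M[i][cols:]
    return None

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5], [3, 5, 8]]  # 4 x 3: one extra row
B = [[2, 0, 1], [1, 1, 0], [0, 1, 1]]             # 3 x 3, invertible (det = 3)
W = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
     for i in range(4)]

w = left_kernel_vector(W)
assert w is not None and any(x != 0 for x in w)
# B invertible forces w^T A = 0: a linear relation among the rows of A.
assert all(sum(w[i] * A[i][j] for i in range(4)) == 0 for j in range(3))
```

In the attack, the rows of A carry the t_1j's, so this kernel vector is exactly a linear relation over those values.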
It means you can find a nonzero vector in the kernel of W. And since B is invertible with high probability, you can remove it, so you have a vector in the kernel of A. That means you have a linear combination of the t_1j's that gives 0. And the interesting thing is that we only used public elements. So if we use one secret element instead of one public element, we get a relation between s1 and the t_1j's. Once we have this relation, the problem is that the encoding we can create from it has a really big noise vector, because the coefficients are large. So that's a problem: we cannot use it directly in the protocol. But what we can do is, instead of using rows 2 and 3 as we did, use rows 1 and 3 and run exactly the same thing. Then we can correct this error vector, get something with the same error vector, so we can subtract, and we get an element that verifies what we want. Then we can use it in the protocol and recover the shared secret. So to conclude, we described an attack against the key agreement of GGH15, even with the safeguards that were in the paper, and it extends, in a way, to the variant of Halevi. With this attack, we still don't know how to build a non-interactive multiparty key exchange that doesn't go through iO. But it gives some insight on the graph-induced assumptions. These are some open problems. And thank you for your attention.