Shai, I've known Shai for a very long time. He was my office mate at MIT during our doctorate. He graduated in 1997, and soon after graduating, went to work at IBM Research, where he's still a research staff member. Shai's work has been impressive in many, many areas of cryptography, too many to mention in this introduction. But one thing that I would like to point out in this introduction is that Shai was one of the first people to sort of foresee the application of hard problems in lattices to cryptography, and one of the first people to propose cryptographic schemes based on lattices. And 15, 20 years later, we're seeing where that took us in terms of all the applications and all the beautiful new crypto schemes and math that came out of some of the ideas that were proposed early on. And now Shai is sort of doing it again with multilinear maps. He's been at the forefront of the research area that has basically revolutionized crypto in the last two years. And even if we're still trying to get it right when it comes to multilinear maps, and I think Shai's talk is going to be about that, I think it's very important to remember that there is something new and something potentially transformative in these works on multilinear maps that we've seen presented in the last couple of years. And so with that in mind, and with no further ado, I'll leave it up to Shai to give us a survey of the state of the art in multilinear maps. Thank you, Rosario, and thank you to the Crypto program committee for inviting me. I'm very happy to give a talk here on the state of cryptographic multilinear maps. It's based on works by all of these people that are listed here on the slides, and on conversations with many other people over the last couple of months. Let me start with some perspective on the tools and constructions that we have, at least for public-key encryption. This slide was adapted from a presentation by Amit.
So in the beginning, God created the heaven and the earth and gave us discrete log and factoring. And the land was plentiful, and we had these nice flowers growing. We were able to do public-key encryption and signatures, and even more than that, zero knowledge and multi-party computation and things like that. And later, well, we got some more seeds of crypto in the pairing revolution of the early 2000s. And many more beautiful flowers came with it that we didn't know how to do before, like identity-based encryption and short signatures and efficient non-interactive zero knowledge and attribute-based encryption for simple functions and many things like that. And later, well, before and concurrently, we found that lattices are a good source of crypto. And we were able to grow even more attribute-based encryption and predicate encryption and this beautiful flower of fully homomorphic encryption. And then a couple of years ago, we found these new seeds of multilinear map candidates, and out of those grew all kinds of strange and wonderful things like cryptographically strong code obfuscation, witness encryption, functional encryption, all kinds of weird and wonderful things. And this talk is about the roots. In this talk I'm going to talk about the multilinear map candidates, trying to understand what they are, what they give us, and what the hardness there is. So what are cryptographic multilinear maps? First and foremost, a cryptographic multilinear map is a tool. It's something that we use in order to build other stuff from. And the main thing that they do for us is that they let us compute on hidden data. In some way, we hide the data that we're interested in by encoding it in some way, and we can still operate on it even in this encoded state. Computing on hidden data is by no means a new thing. It's a very common theme in cryptography. And depending on how you encode, it lets you do different operations. So an early example is discrete log.
You have a value a that you care about. You encode it by putting it in the exponent, g to the a, where g is a generator in some algebraic group. And the discrete logarithm problem essentially says that if you encode a random a like this, then recovering it in clear form is a hard problem. Nonetheless, you can still compute in this form. In particular, if you have a linear function that you want to evaluate, you have the g to the a_i's, and you have some coefficients u_i, and you want to compute the inner product in the exponent, well, you can do that just by exponentiating and multiplying in this group. And another thing that you can do is check if an encoded value is equal to zero, because if a_i is zero, then g to the a_i is one, and that you can check. On the other hand, we need hardness in crypto, and here is the hardness for discrete log. Anything that's not linear seems to be hard. In particular, quadratics seem to be hard. So if you have g to the a_1 and g to the a_2, getting g to the a_1 times a_2, well, that's the Diffie-Hellman problem. It seems that it's hard to compute. Even testing it, if somebody gives you an alleged solution, testing that it is a real solution, is still hard. Now that you have this thing where you can compute some things but not others, you can use it for crypto applications. Diffie-Hellman key exchange is perhaps the most obvious example here. You have secrets a and b, messages g to the a, g to the b, and the shared key g to the ab. Now the legitimate parties know a or b in the clear, so they need to compute a linear function, and the attacker knows g to the a and g to the b, so the attacker needs to compute a quadratic. This is why it's easy for the legitimate parties to compute the secret key, but hard for the adversary. Many other applications: CCA-secure encryption, commitments, zero-knowledge proofs. Many, many others.
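The linear-in-the-exponent operations and the Diffie-Hellman example just described can be sketched in a few lines of Python. This is a toy over Z_p^*: the modulus, base element, and parameter sizes are illustrative assumptions, not a secure instantiation.

```python
import random

p = 2**127 - 1   # a Mersenne prime; Z_p^* stands in for a cryptographic group
gen = 3          # base element playing the role of a generator g

def enc(a):
    """Encode 'in the exponent': a -> g^a mod p."""
    return pow(gen, a, p)

def linear_in_exponent(encodings, coeffs):
    """Given the g^{a_i} and public coefficients u_i, compute g^{<a,u>}
    without ever seeing the a_i in the clear."""
    out = 1
    for ga, u in zip(encodings, coeffs):
        out = out * pow(ga, u, p) % p
    return out

def encodes_zero(e):
    """g^a equals 1 exactly when a == 0 (mod the order of g)."""
    return e == 1

# Diffie-Hellman: each legitimate party only computes a *linear* function
# (an exponentiation with its own known secret), while the attacker would
# need to compute the quadratic g^{ab} from g^a and g^b alone.
a, b = random.randrange(2, p - 1), random.randrange(2, p - 1)
shared_a = pow(enc(b), a, p)   # Alice computes (g^b)^a
shared_b = pow(enc(a), b, p)   # Bob computes (g^a)^b
assert shared_a == shared_b    # both obtained g^{ab}
```

The point of the sketch is only the division of labor: `linear_in_exponent` and `encodes_zero` are the operations that are easy in this encoding, while anything quadratic in hidden values is the presumed-hard territory.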
There is an old survey by Dan Boneh from '98 that begins with the words, "The DDH assumption is a gold mine." This survey is all about what you can do when you can compute linear functions, but not quadratics. Beyond DDH, in the early 2000s, we learned about bilinear maps and their applications to cryptography, and the thing that makes them useful is that you can compute more. You can compute quadratic functions in the exponent in bilinear groups, but computing or checking cubics is hard, and once you have that, you can do a lot more. Quadratics are a lot more expressive than linear functions, therefore the legitimate parties can do a whole lot more. So you design your protocol or your application so that the good guys need to compute quadratics and the adversary needs to compute cubics or more, and you get identity-based encryption, you get predicate encryption, you get efficient non-interactive zero-knowledge proofs. I listed some of those before. So why stop at two? Can we find groups that would let us compute cubics in the exponent, but not fourth powers, or in general, for every k, which would be a design parameter, let us compute degree k but not k plus one? And this is what multilinear maps are, at least in their pure form. This, if we have it, is more useful than bilinear for exactly the same reason: the legitimate parties can do more. So you design your system so that the legitimate parties only need to compute degree k, and the adversary needs degree k plus one or more. Soon after the pairing revolution, Boneh and Silverberg explored that option, and they sketched or showed some applications of multilinear maps if we had them, and also made the argument that the exact same way you build bilinear maps is probably not going to give you anything of degree more than two. The next thing I want to show you is, when you talk about computing on encrypted data, the thing that comes to mind today, six years after 2009, is homomorphic encryption.
We already know how to compute on hidden data, right? We encrypt it using homomorphic encryption, and then you can compute. And indeed, multilinear maps are similar in some ways to somewhat homomorphic encryption. They both have this way of hiding the data that you care about, either encoding or encryption. They both allow you to compute some low-degree polynomials on the data once it's in this hidden form. But the really big difference between them is that in homomorphic encryption, you can compute whatever you want on this. If it's a fully homomorphic encryption, then really whatever you want; otherwise, just low-degree polynomials. But when you're done computing, the thing that you have in your hand is a ciphertext, and you can't make sense of it. Well, unless you know the secret key. But if you know the secret key, then you can also read the inputs and all the intermediate values of the computation. With multilinear maps, we want something else. We want to be able, once you've done the computation, to test if the result is zero, in the clear, without knowing any secret key. But you do not want to be able to compute intermediate values or figure out what the inputs were. So the main ingredient that separates multilinear maps is this testing for zero. To be useful, you need to be able to test if two degree-k expressions are equal, and this is the same thing as testing if a degree-k expression is equal to zero, because you can subtract. So the approach that all current constructions of multilinear maps take is: start from something that looks like a somewhat homomorphic encryption scheme, and then publish some handicapped version of the secret key that is called the zero-test parameter. That's sort of a defective secret key. It lets you check that a degree-k expression is zero, but it does not let you decrypt. Of course, the distinction is only meaningful if you work with large plaintext spaces.
If you encrypt bits, then testing for zero and decrypting are the same thing. But if your a's are taken from a large space, like they are in the discrete log case, then it's meaningful. By now, we have a few different constructions and many, many different variations of those constructions. The first one was due to Garg, Gentry, and myself, from ideal lattices. Soon after, there was another one by Coron, Lepoint, and Tibouchi from composite integers, and somewhat later one by Gentry, Gorbunov, and myself from standard lattices and trapdoors. And then we have many variations. There is a strengthening of the CLT-13 construction that is called CLT-15, by the same authors, in response to attacks. There is a variation of GGH-13 by Gentry, myself, and Lepoint that I'm going to describe briefly in this talk. It's not out yet; hopefully it will be within a few days, or maybe two weeks. But I'm going to describe at least the high-level idea here. You can also take the GGH-13 and 15 and mix and match all kinds of elements that you can combine between the two of them. So I'm going to describe the GGH construction today, mostly because it's the easiest one to describe, and I'm going to talk about the attacks and the modifications to resist those attacks. One thing that I want to do in this talk is to present it in a somewhat approachable way, so that people who want to do cryptanalysis would know where to look. And one of the things that hinders that is the syntax. The syntax is more complex than we want, and different schemes actually have somewhat different interfaces. So I'm going to describe a somewhat high-level syntax that sort of applies to all the constructions that we have, and I'm going to try to frame the problem of security with respect to that syntax. And I hope that that would simplify things, so that it's easier to say what's safe, what's not safe, et cetera. In all the constructions we have, there are three parts.
There is initialization. This is derived from an encryption scheme, so the initialization can naturally be thought of as generating a public and a secret key. Then there is encoding. This is where these constructions differ from things like discrete log: you need the secret key in order to encode elements. And then there are the operations. Once you have the encoded elements, you can use the public key to add, multiply, and test for zero, under some restrictions. I'm going to spend one or two slides talking about what these restrictions are. Not every two encodings can be added or multiplied. Different schemes have different ways of restricting which operations you can perform. The general way that I want to describe it is: each encoding has a tag, and only encodings relative to the same tag can be added. Just like in bilinear maps, where only things encoded in the same source group can be added in the exponent, here too only things whose tags match can be added. And only things with compatible tags can be multiplied, and the tag of the result is somehow related to the tags of the arguments. And you have one designated tag where you can zero test. So as you compute, not only the values but also the tags evolve, and there's one tag where you can test for zero. Here's an example, and this is the example that I'm going to use. Our tags are levels, and we think of levels as subsets of some universe. So we have a design parameter, just like the k that I was talking about before; here it's called kappa. The universe is a set, without loss of generality the integers one through kappa. Each encoding is relative to a level, and a level is a subset of this universe. And you can add encodings relative to the same level, and multiply encodings relative to levels that are disjoint; when you do that, the level of the result is the union. Levels could instead be just numbers, and then when you multiply, the levels add; or levels could be paths in some given directed acyclic graph.
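The subset-of-a-universe tag rules just described (add at equal levels, multiply at disjoint levels, zero test only at the full universe) amount to simple bookkeeping. Here is a minimal Python sketch of that bookkeeping alone; the `payload` field just stands in for whatever hidden encoded value the real scheme carries, and all names are illustrative.

```python
from dataclasses import dataclass

KAPPA = 3
UNIVERSE = frozenset(range(1, KAPPA + 1))   # tags are subsets of {1..kappa}

@dataclass
class Encoding:
    payload: int         # stands in for the hidden encoded value
    level: frozenset     # the tag of this encoding

def add(a, b):
    # addition is only allowed between encodings at the same level
    if a.level != b.level:
        raise ValueError("levels must match for addition")
    return Encoding(a.payload + b.payload, a.level)

def mul(a, b):
    # multiplication only between disjoint levels; result lives at the union
    if not a.level.isdisjoint(b.level):
        raise ValueError("levels must be disjoint for multiplication")
    return Encoding(a.payload * b.payload, a.level | b.level)

def can_zero_test(a):
    # zero testing only at the one designated tag: the full universe
    return a.level == UNIVERSE
```

Note how the tags evolve alongside the values: multiplying three encodings at levels {1}, {2}, {3} lands you at the top level {1,2,3}, where zero testing becomes available.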
Paths in a graph is the GGH 15 way of doing tags. And you can mix and match those as well. So there are many ways to design what the levels are. In this talk, we're going to stick with levels being subsets of the universe. So that's the syntax; this is the functionality. Now let me say something about security, and I will go back to that many times throughout my talk. I described what you should be able to compute. What are the things that you should not be able to compute? What is the source of cryptographic hardness here? Well, intuitively, everything else. It should clearly be hard to recover a non-zero plaintext from its encoding. Identifying zeros at tags that are not this designated tag, or cannot be brought there by multiplying, should be hard, and all kinds of other things. So the intuition is, we really would like to have a construction where we don't know how to attack any of these other things, anything that doesn't follow. I'll talk about to what extent we do or do not have that later on. Let me describe the construction, the GGH 13 construction. Syntax first: as I said, there's an initialization. You have your security parameter, you have the levels that you want, and it outputs a public key and secret key pair. It also specifies the plaintext space. In fact, the plaintext space is only specified within the secret key, not even in the public key, and that's consistent with the fact that you need the secret key to encode. For encoding, you have some x in your plaintext space and you have some target level at which to encode it; you use the secret key to encode x relative to L. And then you can add encodings with respect to the same level, multiply encodings with respect to disjoint levels, and zero test relative to the full universe. So you can bring things up until you have the entire universe, and then you can test for zero. So that's the syntax, and this is the construction. The construction works in a cyclotomic ring.
For the most part, if you don't care for cyclotomic rings, you can think about it as just the ring of integers. I will say explicitly when things don't behave as if they were the integers. You need a high-dimension ring to protect against lattice attacks; never mind that. And there are some algebraic consequences here. But there is a ring where we work. So we're looking at the ring itself, and we're looking at the ring mod q, where q is a large integer; think about it as sub-exponential in your security parameter. There are secret random elements z_i in the ring mod q, and then an additional small element in the ring, call it g. And the plaintext space is the quotient ring R over gR. So it is some quotient ring. You can choose this element g in such a way that that quotient ring is isomorphic to the field with p elements, and actually throughout the talk I will assume that you do that. It's not strictly necessary, but it is convenient. And p is not made public. What is an encoding? It's going to be clear how to encode once I describe what an encoding is. So you have some element alpha in your plaintext space and you want to encode it. The format of the encoding is: you have some numerator and some denominator mod q. The denominator is just the product of all the z_i's in the target level. The numerator is a small element in the coset of alpha. Again, the plaintext space is a quotient ring, which means that plaintext elements are cosets, and you just choose one element in that coset which is small. If you know g, and if g is small, you can do that. And once you have this form of a numerator in the right coset divided by some denominator that depends only on the level, then you can add and multiply. When you add two things with the same denominator, the numerators add. When you multiply two things, both the numerators and the denominators multiply.
And as long as there is no mod q reduction in the numerator, the mod q and the mod g things do not mix, right? As long as everything is over the base ring without any mod q reduction in the numerator, the numerator remains what it should be, both mod q and mod g. And the thing you care about is mod g, but the operations that you do are mod q. So that's basically the entire thing of how to encode elements: you just encode them by dividing them by random stuff. That's essentially all there is, plus choosing a particular small representative. How do you zero test? You need to be able to test whether an encoding at the level corresponding to the entire universe is an encoding of zero. Now, an encoding of zero has a numerator which is in the coset of zero, which means it's of the form g times something, and it has a denominator relative to the entire set. So all you need is a way to cancel out these two things, and this is exactly the zero-test parameter. The zero-test parameter has g in the denominator and the product of all the z_i's in the numerator, all of it multiplied by this randomizer h, which is smallish. And then when you multiply, if this thing was indeed an encoding of zero, then the g's cancel out, the z_i's cancel out, and all you're left with is this randomizer h and whatever thing multiplied the g in the original encoding. And since all of these were small by design, the result is a small element in R_q. So the way you zero test is: you multiply and test if the result is small. And you can prove that if this thing was a valid encoding of a non-zero element, then the result is not going to be small. So that's, in two or three slides, the entire GGH13 construction. Let me specify a few properties that I'm going to use. The encoding is related to this numerator, and many times I will not want to specify the denominator and everything else; I just want to highlight the important thing: the encoding u is related to the numerator e.
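The numerator-over-denominator arithmetic and the smallness-based zero test just described can be made concrete with a toy over the plain integers: one level (kappa equals 1, a single denominator z), plaintext space Z mod g, and made-up parameter sizes. A one-dimensional instantiation like this has none of the intended security; it only illustrates the mechanics.

```python
import random
random.seed(0)

q = 2**89 - 1                 # large prime modulus (toy size)
g = 1009                      # small secret element; plaintext space is Z mod g
z = random.randrange(2, q)    # secret denominator (kappa = 1, so just one z)
h = 12345                     # smallish randomizer baked into the zero-test param

def encode(a, noise=1 << 20):
    # numerator: a small representative of the coset a + g*Z, divided by z mod q
    r = random.randrange(noise)
    return (a + g * r) * pow(z, -1, q) % q

# the "handicapped secret key": cancels z and g, but does not decrypt
pzt = h * z * pow(g, -1, q) % q

def is_zero(u, slack=1 << 40):
    w = u * pzt % q
    w = min(w, q - w)         # centered representative of w mod q
    return w < q // slack     # small  <=>  numerator was divisible by g

# adding encodings adds the (still small) numerators, so this is homomorphic:
u = (encode(3) + encode(g - 3)) % q   # encodes 3 + (g - 3) = g = 0 mod g
```

For a zero, the product with `pzt` collapses to `h` times the small multiplier of `g`, which is far below the threshold; for a non-zero, the leftover `g` inverse makes the result look like a random element mod q.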
Essentially, if you find the numerator, you've broken the encoding. Finding the numerator is essentially equivalent to breaking the scheme. And an encoding of zero has a numerator which is g times something. Adding and multiplying encodings works on the numerators over the base ring, without mod q, because if you ever get a mod q reduction in the numerator, that is an error condition, so you always set your parameters so it doesn't happen. Zero testing: if you have an element at the top level whose numerator is r times g, and you apply the zero test to it, what you get is the element r times h. You get that element in the ring R; there is no mod q reduction in that step. So you essentially took your numerator, moved it out, multiplied it by this one randomizer, and gave that to the adversary. And given that I just said that if you find the numerator then you're dead, these things seem a little dangerous, and indeed the attacks on the GGH13 cryptosystem take advantage of exactly that. So let me describe the attacks and the defenses against them. The first attack, which was described in the original paper, was as follows. A priori, you only get encodings; you don't even know what plaintext space you're working over. But let's say that your encodings have this form: you have one encoding of zero, and then you have many encodings of other stuff. So you can try to multiply one of these u_j's, these many encodings, by the encoding of zero. Now you know that you have an encoding of zero, and let's say that the levels are matching so that you get something at the top level, so you can apply the zero test to it. So you can compute many zero-tested values this way, and they all have the form y_j = h times r_0 times e_j, where h is a system parameter that's random and small, r_0 comes from the first encoding of zero, and e_j is the numerator in the other encoding that you used.
So you actually get: you started by looking at this encoding u_j of e_j, you multiplied it by an encoding of zero, you multiplied it by the zero test, and you almost got e_j out, up to some common factor that I call here h prime, which is just the product of h and r_0, and it does not depend on j. So we almost got all of our numerators out, each just multiplied by some fixed element in the ring. So the intuition is: well, now that we have that, let's just do GCD, find this common factor, factor it out, and then we recover the entire e_j that we want. And that doesn't quite work, and the reason it doesn't quite work is that the ring of integers in a number field doesn't really behave like the integers in every aspect. When you do GCD, you don't get the actual element; you get the ideal spanned by that element, some representation of that ideal. And if e_j and g are co-prime, then giving me the ideal spanned by e_j doesn't tell me anything about the coset of e_j mod g. Just like if I give you the ideal spanned by seven and I ask you to find a number of the form seven plus some random number times five, the fact that I told you the ideal spanned by seven didn't give you any information. It's just the same phenomenon here. But what happens if some of these e_j's were themselves encodings of zero? In that case e_j is also a zero; it's also in the ideal spanned by g. And now I have many representations of ideals of the form g times something times the ring R, and the common factor there is g. Now you can do another set of GCDs and find that ideal. A priori, I didn't tell you what the plaintext space is; now you know what the plaintext space is. So the first phase of the attack is to recover the plaintext space, the ideal spanned by g. And that you can do if you have sufficiently many encodings of zero that you can multiply by each other and zero test.
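The two GCD phases just described can be illustrated over the plain integers, where, unlike in a number ring, the GCD actually hands you an element and not just an ideal. All values here are made-up toy numbers: `h_prime` plays the role of the unknown common factor h times r_0, and the first two numerators are the "encodings of zero" divisible by g.

```python
from math import gcd
from functools import reduce

g = 1009                      # the secret defining the plaintext space
h_prime = 7 * 11              # unknown common factor h * r_0 from zero-testing

# numerators e_j of the encodings; the first two are encodings of zero
es = [g * 4, g * 9, 123, 457, 787]
ys = [h_prime * e for e in es]     # what the attacker sees: y_j = h' * e_j

# Phase 1: the e_j are coprime as a set, so the gcd of all y_j is exactly h'
h_rec = reduce(gcd, ys)
es_rec = [y // h_rec for y in ys]  # the numerators, recovered in full

# Phase 2: the gcd of the zero-encodings' numerators reveals g itself
g_rec = gcd(es_rec[0], es_rec[1])
```

Over the integers this recovers g exactly; in the actual GGH13 ring the same computation only yields the ideal spanned by g, which, as discussed above, is already enough to do damage.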
And now that you know what the plaintext space is, you can go back to your e_j primes, which were a system secret, h prime, times the e_j's. And now you can try to break many assumptions. For example, if you wanted to know something like DDH, that sort of translates to knowing whether the e_j's had one particular ratio, and that you can do even though these e_j primes are multiplied by some constant, because now you know how to reduce things mod g. So the moral of this attack was that encodings of things that look like zero times zero are harmful. And the reason they're harmful, and this I want to stress, at least relative to the GGH 13 cryptosystem, is that it's really bad if you give out the ideal gR. So actually telling you what the plaintext space is is bad for this particular encoding scheme. Okay, I have time to do that. This is probably the most technical part of this talk, and it lasts something like five or six slides, so I'll alert you when the technical part is over if you don't care to follow. Here it is. You can attempt to fix this in some ways, and this is one way that I think is instructive. Instead of giving you an encoding of an element, I'm just going to give you a matrix, and the implicitly encoded value there would be an eigenvalue of this matrix. So all these matrices would have the same eigenvector, and the thing that I really want to encode would be encoded as an eigenvalue of that matrix. So you do the same thing. You choose the same g, the same denominators z_i as before. You also choose a random matrix P; this is sort of the thing that translates between the representations, the eigenvalues and the eigenvectors. And you choose two vectors. The vector s is the eigenvector, and the vector t plays the role of the randomizer h from before.
And now when you want to encode an element alpha, you choose a small matrix E that has s as its eigenvector and alpha as the corresponding eigenvalue, and then you just multiply by P and P inverse to encode it, and divide by all the z_i's. An encoding of zero, in particular, would be: you choose a matrix E that has zero as the eigenvalue, which means that s times E is g times some vector, so it's zero mod g. All right, the zero-test parameter you modify accordingly. You need to cancel out all the z_i's and the g and the matrix P, so you lump all of these things into vectors s prime and t prime, and you give them out. And when you do the zero test, everything falls out, and the only thing that's left is s times E, the thing that you really wanted to encode, times the system secret t. So what you get is: s times E is g times r, and when the g falls out, all you have is an inner product between the vector r, which is small, and t, which is a small secret. And the intuition was, well, the problem was multiplying two zeros, and now we're not giving out any native encoding of zero, so maybe that helps. And unfortunately it doesn't. The attack, however, is a little more complicated. I want to show it here just as an example of the things that you can do. This follows the line of attacks of Cheon et al., and in particular it's described in this work by Coron et al. that will be presented tomorrow morning. Well, not that part of it, but it is in that paper, and it's on ePrint. So before, we needed two sets of things to multiply; in order to break this variant, we need three sets of things to multiply. We have the u_i's, the v_j's, and the w_k's: the u_i's encode zeros, the v_j's are the things that we encode, and the w_k's are things that we just needed to get some redundancy going.
And again, when you multiply u_i times v_j times w_k, you always get a top-level encoding of zero, and in particular the element that you get looks like s times A_i over g times B_j times C_k times t. The matrices A, B, and C are the matrices that were in the numerators before. So you can use this trilinear form in order to break it, and the way you do that is you fix the middle thing. The middle thing is essentially the target of your attack. You fix the middle thing, you vary the left and the right, and you get many y_ijk's, and then you put them in a matrix. And you notice that this matrix you can write as a matrix A, consisting of rows that are the vectors s times A_i, times the fixed matrix B_j, times a matrix C. And now you have a matrix equation that you know holds over the ring; there's no mod q reduction here. There's the matrix Y that you have in your hand as an attacker, and the things that you're trying to attack are the B_j's. And now that you have these matrices, you can start doing the things that you do with matrices: look at eigenvalues, look at eigenvectors, compute determinants, things like that. When you compute determinants, for example, all of them would have the common factor of determinant of A times determinant of C, and you want to take it out, so you do GCD and take it out, and you get the determinants of all the B_j's. Again, you don't get them exactly; you only get the ideals that are spanned by them, but okay. And now if some of the B_j's were also encodings of zero, then the determinant of B_j is divisible by g, because zero was an eigenvalue mod g. So now you compute the determinants of the B_j's, and you get many things that are divisible by g, so you again compute GCDs and get the ideal gR.
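The key step, that a zero eigenvalue mod g forces the determinant to be divisible by g, can be checked on a tiny made-up 2-by-2 integer example. The entries below are chosen by hand so that the left eigenvector s = (1, 1) satisfies s times B = 0 mod g.

```python
# Toy check: if s * B == 0 (mod g) for some eigenvector s, then det(B) is
# divisible by g. Here s = (1, 1) and the entries are hand-picked toys.
g = 101
B = [[3 * g - 5, 7],
     [5,         2 * g - 7]]

# s * B = (B[0][0] + B[1][0], B[0][1] + B[1][1]) = (3g, 2g) == 0 (mod g)
s_times_B = [B[0][0] + B[1][0], B[0][1] + B[1][1]]

# det(B) = (3g - 5)(2g - 7) - 35 = 6g^2 - 31g, a multiple of g
det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]
```

This is exactly what the attack exploits: collect determinants of several such B_j's, and their GCDs expose (the ideal spanned by) g.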
So even though we multiplied by all kinds of random matrices on the left and right that we didn't know, we still can get elements out of it that are divisible by g, and then we get the ideal gR. And once we know the ideal gR, again, we can use it to do stuff like, for example, computing the eigenvalues of B_1 times B_2 inverse. So these are the types of attacks that you can do, and they rely very heavily on the fact that everything is nice and linear in this zero test. You just multiply by one linear thing, and then if you had an encoding of something that is a product of many different things, you actually get that product in the clear, shifted by some value, and then you do the GCD tricks to get rid of that. That's essentially the attacks. So the moral so far, at least for the GGH 13 encoding scheme, is that the ideal that defines the plaintext space is something you really don't want to give out. So when you argue about security of this encoding scheme, recovering the ideal gR should be thought of as breaking the scheme. And what can you do if you want to have security? Well, we already said that all the ways we know how to recover the ideal require that you are given enough encodings of zero, so maybe we just don't give them out. And in particular, in some of the constructions, like the obfuscation constructions that we have from multilinear maps, we don't need to. So that could be a good way of using GGH 13 and claiming that you plausibly still have security. But for many other schemes, and in particular for hardness assumptions, it's very, very useful to give out encodings of zero. It helps you do reductions; it helps you do all kinds of things like that. So you really want to protect against this attack. And again, the attack relies very heavily on the linearity of zero testing. So can you make the zero test a little less linear? And the answer is, well, maybe. Here's one way to make it non-linear: just make it non-linear. Multiply it by itself, and you get something quadratic.
Here is sort of the simplest, well, I don't know if the simplest, but it is a simple way of thinking about it. So here it is. We're working now over a ring of large dimension n, so you can express it as a vector space over Z_q. And every coefficient of this result y that you compute when you do the zero test is some linear expression over Z_q. And you have n such linear functions: the first coefficient, the second coefficient, the third coefficient, et cetera. Let's consider a quadratic function, which is a sum over i and j of alpha_ij times l_i times l_j. That's a quadratic; that's easy to see, since you multiply two linear functions. The alphas are random, you choose them during system setup, and you choose them as small scalars. And you now set your parameters so that this thing is much smaller than q if c encodes zero, and otherwise it isn't. So you need q to be slightly larger than it was before, because before, just one thing had to be much smaller than q; now you multiply two things, and the product still has to be smaller than q. So you set your parameters accordingly. So this is the zero test: the thing that you publish is the quadratic form. And actually, because it's the same c that you multiply on both the left and the right, you don't need n squared coefficients to give out; n choose two coefficients suffice, because c_i times c_j and c_j times c_i are the same thing. So you give this matrix out, and this is going to be your zero test. You compute, you compute, you get an encoding at the end. You think of this encoding as a vector; this is the vector of coefficients in the representation of that thing. And you compute this bilinear form on that vector. It's a way to make the zero test nonlinear. What can we say about it? In truth, not much. We have been looking at it for, I think, two months by now, maybe three. We have some partial directions that we thought would be useful for attacks; none of them actually worked.
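The quadratic zero test just described can be sketched in a few lines. In this toy, `ys` plays the role of the coefficient vector of the old (linear) zero-test output: all its entries are small exactly when the encoding was a zero, and the published quadratic form with small random alpha coefficients preserves that smallness gap. The modulus, dimension, and bounds are made-up illustrative values, and for simplicity a full n-by-n coefficient matrix is used (in the scheme itself, n choose two coefficients suffice by symmetry).

```python
import random
random.seed(1)

q = 2**89 - 1    # modulus (toy size)
n = 8            # dimension of the coefficient vector

# random small scalars alpha_ij, chosen once at system setup
alpha = [[random.randrange(1, 1 << 8) for _ in range(n)] for _ in range(n)]

def quadratic_zero_test(ys, slack=1 << 30):
    # ys: coefficient vector of the old linear zero-test output.
    # Evaluate the published quadratic form sum_ij alpha_ij * y_i * y_j mod q;
    # the result is small exactly when all the y_i are small.
    v = sum(alpha[i][j] * ys[i] * ys[j]
            for i in range(n) for j in range(n)) % q
    v = min(v, q - v)        # centered representative
    return v < q // slack
```

The point is that the attacker no longer sees the linear combination itself, only a quadratic in it, which breaks the nice-and-linear structure the GCD-style attacks fed on.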
This thing feels a lot like hidden field equations from the late 90s, but we don't know of any techniques from there that would apply here. You don't actually have a hidden structure in this thing, which is why we don't see how any techniques from there would help; but we don't know that they don't. This, I believe, should be a target for cryptanalysis. It seems that recovering the old zero test from the new one is hard, and it seems that it's hard to recover the ideal spanned by g, even if I give you encodings of zero. So in principle, this could be a safe way to use the GGH-13 encoding scheme, even in settings where you need or want, for some reason, to give out encodings of zero. One thing that I want to say: there are some settings, not very many, but they exist, where quadratics are unsafe but degree-four expressions are safe. There's no reason to think that this would be one of them, but if it turns out that it is, you can in the same obvious way make the zero test cubic, or quartic, or whatever degree you want.

Right, so with that, this was more examples than anything else. Let me describe how I see the state of where we are relative to the attacks. The security landscape is that the zeroizing attacks on the original schemes, the attacks we have from 2015, break many of the hardness assumptions that we would like to make, and also some of the schemes that were built on them. Not all: obfuscation is a notable exception, where the constructions that we have typically are not broken. We have some attempts at strengthening these schemes: there is CLT-15, which is going to be presented tomorrow morning, and there's this fix that I just described, which makes the zero test less linear. It's plausible that all the assumptions we knew and loved from before are safe if you instantiate them this way; we don't know that it's not the case. For GGH-15, which I didn't talk about, the situation is fairly similar.
There's a zeroizing attack on GGH-15 as well, and the fix that I just described can also be applied to GGH-15. We're still very much in break-and-repair mode, and there's a lot of room for more cryptanalysis and more theory. I want to describe the way I think we need to frame this. How do you think about multilinear map security? What can you say about what is safe and what's not?

So in multilinear maps, the syntax that I just described has three parts: there's initialization, there's encoding of some values, and then the operations. And the way I like to think of security, the real question is: what kind of values is it safe to encode? Is it safe to encode zeros or not? What kinds of distributions of values can you generate that would be hard to break and useful in constructions? When you construct some scheme from multilinear maps, there is some distribution of values that your scheme says should be encoded; these are the useful distributions. When you try to break these schemes, you try to see which kinds of distributions you can exploit, like in this zeroizing attack, in order to break them. So the cryptanalysis task, I believe, should be phrased as: show us more general types of distributions of encoded values that you can break. Or, in the case of the CLT-15 construction and GGH-13 plus the fix I just described, show us any distribution where you can break it. And the constructive task is: once we get some confidence that certain types of distributions seem to be hard, build useful cryptosystems from them.

What do we know? The weak distributions for GGH-13, I essentially described them: anything from which you can recover top-level encodings of zero is bad. For CLT-13, go listen to the talk tomorrow morning; there are many classes of things that are broken there, and they are all somehow related to the attacker having too much freedom in generating top-level encodings of zero.
GGH-15, again, is very similar to GGH-13, at least in the types of distributions that are bad. For the new things, right now we don't know anything. So again: a lot of cryptanalysis is needed.

I want to say something about our confidence, and I don't think I'm saying anything new here: we don't have a reason to expect that something is secure until we see weak classes of it. The reason we believe a block cipher is secure is that we can break reduced-round versions, up to half of the rounds or so, but were never able to extend those attacks. The reason we believe elliptic curves are good is that we know classes of attacks, we have some ideas of how an attack would work, and the elliptic curves that we use don't fall into those classes. We need things like that here too: we need weak classes of these constructions.

Right, some other attacks that I want to mention: quantum attacks. CLT-13 is built on factoring, and therefore you can use Shor's algorithm to break it with a quantum computer. For GGH-13 with encodings of zero, once you recover the plaintext space, you can apply known quantum attacks to break it. And similar things can be said about sub-exponential attacks. The status of the newer constructions relative to quantum and even sub-exponential attacks is unknown; I don't think anybody has tried very hard, but certainly no such attacks exist yet.

I didn't talk about performance in this presentation. The reason is that there's very little we can say about performance; it's really bad, I guess, is the main thing. It's in about the same state, at least from an asymptotic perspective, as homomorphic encryption was in 2009. The parameters grow with the degree of the functions that we want to evaluate: if we want to evaluate expressions of degree 2K, the parameters would be polynomially larger than for expressions of degree K.
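The gap between degree-based and depth-based parameter growth can be made concrete with a toy calculation (the function and numbers below are illustrative, not measurements of any real scheme): a balanced tree of multiplications of depth d computes a product of degree 2^d, so parameters that grow with degree blow up exponentially faster than parameters that grow only with depth.

```python
# Toy illustration of degree vs. depth: each level of a balanced
# multiplication tree doubles the degree of the computed polynomial,
# so a degree-based cost is exponentially worse than a depth-based one.

def degree_of_product_tree(depth):
    deg = 1
    for _ in range(depth):
        deg *= 2            # multiplying two degree-d terms gives degree 2d
    return deg

for d in (5, 10, 20):
    print(f"depth {d:2d} -> degree {degree_of_product_tree(d)}")
# depth  5 -> degree 32
# depth 10 -> degree 1024
# depth 20 -> degree 1048576
```

So a circuit of quite modest depth already has a degree far beyond the "maybe degree 50" regime that current multilinear-map candidates can realistically evaluate.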
In homomorphic encryption, by now we have techniques that let the parameters grow only with the depth of the arithmetic circuit that represents the function, and there's an exponential gap between the two. So here the parameters grow really, really fast. All the post-2009 tricks that we have to speed up homomorphic encryption don't work for the current generation of multilinear maps; at least, we don't know how to make them work. It doesn't seem like there's an inherent problem there; the things that we tried just didn't work so far. At this point, maybe we can evaluate a 50-linear scheme with realistic parameters, or a 30-linear or 20-linear scheme. That is the range at which we can actually hope to evaluate it and have it finish running before next Crypto. I don't know, maybe degree 50 you can run in a day or a week or a month, but it's really slow. So clearly there's much work to do on this front.

Somehow, to me and to many other people in the field, it seems a little too early: we don't understand enough about security to start worrying about concrete instances and whether they run in an hour or a month. We need better confidence in the security first. And again, the thing I would like the people who do cryptanalysis to focus on is finding distributions on inputs where it's unsafe to use these cryptosystems. And with that, I'm done.