 All right, so welcome everyone, and welcome Vinod to this TCS+. It's a pleasure to have all of you here. Before I introduce the speaker, let me go around the table and say hi to everyone. We have a group led by Clément joining us from Stanford University. Welcome guys. Then there's a group with Coupien from the University of Michigan. Welcome. Then we have Esan joining us from USC. Welcome guys. Then there's Eric from Columbia. Hi, welcome as usual. And then there's a group led by Syed from BSU. Welcome guys. Shavas is joining us from NYU. And Janice is joining us from Caltech. And Vinod is joining us from MIT, I presume. So welcome everyone. Before we start, I remind you that we shifted the schedule a little bit due to Thanksgiving. There's no talk next week, and the week after that we'll have a slightly special talk: Jon Kelner will give a talk dedicated to the memory of Michael Cohen. That's a couple of weeks from now. Today we're very, very happy to have Vinod Vaikuntanathan give the talk. Vinod is now a professor at MIT. He got his PhD from MIT, advised by Shafi Goldwasser, and after that he spent some time at the University of Toronto. Vinod is famous for so many things that I'm not going to try to go over them; maybe most of all his work on homomorphic encryption, but many, many other things across cryptography. So it's really a pleasure to have him talk today. Vinod, the floor is yours. Thanks, Thomas. And thanks to everyone on the TCS+ team for having me over. It's a great pleasure to talk. It's a little bit of an unusual mode for me; it's actually the first time I'm speaking to an audience that I can see, surprisingly, and to so many people. So my talk is about the problem of program obfuscation, which is sort of a hot topic in cryptography.
I'll convince you it's hot for a good reason. And it's about the connection of program obfuscation to random constraint satisfaction problems. At first sight, there doesn't seem to be a relationship between these two topics, but they are very deeply and seemingly inherently related. This is a few results based on joint works with Rachel Lin from Santa Barbara and Alex Lombardi, who's a student here. And I have to say that many of these slides are stolen from Alex Lombardi and Omer Paneth. If there's anything good about these slides, I'll take credit for it, and if the slides don't look good, you can blame them. It's supposed to be the other way around, right? But never mind. All right, so let's get started. This talk is about two worlds. The first world is the world of program obfuscation, which I'll describe. And the second world is one of random constraint satisfaction problems. So let's get started with the first world. Obfuscation is a big word, so if you don't understand it, you go to the Webster dictionary and see what it means: obfuscation is the action of making something obscure, unclear, unintelligible. For programs, what that means is the action of taking a program and converting it into another program, or a circuit into another circuit, or a Turing machine into another Turing machine, in a way that the output of this process looks completely like gibberish, but is still a program: you can run it, you can run it on inputs, and you get the outputs you expect. That's what program obfuscation does, in words. So let me start with a puzzle. Here is a program, I claim. Can everyone see what the slides are showing? Yes? Well, the resolution is not fantastic, so... Okay, well, that's part of obfuscation, I suppose. Let me see. The resolution of the text was clear, right? Oh, yeah, yeah. Actually, it's okay if I...
Sorry, this was me. We can actually read what's written on the screen, so it's fine. Good. Okay. All right. That's even better. So here's a question. This is a program, I claim; it's actually a valid C program. What does it do? I'm not going to request answers from you guys, so let me tell you the answer. It is a flight simulator, it turns out. This program, if you run it, actually brings up a picture and runs a flight simulator. Here is another program. Any guesses on the chat? It turns out that this program approximates the value of pi by looking at its own surface area. There is an art to writing obfuscated programs; both of these programs were taken from the IOCCC, the International Obfuscated C Code Contest, and they were winners in two different years. These are really programs that you look at and have no idea what they do. Maybe the first one you can guess, but not the second one. If you're impatient, you can cheat with these programs: these are actual programs, so you can run them and see what they do, and that actually spills the beans, in some sense, with these two programs. The kind of programs that we want to obfuscate in cryptography are different from these. These are programs with secrets in their head. So what are secrets? I'm a cryptographer, so cryptographic keys are a natural example of a secret. Here is an example of the kind of thing I want to do. Let's say I am Alice. I'm going on vacation, and I want to delegate the reading of a subset of my emails to my admin: say, all emails that have MIT in them. That's something I want to delegate. So what would I do? Here's an example of what I can do.
I can write a program that takes an encrypted email as input. It has a secret key in its head; the first line of the program is the hard-coded secret key. The program decrypts the encrypted message. It checks if there is a special string in the message. If yes, it returns the message. Otherwise, it says: tough luck, I'm not going to answer; sorry, this email is private. Yes? So this is a program I write. I can ship it over to my admin, but obviously the problem is that the program has the secret key hard-coded. So if he looks at the program, he can read the secret key out, and that's it; there's no delegation anymore. What I really want to do is obfuscate this program and send it over to my admin. Now he can run this program on inputs and produce outputs. In other words, he can use this to read all emails that have MIT as a substring, but he won't be able to do anything more. In particular, he won't be able to recover the secret key by looking at this program. That's what I want to do. This is an application that I want to realize using program obfuscation. So what are other kinds of secrets? You can imagine licensing information in digital rights management. You can imagine even a devious use of program obfuscation: putting in backdoors, undetectable by running the program, that get activated on a special input. Sometimes even the algorithm itself is the protected information: I came up with a clever algorithm to solve vertex cover (I can dream), I want to code it up and sell it to you, and I don't want you to reverse engineer it and turn my quadratic time algorithm into a linear time algorithm. So that's what I want to do with program obfuscation. Now, I'm a theorist, so what I'm more excited about is something else.
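Before moving on, here is a minimal sketch of the email-delegation program just described. All names are invented for illustration, and the "encryption" is a throwaway XOR cipher standing in for a real scheme; the point is only the shape of the program, with the secret key sitting in plain sight on the first line.

```python
# Toy sketch of the delegation program from the talk (illustrative only).
# A real deployment would use an authenticated encryption scheme; here a
# repeating-key XOR stands in, to show the hard-coded-secret problem.

SECRET_KEY = b"hunter2-hunter2-"  # hard-coded secret: trivially readable


def xor_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Toy decryption: XOR each byte with the repeating key."""
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))


def xor_encrypt(message: bytes, key: bytes) -> bytes:
    return xor_decrypt(message, key)  # XOR is its own inverse


def read_if_allowed(encrypted_email: bytes) -> str:
    """Decrypt, then reveal the email only if it contains the special string."""
    message = xor_decrypt(encrypted_email, SECRET_KEY)
    if b"MIT" in message:  # the special-substring check
        return message.decode()
    return "Sorry, this email is private."
```

Obfuscating `read_if_allowed` (so that `SECRET_KEY` cannot be read out of the code) is exactly the task the talk is about.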
I'm more excited about applications of obfuscation in cryptography: what can obfuscation do, and where does obfuscation fit into the cryptographic landscape? It turns out that obfuscation is a crypto-complete primitive, in the sense that any cryptographic task that you imagined in the past, or anything you will imagine in the next 10 years, you can probably realize as an easy corollary of program obfuscation. I'm being a little facetious here, but the power of program obfuscation is such that it can realize a wide variety of cryptographic tasks, some of which we knew were possible and some of which we didn't. Just to give you an example: program obfuscation is not a new quest. In fact, if you go back and look at the 1976 paper of Diffie and Hellman, which started public-key cryptography, the first way they thought about getting a public-key encryption scheme was to take a secret-key encryption scheme, where both encryption and decryption need the same secret key (it's also called symmetric encryption for that reason), and make it into a public-key encryption scheme, where encryption can be performed by anyone using a public key, but decryption needs the secret key. The big bang of cryptography was realizing that public-key encryption was possible, and the way Diffie and Hellman thought about it, if you go back, there is one paragraph in their paper where they float the possibility of taking the encryption algorithm of a secret-key encryption scheme, which needs a secret key (so it's a program that has a secret key in it), obfuscating it, and publishing it for everyone to use. If you obfuscate it properly, then this obfuscated program acts as a public key.
It doesn't reveal any information about the secret key, and you can use it to encrypt. Perfect. They unfortunately moved on after this paragraph to do other things, but already the quest to do program obfuscation had started in the 1970s. So you might say: public-key encryption, what's the big deal? We know, like, five different public-key encryption schemes now. Why are you telling me this? One, because of its historical importance. And secondly, because things don't stop there. There is this thing called fully homomorphic encryption which, if you haven't seen it, is a very powerful object that lets you take ciphertexts, encryptions of X and Y, and compute an encryption of X plus Y without actually knowing what X and Y are. So it's like computing under the hood. This was actually open for a very long time, from the 1970s until the 2000s. And you know what, if you have a way to obfuscate programs, here's a completely trivial way to do fully homomorphic encryption. I'm going to take this little four-line program, which takes as input two ciphertexts, C1 and C2, and an operation (let's say a binary operation, plus or times), decrypts the two ciphertexts, performs the operation on them, encrypts the result, and returns it. This is a program I can use to compute on encryptions, and if I obfuscate it properly, it doesn't reveal the secret key. So again, a two-liner: if you have obfuscation, you can get this. And things don't stop there. There is a laundry list, a long, long sequence of things you can do from program obfuscation. You can even use program obfuscation for striking corollaries: for example, you can construct games where computing a Nash equilibrium is hard, even average-case hard, even sub-exponentially hard, and so on and so forth. So there's a long series of works that use program obfuscation for various purposes.
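Going back to that four-line evaluation program for a moment, here is a toy Python rendering. The one-bit XOR "encryption" is invented purely for illustration; the real content is the shape of the program (decrypt, apply the operation, re-encrypt), which, if obfuscated so that the key stays hidden, would give homomorphic evaluation.

```python
# The talk's four-line "eval" program as a toy sketch (illustrative only).
# A one-bit XOR cipher stands in for a real secret-key encryption scheme.

SECRET_BIT = 1  # hard-coded secret key of the toy scheme


def enc(b: int) -> int:
    """Toy encryption of a bit: XOR with the secret key bit."""
    return b ^ SECRET_BIT


def dec(c: int) -> int:
    """Toy decryption: XOR again with the same key bit."""
    return c ^ SECRET_BIT


def eval_gate(c1: int, c2: int, op) -> int:
    """Decrypt both ciphertexts, apply op, return a fresh ciphertext."""
    return enc(op(dec(c1), dec(c2)))
```

For instance, `eval_gate(enc(1), enc(1), lambda x, y: x & y)` is a ciphertext that decrypts to `1`, computed without the evaluator "seeing" the plaintexts (in the obfuscated version).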
So it's really a holy grail of cryptography in some sense. Sorry, quick question: you haven't made the distinction between VBB and IO yet, so do all of these follow from IO? Here is where I make the distinction. What Thomas is pointing out is that there's a little wrinkle in this fantastical story I made up, which is that all these applications, all these two-liners I described, follow from a very strong notion of obfuscation called virtual black box obfuscation. I'm not really going to define it, but it is a formalization of the English statement that the obfuscated program should be executable but shouldn't reveal anything besides input-output behavior. One can formalize this. Unfortunately, it turns out that this is impossible: you can come up with programs, even natural-looking programs, which cannot be obfuscated in the virtual black box sense. This is a result of Barak and colleagues. But things don't seem that bad. At this point you might ask: well, why did you tell me all these things if program obfuscation is impossible? It turns out that what seems to be achievable is a weaker notion of obfuscation called indistinguishability obfuscation. I'm again not going to define it because it's not the point of this talk; just think of it as a weaker notion of obfuscation with three properties. Number one, there are no impossibility results; there are no Barak-type impossibility results for indistinguishability obfuscation. Two, we even have candidate constructions of indistinguishability obfuscation, from weaker and weaker assumptions; that's actually the rest of this talk. And number three, you can recover all the applications that I mentioned from this weaker notion. It's not a two-liner anymore.
It actually takes sweat and blood to go from IO to these applications, but it turns out it can be done, and we have a pretty good understanding of how to recover these applications using this weaker notion. Okay, so that's the story. That's the obfuscation world, and that's where we are. Let's move to the second, apparently completely different world of random constraint satisfaction problems. So what are these? I'll describe them in the language of local pseudo-random generators, and here they are. This is an object defined by Oded Goldreich in 2000. A local pseudo-random generator is a function that takes n bits of input and produces m bits of output. Because it's a pseudo-random generator, m should be bigger than n: it should expand its input. In this picture, the top, the red, is the input layer, and the bottom is the output layer. This pseudo-random generator is defined by two objects. One is a bipartite hypergraph, which you can see in the picture: each output is connected to L input bits in a directed fashion. It's a directed (L+1)-uniform hypergraph, if you will. That's number one. Number two is a predicate P that takes L bits of input and produces one bit of output. That defines the pseudo-random generator. So how do you compute it? You take x, that is x1 up to xn, as input. Each output bit is connected to L input bits; that's what the hyperedge tells you. So go look up those input bits and apply the predicate P to them, and that gives you the first output bit. You keep doing this for each output bit independently, and that gives you the output of the pseudo-random generator.
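The evaluation procedure just described can be sketched in a few lines. The hypergraph sampler and the particular predicate below (an XOR-AND predicate of the shape that appears in this literature) are placeholders chosen for illustration; whether they make a secure PRG is beside the point here.

```python
import random

# Minimal evaluator for a Goldreich-style local PRG: the generator is
# given by a hypergraph (for each output bit, a list of L input
# positions) and a predicate P on L bits.


def sample_hypergraph(n: int, m: int, L: int, seed: int = 0):
    """For each of the m output bits, pick L input positions at random."""
    rng = random.Random(seed)
    return [[rng.randrange(n) for _ in range(L)] for _ in range(m)]


def predicate(b):
    """Placeholder L = 5 predicate: b0 XOR b1 XOR b2 XOR (b3 AND b4)."""
    return b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])


def eval_prg(x, hypergraph):
    """Each output bit: look up the L chosen input bits, apply P."""
    return [predicate([x[i] for i in edge]) for edge in hypergraph]
```

Note that each output bit is computed independently from a constant number of input bits, which is exactly the "locality L" property.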
And of course, security says that if I evaluate this function on a random n-bit input, a polynomial-time adversary cannot distinguish between the output and a completely random m-bit string. That's what pseudo-random means. So that's a local pseudo-random generator, and it's going to be the key character in our discussion. If you have any questions, feel free to interrupt and ask. Let me actually slightly generalize the local PRG notion to a block-wise local PRG, which is nothing but the same function except that the input is not bits: I'm going to treat it as coming from some alphabet of size Q. That's the only difference between the previous slide and this slide. You still have a predicate, except that it takes L symbols from the size-Q alphabet and still outputs one bit. Good. So this is a block-wise local PRG. I should say that there's no restriction that every output bit is computed using the same predicate; you could actually use different predicates, but I don't want to complicate life, so I'm just going to talk about this notion for now. Okay. So these are the two worlds, and here are the kinds of questions we ask in them. In the program obfuscation world: can we construct indistinguishability obfuscation from standard cryptographic assumptions, say the hardness of factoring, the hardness of finding short vectors in lattices, and so on? That's the big million-dollar question; I don't know about a million dollars, but a million intellectual dollars' worth of question. In world number two, the kind of question you ask is: I want to construct pseudo-random generators, and in the process, how simple can I make these predicates? Can I make the predicates look at one input bit? Well, that's impossible. Two, three, four? How small can I actually make this? That's the kind of question you ask in the CSP world, in the PRG world.
These questions turn out to be deeply, inherently connected to each other. And here is the connection between local PRGs and indistinguishability obfuscation, summarized in one slide. There's been a long chain of works that construct obfuscation, IO, indistinguishability obfuscation, from weaker and weaker and weaker primitives. The first in this line of work is a work by Nir Bitansky and myself, and by Prabhanjan Ananth and Abhishek Jain, which says that if you have something called functional encryption for NC1 circuits (NC1 is log-depth circuits; functional encryption I'll define in a minute, don't worry, but it's a cryptographic object), then you can upgrade it somehow to obfuscation. The second work in this line said: well, where does functional encryption for NC1 come from? We don't know. But if you have functional encryption for NC0, which is constant-depth circuits, where in particular each output bit depends only on a constant number of input bits, then you can upgrade that all the way to NC1. Okay, so where does functional encryption for NC0 come from? Well, if you have these objects called constant-degree multilinear maps (again, I'll define them in a minute, don't worry), you can construct functional encryption for NC0. So you have this really long chain of reductions that constructs obfuscation from weaker and weaker and weaker primitives, hopefully at some point reaching something that we actually know how to instantiate. That's the hope. Okay, so where do local PRGs fit into this picture? Nowhere in this slide; you don't see it in this slide, I certainly don't see it. It turns out that one of these steps, in particular the middle step of going from functional encryption for constant-depth circuits to logarithmic-depth circuits, needs a local PRG. Okay, so this is the big picture of where we are with IO.
A concrete instantiation of this big picture is a theorem of Rachel Lin from Crypto this year; it's the end of the line in a sequence of works. And what she says is that I can construct an obfuscation scheme assuming three things. One: L-linear maps. These are degree-L multilinear maps, which I'll define in just a minute once I finish stating the theorem. Number two: locality-L PRGs. Notice that the two L's are the same L; that's not an accident. The locality and the linearity of the multilinear maps match. So, locality-L PRGs with polynomial stretch, n to the 1 plus epsilon stretch: n bits to n^(1+epsilon) bits. And third, something called learning with errors; that's by now a standard assumption that we all somehow believe, so I'm going to ignore it from now on. Really, A and B are the characters in the game. Okay, so that's the theorem. Now you can ask: are we done? Do we have the holy grail? The theorem says there exists an IO scheme assuming A and B. So the question is, can I assume A and B? Do A and B exist? Question: about the L, can you say how L relates to the other parameters? Is L a constant there? You should think of L as a constant. Okay, good. Great question. Let me say that constructing L-linear maps gets harder and harder as L increases, and constructing locality-L PRGs gets easier and easier as L increases. So there is a tension between these two objects, and we want to find a point where both exist. Does that answer it? Question: what is SXDH? Good question, Clément. Never mind; it's a variant of the Diffie-Hellman assumption. The details are unimportant for us. It's again one of the standard assumptions.
Really the question is: can I actually construct these L-linear maps, which I'll define in a minute? Okay, so this is the theorem. Now we can ask: how good is this theorem? Can I instantiate it? Any other questions about the theorem before we move on? Good. Okay. Let's see how good this theorem is. Before doing that, I want to go back to my previous slide, where I showed this chain of reductions and threw around various terms without actually defining any of them. Let me define two of these terms; then things will make sense, and we'll dive deeper into Lin's theorem. Okay: functional encryption. What is functional encryption? I didn't want to write down a complicated cryptographic definition, so I wrote this, my approximation of a haiku. Here is functional encryption: given an encryption of a string x and a secret key for a function f, you should be able to compute f(x), but nothing else. In particular, no other information about x should be revealed. And a P.S., an important P.S.: the size of the encryption of x shouldn't really blow up. In particular, you shouldn't be allowed to enumerate all possible functions and encrypt each result inside the ciphertext of x; that's kind of cheating. I want the encryption of x to grow proportionally to the bit length of x, to grow linearly with the bit length of x. That's functional encryption. Question: f here is public, right? You're not trying to... Right, I'm not trying to hide f. I am trying to hide x, but I am willing to reveal f(x). You are the person with the secret key for the function f. Fair enough? Good. All right. So this is functional encryption. What are these multilinear maps? Well, let's start from one. What is a one-linear map?
It's really a group G where, given x, I can compute g^x; this is exponentiation, and g is a generator. And because it's a group, if I multiply g^x and g^y (thinking of this as a multiplicative group), I get g^(x+y). That's what groups do; this is just the group operation. But the Diffie-Hellman assumption says that given g^x and g^y, it's computationally hard to come up with g^(xy). In other words, given g^x, g^y, g^z, and so on and so forth, you can compute linear functions in the exponent (that's why it's one-linear), but computing quadratic functions is hard. That's what they say. And this can be instantiated: we believe the Diffie-Hellman assumption when you instantiate the group with Z_p^*, the multiplicative group of integers mod p. This we have believed since the 1970s. Well, if we can do one, we should be able to do two, right? So what is a two-linear map? You want not one but two groups, and I should be able to take g^x and g^y and apply some operation to them (it's not the group multiplication anymore, it's some other operation) to get g^(xy). I should be able to compute quadratic functions in the exponent, but I should not be able to compute degree-three functions. That's bilinear maps, two-linear maps. And these we know from the work of Antoine Joux, and of Boneh and Franklin: Gödel Prize-winning work from the early 2000s. I have a question here, which is: am I correct in thinking that LWE is not good for constructing IO? That's a loaded question. LWE is not sufficient at this point for constructing IO: I do not know a construction of IO from the hardness of learning with errors alone.
That's a long open question, and many of us have put money, crowdsourced money, towards it. So if you solve it, you'll get 100 bucks from me. And I hope Amit Sahai is not listening; he offered 100 bucks for it too. Several of us have offered 100 bucks for the answer to this question. So good luck; I do suggest people think about it. Okay, good. So let's go back to our game. One-linear maps, we know. Two-linear maps, we know, thanks to Dan. If you can do one and two, you should be able to do three; what's the big deal? Again, I want groups G and G' where, given g^x, g^y, g^z, I should be able to compute degree-3 monomials in the exponent, but I should not be able to compute degree-4 monomials. That's a three-linear map. And this we don't know. It's a long-standing open question, open for 20, 25 years, whether these things actually exist. We have no clue, really: no negative results, no positive results, nothing. So that's multilinear maps. Now we have the machinery to dive a bit deeper into the Lin theorem, and I'm going to decompose it into two lemmas. Lemma 1 says that if you have L-linear maps (again, L-linear is what we just saw: one-linear, two-linear, three-linear) for some constant L, then you can construct functional encryption for degree-L functions. Remember functional encryption, where there was a function f? If you have L-linear maps, you can construct a functional encryption scheme for degree-L functions. That's the first lemma, Rachel's first lemma. Lemma 2 says that if there is functional encryption for degree-L functions, and there exists a locality-L pseudo-random generator, then you can construct a functional encryption scheme for all NC1 functions, and therefore bootstrap it all the way to indistinguishability obfuscation.
So the L-linear and the locality-L are very tied to each other. To apply both lemmas one after the other, I would start from an L-linear map, get a functional encryption scheme for degree-L functions, and then apply Lemma 2 together with a locality-L pseudo-random generator to get obfuscation. That's what Lin's theorem says. Great. I'm not going to be able to prove both lemmas; each of them is a one-hour talk. Sorry, I usually never get calls; this is the opportune time that I'm getting one now. Maybe it's interference from the phone or something. Can you hear me now? I can hear you, but there's a lot of noise on the line. Is it like this for everyone? It got better. Okay, I think it's okay now. Okay, good. So we have Lemma 1 and Lemma 2. Let's quickly, in one slide each, see a proof of Lemma 1 and a proof of Lemma 2. Okay, good. So we change slides. Good. Okay, Lemma 1. You want to construct a functional encryption scheme for degree-L functions assuming L-linear maps. This is a beautiful slide, because everything I'm going to say below the word "proof" is just plain false, but hopefully it'll give you a flavor of how these two things are connected. Okay, proof. How do I encrypt x? x is a string of bits, x1 up to xn. I'm going to encrypt it as g^x1, g^x2, up to g^xn. What is g? g is a generator of the group guaranteed by the L-linear maps. Already, if you're a cryptographer, you should hear alarm bells, because x1 is a bit: if I give you g^x1, you know what it is, it's either g^0 or g^1. So, not so good. But flow with me; let's keep going along these lines. So what do I want? Given the secret key for a function f, I want to compute degree-L functions in the exponent.
So I get g^x1 up to g^xn, and I want to compute degree-L functions of x1 up to xn; that's what I asked for. The function f is a degree-L function. So it seems like L-linear maps are necessary: you should be able to compute degree-L functions, so you need L-linear maps. What the chain of works showed, a long line of works, is that O(L)-linear maps are sufficient; in other words, some constant times L. In fact, before Rachel's work it was 3L+2, something like that. And Rachel actually shows that this is tight: not only are L-linear maps necessary to construct functional encryption for degree-L functions, they're also sufficient. She shows the construction. Does that make sense? Good. Question: when you write "necessary", this is assuming that the encryption has the form you talked about. Is there a more general necessity result? There is not a proof that you need L-linear maps; this is an intuitive sort of necessity. There is an open question: can you show that if you have black-box access to a group with L-linear maps, or rather (L-1)-linear maps, you cannot do functional encryption for degree-L functions? That's a formalization of the statement you're asking about. We don't know of such a statement; we don't know the truth of such a statement. This is just an intuitive statement: if I start writing things down on a piece of paper, I say, well, it has to be that way, right? And it wasn't clear that L-linear maps are sufficient to do functional encryption, precisely because the encryption that I wrote down in the first bullet is not a secure encryption, not by far. So you really have to work much harder, and the surprising thing is that you can do all this maneuvering with only L-linear maps. Good.
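Both points of the proof sketch can be checked in numbers: group multiplication computes linear functions "in the exponent", and yet the naive encoding of a bit as g^(x_i) hides nothing. The parameters below are toy values invented for illustration, far too small for real use.

```python
# Toy numerics for the proof sketch above (illustrative parameters only).

p = 2**31 - 1   # a small Mersenne prime as the modulus
g = 7           # base for illustration


def encode(x: int) -> int:
    """The 'encryption' from the slide: g^x mod p."""
    return pow(g, x, p)


# Linear functions in the exponent: g^x * g^y = g^(x+y).
assert encode(123) * encode(456) % p == encode(123 + 456)


def break_bit(ciphertext: int) -> int:
    """The alarm bell: recover an encoded bit by testing both options."""
    return 0 if ciphertext == encode(0) else 1


assert [break_bit(encode(b)) for b in (0, 1)] == [0, 1]
```

So the naive scheme is additively homomorphic in the exponent, but trivially insecure for bit inputs; Lin's construction has to work much harder, as just discussed.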
So I'm not proving anything here; this is just the beginning of a flavor of how this goes. Okay, good, so I declare victory. Lemma 2: what is it? It's a way to bootstrap functional encryption: to go from functional encryption for NC0 (let's say constant-degree functions, actually constant-locality functions), together with a constant-locality PRG, a locality-L PRG, to functional encryption for NC1 functions, logarithmic-depth circuits. Okay, again, this is going to be a proof in quotes; I'm not really going to prove it, but hopefully it will give you a flavor of what I'm talking about. Good. The construction, going from NC0 to NC1, crucially uses a tool called randomized encodings. It's a tool that Applebaum, Ishai and Kushilevitz invented back in 2004, and it has been extremely useful since then. What it does, roughly speaking, is the following. Randomized encodings give you a way to take a function F, which is complicated (F acts on an input X; let's say F is in NC1), and come up with another function F-hat which is much simpler: F-hat is actually in NC0. The only thing is that F-hat is a randomized function, so F-hat doesn't just take X as input; it takes a random string R as input as well. What AIK say, what randomized encodings say, is that computing F on X is equivalent to computing F-hat on X together with randomness, in the following sense. If I give you F-hat(X, R), for a randomly chosen R, you can recover F(X) from this information. On the other hand, F-hat(X, R) reveals no more information than F(X). In other words, information-theoretically, F(X) and F-hat(X, R) are equivalent: you can go from one to the other. So this is a randomized encoding. If you want an exercise, here is a cute problem. Consider F to be the parity function on n bits. This is a complicated function, in the sense that it's not computable in AC0.
It turns out that there's a very simple randomized encoding for the parity function, an F hat for the parity function, which can be computed in NC0. In fact it can be computed with locality 3, looking at 3 bits of the input and the randomness. So this is an exercise while you're listening to the talk, or maybe after, when you're getting lunch. And it turns out that this randomized encoding you can do for all functions in NC1. So any complicated function in NC1 can be turned into a simple randomized function F hat in NC0. This is something that I think everyone should know; it's a notion that goes beyond cryptography, and that's the notion we'll use. So the way you said it, it seems like you just put F in NC0. So does F hat of X and R let you compute F of X with high probability over R, or? No, you can even think of perfect correctness. So the point is, okay, let's think about it. Good, so F is complicated, right? How would I compute F? Either I compute F, or I compute F hat of X comma R, which is simple, right? And somehow go from F hat of X comma R to F of X; that is a process that is complicated. So somehow I'm splitting the computation of F into two parts. One, I compute F hat of X comma R, which is very, very simple. And given F hat of X comma R, you can recover F of X, but that is a complicated process; that is actually an NC1 function. Okay, but I'm not compressing. That recovery doesn't involve you needing to know X or anything. No, no, that's crucially the point, right? If I knew X I would throw this away and compute F of X myself. So here what I'm saying is that I can compute this intermediate representation very, very quickly. From that intermediate representation alone, without X, you can recover the answer; that takes time, that takes NC1 time. And the point is that the intermediate representation does not reveal anything beyond F of X. You cannot learn any other information about X.
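[Editor's note: a minimal sketch of the locality-3 randomized encoding of parity that the exercise asks for. The encoding masks each prefix with fresh random bits that telescope away when you XOR everything; the function names are illustrative.]

```python
import random

def parity(x):
    # The "hard" function F: parity of n bits (not computable in AC0).
    p = 0
    for b in x:
        p ^= b
    return p

def encode(x, r):
    # Locality-3 randomized encoding F_hat(x, r), with r of length n-1.
    # Each output bit touches at most 3 bits of (x, r).
    n = len(x)
    y = [x[0] ^ r[0]]
    for i in range(1, n - 1):
        y.append(x[i] ^ r[i - 1] ^ r[i])
    y.append(x[n - 1] ^ r[n - 2])
    return y

def decode(y):
    # Recover F(x) from F_hat(x, r): the masks r_i cancel in pairs.
    p = 0
    for b in y:
        p ^= b
    return p

# Correctness on random inputs.
n = 8
for _ in range(100):
    x = [random.randint(0, 1) for _ in range(n)]
    r = [random.randint(0, 1) for _ in range(n - 1)]
    assert decode(encode(x, r)) == parity(x)

# Privacy, informally: over a random r, encode(x, r) is uniform among all
# n-bit strings whose XOR equals parity(x), so it reveals nothing about x
# beyond F(x).
```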
Good, so Clément asks: is R part of the input? R, I think of it as part of the input. Good, you can only look at O(1) bits of R. Yes, that's right, that's right. So the whole of F hat, considered as a function of X and R, is an NC0 function. Sounds magical, but really you can do it. You can do it for NC1; as a simple exercise, try to do this for the parity function. Right, Tamar, good? Yes, thanks. Good. Alright, so this is a magical object that we are going to use. So, you know, look, I have functional encryption for NC0, right? The only thing I can use it for is to compute NC0 functions. How do I compute NC1 functions? Well, I say: instead of encrypting X, in anticipation of computing an NC1 function, encrypt (X, R). R is just a truly random sequence of bits. Now, instead of generating a secret key for F, you generate a secret key for F hat, using the NC0 functional encryption. Put together, you can learn F hat of (X, R), and once you learn that, you can run an NC1 process to recover F of X. And by the way, this whole process did not reveal anything more than F of X; that's the randomized encoding guarantee. Right, so you somehow magically reduce functional encryption for a complicated function to functional encryption for a very simple function, in NC0. That's what we are going to do today, indeed. So where does the local PRG come into the picture? So far so good, right? I mean, no PRGs at all. The only wrinkle with randomized encodings is that the number of random bits you need is proportional to the circuit size of F. So in some sense, what randomized encodings do is squash the circuit, using randomness, into something that is very low depth. But the number of random bits you need is proportional to the formula size or circuit size of F. And in particular, it could be much more than the length of the input.
And remember, in our definition of functional encryption, we wanted the size of the encryption of X to be proportional to the bit length of X and not to depend on anything else. Yeah, so that doesn't work. Fortunately, there is a very simple solution to this, which is: just don't encode all the randomness into the encryption of (X, R). Instead, encrypt X together with a seed of a pseudorandom generator, something that is small. And the function that you're computing first takes this seed, expands it into R, and then computes F hat on X together with this R. Right? So for this to work, I want the process of going from a seed of a pseudorandom generator to the output to be in NC0. In particular, the composition of this process together with computing F hat has to be in NC0. So I need a local PRG to generate R. That's where local PRGs come into action. Right? Does that make sense? More or less? Okay, so this is Lemma 1, this is Lemma 2. Right? And now we can go back to Lin's theorem and ask, you know, are we done? Can we instantiate both A and B somehow and be done? Well, it turns out that there's an old result of Mossel, Shpilka and Trevisan that says that you cannot have these pseudorandom generators if you make the locality very small. Certainly you cannot do locality 2, which is actually an exercise. Locality 3 is a harder exercise. Locality 4 is a slightly harder exercise. But you cannot do it. So the best you can hope for is locality-5 pseudorandom generators, and we do have candidate constructions for these locality-5 PRGs. So this construction gets stuck at locality 5 and therefore needs 5-linear maps to construct. And as we saw before, we have 1-linear maps and 2-linear maps, but we don't have 5-linear maps. We don't have 3 even. Right? Okay, that's unfortunate.
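[Editor's note: a sketch of what a Goldreich-style candidate local PRG looks like, assuming the commonly studied locality-5 predicate x1 XOR x2 XOR x3 XOR (x4 AND x5); the specific hypergraph and parameters here are illustrative, and security is only a heuristic candidate, exactly as in the talk.]

```python
import random

def goldreich_prg(seed, hyperedges):
    # Candidate local PRG: each output bit applies the fixed predicate
    # P5(a, b, c, d, e) = a ^ b ^ c ^ (d & e) to 5 seed bits chosen by a
    # public, randomly chosen 5-uniform hypergraph.
    out = []
    for (i, j, k, l, m) in hyperedges:
        out.append(seed[i] ^ seed[j] ^ seed[k] ^ (seed[l] & seed[m]))
    return out

n = 64
m = 256  # polynomial stretch; a real candidate would take m = n^{1+eps}
rng = random.Random(0)
hyperedges = [tuple(rng.sample(range(n), 5)) for _ in range(m)]
seed = [rng.randint(0, 1) for _ in range(n)]
output = goldreich_prg(seed, hyperedges)
assert len(output) == m
```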
Very soon after Lin's theorem, Lin and Tessaro improved the theorem, in the same conference, to the following theorem, where they relaxed the locality of the PRG that they need. So the theorem is exactly the same as before: L-linear maps, same story, you still get an IO scheme, but instead of a locality-L PRG, you only need a block-wise locality-L PRG. In other words, PRGs that treat their input as coming from a large alphabet; you should really think of it as a polynomial-size alphabet, so the description length of a block is logarithmic. And you still want polynomial stretch and so on and so forth. Okay, so that sounds nice, but now the question is: do block-wise locality-L PRGs exist, and for how small a block length, how small a locality? Well, let's instantiate this in two ways. Let's say L equals 3; then you need 3-linear maps, and you want block-wise 3-local PRGs expanding n blocks to n^{1+epsilon} bits. And that actually turns out to exist, in the sense that we have candidate constructions for these block-wise 3-local PRGs which resist a class of attacks: attacks using LPs, SDPs; they are epsilon-biased, and so on and so forth. So I can't say that they are secure, any more than I can say that factoring is hard, but we have tried several attacks and these guys seem to resist them. So, you know, sounds nice. Sounds better than before. On the other hand, 3-linear maps, as we saw, we don't have a clue if they exist or not. So one ingredient exists but the other one doesn't. You know what? This theorem is very general. It works for any L, so I can try to instantiate it with L equals 2. Why not? So this says that there is an IO scheme which uses bilinear maps, 2-linear maps, which you can construct from elliptic curves.
You know, the Boneh, Joux and Franklin results. And you need a 2-block-wise-local PRG that expands n blocks to something like n times q cubed bits. The exact constants are not super important, but it is a super-linear number of bits. So now the tables are turned, if you look at it. Bilinear maps exist, but now, do these 2-block-wise-local PRGs exist? We don't know. So the onus of the existence of IO, just sort of overnight, went from whether these multi-linear maps exist to whether a certain constraint satisfaction problem, a certain local PRG, is actually secure. Or actually: can you construct a secure instantiation of a certain local PRG? That's amazing. For a week we were in this limbo trying to figure out: does IO exist now? And that is suddenly a question about CSPs, which many of us in cryptography are not used to thinking about. So we learned CSPs in a week. By we I mean Alex learned CSPs in a week. And unfortunately it turns out you can actually come up with polynomial-time attacks on these block-wise local PRGs that expand by a sufficient amount. So any stretch that is sufficient for the Lin–Tessaro theorem, we break. So that's a bit of a bummer. So that's the hate part of the love-hate relationship. So as far as we know, the Lin–Tessaro construction is stuck at 3-linear maps. The story is a little bit more complicated than that, and hopefully I'll tell you what that actually means, but this is more or less the high-level picture. So, if you know it this is probably silly, but can you clarify the relationship between being a PRG and, you're talking about it as a random CSP; I think I know what a random CSP is, but can you say what the relationship is? Excellent question. I did plan to say something about it.
So there are two differences, for the CSP folks, between this kind of function being a pseudorandom generator versus it being a hard CSP. One is, well, obviously the difference between worst case and average case: here I actually want the inputs to be random somehow, so it's an average-case problem. The second difference is that the PRG question is sort of a gap CSP question. In other words, what I'm asking is: am I giving you an m-bit string which actually satisfies the CSP? Why does it satisfy the CSP? Because I constructed it; I planted it. Versus a totally random m-bit string, which is very far from satisfying the CSP. It's not about deciding whether the random CSP is satisfiable or not. No, it's a promise problem. Exactly, so those are the two high-level differences. There are minor other things that are also different, but these are the most significant bits. Yeah, just another question. I guess, because for PRGs, sorry, this PRG needed to be secure against what for your constructions to go through? Against sub-exponential-time adversaries. But it doesn't matter; this actually turns out not to matter, in the sense that secure instantiations of these PRGs, for example block-wise 3-local PRGs, achieve large stretch, and these SDP-type attacks, even if you run n^epsilon levels of the sum-of-squares hierarchy, don't succeed; the candidates seem quite robust in this sub-exponential sense. Excellent. So what I want to tell you about in the rest of my time is how this attack works. Okay, so our results. It turns out that we were not the only ones thinking about breaking this thing in the span of one week. Boaz, Zvika, Ilan and Pravesh also have results breaking these block-wise 2-local PRGs, but in different regimes.
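[Editor's note: a sketch of the planted-versus-random promise problem just described, for a block-wise 2-local PRG. The graph, predicate table, and parameters are arbitrary illustrative choices; breaking the PRG means distinguishing the two distributions sampled below.]

```python
import random

def blockwise_prg(blocks, edges, predicate):
    # Block-wise 2-local PRG: the seed is n blocks, each a symbol in [q]
    # (log q bits), and every output bit applies a predicate
    # P : [q] x [q] -> {0,1} to two seed blocks chosen by a public graph.
    return [predicate(blocks[i], blocks[j]) for (i, j) in edges]

rng = random.Random(3)
n, q, m = 32, 16, 64
edges = [tuple(rng.sample(range(n), 2)) for _ in range(m)]
table = [[rng.randint(0, 1) for _ in range(q)] for _ in range(q)]
predicate = lambda a, b: table[a][b]

# "Planted" distribution: a genuine PRG output, which by construction
# satisfies the CSP with predicate P on the graph `edges`...
seed = [rng.randrange(q) for _ in range(n)]
planted = blockwise_prg(seed, edges, predicate)
# ...versus the uniform distribution over {0,1}^m, which is typically far
# from satisfying it. The PRG question is the promise problem of telling
# these two distributions apart.
uniform = [rng.randint(0, 1) for _ in range(m)]
assert len(planted) == len(uniform) == m
```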
So this is a big huge table of the parameter regimes where our attack works and where their attack works. The high-level picture is this highlighted row of the table, which is a combination of techniques from our original work and the BBKK work, where we break the PRG for stretch n times q; q is the alphabet size and n is the input length, the number of blocks. That's actually less than the amount of stretch that Lin–Tessaro asked for. They asked for n times q cubed. We say: even if you stretch by only n times q, we are going to kill it. So even better. One could ask which predicates our attack works for, and the answer is worst case, and that's actually true across the board: any predicate, doesn't matter what it is. Which graphs does it work for? Again, the combination of the two results works for worst-case graphs. Doesn't matter which graph you're talking about. The only restriction in our result is that our attack works only if the predicates computing each output symbol are the same: you compute each output symbol using the same predicate of the blocks of the input. That's the only restriction. So the one thing that is open here is: can you break the PRG across the board? Can you get the best of all worlds? That we still don't know. To the best of my knowledge that is still open. But my point is that it doesn't really matter for the Lin–Tessaro construction, because that is already broken by the highlighted row. Yeah, so, does it make sense? Good. And in fact it even breaks plausible extensions of the Lin–Tessaro theorem. Even if they somehow manage to require less stretch, we still manage to break it. One more thing I have to say is that there is an inherent reason why the Lin–Tessaro-type constructions need stretch n times q, as opposed to simply stretch omega of n.
And that's inherent in the way they construct these functional encryption schemes. So any plausible extension of the Lin–Tessaro theorem will get stuck at n times q, and that we already break. So this is the asterisk in the story: we don't break all possible instantiations. There is still a very narrow window of opportunity left, to use these kinds of 2-CSPs and reduce the amount of stretch that the Lin–Tessaro theorem requires even below n times q, and hopefully something works there, but it seems very unlikely at this point. Yeah. So what I want to tell you about is this attack, this highlighted row. Any questions so far about this slide? It's a complicated slide, but really you can forget about everything except the highlighted row. Good. Okay. I'm going to move on. All right. So how do we do this attack? Again, my starting point is the q equals 2 case, in other words the binary case, when the alphabet is actually bits. Well, here are two very simple observations. What can these predicates be? They can either be AND or OR, an unbalanced predicate, in which case the PRG is not secure: the output bits are biased, so it's clearly not a random string. The only other possible predicates are essentially XOR or variants of XOR, and these are broken by Gaussian elimination. So that kills the q equals 2 case. There's nothing really surprising here, right? All I'm saying is that 2-local PRGs are completely insecure. In fact, it turns out that these 2-local PRGs are really horribly, horribly broken, in the following sense. There is a theorem of Moses Charikar and Anthony Wirth from a long time ago which, reformulated in the crypto language, says that there is a polynomial-time algorithm that distinguishes between random strings and strings that are close to the image of the PRG. So really, all I'm asking for in a PRG distinguisher is to distinguish between random strings and strings in the image of the PRG. The Charikar–Wirth algorithm does even more: it actually distinguishes between random strings and strings that are merely close, half minus epsilon close, to the image of the PRG. If it were actually half, that would be all strings. So strings that are even a little bit close to the image of the PRG, they can distinguish from truly random strings, and this they can do when the stretch is like n over epsilon squared; this will come up later. How does that compare to learning parity with noise? Why is this easy? So these are 2-XORs; effectively the problem turns into 2-XOR. Yeah, yeah, thanks. If it were 3-XOR, then I don't know an algorithm that does as well as that; then it's essentially learning parity with noise. So these guys are totally broken. So what I want to do is extend these attacks to the large-alphabet case. Now I have two choices: either I can go into Moses's head and try to understand what this algorithm means, or I can try to use Moses as a black box. I prefer using Moses as a black box, and that's what we are going to do. Our distinguisher for large alphabets will use the Charikar–Wirth distinguisher as a black box, and it will make one call to it. So the way I'm going to do it, our main technique, is what I call alphabet reduction, one form of alphabet reduction; I haven't seen it elsewhere, but again, I'm not a CSP person. So what we are going to do is reduce breaking a block-wise 2-local PRG with a large alphabet of size q to strongly breaking a 2-local PRG with alphabet size 2. We already know how to strongly break, meaning distinguish between random strings and strings that are even close to the output of the PRG, for a 2-local PRG over bits. I want to say that I can bootstrap
to breaking a block-wise 2-local PRG with a large alphabet. That's what I want. So how does it work? Well, here is the idea. Suppose for a moment that two conditions hold. Condition number one: you can look at the blocks of the input and partition them into blocks that contribute to the first input of P and blocks that contribute to the second input of P. So you look at each output and ask: does the block come up as the first input of P or the second input of P? In this picture, the first block is the first input in one case, and the last block is the first input in one case and the second input in another case. So, where does a block fit? A block could either be always contributing to the first slot of P, or always contributing to the second slot of P, or both, depending on which output it is contributing to. I don't like those third guys. I want each input block to contribute to only the first or only the second slot of P. So how do I achieve this? It turns out that it's not so hard to achieve: you can throw away some of these offending output bits and get this condition to hold. So it's not a big deal. The second condition is actually a big deal. It comes with a lot of loss of generality. The second condition says that the predicate P is decomposable, in the sense that computing the predicate P looks like taking the first input and applying a function F to it, which maps it to a bit; applying a function G to the second input, which also maps it to a bit; and then applying a one-bit predicate Q to F(x) and G(y). That's what I mean by decomposable. So another way to think about decomposability is that P is really a one-bit predicate in the closet, right? I'm projecting my inputs down to one bit each and secretly computing a one-bit predicate on them.
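[Editor's note: for the q = 2 XOR case mentioned a moment ago, Gaussian elimination over GF(2) already gives a distinguisher. A minimal sketch with a hypothetical instance: each output bit of an XOR-predicate 2-local PRG yields a linear equation x_i XOR x_j = b, so honest outputs are always consistent, while a uniformly random string makes an overdetermined system that is inconsistent except with tiny probability.]

```python
import random

def gf2_consistent(equations):
    # Gaussian elimination over GF(2). Each equation (i, j, b) means
    # x_i XOR x_j = b; return True iff the system has a solution.
    pivots = {}  # leading-bit position -> (row vector, rhs)
    for (i, j, b) in equations:
        v = (1 << i) | (1 << j)
        while v:
            p = v.bit_length() - 1
            if p not in pivots:
                pivots[p] = (v, b)
                v, b = 0, 0
                break
            pv, pb = pivots[p]
            v ^= pv
            b ^= pb
        if v == 0 and b == 1:  # row reduced to 0 = 1: inconsistent
            return False
    return True

rng = random.Random(1)
n, m = 32, 200
edges = [tuple(rng.sample(range(n), 2)) for _ in range(m)]

# Honest output of the XOR-predicate 2-local PRG: always consistent.
seed = [rng.randint(0, 1) for _ in range(n)]
z_prg = [seed[i] ^ seed[j] for (i, j) in edges]
assert gf2_consistent([(i, j, b) for (i, j), b in zip(edges, z_prg)])

# Uniformly random string: with m much larger than n, inconsistent
# except with probability about 2^{-(m - n)}.
z_rand = [rng.randint(0, 1) for _ in range(m)]
assert not gf2_consistent([(i, j, b) for (i, j), b in zip(edges, z_rand)])
```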
So that really is not without loss of generality. Very few predicates actually look like this. Is that believable? Yes? So again, the first condition is easy: throw away some of the output bits, and it doesn't do us any harm, because we still have a gap problem, right? It reduces the gap by a little bit, but not too much. So if condition two were true, let's see what happens. What is our attack strategy? We're going to think about the PRG in a mental experiment where computing the PRG goes in two steps. One, I take the input x and map it into bits by applying either F or G, depending on whether the block contributes to the first slot of P or the second slot of P. Some do one, some do the other, right? So these circles are actually bits; some of these circles are obtained by applying F, some by applying G, yeah? So computing the PRG is this two-step process. So really, I can think of computing the PRG as applying the PRG G_{H,Q}, where Q is a predicate on bits, to a string x tilde, and x tilde happens to be uniform. So really this PRG is a 2-local PRG the way we defined it before, and Charikar–Wirth breaks it; in fact, it even strongly breaks it, right? So if conditions one and two were true, you're done. As I said, condition one you can make true. Condition two is not true at all, right? So here's where alphabet reduction comes into the picture. So the main lemma that we have, I'll state the lemma and give the intuition behind it: it turns out that every predicate P is close to another predicate P prime which is actually decomposable. So again, not every predicate is decomposable, but every predicate is somewhat close to another predicate which is decomposable. And how close? Half plus one over square root of q close. This turns out to be optimal, right?
So that lets us go back to our attack and actually make it work. So how do we show this decomposability? It turns out that this main lemma is a refinement of a lower bound result from the two-source extractor literature. And really, this lemma is a strange way of saying that every two-source extractor has a certain error. In other words, every two-source extractor on n bits and n bits, even with min-entropy n minus one on both sides, needs to have error at least two to the minus n over two, that is, the square root of two to the minus n. And if you look at it the right way, and if you improve the parameters of this two-source extractor lower bound, you basically get the main lemma. Does that make sense? So in other words, what I want to do is think about the predicate as a table: x indexes the rows of the table and y indexes the columns, right? So that's p of x, y. And what I want to say is that you can partition this table into, well, four parts. The first thing I do is identify a subcube, half by half, a q over 2 by q over 2 subcube, that has a non-trivial bias: if you look at the zeros and ones inside, the whole thing is biased towards one, or zero for that matter, by half plus one over square root of q. So it has non-trivial bias towards either zero or one. And it turns out that from there, you can actually fill in the rest: you can compute a full partition where each part has the sort of bias I claim, essentially. So given this partition, what does p prime look like? What do q and f look like? f is a function that takes an input x and outputs zero or one, depending on which set of rows x indexes. So in this picture, if x is in the first three rows, f outputs one.
It says, well, you're in the blue part of the rows. And if x is in the second half of the rows, it outputs zero; same with y. And now q only needs to know which set of rows I fall into to decide whether the output is zero or one. I'm hiding a few details under the rug, but really what this is doing is identifying a subcube, a submatrix of this predicate matrix (this is actually the truth table of the predicate) which has significant bias. So the trivial thing to do, the first thing you would think of (well, maybe you'd do better, but the first thing I would think of if I gave this to myself as an exercise) is to come up with a half plus one over q bias. But that's actually trivial. You can do much better than that: you can actually get half plus one over square root of q, and that's really what we need. Okay, so this is the main lemma. Not hard to prove; in fact, it is really a two-source extractor lower bound couched in another form. And given this lemma, how does our attack work? Well, we are given a predicate p, a graph h for the pseudorandom generator, and a target z, to test whether it is pseudorandom or random. I'm going to take this predicate p and h and compute q, f and g, which is basically the decomposition of p. And this, it turns out, can be computed in polynomial time, polynomial in q. And now I'm going to call the Charikar–Wirth distinguisher with predicate q, which is a one-bit predicate, and with error parameter one over square root of q. So what happens is that if z is actually uniformly random, I am feeding a uniformly random string to the Charikar–Wirth distinguisher. If z is a random output of G_{H,P}, it's actually close to a random output of G_{H,P'}, which is basically the output of G_{H,Q}, right? And the closeness is precisely the one-half plus one over square root of q.
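[Editor's note: a brute-force sketch of the flavor of the main lemma for tiny q. The function `best_decomposable_agreement` is illustrative only, it is not the paper's polynomial-in-q procedure; it exhaustively searches all decomposable predicates P'(x,y) = Q(f(x), g(y)) and reports the best agreement with a given predicate table P. Constant f and g already give agreement at least one half; the lemma promises roughly one half plus Omega(1/sqrt(q)).]

```python
import random

def best_decomposable_agreement(P, q):
    # Brute force over all f, g : [q] -> {0,1}. For each pair, the best
    # one-bit predicate Q is the majority value of P on each of the four
    # quadrants {f(x)=a, g(y)=b}. Returns the max fraction of table
    # entries on which P'(x,y) = Q(f(x), g(y)) agrees with P.
    best = 0.0
    for fmask in range(1 << q):
        f = [(fmask >> i) & 1 for i in range(q)]
        for gmask in range(1 << q):
            g = [(gmask >> j) & 1 for j in range(q)]
            counts = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
            for x in range(q):
                for y in range(q):
                    counts[f[x]][g[y]][P[x][y]] += 1
            agree = sum(max(c) for row in counts for c in row)
            best = max(best, agree / (q * q))
    return best

q = 6  # keep tiny: the search space is 4^q pairs (f, g)
rng = random.Random(7)
P = [[rng.randint(0, 1) for _ in range(q)] for _ in range(q)]
a = best_decomposable_agreement(P, q)
assert a >= 0.5  # constant f, g give at least the majority of the table
```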
So putting everything together, our epsilon is really how much more than half you're biased, and that's one over square root of q. If you plug in, n over epsilon squared becomes n times q, and that's how we get the attack. Okay, good, all right. So, too bad. But one can ask: where does that leave us? Is all hope lost? Do we have IO? And as someone asked, do we have IO from LWE? Good, so I think I missed a question from Clément: why doesn't the impossibility result for non-block-wise local PRGs apply? So is it still a question given the analysis, or was it a question before the analysis? Good. So yeah, it was a question before the analysis, assuming this decomposition. Why can't you use the previous result, which says that for non-block PRGs you can't achieve this, knowing that you basically have binary inputs to G_{H,Q}, to rule out the existence of G_{H,Q}? Yes, so you were saying we can use the binary distinguisher to rule out this guy. Yes, I mean, if I understand right. Are you asking how we can rule it out? Good, so really the key point here is the following. I want to distinguish between random and something in the image of G_{H,P}, right? I claim that something in the image of G_{H,P} is actually not in the image of G_{H,Q}, not necessarily, but it's close to the image of G_{H,Q}. And the closeness is exactly the closeness between P and P prime. P prime is the predicate that was decomposable and was close to P, and that translates over here. This needs a little bit of work, but nothing more than Chernoff bounds at this point. Good, so where does this leave us? I claim that there are two options, at least two options.
One is the line that I left open in my table, which is: can you get block-wise 2-local PRGs with stretch n times q, with some predicate, not a random predicate, with some graph, where in fact each output symbol is computed with a different predicate of the input? There, it turns out, our lower bound does not go through. In fact, you need stretch n times q squared for our lower bound to go through. So what could happen is: someone shows a candidate predicate with these parameters, and Rachel and Stefano get their act together and improve their theorem from requiring stretch n times q cubed to requiring stretch only n times q. If both of these events happen, then you have a candidate construction of indistinguishability obfuscation. But I think of this as an exceedingly narrow possibility. Okay, so if you don't want to think about these parameters and CSPs and so forth, a potentially easier way to go about it is to just go ahead and build 3-linear maps. Going from 1-linear maps to 2-linear maps was a matter of going from number theory to elliptic curves, which is algebraic number theory. Now, if you can do über algebraic number theory, or algebraic geometry, hopefully you can construct 3-linear maps, who knows. This is something that people are actively pursuing at this moment. Okay, so that's what it is. Good, so I want to end by saying that this is really a curious case of indistinguishability obfuscation: the difference between 2-linear and 3-linear maps. The fact that there's a difference between two and three shouldn't be surprising to us; in fact, we see it all the time in computer science. But the fact that the difference between 2-local PRGs and 3-local PRGs is somehow related to the possibility of obfuscating programs, you know, I still haven't quite gotten around to really understanding what's going on here.
So it remains a mystery why this is the case right now. Why is this the case? I haven't quite understood it. Again, thank you for listening. I'll take questions. More questions. Let me ask one while everyone gathers their thoughts and gets to the microphone. So I was just curious, because I think when you started you mentioned that there's a completely separate candidate construction for IO based on different assumptions. How does that compare to the assumption that, for example, 3-linear maps exist? So eventually all these constructions go back to the assumption that multi-linear maps exist, that 3-linear maps exist, at this point. We don't know of another construction. Okay. So I thought you said there was a candidate construction. So this is not an actual construction, right? You're missing the PRG. Good, good, good. This is missing the PRG. So the only candidate construction that I think I mentioned is actually the original work of Garg, Gentry, Halevi, Raykova, Sahai and Waters. This is where they first constructed a candidate, and it was all actually based on multi-linear maps of some form. So here's the confounding status of this candidate. You can prove the security of these candidates from certain multi-linear maps, polynomial-degree multi-linear maps, or versions of them. And you can instantiate these polynomial-degree multi-linear maps using lattices in some way; so you have an approximate sort of multi-linear map that you can instantiate from lattices. Unfortunately, the DDH assumption on these approximate multi-linear maps is false. It's just plain false. You can break it. But the obfuscation candidate itself remains unbroken. So these multi-linear maps were used to construct obfuscation, right? And they're broken. But that doesn't mean anything about the obfuscation construction, which is not broken at this point.
So, is it not broken because it's a complicated construction and people haven't quite gotten around to understanding it? Or is there really something going on in the construction beyond multi-linear maps? Is it deriving its security from some other source? Who knows at this point. This is another confounding state of affairs. Other questions? Okay, no? Okay, go ahead. Yeah, so I should say the one thing that actually comes to mind as an analogy is UGC, right? I mean, assuming UGC, you can derive fascinating consequences. Sometimes you can actually remove the UGC assumption and get NP-hardness, which is again something that we have done in the IO setting: people have constructed things assuming IO, and we have since removed the IO assumption and made these constructions work from nicer assumptions. But as important as UGC is to theoretical computer science, IO is, I would say, even more important to cryptography. So I think it behooves us to think about IO; this is my spiel to recruit people to think about IO: its applications, constructions, CSPs, whatever. So in the same direction you mentioned earlier, there was some PRG construction and you had some evidence that it would be secure against SDP attacks and things like that. So is it possible to have a formal treatment of an IO scheme that would be secure against, you know, not polynomial but quasi-polynomial time attacks of this restricted form? Oh, I see. But that would have to encompass also Gaussian elimination, right? Because otherwise you wouldn't want to say your crypto scheme is secure. Yeah, yeah, yeah. So that's right. That's right. So somehow you can treat both attacks case by case, I suppose, right?
I mean, one by one: you prove security against Gaussian elimination attacks, linear attacks, and then you prove security against these SDP-type attacks, potentially. So the barrier is somehow that the reductions from breaking IO to breaking these multi-linear maps and the PRGs, maybe they don't fit within the SDP framework. And then one has to really think about it. But it's a possibility. Right. Certainly. Thanks. So if there are no more questions, I'll thank Vinod again for his talk. Thanks, Vinod. I remind you all that in a couple of weeks we'll have Jonathan Kelner from MIT give the talk. And also, before we close, let me thank everyone who's working for TCS Plus behind the scenes. That's Clément Canonne, who was around and had to leave, and Anindya De, Gautam Kamath, Ilya Razenshteyn, and Oded Regev. So thanks everyone, and I'll take us offline. Thank you. Thanks for listening.