Hello, everyone. I'm going to talk about IO from LPN, PRGs in NC0, and bilinear maps. In particular, I want to focus on one facet of our work, which is a really intriguing insight about the power of lattice-free cryptography. This is joint work with Rachel Lin and Amit Sahai. As you all know, lattices have really revolutionized the whole space of cryptography over the past 15 years, with amazing applications. We have this really appealing problem of Learning With Errors, and the reason LWE has been immensely useful is simply that it has very nice worst-case to average-case connections: the problem has the promise of being as secure as several worst-case lattice problems. Not only that, it turns out to be our frontier problem for building post-quantum crypto, because as far as we know, as of today, there is no known quantum advantage for solving Learning With Errors. It has really been nothing short of a great success story, with inventions such as homomorphic encryption. As you know, homomorphic encryption is something people are trying to deploy in industry today; it's currently getting an industrial push. And recently, the works on homomorphic encryption also won the Gödel Prize, which is a great thing for the entire field. But it's been useful for so many different things, and on this slide I list just a small sample of those applications: attribute-based encryption, multi-key FHE, functional encryption, and so on. So the real question we want to ask is: are the hardness assumptions that go into lattice-based cryptography really essential for building primitives such as homomorphic encryption?
Can we build them based on assumptions which have no known connections to lattices, no known reductions to or from lattice problems, assumptions that may still plausibly be conjectured secure in the unlikely, unfortunate event that the lattice-based hardness assumptions end up being broken? What we show is a really interesting result: you can build not only FHE, but most of the applications on the previous slide, and a host of other applications, relying on an interesting mix of three assumptions; I'm going to refer to it as a trio of assumptions. The first assumption is the Decision Linear (DLIN) assumption over symmetric bilinear maps, which is a really popular bilinear-map assumption. The second is Learning Parity with Noise (LPN) over fields, with an error probability of ell^(-delta), where delta can be an arbitrarily small constant greater than 0, so just a barely sub-constant amount of noise, and we use the field version of LPN. The third is the existence of Boolean PRGs implementable in constant depth, which expand, say, kappa bits to a barely polynomial stretch of kappa^(1+epsilon) bits, for epsilon an arbitrary constant greater than 0. For these three assumptions we actually need sub-exponential security, meaning that for every polynomial-time attacker, the distinguishing probability is bounded by some sub-exponential function. We show that if all three of these assumptions hold, then you can build FHE, and a host of other primitives. Now, before I proceed, I want to address a couple of questions. First: if I'm basing FHE on these three assumptions, are they really incomparable to lattices? I need to at least justify to you that they are incomparable to lattice assumptions. Second: how do we even approach such a question? So let's look at the first question.
Of course, I can't conclusively answer this question unless we resolve some longstanding, deep complexity questions. But you can always reason about these things based on our current understanding. It turns out that, to the best of our current understanding, the LPN assumption and PRGs in NC0 are not even known to imply something as basic as public-key encryption. Whereas, on the other hand, lattice-based hardness assumptions such as GapSVP and LWE readily imply public-key encryption. This indicates that either we currently do not know how to build public-key encryption from them, or maybe these assumptions are simply not strong enough to give rise to public-key encryption. Even complexity-theoretically, we know that LWE sits in a structured complexity class like coAM, whereas this is simply not known for LPN and PRGs in NC0; our current understanding is that they are really Minicrypt-style assumptions. OK. Now, when it comes to the other assumption we are making, the Decision Linear assumption, it's a number-theoretic assumption, and as of today we do not know any reductions, either to or from lattice problems. It's really an interesting open question whether an algorithm such as LLL could be applied to solve DLIN; that would give new insights about this problem, and it would also open up doors for coming up with new algorithms not only for DLIN but for other kinds of assumptions out there. These are really exciting questions in themselves, and I hope the community starts focusing on these problems a little more aggressively; hopefully we'll be able to see answers to such questions over the next few years or so. OK, so this reasonably answers the first question. How about the second one: how do I even show such a result? Well, one way could be to go after every single primitive and construct them separately. That would, of course, be counterproductive.
What we do in this work is build something which implies not only these but a host of other primitives. And that primitive is, of course, indistinguishability obfuscation (IO). So our main result is that we can build IO based on these three non-lattice assumptions. And I want to stress that this actually improves on our previous result, which appeared last year, where we showed that you can construct IO from these three assumptions while additionally relying on sub-exponential hardness of Learning With Errors. OK. In the rest of this talk, we're going to see how this result works. For the rest of the talk, let's say the circuit we want to obfuscate is C; it takes n bits as input and outputs one bit. And throughout this talk, I'm going to denote by capital N the quantity 2^n. It turns out that if you want to obfuscate a circuit C like this, there's actually a very intuitive obfuscation scheme, which is simply the truth table: you write down the inputs from 1 to N, and then the outputs C(1) through C(N). And this is not going to reveal anything about the circuit we're obfuscating, other than its truth table. However, of course, there's a very fundamental flaw with this scheme. The flaw is that the time it takes to obfuscate is proportional to N: basically, I'm evaluating the circuit C capital-N times. This doesn't qualify as a legitimate obfuscation scheme. So on one hand, you have this trivial construction. On the other hand, you'd like to construct an obfuscation scheme where the time it takes is polynomial in the size of the circuit C. And there's a huge gap between the two. So a natural question, which has also been asked in the cryptographic community, is: can I improve upon the truth-table construction even a little bit?
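To make the flaw concrete, here is a minimal sketch (all names hypothetical) of the trivial truth-table "obfuscator" just described: it hides everything except the truth table, but both its running time and its output size are proportional to N = 2^n.

```python
# Illustrative sketch of the trivial truth-table obfuscator from the talk.
# Both the obfuscation time and the output size are proportional to N = 2**n,
# which is exactly why it does not qualify as a legitimate scheme.

def truth_table_obfuscate(C, n):
    """Return the full truth table [C(0), C(1), ..., C(N-1)] for N = 2**n."""
    N = 2 ** n
    return [C(x) for x in range(N)]   # N evaluations of C: exponential cost

def evaluate(obf, x):
    """Evaluating the 'obfuscated program' is just a table lookup."""
    return obf[x]

# Example circuit: 3-bit majority, written as a function on integers 0..7.
C = lambda x: 1 if bin(x).count("1") >= 2 else 0
obf = truth_table_obfuscate(C, 3)
```

The goal in the rest of the talk is exactly to beat this N-time baseline, even by a small polynomial factor.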
So can I construct an obfuscation scheme where the time it takes to obfuscate grows like N^0.99, an N^0.01-factor savings? It turns out that beautiful prior work showed that such an improvement is enough to take us all the way to IO: if I can construct such a nontrivial scheme, then relying on any assumption that gives rise to public-key encryption, in particular DLIN, you can build IO. So for the rest of the talk, our goal is to construct such a nontrivial obfuscation scheme. At this point, I'd like to remark that our previous work doesn't actually manage to construct this. There, we constructed an obfuscation scheme where the running time can grow with N; however, the output size is small. And for that kind of nontrivial obfuscation scheme, the only way we know to get to IO is by additionally relying on LWE, and that's exactly where we needed to use LWE up to this point. In this talk, we instead focus on making the running time of the obfuscator small. So now let's go over our approach. What is our approach? Well, intuitively, if you think about nontrivial IO, it's just some sort of encryption of a special input C tilde. What is C tilde? It consists of, say, the circuit, some randomness, and things like that. And we want to ensure that the size of C tilde is small, like N^0.99, and that the running time of this encryption is also small. But it's not just any encryption: it's an encryption which hides everything about the circuit C, except that it magically lets you learn functions of the form u_x(C tilde) = C(x) for every input x in [N]. OK, so it lets you learn the truth table, but nothing else. In other words, if you could construct such an encryption scheme where you can learn the truth table and nothing else, and the size of the encryption and the running time are small, then you would be done. Unfortunately, we are not quite there yet.
And the reason is that we haven't really simplified anything. As such, the function u_x, with u_x(C tilde) = C(x), is quite complex, in that it runs the circuit C itself on x. So we haven't really achieved anything, and current techniques don't let us construct such encryption schemes directly. A reasonable question to ask here: can I replace these functions u_x with something much simpler? The answer to that question is yes. Classical works have shown that if you use PRGs in NC0, then you can effectively replace them with much simpler functions. How simple? So let's say the locality of the PRG we use is d. What is locality? In a PRG in NC0, every output bit can depend on only a constant number of input bits, and that constant is the locality. What is shown is that if PRGs with locality d exist, then you can replace u_x(C tilde) with specifically chosen (3d+1)-local functions. So every output bit depends on just 3d+1 bits, and is therefore a polynomial of degree at most 3d+1. The minimum value of d known in the literature for such PRGs is 5; therefore, the minimum degree you can get this way is 16. As a consequence of all this, you can aim for an encryption scheme which hides everything about the circuit, except that it magically lets you learn the specifically chosen, specifically designed degree-16 functions given to you by this theorem. The point of these functions is that they hide everything about the circuit, except that they let you learn the truth table and nothing else. That's the security property. So now, if I can construct such an encryption scheme, where I encrypt C tilde and you can learn degree-16 functions like this, I'll be done. So the question is: what is known about such encryption schemes? It turns out we're not quite there yet.
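To illustrate what "locality d" means, here is a minimal sketch of a Goldreich-style local PRG (the graph, predicate, and parameters are all illustrative choices, not the specific PRG the construction needs): each output bit reads only d = 5 fixed seed positions, so each output bit is a polynomial of degree at most 5 over GF(2), and the talk's transformation turns the functions u_x into (3d+1) = 16-local ones.

```python
import random

D = 5  # locality; per the talk, d = 5 gives final degree 3*d + 1 = 16

def sample_graph(kappa, m, rng):
    """For each of m output bits, pick the D seed positions it may read."""
    return [rng.sample(range(kappa), D) for _ in range(m)]

def predicate(bits):
    # Example 5-ary predicate (an XOR-AND style choice, purely illustrative):
    # x1 + x2 + x3 + x4*x5 over GF(2).
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def prg(seed, graph):
    # Each output bit depends on exactly D seed bits -- constant locality.
    return [predicate([seed[i] for i in idxs]) for idxs in graph]

rng = random.Random(0)
kappa = 32
m = 64  # stands in for the "barely polynomial" stretch kappa**(1 + epsilon)
graph = sample_graph(kappa, m, rng)
seed = [rng.randrange(2) for _ in range(kappa)]
out = prg(seed, graph)
```

The only feature the construction exploits is structural: constant locality forces constant algebraic degree.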
Hypothetically speaking, if these were not degree-16 but degree-2 polynomials over some prime field, then there is quadratic functional encryption, which has been studied for quite some time and which you can base on DLIN, and you'd be done. However, the problem is that these functions are not degree 2; they are degree 16, as I said. So what do we do in this work? We come up with a way to pre-process C tilde such that the pre-processing is efficient, and at the same time the degree reduces to 2: u_x(C tilde) can be computed by a degree-2 polynomial over the pre-processed input. So that's what I'm going to talk about. But note that this should already ring a bell. You shouldn't expect to take an arbitrary degree-16 computation and pre-process it such that the pre-processing is simultaneously short and, at the same time, the degree reduces to 2. You shouldn't really expect that, and in fact, that's not exactly what we do. We work in a different kind of pre-processing model, where we allow for a public input. So we're going to pre-process C tilde into two components: a public component, which of course has to hide C tilde, because it's public, and a secret component, and we're going to encrypt only the secret component. And now the polynomial is allowed to be a constant-degree polynomial in the public component, but only degree 2 in the secret component. Luckily for us, using bilinear maps, you can build encryption schemes supporting these computations, where you evaluate constant degree on the public component and degree 2 on the secret component.
And these schemes go by the name of partially hiding functional encryption, which was built specifically for the context of IO in the line of prior works on this slide. For the rest of this talk, we'll ignore the public component and just focus on degree reduction, intuitively suggesting how you can reduce the degree to 2; the public component will come along implicitly. So how do we do it? This is where we're going to use our key assumption, Learning Parity with Noise. And remember, the goal is to replace the computation u_x(C tilde) by quadratic functions. We do it in two steps, roughly. In the first step, we solve the problem approximately; we almost solve it. How do we do that? We take C tilde and pre-process it into another short input s such that for most inputs x, it will now happen that f_x(s) = u_x(C tilde). Already, that almost solves the problem. Now, once we have that, we come up with another polynomial in another short input m, also of degree 2, such that when I add it to what I already computed, it gives the correct output on every input. And this is where we're going to use a surprisingly simple idea of matrix factorization. So let's see the first part first. The goal is to come up with a degree-2 polynomial which approximately solves the problem, and this is where we use the most intuitive idea you can think of: use LPN to encrypt C tilde. So remember, we wanted to compute degree-16 polynomials in C tilde. What we do is simply encrypt it using LPN. Recall what LPN says: A s + e is pseudorandom, where e is a sparse error. So we're going to sample a coefficient matrix A, multiply it with a secret s of small dimension, and then add sparse noise chosen over Z_p.
And then we're going to add C tilde: we write it as a vector and add it, forming a vector b = A s + e + C tilde, mod p. Now, what's the point of all this? The point is that A and b together encrypt C tilde; they hide C tilde. And that is because of the LPN assumption: A s + e is pseudorandom. Crucially, C tilde is now encoded with a secret s whose dimension is very small compared to the length of C tilde, and this is what makes it helpful for the degree-compression step. So let's see how. We have this equation on the right. Remember, our goal is to find a degree-2 function in another short input s such that for most inputs x, f_x(s) = u_x(C tilde). I'm going to just give you the candidate and then argue both properties. The candidate is simply u_x(b - A s), which is a degree-16 polynomial in the secret s and in b and A. Let's observe the second property first. The point is that b - A s is nothing but C tilde plus the error e. And now remember, u_x is a 16-local function: it depends on only 16 bits of C tilde. And the error is very sparse. So for most inputs x, u_x(C tilde + e) is exactly equal to u_x(C tilde), just because the error is so sparse. So this settles that property. Now, why is it OK degree-wise; why is it degree 2 in s and constant degree in the public components b and A? Well, it's a degree-16 polynomial, so its degree in b is at most 16, its degree in A is at most 16, and its degree in s is at most 16. We don't care about its degree in b and A, because constant degree in the public component is fine. In s, it's degree 16; however, note that s has very small dimension. Therefore, I can trivially quadratize it: I introduce a new variable, capital S, which consists of all monomials in s of degree at most 8. In that variable, the polynomial is actually degree 2.
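The two facts just used, that b - A s recovers C tilde outside the sparse error support and that quadratization over the small secret keeps the new variable small, can be checked numerically. This is a toy sketch with hypothetical parameters (p, dimensions, and sparsity are all illustrative, not the scheme's real parameters):

```python
import random
import itertools

p = 101   # small prime field, illustrative
L = 50    # length of the encoded vector, playing the role of |C tilde|
k = 5     # LPN secret dimension, much smaller than L

rng = random.Random(1)
ct = [rng.randrange(p) for _ in range(L)]                    # plays C tilde
A = [[rng.randrange(p) for _ in range(k)] for _ in range(L)]
s = [rng.randrange(p) for _ in range(k)]

# Sparse error: nonzero in only 3 of the 50 coordinates.
e = [0] * L
for i in rng.sample(range(L), 3):
    e[i] = rng.randrange(1, p)

# The LPN-style encoding from the talk: b = A*s + e + ct (mod p).
b = [(sum(A[i][j] * s[j] for j in range(k)) + e[i] + ct[i]) % p
     for i in range(L)]

# b - A*s equals ct + e, so it agrees with ct outside the error support.
dec = [(b[i] - sum(A[i][j] * s[j] for j in range(k))) % p for i in range(L)]
agree = sum(dec[i] == ct[i] for i in range(L))

# Quadratization: a degree-16 polynomial in s is degree 2 in the extended
# variable S = (all monomials in s of degree <= 8).
S_dim = sum(1 for r in range(9)
            for _ in itertools.combinations_with_replacement(range(k), r))
```

Here `agree` comes out to 47 of 50 coordinates (everything outside the 3 error positions), and `S_dim` is the number of monomials of degree at most 8 in 5 variables, which stays polynomial in k.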
And as a consequence of this, we're in good shape, because if s is very small, capital S is still going to be small. Just to give you a sense, if the dimension of s is like N^0.1, the dimension of capital S is going to be at most around N^0.8. And this roughly completes the argument for why we managed to find a polynomial which approximately computes u_x on most inputs x. So now, how do we fix the errors? How do we do the second step? That's really intuitive as well. Remember what we want to compute, and what we have managed to compute: if I can come up with a polynomial which computes the difference of these two, then I'll be done, OK? Observe that this difference is going to be a sparse vector, because f_x is already correct on most inputs. And the point is that since it's very sparse, I can effectively arrange it as a matrix and then factorize that matrix: a sparse matrix has low rank, and low-rank matrices can be factored, OK? And that gives you a compressed input m. So as a consequence, you can come up with a degree-2 function which computes the difference; you add it in, and that way you get the correct answer everywhere. OK, so this roughly completes the construction; of course I'm hiding a lot of details, I just wanted to give you the key intuition. However, there's a problem with the argument that I showed: the time it takes to pre-process the public and secret parts is actually going to be proportional to capital N, because, remember, we are computing the difference and then compressing it, and doing that means going over every one of the N inputs, right? So the time it takes is proportional to N, and this doesn't solve the problem. And additionally, this idea as stated requires LWE to make it work.
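The "sparse implies low rank implies degree-2" step can be sketched concretely (dimensions and the factorization are illustrative, not the paper's actual construction): arrange the length-N sparse difference vector as a sqrt(N) x sqrt(N) matrix E; if E has t nonzeros it has rank at most t, so E = U V with inner dimension t, and then each entry E[i][j] = sum_r U[i][r] * V[r][j] is a degree-2 function of the compressed data (U, V).

```python
import random

n_side = 8   # the matrix is n_side x n_side, standing in for sqrt(N)
t = 3        # number of nonzero (error) positions

rng = random.Random(2)
E = [[0.0] * n_side for _ in range(n_side)]
for i, j in {(rng.randrange(n_side), rng.randrange(n_side)) for _ in range(t)}:
    E[i][j] = float(rng.randrange(1, 10))

# A rank-<=t factorization built by hand: one rank-1 term per nonzero entry.
nonzeros = [(i, j, E[i][j]) for i in range(n_side) for j in range(n_side)
            if E[i][j] != 0]
r_dim = len(nonzeros)
U = [[(v if a == i else 0.0) for (i, j, v) in nonzeros] for a in range(n_side)]
V = [[(1.0 if b == j else 0.0) for b in range(n_side)] for (i, j, v) in nonzeros]

# Each reconstructed entry is a degree-2 expression in the entries of (U, V).
recon = [[sum(U[a][r] * V[r][b] for r in range(r_dim)) for b in range(n_side)]
         for a in range(n_side)]
```

The compressed representation (U, V) has about 2 * t * sqrt(N) entries rather than N, which is the saving the second step needs; the catch, as noted above, is that computing the difference in the first place still touches all N inputs.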
The key insight of this paper is that if I want to do this computation for many, many circuits, let's say K circuits, it turns out that we can actually amortize over K. We can come up with a way to pre-process such that the time it takes is like N times K^(1 - epsilon) for some epsilon, plus a polynomial in K. OK, and it turns out that this saving in K is enough to get us all the way to IO. That's one of the main contributions of this paper. Now, of course, I'm not going to go into the details; the key argument is really combinatorial, and it relies on efficient circuit implementations of specific RAM programs such as lookups, sorting networks, and so on and so forth. I'm not going to go over that in this talk. OK, and with that, I'd like to thank you for listening, and I'd like to leave you with some interesting open questions. One of the most interesting open questions: can I construct FHE from these non-lattice assumptions, but in a direct manner? Right now I'm going through IO, and it's really just a feasibility result. The question is: can bilinear maps and assumptions like these somehow be leveraged to give rise to FHE directly? And then the second question, which I also mentioned throughout the talk, concerns the beautiful complexity-theoretic questions that came along, connecting lattice-based problems with the other kinds of problems that exist out there. With that, I'd like to thank you.