Good morning, and welcome to the morning session. This is going to be the most fantastic session of the whole conference, so you're lucky to be here. We start with a fantastic recorded video on "IO from Pseudorandom Generators in NC0, LPN, and Bilinear Maps, or the Power of Lattice-Free Cryptography." This is joint work by Aayush Jain, Rachel Lin, and Amit Sahai, and there is a recorded video by Aayush.

So I'm going to talk about IO from PRGs in NC0, LPN, and bilinear maps, and in particular I want to focus on this facet of our work, which is a really intriguing insight about the power of lattice-free cryptography. This is joint work with Rachel Lin and Amit Sahai. As you all know, lattices have really revolutionized the whole space of cryptography with amazing applications, and over the past 15 years or so we have come to appreciate this really appealing problem of learning with errors. The reason it has been so immensely useful is simply that it has very nice worst-case to average-case connections: this problem has the promise of being as secure as several worst-case lattice problems. Not only that, it turns out to be our frontier problem for building post-quantum crypto, because as far as we know, as of today, we do not know of any quantum advantage for solving learning with errors. It has really been nothing short of a great success story, starting from inventions such as homomorphic encryption, which, as you know, people are trying to deploy in industry today; it is currently getting an industrial push. At the same time, recently the works on homomorphic encryption also won the Gödel Prize, which is a great thing for the entire field.
And it has been useful in so many different things; on this slide I have listed just a subsample of those applications: attribute-based encryption, multi-key FHE, functional encryption, and so many other things. So the real question that we want to ask is: are the kinds of hardness assumptions that go into lattice-based cryptography really essential for building primitives such as homomorphic encryption? Can FHE be based on assumptions which have no known connections to lattices, no known reductions to or from them, assumptions that may still plausibly be conjectured to be secure in the unlikely and unfortunate event that lattice-based assumptions get broken? What we show is a really interesting result: you can build not only FHE but most of the applications on the previous slide, plus a host of other applications, relying on an interesting mix of three assumptions, which I'm going to refer to as a trio of assumptions. The first assumption is the decisional linear assumption over symmetric bilinear maps, which is a really popular bilinear-map assumption. The second is learning parity with noise over fields, with an error probability of n^{-delta}, where delta can be an arbitrarily small constant greater than zero; so there is just a barely subconstant amount of noise in the LPN samples, and we are using the field version. The third is the existence of Boolean PRGs which are implementable in constant depth and which expand, say, n bits to barely polynomially more, n^{1+epsilon} for epsilon an arbitrary constant greater than zero. For these three assumptions we actually need some subexponential security, meaning that for every polynomial-time attacker, the distinguishing probability against each of these three assumptions is bounded by some subexponentially small function.
If all of them are hard, then you can build FHE and a host of other primitives. Okay, so before I proceed, there are a couple of questions I need to answer. First: are FHE and these three assumptions really unrelated to lattices? I need to at least justify to you that they are actually incomparable. Second: how do we even approach such a question? So let's look at the first question. Of course, I cannot conclusively answer it unless we resolve some long-standing, deep complexity questions, but you can always reason about these things based on our current understanding. So it turns out, from our current understanding, if you look at the LPN and PRG-in-NC0 assumptions, they are not known to imply something as basic as public-key encryption, whereas, on the other hand, if you look at lattice-based hardness assumptions such as GapSVP and LWE, they readily imply public-key encryption. This kind of indicates that either we currently do not know how to build public-key encryption from them, or maybe these assumptions are just not strong enough to give rise to public-key encryption. Even complexity-theoretically, we know that LWE sits in a structured complexity class like coAM, whereas this is simply not known for LPN and PRGs in NC0; our current understanding is that they are really Minicrypt-style assumptions. Okay. Now, when it comes to the other assumption that we are making, the decisional linear assumption, it is a number-theoretic assumption, and as of today we do not know any reductions either to or from lattices. It is really an interesting open question: if you could show an algorithm such as LLL being applicable to solving DLIN, that would give new insights about this problem, and it would also open doors for coming up with new algorithms not only for DLIN but for other kinds of assumptions out there.
Okay, so these are really exciting questions in themselves, and I hope that the community starts focusing on these problems a little more aggressively; hopefully we will be able to see answers to such questions over the next few years or so. Okay. So this reasonably answers the first question; how about the second one: how do I even show such a result? Well, one way could be that I go after every single primitive and construct them separately. That would of course be counterproductive. What we do in this work is essentially build something which implies not only these but a host of other primitives. That primitive, of course, is indistinguishability obfuscation. Okay. So our main result is that we can build IO based on these three non-lattice problems, and I want to stress that this actually improves our previous result, which appeared last year, where we showed that you can construct IO from these three assumptions while additionally relying on subexponential hardness of learning with errors. In the rest of the talk, we're going to see how this result actually works. So for the rest of the talk, let's say the circuit that we want to obfuscate takes n bits as input and outputs one bit, and throughout the talk I'm going to denote by capital N the quantity 2^n. Okay. It turns out that if you want to obfuscate a circuit like this, there is actually a very intuitive obfuscation scheme, which is simply the truth table: write down the outputs on inputs from 1 to capital N, that is, C(1) up to C(N). And this is not going to reveal anything about the circuit you obfuscate, other than its input-output behavior. However, of course, there is a very fundamental flaw with this scheme: the time it takes to obfuscate is proportional to the truth-table length, basically at least capital N. This doesn't qualify as a legitimate obfuscation scheme. So on one hand, you have this trivial construction.
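The trivial truth-table construction described above can be sketched in a few lines. This is purely illustrative: the toy circuit `C` and the input length here are hypothetical, and the point of the sketch is that both the obfuscation time and the output size scale with N = 2^n.

```python
def truth_table_obfuscate(circuit, n):
    """Return the truth table of an n-bit circuit: C(0), ..., C(2**n - 1).

    Time and output size are proportional to N = 2**n, which is exactly
    why this trivial construction is not a legitimate obfuscator.
    """
    N = 2 ** n
    return [circuit(x) for x in range(N)]

def evaluate(table, x):
    # Evaluating the "obfuscated" program is just a table lookup.
    return table[x]

# A toy 3-bit circuit (hypothetical, for illustration only).
def C(x):
    return (x * x + 1) % 2

table = truth_table_obfuscate(C, 3)
assert len(table) == 8  # size N = 2**n
assert all(evaluate(table, x) == C(x) for x in range(8))
```

The table reveals nothing beyond input-output behavior, but its size is exponential in n, which is the flaw the talk goes on to address.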
On the other hand, you'd like to construct an obfuscation scheme where the time it takes is polynomial in the size of the circuit C. What about something in between the two? A natural question, which has also been asked in the cryptographic community, is: can I improve upon the truth-table construction just a little bit? Can I construct an obfuscation scheme where the time it takes to obfuscate grows like N^{0.99}, that is, with an N^{0.01} factor of savings? It turns out that in beautiful prior work it was shown that such an improvement is enough to take us all the way to IO: if I can construct such a nontrivially efficient scheme, then, relying on any assumption that gives rise to public-key encryption, in particular DLIN, you can build IO. Okay, so for the rest of the talk, our goal is to actually construct such a nontrivially efficient obfuscation scheme. At this point I'd like to remark that our previous work actually doesn't manage to construct this: it constructs an obfuscation scheme where the time to obfuscate can grow with N, even though the size of the obfuscation is small. For such a scheme, the only way we know how to get to IO is by additionally relying on LWE, and that is the place where we previously needed lattices. In this talk, however, we focus on making the running time of the obfuscator small as well. Now let's go over our approach. What is our approach? Well, intuitively, if you think about nontrivially efficient obfuscation, it's just some sort of encryption of a special input c-tilde. What is c-tilde? It just consists of the circuit, some randomness, and things like that. And by the way, we want to ensure that the size of this encryption is small, like N^{0.99}, and that the running time of this encryption is also small. So it's not just any encryption: it's an encryption which hides everything about the circuit C except that it magically lets you learn functions of the form U_x(c-tilde) = C(x) for every input x in [N].
Okay, so it lets you learn the truth table, but nothing else. In other words, if you could construct such an encryption scheme where you can learn the truth table and nothing else, and the size of the encryption and its running time are small, then you would be done. Okay. Unfortunately, we are not quite there yet, and the reason is that we haven't really simplified anything. As stated, this function U_x(c-tilde) = C(x) is quite complex, in that it runs the circuit C itself on x; we haven't really achieved anything, and current techniques don't let us construct such encryption schemes. Okay, so a reasonable question to ask here is: can I replace these U_x with something relatively much simpler? The answer to the question is yes: classical works have shown that if you use PRGs in NC0 and assume they are secure, then you can effectively replace these with much simpler functions. How simple? Let's say the locality of the PRG that we use is d. What is locality? For a PRG in NC0, every output bit can depend on only a constant number of input bits, and that constant is the locality. So, what is shown is that if PRGs with locality d exist, then you can replace U_x with specifically chosen (3d+1)-local functions: every output bit depends on just 3d+1 bits, and therefore it is a polynomial of degree 3d+1. Okay, the minimum value of d that you can choose in the literature for such PRGs is 5; therefore, it turns out that the minimum degree you can use is 16. Okay, so as a consequence of all this, it suffices to come up with an encryption scheme which hides everything about the circuit except that it magically lets you learn the specifically chosen, specifically designed degree-16 functions that are given to you by these theorems. The point of these functions is that they hide everything about the circuit, except that they let you learn the truth table and nothing else.
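To make the locality notion concrete, here is a toy Goldreich-style local PRG using the 5-local "XOR-AND" predicate x1 XOR x2 XOR x3 XOR (x4 AND x5), a well-known candidate predicate for such PRGs. The wiring is sampled randomly here and the parameters are invented for the sketch; this is not the instantiation the talk assumes.

```python
import random

def xor_and(x1, x2, x3, x4, x5):
    # 5-local predicate: each output bit reads only these 5 seed bits,
    # so the resulting PRG is computable in NC0 (locality d = 5).
    return x1 ^ x2 ^ x3 ^ (x4 & x5)

def local_prg(seed, m, rng):
    # Map an n-bit seed to m output bits. In a real candidate, the index
    # sets below are fixed public wiring; here they are sampled randomly
    # just to illustrate the shape of the construction.
    n = len(seed)
    wiring = [rng.sample(range(n), 5) for _ in range(m)]
    return [xor_and(*(seed[i] for i in idx)) for idx in wiring]

n = 64
m = int(n ** 1.2)  # expand n bits to n^{1+eps}, here with eps = 0.2
rng = random.Random(0)
seed = [rng.randint(0, 1) for _ in range(n)]
out = local_prg(seed, m, rng)
assert len(out) == m and set(out) <= {0, 1}
```

The only property the sketch exhibits is the syntax of an NC0 PRG with n^{1+eps} stretch; actual pseudorandomness is, of course, an assumption about carefully chosen instantiations.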
Okay, so now if I can construct such an encryption scheme, which allows me to encrypt c-tilde and lets you learn degree-16 functions like this, I'll be done. So the question is: what is known for such encryption schemes? It turns out we're not quite there yet. Hypothetically speaking, if these were not degree-16 but degree-2 polynomials over some prime field, then there is quadratic functional encryption, which has been studied for quite some time and which you can build from DLIN, and you'd be done. Right. However, the problem is that these functions are not degree 2; they are really degree 16, as I was saying. So what do we do in this work? We come up with a way to preprocess c-tilde, such that the preprocessing is efficient, and at the same time the degree reduces: U_x(c-tilde) can be computed by a degree-2 polynomial over the preprocessed c-tilde. Okay, so that's what I'm going to talk about. But note that this should already ring a bell: you shouldn't really expect to take an arbitrary degree-16 computation and preprocess it such that the preprocessing is efficient while at the same time the degree reduces to two. You shouldn't really expect that, and in fact, that's not exactly what we do. We work in a different kind of preprocessing model, where we allow for a public component. Okay, so we're going to split the preprocessing of c-tilde into two components: a public component, which of course has to hide c-tilde, because it is public, and a secret component, which is the only part we encrypt. And now the polynomial is allowed to be a constant-degree polynomial over the public component, but only degree 2 in the secret component.
Okay, luckily for us, using bilinear maps you can also build encryption schemes supporting these computations, which have a public component on which you evaluate constant degree, and a secret component on which you evaluate degree 2. Okay, and these schemes go by the name of partially hiding functional encryption, which was actually built specifically in the context of the IO line of work. Okay, for the rest of this talk, we'll ignore the public component and just focus on degree reduction, intuitively suggesting how you can reduce the degree to two; the public component will be there implicitly. Okay, so how do we do it? This is where we're going to use our key assumption, which is learning parity with noise, and remember, the goal is to replace this computation U_x(c-tilde) by quadratic functions. We do it in two steps, roughly. In the first step, we solve the problem approximately: we get almost all of the outputs right. Okay, so how do we do that? We will take c-tilde and preprocess it into another short input s, such that for most inputs x, it will now hold that f_x(s) = U_x(c-tilde), where f_x is degree 2. That kind of already solves the problem almost. Now, once we have that, we come up with another degree-2 polynomial over another short input m, such that when I add it to what I already computed, it gives the correct output on every input. Okay, and this is where we're going to use a surprisingly simple idea based on matrix factorization. We're going to see the first part first. The goal is to come up with this degree-2 polynomial which approximately solves the problem, and this is where we actually use the most intuitive idea that you can think of, which is to use LPN to encrypt c-tilde. And remember, you want to compute the degree-16 polynomials on c-tilde. What we do is simply encrypt it using LPN.
So recall what LPN says: A*s + e, where e is a sparse error vector, appears pseudorandom. So what we're going to do is sample a coefficient matrix A, multiply it with a short, low-dimensional secret s, and then add sparse noise e chosen over Z_p. And then we add c-tilde: we write it as a vector and add it. This way we have formed a vector b = A*s + e + c-tilde. Okay. Now, what's the point of doing all this? The point is that A and b together encrypt c-tilde: they hide c-tilde, because A*s + e is pseudorandom. Okay, and moreover, c-tilde is now encoded with a secret whose dimension is very small compared to the length of c-tilde, and this is what makes it helpful for the degree-compression step. So let's see; we have this equation on the right. Remember, our goal is to find a degree-2 function in another short input s such that for most inputs x, f_x(s) = U_x(c-tilde). I'm going to just give you the candidate and then argue both properties. The candidate is simply U_x(b - A*s), which is a degree-16 polynomial in the secret s and in b and A. Okay. So let's observe the correctness property first. The point is that b - A*s is nothing but c-tilde plus e. Right. And now remember, U_x is a 16-local function, depending only on 16 bits of c-tilde + e, and e is very sparse; so for most inputs x, U_x(c-tilde + e) is exactly going to be U_x(c-tilde), just because e is very sparse. So this answers the correctness question. Now you want to understand why it is degree 2 in s and constant degree in the public components b and A. Well, it's a degree-16 polynomial, so its degree in b is 16, its degree in A is also 16, and its degree in s is also 16. Its degree in b and A is fine, because constant degrees over the public component are allowed. In s, however, it is degree 16.
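The LPN encoding b = A*s + e + c-tilde and the identity b - A*s = c-tilde + e can be sketched numerically. The parameters below are toy values invented for illustration (a tiny prime, tiny dimensions), not anything like the paper's instantiation, and no security is claimed for them.

```python
import random

# Toy parameters (hypothetical; far too small to be meaningful).
p = 97             # a small prime, standing in for the field Z_p
k = 8              # dimension of the short LPN secret s
N = 200            # length of the encoded vector c_tilde
noise_rate = 0.05  # stands in for the n^{-delta} error probability

rng = random.Random(1)
c_tilde = [rng.randrange(p) for _ in range(N)]
A = [[rng.randrange(p) for _ in range(k)] for _ in range(N)]
s = [rng.randrange(p) for _ in range(k)]
e = [rng.randrange(1, p) if rng.random() < noise_rate else 0
     for _ in range(N)]

def mat_vec(M, v):
    return [sum(a * x for a, x in zip(row, v)) % p for row in M]

# b = A*s + e + c_tilde over Z_p; under LPN over fields, A*s + e is
# pseudorandom, so (A, b) hides c_tilde.
As = mat_vec(A, s)
b = [(As[i] + e[i] + c_tilde[i]) % p for i in range(N)]

# b - A*s = c_tilde + e: it agrees with c_tilde everywhere except the
# few sparse noisy positions, which is why a sparse 16-local U_x is
# correct on most inputs x.
recovered = [(b[i] - As[i]) % p for i in range(N)]
disagreements = sum(recovered[i] != c_tilde[i] for i in range(N))
```

Note how short the secret is: s has dimension k, much smaller than N, which is exactly what the degree-compression step exploits.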
However, note that s is of very small dimension; therefore, I can trivially quadratize it. I introduce another variable, capital S, which consists of all monomials in small s of degree at most 8, and in that variable the polynomial is actually degree 2. Okay. And for sizes, this is good, because if s is very small, capital S is also going to be small. Just to give you a sense of dimensions: s is of size like N^{0.1}, so capital S is at most around N^{0.8}. Okay, and this kind of completes the argument for why we managed to find a degree-2 polynomial which is correct on approximately every input x. So now, how do you fix the errors? How do we do the second step? That's really intuitive as well. Remember the function we want to compute, and the function we have managed to compute; if I can come up with a polynomial which computes the difference of these two, then I'll be good. Okay, so observe that this difference is going to be a sparse vector, because f_x is already correct on most inputs. The point is that since this difference function is very sparse, I can effectively arrange it as a matrix and then factorize that matrix: the matrix is going to be sparse, it's going to be low rank, and low-rank matrices can be factored. Okay, and you will get a compressed input m. So, as a consequence of this, you can come up with a degree-2 function which computes the difference, and then you can add it, and that way you will get the correct output everywhere. And this really completes the approach, roughly; of course, I'm hiding a lot of details and just wanted to give you the key intuition.
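The quadratization step can be illustrated with a toy degree-4 polynomial standing in for the degree-16 U_x(b - A*s). The specific polynomial and the sizes are invented for the sketch; the point is only that every high-degree monomial in a short s becomes a product of at most two entries of the precomputed monomial variable S, so the polynomial is degree 2 in S.

```python
import itertools

def monomial_table(s, d):
    # Map each sorted index tuple to the value of the corresponding
    # monomial in s of degree at most d (the empty tuple is the constant
    # monomial 1). This table is the quadratization variable "capital S":
    # if s is short, the table stays small.
    table = {(): 1}
    for deg in range(1, d + 1):
        for idx in itertools.combinations_with_replacement(range(len(s)), deg):
            v = 1
            for i in idx:
                v *= s[i]
            table[idx] = v
    return table

s = [3, 5, 2, 7]

# Toy degree-4 stand-in for the degree-16 polynomial:
# f(s) = s0*s1*s2*s3 + s1^2 + 7, evaluated directly.
direct = s[0] * s[1] * s[2] * s[3] + s[1] ** 2 + 7

# Degree-2 rewrite over S: every monomial of degree <= 4 in s is a
# product of at most two degree-<=2 monomials in S, e.g.
# s0*s1*s2*s3 = S[(0,1)] * S[(2,3)].
S = monomial_table(s, 2)
quadratic = S[(0, 1)] * S[(2, 3)] + S[(1, 1)] * S[()] + 7 * S[()]
assert quadratic == direct
```

For the real parameters, monomials of degree at most 8 in a length-N^{0.1} secret give a variable S of size at most around N^{0.8}, which is why the rewrite keeps everything short.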
So, however, there is a problem with the argument that I showed: the time it takes to preprocess the public and secret parts is actually going to be quite large, because, remember, we are computing the difference, and the time it takes to do that goes over every input in [N]; so the time it takes is proportional to capital N, and this doesn't solve the problem. An additional idea is required if you want to make this approach work. The key insight of this paper is that if I wanted to do this computation for many, many circuits, let's say K circuits, then we can actually amortize over K: we can come up with a way to preprocess such that the time it takes is like N times K^{1-epsilon} for some epsilon greater than zero, plus a polynomial in K. Okay, and it turns out that this saving in K is enough to get us all the way to IO, and that is one of the main contributions of this paper. And of course, I'm not going to go into a lot of the details; the key argument is really combinatorial, and it relies on efficient circuit implementations of specific RAM programs such as lookups, sorting networks, and so forth. I'm not going to go over that in this talk. Okay. And with that, I'd like to thank you for listening, and I'd like to leave you with some open questions. One of the most interesting open questions is: can we construct FHE from these non-lattice assumptions in a direct manner? Right now I'm going through IO, and it's really just a feasibility result; the question is whether bilinear maps and assumptions like these can somehow be leveraged to give rise to FHE directly. Another question, which I also mentioned throughout the talk, is the beautiful complexity-theoretic question of connecting lattice-based problems with the other kinds of problems that exist out there. With that, I'd like to thank you. Okay.
So I'm asking if any of the authors is online. Aayush, hi. Hi. Can we project him? So, do we have questions for the authors? Hi. Hello. Thank you for being here. Do we have any questions? Okay, since there are no questions, thank you very much, and we move to the next talk.