We're trying to construct IO from low-degree multilinear maps. In fact, this is a soft merge between two papers: in the first paper, we achieve it from five-linear maps using a locality-5 PRG, and in the second paper, we in fact do it from just three-linear maps, or trilinear maps, using block-wise local PRGs. OK, I'll tell you what they are. So first, the goal here is to construct indistinguishability obfuscation. We want to have this type of obfuscator that can compile a circuit into another one that has the same functionality, but now the compiled circuit becomes unintelligible. And here, by unintelligible, we mean that the obfuscated circuit should hide one bit of information: that is, which of the following two equivalent circuits is being obfuscated. By equivalent, we mean that the two circuits should have the same size, to start with, and also they should have identical truth tables. And if that's the case, then we want the obfuscated circuits to be computationally indistinguishable. Even though this notion really hides only one bit of information, it turns out to be extremely powerful. A long line of work has shown that IO, with the addition of some mild assumption like one-way functions, can already imply almost all of Cryptomania and beyond. And furthermore, it seems to have created a new world of Obfustopia, where there are many even more powerful tools like functional encryption, succinct garbling of Turing machines, et cetera. So the essential question is: how can we construct IO? And a very successful paradigm so far is to climb up the ladder of multilinear maps. At the bottom of this ladder are cyclic groups, without or with bilinear pairings, that we all know and love, and that we know have huge implications in Cryptomania. We can try to generalize these cyclic groups to let them have higher-degree pairings, and this will give us multilinear maps.
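To make the equivalence notion just described concrete, here is a tiny toy illustration of our own (not from the papers): two syntactically different programs with the same truth table. An indistinguishability obfuscator must not let you tell which of the two was compiled.

```python
from itertools import product

# Two syntactically different "circuits" computing majority of 3 bits.
# An indistinguishability obfuscator must hide which one was obfuscated.

def c0(x):
    a, b, c = x
    return (a & b) | (b & c) | (a & c)

def c1(x):  # same truth table, different structure
    a, b, c = x
    return (a & (b | c)) | (b & c)

# Equivalence: identical truth tables over all 2^3 inputs.
assert all(c0(x) == c1(x) for x in product((0, 1), repeat=3))
```

Of course the actual security definition is about indistinguishability of the compiled circuits; the sketch only pins down what "equivalent" means.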
And multilinear maps enable us to do even more amazing things, the simplest example being that trilinear maps let us do four-party key exchange. And if you turn the knob of multilinearity all the way up to polynomial degree, then you get IO and Obfustopia. This is great. The only problem here is that, for the candidate constructions of multilinear maps so far, their security is far from what we would like them to have, as demonstrated by multiple attacks. Therefore, in this work, our starting point is to ask a different question: what are the minimal objects that, in fact, imply IO? High-degree multilinear maps are very powerful; however, they're heavy to lift, and the hope is that maybe we don't need their full mighty power. As a wise man once said: give me a place to stand and a lever, and I can lift up everything. The idea here is that perhaps we can lift up those high-degree multilinear maps using just low-degree ones. And in order to do that, we need the help of local PRGs, which on the surface seem completely irrelevant. At the same time, we're also trying to seek weaker assumptions on those multilinear maps on which we can base the security of IO. With this paradigm in mind, let me quickly survey the history of IO constructions through the lens of degree. The first generation of constructions always used polynomial-degree multilinear maps, or, in fact, the stronger variant of graded encoding schemes. Their security was often analyzed in ideal models or under strong uber assumptions, with some exceptions. Last year, I gave the first construction of IO based on just constant-degree graded encodings with a simple DDH-like assumption called joint SXDH. And to do that, for the first time, local PRGs came into the picture. In this work, in the first paper, we first reduce the concrete constant degree needed all the way down to 5, using a locality-5 PRG.
We also weaken the security assumption on the multilinear map to just SXDH. An important feature of this construction is that, for the first time, the degree of the multilinear map depends solely on the locality of the PRG, and they are equal, whereas previous constructions were not tight. However, even with this tightness, we hit a wall, because locality-4 PRGs simply do not exist. Therefore, in the second paper, we propose a new and relaxed notion of locality, called block-wise locality, and we show that it still suffices for constructing IO, hence giving us IO from trilinear maps and block-wise locality-3 PRGs. At this point, we hit another wall, which is that block-locality-2 PRGs unfortunately do not exist, and this bars us from constructing IO from just bilinear maps. Before I move on, I want to mention that the first paper is concurrent with the work of Ananth and Sahai, who also construct IO from degree-5 graded encoding schemes. However, their security analysis is more or less in an ideal model, whereas we only rely on SXDH. So let me tell you more formally what we achieve. The object of a multilinear map allows us to encode ring elements in groups by putting them in the exponent. I will use this bracket notation here, where the element encoded is in the bracket and the index of the group is in the lower right corner. Such encodings naturally support homomorphic addition and subtraction, and also allow us to test whether an encoding encodes 0 or not. And the magic is really in the homomorphic multiplication: we can take encodings in different groups and homomorphically multiply them together, obtaining an encoding in the target group of the product. The degree of the multilinear map is exactly how many encodings we can multiply together. In the case of trilinear maps, we can multiply together just three encodings. What about the SXDH assumption? It's also very simple.
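Before turning to SXDH, the bracket-encoding interface just described can be captured by a deliberately insecure toy model of our own. Real candidates hide the encoded values; this sketch only fixes the syntax of encoding, homomorphic addition, pairing, and zero testing, with an encoding represented as a value tagged by a set of group indices.

```python
# Insecure toy model of a degree-D multilinear map (our illustration only):
# an "encoding" is just a ring element tagged with a set of group indices.

P = 101  # toy prime modulus for the exponent ring
D = 3    # degree: at most D encodings can be multiplied together

def encode(v, groups):            # [v] in the given group(s)
    return (v % P, frozenset(groups))

def add(e1, e2):                  # homomorphic addition within one group
    assert e1[1] == e2[1]
    return ((e1[0] + e2[0]) % P, e1[1])

def mult(e1, e2):                 # pairing: combine disjoint group indices
    assert not (e1[1] & e2[1]) and len(e1[1] | e2[1]) <= D
    return ((e1[0] * e2[0]) % P, e1[1] | e2[1])

def is_zero(e):                   # zero test
    return e[0] == 0

# Trilinear example: [2]_1 * [3]_2 * [4]_3 = [24]_{1,2,3}.
e = mult(mult(encode(2, {1}), encode(3, {2})), encode(4, {3}))
assert not is_zero(e) and e[0] == 24
```

The degree bound shows up as the cap on how many group indices a product can accumulate before landing in the target group.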
It basically requires the basic DDH assumption to hold in each individual source group. Take any source group l: we want the encodings of the DDH tuple, a, b, and ab, to be indistinguishable from encodings of just random elements. And this should hold even when the generators of all source groups are available. So the main theorem in the first paper says that we can construct IO from the sub-exponentially secure SXDH assumption on degree-D multilinear maps, with the help of a sub-exponentially secure locality-D PRG and sub-exponential LWE. Note that here the degree of the multilinear map needed really equals the locality. And the main theorem in the second paper basically says that we can in fact swap out this locality-D PRG for the weaker block-locality-D PRG. With these theorems, you can now just plug in a locality-5 PRG to get IO from five-linear maps, or plug in a block-wise locality-3 PRG to get IO from trilinear maps. So what are those block-wise local PRGs? First, let us recall local PRGs. They allow us to expand a seed to a pseudorandom output which is polynomially longer. We say a function has locality L if each individual output bit depends on at most L input bits. There has been a long line of research studying what is the lowest locality we need for PRGs. Currently, for locality 5, we have candidate constructions, but locality-4 PRGs do not exist. When we go to block-wise locality, as the name suggests, we're going to swap each input bit for an input block, restricted to containing only a logarithmic number of bits. Now a function has block-wise locality L if each output bit depends on only L input blocks. Note that such a function can in fact have very high actual locality, but it still has the special structure of being local with respect to the input blocks, and that's what we can exploit.
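As a quick illustration of the bit-level locality notion recalled above, here is a toy example of our own: each output bit reads exactly five input bits through a fixed predicate. The predicate below (XOR of three bits plus an AND of two) resembles the kind used in locality-5 PRG candidates, but the concrete seed and neighborhoods are arbitrary choices of ours.

```python
# Toy locality-5 function (our example, not the papers' construction):
# each output bit depends on exactly 5 seed bits.

def xor_and(x, i, j, k, l, m):
    # a locality-5 predicate: XOR of three bits, plus an AND of two
    return x[i] ^ x[j] ^ x[k] ^ (x[l] & x[m])

seed = [0, 1, 1, 1, 1, 0, 1]
neighborhoods = [(0, 1, 2, 3, 4), (2, 3, 4, 5, 6), (0, 2, 4, 5, 6)]
out = [xor_and(seed, *nb) for nb in neighborhoods]

assert all(len(nb) == 5 for nb in neighborhoods)  # locality 5
```

A real PRG candidate would of course use many more output bits than seed bits, with the neighborhoods chosen from a suitable expander.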
So in the second paper, we give a preliminary study of block-wise local PRGs, with a natural candidate being just Goldreich's local functions where input bits are replaced with input blocks. These functions are parameterized by a bipartite graph that has exactly degree L, together with a set of predicates, one for each output bit. When you want to evaluate the i-th output bit, y_i, you simply take the i-th predicate and evaluate it on the set of input blocks that correspond to the neighbors of node i in the graph G. Okay, so these are very simple and natural constructions. What can we say about them? It turns out that when the block-wise locality is high enough, which is three or above, there exist bipartite graphs that have very good expansion. With this, we can show that there in fact exist block-wise locality-3 small-bias generators. And we can show that this type of function, when you use a suitable non-degenerate predicate, is plausibly one-way and pseudorandom under known assumptions in the literature on local PRGs. Furthermore, we can show some hardness amplification results. This is nice. But when you go to block-wise locality equal to 2, all these properties go away, because we do not have graphs with good expansion. Furthermore, it turns out that two recent works show that such PRGs do not exist, except in a very, very tiny window of stretch that is not known to be sufficient for constructing IO. I do not have much more time to delve into block-wise local PRGs. I think they are a very natural primitive and deserve more study. But in the rest of the talk, I will focus on giving you an idea of how we can construct IO from these very low-degree multilinear maps with the help of those block-local PRGs. And I will only be able to give you some flavor, or the high-level ideas, of how this is done, okay?
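Going back for a moment to the candidate just described, evaluating a block-wise Goldreich-style function can be sketched as follows. The graph, block size, and predicate here are arbitrary toy choices of ours; only the shape of the evaluation (predicate applied to the L neighboring blocks of each output node) comes from the talk.

```python
# Sketch (ours) of the natural candidate: Goldreich's functions with
# input bits replaced by input blocks. Block-wise locality L = 3.

def eval_blockwise_goldreich(blocks, graph, predicates):
    # graph[i]: the 3 block indices feeding output bit i
    # predicates[i]: an arbitrary predicate on those 3 blocks
    return [predicates[i](*(blocks[j] for j in graph[i]))
            for i in range(len(graph))]

def pred(b1, b2, b3):             # toy predicate on three 2-bit blocks
    return (b1[0] & b2[1]) ^ b3[0] ^ b3[1]

blocks = [(1, 0), (0, 1), (1, 1), (0, 0)]       # 4 blocks of 2 bits
graph = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]       # degree-3 bipartite graph
y = eval_blockwise_goldreich(blocks, graph, [pred] * 3)
assert len(y) == 3
```

Note each output bit here actually reads up to 6 seed bits, yet its block-wise locality is 3, which is exactly the structure the construction exploits.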
In order to do so, we will need to use, as an intermediate object, secret-key functional encryption for a very simple class of computations: only degree-D polynomials. We need this scheme to have very good efficiency, which we'll talk about shortly. For short, we call it degree-D FE, okay? So, in the first step, we want to bootstrap from degree-D FE all the way to IO, and this is the place where we need the help of those PRGs. What we give is the first tight bootstrapping technique, which makes sure that the degree of the FE equals the locality, or the block locality. The key idea here is pre-processing. Then, in the next step, we implement the degree-D FE that we need using exactly degree-D multilinear maps. Here again, for the first time, we give a tight FE construction where the degree of the multilinear map is exactly the degree of the functions we want to compute. The key idea here is to recursively compose a very simple primitive: functional encryption that computes only inner products, okay? I'll give you some high-level idea of how each step is done, and let's start with the first. Recall that functional encryption schemes are basically normal encryption schemes, but augmented with the capability of giving out partial decryption keys. We can encrypt messages as usual, but now, additionally, we can give out partial secret keys that are associated with some function or circuit. When you decrypt with such a partial decryption key, what you get is the output of the function or circuit evaluated on the encrypted message, as opposed to the message itself. The security guarantee is that, still, ciphertexts of two different sets of messages should be indistinguishable, even if you are given multiple secret keys, as long as the outputs are the same, so that you cannot tell simply by decrypting. It turns out that functional encryption is intimately connected with IO.
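The FE syntax just described can be pinned down with a deliberately insecure strawman of our own: a key for f lets you learn f(x) from an encryption of x. In a real scheme the ciphertext hides everything beyond the function outputs; here the "encryption" is a dummy wrapper that only fixes the interface.

```python
import os

# Insecure strawman (ours) of the secret-key FE interface.
# Only the syntax matters: Setup, Enc, KeyGen, Dec.

def setup():
    return os.urandom(16)                 # master secret key

def encrypt(msk, x):
    return ("ct", msk, x)                 # NO hiding: interface only

def keygen(msk, f):
    return ("sk", msk, f)                 # partial key for function f

def decrypt(sk, ct):
    _, k1, f = sk
    _, k2, x = ct
    assert k1 == k2                       # keys from the same setup
    return f(x)                           # decryption yields f(x), not x

msk = setup()
ct = encrypt(msk, (1, 0, 1))
sk = keygen(msk, lambda x: x[0] ^ x[2])   # key for a toy function
assert decrypt(sk, ct) == 0
```

The security game then asks that encryptions of two message vectors be indistinguishable whenever every issued key gives the same outputs on both.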
As shown by three elegant works, given single-key functional encryption for computing the class of NC1 functions, as long as it's compact, we can already construct IO. Here, compact means that this FE scheme should be mildly efficient: the encryption algorithm runs in time polynomial in the message length, which is necessary, but also sublinear in the size of the function to be computed. Therefore, we really just need to do bootstrapping of FE schemes, to go from degree-D FE to FE for computing NC1 functions. Here, we need the degree-D FE to have really, really good efficiency: the encryption algorithm runs just linearly in the length of the message, completely independent of the size of the function. All right, so how can we do that? Let's illustrate the idea with a simple local PRG, and later extend to block-wise local PRGs. A natural idea, which already appeared in previous work, is to use randomized encodings to reduce degree. These enable us to take an NC1 function f and represent it as an NC0 function, denoted RE(f). The very useful fact to remember is that RE(f) has very small degree: it can be computed in degree four, in particular degree one in the input bits and degree three in the random bits. This is great, because then hopefully we can just use the low-degree FE to compute RE(f), and we will be done. So let's see. Here, in order to encrypt a message x, what we do is take the degree-D FE and encrypt x together with some randomness. Now, to give a secret key for a function f, we just need to give a secret key for a function g that computes the randomized encoding. The benefit is that, because the randomized encoding can be computed in degree four, the degree of the FE is just four. Well, of course it cannot be just so easy. It turns out that, unfortunately, compactness does not hold. Why?
Because to encrypt the input, encryption takes time at least linear in the input length. Now the input contains the random tape for computing the randomized encoding, which is at least as large as the size of the function, as opposed to sublinear. So we do not have compactness. To salvage compactness, the natural idea is to say: I will not directly encrypt the random tape, but rather encrypt the seed of a PRG, so that later on, the function g will first take the seed, expand it to a pseudorandom output, and then compute the randomized encoding. Now, believe me on the math: with this change, encryption becomes sublinearly efficient. However, the drawback is that the degree of the functional encryption goes up. It becomes three times the PRG locality plus one. Why? Because RE has degree three in the random bits, and these random bits are now computed using the PRG, so the degree is only bounded by three times the locality of the PRG, plus one, which is definitely not tight. To get tight FE bootstrapping, our key idea is to do pre-processing. What do we mean? We would like to decompose this function g into two parts, A and B, so that the part of the computation corresponding to function A can already be done at encryption time, and the ciphertext encrypts the output of function A. Then, at decryption time, we only need to do the rest of the computation, corresponding to function B. If this can be done, then the degree of the functional encryption decreases, and hopefully we will be able to bound it by the locality of the PRG. The only constraint is that we must make sure the pre-processing is compact: that is, the output of function A is sublinear in the function size. Otherwise, you might as well just compute the function g entirely, and you wouldn't need any degree at all. The challenge is exactly in doing the pre-processing compactly, and let's see a little bit of why this is difficult.
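For concreteness, the degree accounting from a moment ago can be written out: RE(f) has degree one in the input and degree three in the randomness, and once the randomness is produced by a locality-L PRG, each random bit becomes a degree-at-most-L polynomial in the seed. This tiny sketch of ours just records that bound.

```python
# Degree accounting (our restatement of the bound from the talk) for the
# naive PRG-based bootstrapping, before the pre-processing idea.

def naive_bootstrap_degree(prg_locality):
    deg_in_input = 1        # RE(f) is degree 1 in the input bits
    deg_in_randomness = 3   # RE(f) is degree 3 in the random bits
    # each random bit is now a degree <= L polynomial in the PRG seed:
    return deg_in_randomness * prg_locality + deg_in_input

# With a locality-5 PRG the naive bound is 16, far above the target 5,
# which is what the tight pre-processing bootstrapping fixes.
assert naive_bootstrap_degree(5) == 16
```

The pre-processing decomposition g = B ∘ A is exactly what brings this bound down from 3L + 1 to L.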
Looking into RE(f), it can be expanded into a sum of monomials, where each monomial requires multiplying together three output bits of the PRG. Since we're not making any assumption about the structure of the PRG, in the worst case each output bit depends on L randomly chosen seed elements. Let's pretend that computing an output bit simply requires multiplying those seed elements together; this is not really true, but it helps us see the challenge. Now, multiplying three PRG output bits requires multiplying together three times L randomly chosen seed elements. As you can intuitively judge, multiplication between randomly chosen elements is very hard to pre-compute without quickly blowing up the size, because there's not much you can do; there's no structure. So, given that this is difficult, we ask the question: what can we pre-compute? It turns out that what we can pre-compute is the product of PRG output bits that depend on the same random edges but on different seeds. Why? Because now, in this huge product of seed elements, the columns are in fact aligned. Therefore, we can pre-compute the degree-three computation between the same random edges across different seeds, and with those pre-computed monomials, we can compute the product of PRG outputs using only the locality of the PRG as the degree. It turns out this observation is enough, and with much work, we can massage the function g we want to compute into this form, and therefore enable pre-processing. I don't have time to go into the details; please see the paper for more detail. Now let's see whether this idea also extends to block locality, and why block locality is enough. The only thing that changes now is that, instead of input bits, we're working with input blocks, and the first difficulty is that each output bit can no longer be computed by a low-degree, degree-L polynomial; instead, the degree can be way higher.
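To make the alignment trick above concrete, here is a sketch of ours, under the same pretend simplification that a PRG output bit is just the product of the seed elements in its neighborhood. Pre-computing the degree-3 "column" products across three seeds sharing a neighborhood leaves only L multiplications at decryption time.

```python
# Pre-processing sketch (ours), under the talk's pretend simplification
# that an output bit = product of the seed elements in its neighborhood.

def precompute_columns(s1, s2, s3):
    # degree-3 pre-computation: multiply the three seeds column-wise
    return [a * b * c for a, b, c in zip(s1, s2, s3)]

def product_of_three_outputs(cols, neighborhood):
    # product of the three output bits over the SAME neighborhood S:
    # only |S| = L multiplications of pre-computed values remain
    out = 1
    for j in neighborhood:
        out *= cols[j]
    return out

s1, s2, s3 = [2, 1, 3], [1, 2, 1], [3, 1, 2]
cols = precompute_columns(s1, s2, s3)            # column products
S = (0, 2)                                       # shared neighborhood, L = 2
direct = (s1[0] * s1[2]) * (s2[0] * s2[2]) * (s3[0] * s3[2])
assert product_of_three_outputs(cols, S) == direct == 36
```

The point is that degree 3L collapses to degree L because the three factors range over the same seed positions, so the columns line up.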
So the first step of the pre-computation is that we just do it the brute-force way and compute all the monomials over the elements of each block. Luckily, because block sizes are logarithmic, the blow-up is controlled and can be handled. With those monomials, we can now compute each output bit using a degree-L polynomial. And now the trick that we just talked about for local PRGs comes back to apply: we can further pre-compute the degree-three computation over those pre-computed monomials, and that will facilitate computing the product of PRG outputs. Okay, with the remaining, I don't know how many, minutes, okay, that's enough, I'll try to give you a very high-level flavor of how to construct such a degree-D FE using exactly degree-D multilinear maps. Let's start with a thought process, and we'll only do this thought process, where the only thing I want to compute is a degree-D monomial: I want to compute x_1 multiplied all the way through x_D, while hiding the input x. The first naive idea is: well, I have a multilinear map, so let me just encode these input elements inside the different groups, and the pairing will allow us to compute the product in the target group. This somehow suffices for functionality, but security completely falls apart. Why? Because those encodings really do not hide the input, which can take arbitrary fixed values. In fact, we only assume SXDH on multilinear maps, which only gives a security guarantee when the encoded values are random, and this is certainly not true for input bits. So the approach in previous work is that, to have security, they use some cryptographic primitive, in particular randomized encodings. They imagine that, instead of getting encodings of the input bits, you get encodings of a randomized encoding of those input bits for computing the monomial, and then you can simply use pairing to compute the output.
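Coming back for a moment to the block-wise pre-computation step above: enumerating all multilinear monomials over a b-bit block costs 2^b values, which stays polynomial when b = O(log n). This sketch of ours shows the enumeration; any predicate on L blocks then becomes a degree-L polynomial in the pre-computed monomials.

```python
from itertools import combinations

# Brute-force pre-computation (our sketch) of all multilinear monomials
# over one block. With b = O(log n) bits, 2^b = poly(n), so it is tame.

def all_monomials(block):
    n = len(block)
    monos = {(): 1}                       # empty monomial = constant 1
    for r in range(1, n + 1):
        for idxs in combinations(range(n), r):
            v = 1
            for i in idxs:
                v *= block[i]
            monos[idxs] = v
    return monos

m = all_monomials((1, 0, 1))
assert len(m) == 8                        # 2^3 monomials for a 3-bit block
assert m[(0, 2)] == 1 and m[(0, 1)] == 0
```

After this step, each output bit of the block-wise local PRG is a degree-L polynomial over the monomial tables of its L neighboring blocks, and the earlier degree-3 alignment trick applies on top.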
The problem now is: how do I get this randomized encoding? Previous work showed that you can leverage its structure and just use a simple functional encryption for computing inner products to get this encoding of the randomized encoding. The only problem is that the degree of the multilinear map needed will be two times the degree of the function, as opposed to exactly equal. So, in order to shave off this additional factor of two, we really need, with every pairing, to make progress on the computation of the function itself, as opposed to computing some other cryptographic primitive. It turns out that our first, very naive intuition is correct, except that we need to use a better encoding, and this encoding is just a functional encryption for inner products. By recursively composing it, you can implement the first naive, simple idea, and there is no waste of degree. Okay, so let me summarize. In this work, with two papers, the end message is that we can construct IO using trilinear maps and block-wise locality-3 PRGs. Now ahead of us is a very interesting fork, with the question hanging above our heads: do trilinear maps exist or not? If they do exist, then our next destination is Obfustopia; if not, then we're still living in the promised land. And thank you.