Yeah, so I want to tell you how to remove graded encodings, or multilinear maps, from functional encryption. This is joint work with Huijia (Rachel) Lin and Omer Paneth. So let me start by turning on my clicker and then recalling what exactly functional encryption is. In plain encryption, we know that anyone who has the public key can encrypt. If you don't have the secret key, you can't tell an encryption of one plaintext from an encryption of another, and if you have the secret key, then of course you can fully recover the plaintext. In functional encryption, you can also generate partial decryption keys that allow you to learn a specific function of the plaintext. And we still require that indistinguishability holds, provided that the function doesn't separate the two plaintexts, that is, the function agrees on the two plaintexts. So this is the notion of functional encryption. As it turns out, a crucial aspect that tremendously affects the power of functional encryption is the size of the ciphertext and how it scales with the circuit size of the functions you want to support. So let me give you the rough picture. We know that if we allow the ciphertext to grow with the circuit size of the functions we want to support, then functional encryption is not really more powerful than plain encryption, and in particular we can construct it from standard assumptions. In contrast, if we require the ciphertext to be even mildly succinct, by which I mean that it grows, say, sublinearly in the circuit size, then there is a tremendous jump in the power of functional encryption, and we already get the immense power of obfuscation. So this is the power of succinctness.
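To make the syntax concrete for readers of the transcript, here is a minimal sketch of the functional encryption interface and its correctness condition. The "scheme" below is a trivially insecure placeholder (ciphertexts are plaintexts in the clear) with hypothetical names; it only illustrates the interface and the indistinguishability condition, not any construction from the talk.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionKey:
    f: Callable[[int], int]  # the function this partial key decrypts to

def setup():
    return "mpk", "msk"       # master public / secret keys (placeholders)

def keygen(msk, f):
    return FunctionKey(f)     # a partial decryption key for the function f

def encrypt(mpk, m):
    return m                  # placeholder ciphertext (insecure on purpose)

def decrypt(sk, ct):
    return sk.f(ct)           # functional decryption: learn f(m), not m

# Correctness: Dec(KeyGen(msk, f), Enc(mpk, m)) == f(m)
mpk, msk = setup()
sk = keygen(msk, lambda m: m % 2)   # key for the parity function
assert decrypt(sk, encrypt(mpk, 6)) == 0
assert decrypt(sk, encrypt(mpk, 7)) == 1

# Security (informal): Enc(m0) and Enc(m1) should be indistinguishable
# whenever f(m0) == f(m1) for every issued key, e.g. m0 = 4, m1 = 6 here.
```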
And let me also mention that there is another measure of succinctness we sometimes care about, which is how the ciphertext size scales with the number of keys. What I say today applies there as well, but for the sake of this talk we can think about a single key, and we really care about how the ciphertext scales with the size of the corresponding function. Okay. So naturally there has been a lot of work here: constructing succinct functional encryption has become a central goal in cryptographic research, and there has been some impressive progress. Let me tell you roughly what we know. The first constructions were based on obfuscation, on indistinguishability obfuscation. Following that, there were works showing that you can construct functional encryption directly from multilinear maps, or graded encodings; today the difference is not going to matter, so I'll say multilinear maps, and even from pretty simple assumptions. Now, multilinear maps are of course an object that we still don't really understand well, and there has been a lot of effort to reduce the gap between multilinear maps and standard assumptions, or objects that we understand better, concretely bilinear maps. Today's state-of-the-art constructions seem to come very close: we even have constructions based on trilinear maps and some simple assumptions on local PRGs. But we haven't quite crossed this line yet. We're still not in the world of standard assumptions, and we certainly don't have constructions based on bilinear maps. So let me tell you, for now at a very high level, what we do in this work. We identify certain sufficient conditions that allow us to really cross this line and get to bilinear maps. Roughly speaking, we show that you can take any functional encryption scheme that uses constant-degree maps and reduce the degree to two, to bilinear maps, provided two things hold.
First, we want the functional encryption scheme to be sufficiently succinct, and I'll say exactly what this means in a second. Second, we want the construction to use the multilinear maps in a black-box way. When I say black box, I mean that the construction only performs generic operations: it is completely oblivious to the actual representation of the multilinear maps. This is very similar to the generic group model that many of you probably know. Okay, so let me be more precise about what we show. We show that you can take any functional encryption scheme that uses a degree-D multilinear map as a black box and has succinctness better than 1/D, namely the ciphertext size scales with the circuit size raised to a power of at most 1/D. Given that, plus other standard assumptions like LWE, we get succinct functional encryption from bilinear maps. Now, in terms of security, we are trying to capture a large class of constructions here: we start from any functional encryption scheme that can be proven secure in an ideal, generic multilinear map model, and as a result what we get is a construction that is secure in an ideal bilinear model. Okay, so this is the result. Exactly how close does it bring us to constructing functional encryption and obfuscation from standard assumptions? We're not quite there yet. If you look at the existing constructions, then first, they are definitely black box in the multilinear maps: we do have constructions of functional encryption that use multilinear maps as a black box, but they are not quite succinct enough. Their succinctness is just above this 1/D threshold, which is not sufficient to cross the line to bilinear maps. So this really draws a very fine line between what we want and what we currently have.
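A back-of-the-envelope computation shows why 1/D is the natural threshold: linearizing a degree-D scheme (as sketched later in the talk) costs about M**D encoded monomials, where M is the number of encodings. The numbers below are illustrative, not from the paper.

```python
# N: truth-table size; M: number of encodings; linearization costs ~ M**D.

def monomial_count(M, D):
    return M ** D

N = 10 ** 12   # truth-table size (illustrative)
D = 3          # degree of the multilinear map

# Succinctness exactly 1/D: M ~ N**(1/D), so M**D ~ N and nothing is saved.
M_at_threshold = round(N ** (1 / D))
assert monomial_count(M_at_threshold, D) == N   # as large as the truth table

# Succinctness strictly better than 1/D, e.g. exponent 1/(D+1):
M_better = round(N ** (1 / (D + 1)))
assert monomial_count(M_better, D) == 10 ** 9   # N**(D/(D+1)) << N: compression survives
```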
And you can view this positively: there is a specific goal you can try to achieve if you want to cross this line, namely make our constructions more succinct. But of course there is a flip side. You can also view it as a lower bound, perhaps as a negative result, that tells you the limits of the functional encryption schemes we're constructing; in particular, making them very succinct should be hard, where "hard" means as hard as constructing obfuscation from bilinear maps. So much for the interpretation; for the remaining time I would like to give you a taste of the ideas involved in this result. The starting point is really a result for obfuscation by Pass and Shelat, which shows that if we have a construction of indistinguishability obfuscation, IO, that uses constant-degree maps in a black-box way, you can actually completely remove the multilinear maps: they don't add any power. Now, I just mentioned that succinct functional encryption gives you IO, so it may be tempting to think that this already gives us the result: perhaps we can take a functional encryption scheme that uses multilinear maps in a black-box way, use the existing transformations to get corresponding IO, and then remove the maps using the existing result. The reason this doesn't work is that, quite interestingly, the reduction from functional encryption to IO is not a black-box reduction. Even if you start from a functional encryption scheme that uses multilinear maps as a black box, the resulting IO scheme will actually make use of the explicit representation, the code, of the multilinear maps. So we have to do something else here.
And roughly speaking, what we're going to do is take a pretty similar route, at least in spirit, but rather than thinking about IO we're going to think about a relaxed notion of obfuscation called XIO, exponentially efficient IO. For this notion there does exist a reduction from succinct functional encryption that is completely black box; in particular it relativizes, so if we start with functional encryption that uses multilinear maps as a black box, we get XIO that uses multilinear maps as a black box. Most of the effort in this work is dedicated to showing that if you have such XIO, then you can really reduce the degree of the multilinear maps and get to bilinear maps. We won't be able to completely remove them as in the setting of IO, but we'll reduce them to bilinear. And once you have such XIO, there again exist reductions, even black-box reductions, that take you back to succinct functional encryption and everything you need to get indistinguishability obfuscation. So this is the blueprint, and let me now tell you what exactly XIO is and how we can reduce the degree of multilinear maps in XIO. Usually when we want to obfuscate, we have a circuit, and this circuit typically corresponds to a much larger truth table: if you write down the circuit's value on each input, the truth table will be much larger than the circuit itself. And usually when we obfuscate this circuit, we expect the resulting obfuscation to be roughly the same size as the circuit, maybe up to some polynomial blow-up. The notion of exponentially efficient IO allows the obfuscated circuit to grow not only with the circuit size but also mildly with the size of the truth table, say sublinearly in the truth table. So it really seems like a much weaker notion.
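The XIO efficiency requirement just described can be sketched numerically. The function names, the choice of poly(|C|), and the exponent below are illustrative assumptions, not the paper's parameters.

```python
# XIO compression: the obfuscation may grow with the truth table, but only
# sublinearly, i.e. |obfuscation| <= poly(|C|) * N**(1 - eps) for some eps > 0,
# where N = 2**n is the truth-table size.

def truth_table(circuit, n):
    # the 2**n values the circuit takes on all n-bit inputs
    return [circuit(x) for x in range(2 ** n)]

def is_valid_xio_size(obf_size, circuit_size, n, eps=0.5):
    N = 2 ** n
    return obf_size <= (circuit_size ** 2) * N ** (1 - eps)  # poly(|C|) = |C|**2 here

n = 20
circuit_size = 100
tt = truth_table(lambda x: x & 1, n)
assert len(tt) == 2 ** 20

# An "obfuscation" as large as the whole truth table is trivial, and rejected:
assert not is_valid_xio_size(len(tt) * circuit_size, circuit_size, n)
# One scaling like sqrt(N) * poly(|C|) is allowed:
assert is_valid_xio_size(10_000 * circuit_size, circuit_size, n)
```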
In particular, for this notion to be meaningful, we should think about circuits that compute a polynomially large truth table, where this polynomial could be huge, much larger than the circuit itself. And it turns out that this notion is still very powerful: if you have it, then under additional standard assumptions you can again get all the way to succinct functional encryption and IO, okay? So now we want to show that if we have XIO using multilinear maps as a black box, we can reduce their degree. To explain how, I would first like to describe, in a very oversimplified way, how this is done in the case of IO, the actual notion rather than the relaxed one, and how you can reduce from degree D, for some constant D, to linear maps, okay? Basically discrete-log groups. Concretely, I'm going to assume that the obfuscation scheme has a very simple structure. It consists of a bunch of ring elements X1 up to Xm, these elements are encoded in our multilinear groups, and all you can basically do with them is perform certain zero tests given by degree-D polynomials. In particular, to evaluate the obfuscation on a specific input, you just test whether certain degree-D polynomials evaluate to zero, and this is also all the attacker can do. Now, given such an obfuscation, how would you remove or reduce the degree of the multilinear map? One thing you can do is, rather than encoding the ring elements themselves, precompute all the monomials of degree at most D, right? These are the only things that are going to be relevant, so instead of encoding the elements, just encode these monomials directly. And now, to perform the zero tests, we no longer have to evaluate degree-D polynomials; we can evaluate linear polynomials in the new encoded elements. Now, why does this work?
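The monomial trick above can be checked concretely. In this sketch plain integers stand in for encoded ring elements (a real scheme would keep all values inside the map's encodings); the helper names are mine, not the paper's.

```python
from itertools import combinations_with_replacement
from math import prod

def monomials(m, D):
    # all multisets of size D over {0, 1, ..., m}; index 0 is a dummy
    # variable fixed to 1, so these cover every monomial of degree <= D
    return list(combinations_with_replacement(range(m + 1), D))

def monomial_vector(x, D):
    vals = [1] + list(x)                       # vals[0] = 1, vals[i] = X_i
    return [prod(vals[i] for i in mono) for mono in monomials(len(x), D)]

def linearize(poly, m, D):
    # poly maps a sorted index tuple over 1..m (padded with 0s to length D)
    # to its coefficient; returns the coefficient vector of the equivalent
    # *linear* form over the monomial vector
    index = {mono: j for j, mono in enumerate(monomials(m, D))}
    coeffs = [0] * len(index)
    for mono, c in poly.items():
        coeffs[index[mono]] = c
    return coeffs

# Example: p(X1, X2, X3) = 5*X1*X2*X3 - 2*X2 + 7, with D = 3, m = 3
D, m = 3, 3
p = {(1, 2, 3): 5, (0, 0, 2): -2, (0, 0, 0): 7}

x = (2, 3, 4)                                  # a sample assignment
direct = 5 * 2 * 3 * 4 - 2 * 3 + 7             # degree-3 evaluation
mv = monomial_vector(x, D)
linear = sum(c * v for c, v in zip(linearize(p, m, D), mv))
assert linear == direct == 121                 # linear in mv, same answer
```

Note that len(monomials(m, D)) is about m**D, which is the counting that comes next in the talk.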
Why is this construction even efficient? The reason is that there aren't too many monomials, okay? There are M to the D monomials, where M is the number of encodings and D is the degree of the polynomials we're handling. Here M is proportional to the circuit size, this is an obfuscation after all, so overall we have polynomially many encodings; remember that D is a constant here, okay? Now, say you tried to do the same with XIO, since we don't have IO at our disposal. Would this still work? You can still write out all of these monomials, but the problem is that now the result is trivial. The reason is that in XIO the size of the obfuscation, in particular the number of encodings, scales not only with the circuit size but also with the truth table, and it may very well be much larger than, say, the truth table to the 1/D. That means M to the D is really just too much: it would completely trivialize this construction, and the obfuscation would become as large as the entire truth table. So we can't do this directly. Instead, we are going to rely on a somewhat stronger notion of XIO that has an additional decomposability feature. It says the following: the obfuscation still looks like a list of encodings, but now the list can be divided into several blocks, with two properties. First, each of these blocks is going to be very small; perhaps all the encodings together scale with the truth table, but each individual block is very small, for the sake of this talk say polynomial in the circuit size. And moreover, the relevant zero tests, the ones you evaluate on any given input, are very local, in the sense that each only touches, say, two blocks in this list. Now we're going to use this decomposable XIO, and fortunately one can also show that the existing constructions that we have of XIO from
functional encryption, those black-box constructions, really do have this decomposability feature. So how can you use decomposability to reduce the degree? Now, instead of simply encoding all the monomials, we encode the monomials corresponding to each block separately, okay? The blocks are not too big, and the point is that since each zero-test polynomial is very local, touching only two blocks, we can rewrite it as a bilinear polynomial, a degree-two polynomial, in the new encoded monomials. So we've managed to reduce the degree from whatever constant D we had down to two. And again, regarding the size: each block is very small, so raising its size to some constant power keeps the overall size under control, and we still get the compression that XIO has to guarantee. Good, so this is really the rough idea. There's much more going on under the hood, this was really oversimplified; for the details you can read the paper or ask me offline. Let me just mention two additional results that we have in the paper. What I said so far doesn't say anything about, say, the generic group model or the random oracle model, models that are weaker than the ideal bilinear model. Can we remove such oracles from functional encryption? We do show such complementary results, both for the generic group model and for the random oracle model, but here we need to assume a slightly stronger notion of unbounded-key functional encryption, which is not known to be equivalent to succinct functional encryption: in the world of non-black-box reductions they are equivalent, but in the world of black-box reductions we don't know that. Okay, so this is all I wanted to say. I'll just remind you that the challenge is still open: we basically still don't know how to get to bilinear maps from standard assumptions. This is a very active area, so I'm looking forward to seeing what comes next. Thanks.
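For readers of the transcript, here is a toy illustration of the block-wise step that was only sketched verbally: a local degree-4 zero test touching two small blocks collapses to a bilinear (degree-two) form over per-block monomial vectors. Plain integers stand in for encoded elements, and the block sizes and polynomial are illustrative, not from the paper.

```python
from itertools import combinations_with_replacement
from math import prod

def block_monomials(block, D):
    # values of all monomials of degree <= D within one block
    vals = [1] + list(block)  # position 0: the constant 1
    return [prod(vals[i] for i in mono)
            for mono in combinations_with_replacement(range(len(block) + 1), D)]

# Two small blocks of encodings (per-block size poly(|C|) in the talk)
A = (2, 5)        # block A: variables a1, a2
B = (3, 7)        # block B: variables b1, b2

# A degree-4 zero test touching only these two blocks:
#   q = a1*a2*b1 - 2*a1*b1*b2 + 10
direct = 2 * 5 * 3 - 2 * 2 * 3 * 7 + 10        # = 30 - 84 + 10 = -44

# Rewrite q as a bilinear form over the per-block monomial vectors:
# each term splits as (monomial in A) * (monomial in B).
mA = block_monomials(A, 2)    # degree <= 2 monomials of block A
mB = block_monomials(B, 2)    # degree <= 2 monomials of block B
# positions in the degree-2 monomial list, in the order produced by
# combinations_with_replacement over {0, 1, 2}:
#   (0,0)=1, (0,1)=x1, (0,2)=x2, (1,1)=x1^2, (1,2)=x1*x2, (2,2)=x2^2
IDX = {(0, 0): 0, (0, 1): 1, (0, 2): 2, (1, 1): 3, (1, 2): 4, (2, 2): 5}

bilinear = (
    1  * mA[IDX[(1, 2)]] * mB[IDX[(0, 1)]]     #  a1*a2 * b1
    - 2 * mA[IDX[(0, 1)]] * mB[IDX[(1, 2)]]    # -2*a1  * b1*b2
    + 10 * mA[IDX[(0, 0)]] * mB[IDX[(0, 0)]]   # 10 * 1 * 1
)
assert bilinear == direct == -44
```

Each block contributes only (block size)**D monomials, so the total number of encodings stays small even though there may be many blocks, which is exactly why the XIO compression survives.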