This is "Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification" for Eurocrypt 2021. I'm Romain Gay, and this is joint work with Aayush Jain, Rachel Lin, and Amit Sahai.

First, let me recall what indistinguishability obfuscation is. Basically, it is an efficient compiler that takes a program pi and turns it into a functionally equivalent program pi tilde. If another program has the same functionality as pi and the same size, then it is computationally hard to distinguish the obfuscation of one from the obfuscation of the other. In other words, indistinguishability obfuscation preserves the functionality but hides the implementation differences between two programs of the same size. This may look a little artificial, and in fact there is a more natural simulation-based notion of obfuscation, but that one turns out to be impossible. However artificial it may seem, the IO notion is extremely versatile and very powerful: virtually all the cryptographic applications we know of can be constructed from IO and minimal cryptographic assumptions. Beyond this amazing unifying aspect, IO has also been successful at enlarging the cryptographic horizon. It has helped us envision constructions that were unimaginable before, and some of the objects I've listed here have later been built from standard assumptions.

Given this amazing power, it is important to build IO on stable ground. In particular, we would like a security proof where the underlying assumption is simple. We want provable security, which makes the cryptanalyst's job easier. In that sense, we want the assumption to have a succinct description and to be understandable even by non-IO experts. We also want the assumption to be instance-independent: if you build an obfuscator for all polynomial-size circuits, you do not want one assumption per circuit; instead, we would rather have a constant number of assumptions. All these desirable features can be summarized as "simple to state". In our work, we build the first IO from simple-to-state hardness assumptions without multilinear maps. This work has been a stepping stone towards the celebrated result of Jain, Lin, and Sahai, which gave the first IO from well-founded assumptions, that is, long-standing and long-studied assumptions.

Our assumptions are simple to state, but one of them is new. In fact, we build IO from three assumptions. The first one is the Learning With Errors (LWE) assumption, a standard assumption. The second one is a standard assumption on bilinear maps, also known as pairings. The third one is new: the LWE-with-leakage assumption, and I will tell you more about it later. As is typical for IO, all these assumptions need to be subexponentially secure.

Now let me talk about this LWE-with-leakage assumption. It is an interaction of two long-studied assumptions: LWE with binary error, and the existence of constant-depth Boolean PRGs. LWE with binary error roughly says that it is hard to distinguish a bunch of linear equations with binary noise from truly random values. What is different from the standard LWE assumption is that this does not hold for an arbitrary number of samples.
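To make the two distributions concrete, here is a minimal numpy sketch of LWE with binary error. The modulus and sizes are toy values chosen purely for illustration; the talk does not fix concrete parameters.

```python
# Toy sketch of LWE with binary error (illustration only; parameters are far
# too small to be meaningful).
import numpy as np

rng = np.random.default_rng(0)
p = 2**16 + 1        # toy modulus; hypothetical choice, not fixed in the talk
lam = 32             # dimension of the secret
n = 256              # number of samples, somewhere between lam and lam**2

s = rng.integers(0, p, size=lam)         # secret vector in Z_p^lam
A = rng.integers(0, p, size=(lam, n))    # public matrix in Z_p^(lam x n)
e = rng.integers(0, 2, size=n)           # binary noise in {0, 1}^n

lwe = (s @ A + e) % p                    # noisy linear equations s*A + e
uniform = rng.integers(0, p, size=n)     # truly random values
# The assumption: (A, lwe) is computationally hard to distinguish from (A, uniform).
```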
In fact, if the number of samples n is roughly lambda, the dimension of the secret, or maybe a tiny bit more, then there is a reduction from worst-case lattice assumptions, just like for the standard LWE assumption. However, if the number of samples n is quadratic in the dimension of the secret, then there is an efficient, polynomial-time algorithm that breaks the assumption. The parameter regime we are interested in is in between: the number of samples n is slightly less than quadratic in the secret dimension. In this regime, the best known attacks are subexponential, and this is the regime we will care about.

Now, a constant-depth Boolean PRG can be expressed with constant-degree polynomials over the integers. Namely, every output bit of the PRG is a function of the input bits, and this function can be written as a polynomial over the integers. Because the circuit has constant depth, the polynomial has some constant degree d. If the stretch is not too big with respect to the degree of the PRG, then there are constructions that are believed to be secure, for which the best attacks are subexponential. The limit is when the stretch m is slightly less than n^(d/2), and we use a regime that is significantly lower, where we only need the stretch to be slightly more than n^(d/4). These constant-depth Boolean PRGs have a long history of study, including in our work, where we give some new candidate PRGs.

Now, if we put these two things together, we get our new assumption, LWE with leakage. It essentially says the following: given a PRG of constant depth, and hence constant degree, the pseudorandomness of this generator remains even if we give out LWE samples with binary noise where the noise is the seed of the PRG. That is, a bunch of noisy linear equations together with the evaluation of the PRG on that noise is computationally indistinguishable from the same noisy linear equations together with uniformly random values. In some sense, it has a circular flavor between the security of the PRG and the security of LWE with binary noise. Of course, as I said, this only holds when the number of samples is slightly less than quadratic in the dimension of the secret s, and when the stretch of the PRG is less than n^(d/2); in our case, slightly more than n^(d/4) is sufficient. This is the parameter regime we consider. In the paper we give more details and a survey of the existing attacks, and, as I said, we also give new PRG candidates. I am not going to say more about that here; I'll ask you to read the paper if you are interested.

Our theorem is that if you assume the polynomial hardness of the LWE assumption, of a standard assumption over bilinear maps, and of this new LWE-with-leakage assumption, then there exists sublinearly efficient public-key FE. Furthermore, if you assume subexponential hardness of these underlying assumptions, then there exists a subexponentially secure FE, which is known to give IO. Let me show you how we build this public-key FE. We combine three building blocks. The first one is a special homomorphic encryption that we build from LWE; this is new. The second one is a new FE, only for a restricted class of functions, that we build from bilinear maps. The third one is a special PRG from this new LWE-with-leakage assumption and the existence of PRGs in NC0. I will tell you more about all of this.
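Before going into the construction, here is a minimal sketch of the LWE-with-leakage distributions just described. The local map G below is a toy stand-in for a real constant-degree PRG candidate, and all parameters are illustrative only.

```python
# Toy sketch of the LWE-with-leakage assumption: LWE samples with binary
# noise remain pseudorandom even together with G(e), where the PRG seed is
# the LWE noise e itself.
import numpy as np

rng = np.random.default_rng(1)
p, lam, n, m = 2**16 + 1, 32, 256, 512   # toy parameters, illustration only

s = rng.integers(0, p, size=lam)
A = rng.integers(0, p, size=(lam, n))
e = rng.integers(0, 2, size=n)           # binary LWE noise, reused as the PRG seed
b = (s @ A + e) % p

# Toy Goldreich-style local map standing in for a constant-degree Boolean PRG:
# each output bit is (x AND y) XOR z on three fixed seed positions.
# This is NOT a real candidate from the paper.
idx = rng.integers(0, n, size=(m, 3))
def G(seed):
    x, y, z = seed[idx[:, 0]], seed[idx[:, 1]], seed[idx[:, 2]]
    return (x & y) ^ z

# The assumption, informally: (A, b, G(e)) is computationally
# indistinguishable from (A, b, r) with r uniformly random.
leaked = G(e)
r = rng.integers(0, 2, size=m)
```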
All of these are new, and we combine them to directly build public-key FE. What is new is that this transformation has only a polynomial security and efficiency loss. And this, as I said, is known to give IO. So this is the first time that sublinearly efficient public-key FE is built from polynomially hard assumptions. Another advantage is that the construction is much more direct: it has fewer steps than prior works, so it is both more efficient and conceptually simpler. I will take the opportunity to mention that it has recently been discovered that there are some subtle technical issues in one part of the proof of prior works, in the security amplification. This does not apply to our work, because we have a simpler bootstrapping; this is an advantage. Our construction is overall much more direct and simple.

In the rest of the talk, I will basically go over all these building blocks. Before that, let me at least define public-key sublinearly efficient FE, which is what we want to build. FE is a public-key encryption scheme where you have a special secret key called the master secret key, and it is possible to downgrade this master secret key into weaker, partial secret keys. In particular, given a function f and the master secret key, it is possible to derive a so-called functional secret key for f, denoted sk_f. Given sk_f, decryption recovers not the entire message m that was encrypted, but only the value f(m). This allows users to fine-tune exactly what kind of information about the message is revealed. We care about functional encryption schemes that can handle arbitrary functions, that is, all circuits. The security says that, given one secret key sk_f and one ciphertext, this information can essentially be simulated from only the value f(m) and the function f, which is not hidden. The technical challenge is to build such an FE where the encryption running time is sublinear in the output size of the function f. In fact, this is not just believed to give IO; it is proven to. So this is what we will build; this is what we want.

How do we build it? We start with a homomorphic encryption scheme that has some special properties. Such a notion was introduced by Agrawal et al. in 2017. Basically, it is a homomorphic encryption scheme: you start with encryptions of bits, and you can homomorphically evaluate a function f. So far so good. You can also homomorphically evaluate on the public key; that is new. And there is a special decryption property which says that, to decrypt the evaluated ciphertext, all you need to do is compute an inner product of the evaluated public key with the secret key of the scheme and then round. This suggests a construction for general-purpose FE, which is a hybrid construction: it uses this special homomorphic encryption together with an FE that only supports inner products and rounding. Building general-purpose FE boils down to building FE for inner product and rounding. The bulk of the work, the evaluation of the arbitrary, complex function f, is done in the homomorphic encryption, and the FE is only used to run the special decryption. That is the overall idea. But there is a problem: we do not really know how to build FE for inner products and rounding, and especially the rounding part is unknown.
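To picture the "inner product then round" decryption property, here is a toy sketch. It only illustrates the interface described above; the concrete layout of the evaluated keys and ciphertexts in the actual scheme may differ, and the equation below is fabricated just for the illustration.

```python
# Toy picture of "linear decryption then rounding" (sketch of the interface
# only, not the actual scheme).
import numpy as np

rng = np.random.default_rng(2)
p, lam = 2**16 + 1, 32
sk = rng.integers(0, p, size=lam)      # secret key of the homomorphic scheme

# Suppose homomorphic evaluation of f produced an "evaluated public key" pk_f
# and an evaluated ciphertext ct_f such that (in this toy layout)
#     <pk_f, sk> - ct_f  =  (p // 2) * f(m) + noise   (mod p)
# with |noise| small.  We fabricate such a pair here just to show decryption.
f_of_m = 1
noise = int(rng.integers(-8, 9))
pk_f = rng.integers(0, p, size=lam)
ct_f = (pk_f @ sk - (p // 2) * f_of_m - noise) % p

# Decryption: one inner product with sk, then rounding to the nearest
# multiple of p // 2 recovers f(m), as long as the noise stays small.
d = (pk_f @ sk - ct_f) % p
recovered = int(round(float(d) / (p // 2))) % 2
assert recovered == f_of_m
```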
And it is crucial that we do the rounding, because if we reveal the decryption noise, then the scheme is insecure. So instead of the rounding, our idea is to produce pseudorandom noise: the FE will not do the rounding, but it will compute, as I said, pseudorandom noise that hides the decryption noise. We also build a new special homomorphic encryption scheme from LWE, for all circuits, inspired by the predicate encryption scheme of Gorbunov, Vaikuntanathan, and Wee (GVW).

Let me give more detail about what we do. As I said, we perform the decryption of the homomorphic encryption via the FE. So we FE-encrypt the secret key and the seed, and then, for every output bit of the function f, we give a functional secret key that computes the inner product plus some pseudorandom noise computed from the seed, which hides the decryption noise. Now you may be wondering: why do we use pseudorandom noise and not truly random noise? The reason is that we want ciphertexts that are short, sublinear in the output size of the function. We therefore require a PRG with some non-trivial stretch, so that the seed is smaller than the output size. That is the reason we use a PRG.

In fact, using an FE that can compute a PRG is not specific to our work; it is a general paradigm that has been used before. In particular, functional encryption schemes that can handle degree-two polynomials with succinct ciphertexts had previously been built from standard assumptions. That is great, so naturally we would like to combine these with a PRG that is also degree two. What does that mean? Every output bit of the PRG can be expressed as the evaluation of a degree-two polynomial over the seed, which is exactly what such an FE scheme can handle. However, there is no such degree-two PRG: we do not know how to build one in a secure way, and in fact there is some evidence that a degree-two PRG could even be impossible. So degree two is not good enough; we need something a bit more. Why not degree three? Well, there are candidate degree-three PRGs, but there is no construction of degree-three FE; the only known construction uses trilinear maps, which is not a standard assumption.

Given this unfortunate state of affairs, what we do is something in between. We give an FE that can compute degree two (we are stuck at degree two from bilinear maps), but this FE does something more: it is a partially hiding FE, which I will say more about later. The message has two parts, one part which is public and one part which is secret. The FE computes degree two on the secret part, but it can compute something more complex on the public part. That means we can run a PRG which is degree two in the secret part of the seed, but degree more than two in the public part of the seed. If you want, it is degree two and a half, or something like this: something a bit stronger than degree two. We build such a special structured-seed PRG, provably secure from these assumptions, and we also build a new partially hiding FE from bilinear maps. Combining these two solves the problem of building an FE that can do inner products and generate pseudorandom noise.

Let us delve more into the details of what these objects are. First, partially hiding FE. As I said, we now have a message with two parts, a public part and a secret part.
We generate functional secret keys so that we can compute an NC1 function on the public part and degree two on the private part. Note that the public part is not hidden: the ciphertext and the secret key together reveal both the evaluation of the function and the public part. We build this from bilinear maps.

This can be used with a new kind of PRG, which we call a structured-seed PRG, where the seed has two parts, a public part and a secret part. The PRG computes a degree-two polynomial on the secret part of the seed and a degree-d polynomial, where d can be larger than two, on the public part of the seed. Let us take some time to think about this structured-seed PRG. You may be wondering whether it is useful at all: does the structured seed actually make it stronger than just a degree-two PRG where the public part of the seed is simply a description of the PRG? The answer is that it does give you something more, because the secret part and the public part of the seed can be arbitrarily correlated. In particular, it may not be possible to sample the secret part of the seed conditioned on the public part. So it is actually giving you more power.

Here is our structured-seed PRG. You start with a normal PRG of constant degree; by normal, I mean one without a structured seed. The public part of the seed is a bunch of LWE samples with binary noise. The secret part of the seed consists of powers of the secret s: you take the vector s and compute all the monomials up to degree d/2, where d is the degree of the PRG. How do you run the structured-seed PRG? Basically, you compute the vector b from the public seed minus A times s, where A is from the public seed and s is from the secret seed. This equals e, the binary noise, and gives you a vector of bits. You then run the PRG G on it; this e is just the PRG's random seed. How can you compute this? The whole computation is of degree d, and the FE allows you to perform degree-d computations on the public part of the seed but only degree two on the secret part. That is why we need to precompute all the powers of the secret s up to degree d/2: you would need degree d in s, but the FE only allows degree two, so you precompute all the monomials of degree up to d/2, and only the last multiplication is done by the FE.

Let me try to motivate why we build it like this. If you have a degree-d PRG but only a degree-two FE, where d is larger than two, you can always precompute the powers of the seed. However, this precomputed seed would be too large; that is the problem, so we need to compress it. There are two ways we compress it. The first is by using LWE with binary noise: the seed of G, the noise e, is bigger than s, so s is a compression of the actual seed. The second is that we do not need to compute the powers of s up to degree d, but only up to d/2, because we have a quadratic FE. That turns out to be critical: if we were using a linear FE, linear in the secret seed, the secret seed would be too large, the stretch would not be enough, and that would make the structured-seed PRG meaningless.
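Here is a minimal sketch of the structured-seed PRG evaluation, again with a toy constant-degree map in place of a real PRG candidate. The rewriting of the degree-d polynomial in s as a degree-two computation over the precomputed monomials is only indicated in the comments; all parameters are illustrative.

```python
# Toy sketch of the structured-seed PRG (illustration only).
# Public seed:  LWE samples with binary noise (A, b = s*A + e mod p).
# Secret seed:  all monomials in s of degree <= d/2.
# Evaluation:   recover e = b - s*A mod p, then run a constant-degree
#               Boolean PRG G on e.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(3)
p, lam, n, m, d = 2**16 + 1, 16, 64, 128, 4   # toy parameters

s = rng.integers(0, p, size=lam)
A = rng.integers(0, p, size=(lam, n))
e = rng.integers(0, 2, size=n)                # binary noise = the real PRG seed
public_seed = (A, (s @ A + e) % p)            # LWE samples with binary noise

# Secret part of the seed: monomials in s of degree <= d // 2.  In the actual
# construction, the degree-d polynomial in s computed below is rewritten as a
# degree-2 polynomial in these monomials, which a quadratic FE can handle.
secret_seed = [int(np.prod(s[list(c)]) % p)
               for k in range(1, d // 2 + 1)
               for c in combinations_with_replacement(range(lam), k)]

# Toy degree-d Boolean map standing in for G (not a real candidate):
# each output bit is the AND of d fixed seed bits, i.e. degree d in the seed.
idx = rng.integers(0, n, size=(m, d))
def G(seed):
    out = np.ones(m, dtype=np.int64)
    for j in range(d):
        out &= seed[idx[:, j]]
    return out

def eval_sprg(public_seed, s):
    A, b = public_seed
    e = (b - s @ A) % p        # recovers the binary noise exactly
    return G(e)                # degree d in e, hence degree d in s

out = eval_sprg(public_seed, s)
```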
So the quadratic compression given by the FE is crucial. That's it; let me conclude this part. We define the notion of a structured-seed PRG and we build it: we give a new construction from the existence of PRGs in NC0 (constant depth) and this new LWE-with-leakage assumption. By the way, the follow-up work of Jain, Lin, and Sahai follows this paradigm by building a structured-seed PRG, but instead of this LWE-with-leakage assumption, they use learning parity with noise (LPN) over large fields. There are other differences in their construction that I am not going to detail; basically, they replace this new assumption by a long-studied assumption, LPN.

Together with this structured-seed PRG, we build a new partially hiding public-key FE that allows NC1 computation on the public part of the input and degree-two computation on the secret part of the input. We build this from a standard assumption on bilinear maps. This is new, and it improves upon prior work, which was only capable of either degree one on the secret part of the input, which as I said is not enough, or of not computing on the public part of the input at all, that is, plain quadratic FE, which again cannot be combined with a suitable PRG. There were also prior works doing degree two together with NC0 on the public part, which is great (we support NC1, although we actually only use NC0), but their main drawback is that they are only in the secret-key setting: they build a secret-key FE, not a public-key FE. We directly build a public-key FE, which means less bootstrapping is required and we directly get a public-key FE in the end. A simpler construction, basically. Combined with the special homomorphic encryption from LWE, this gives a public-key FE. That is the big picture.

A couple of open problems. First, as I said, the FE we build is slightly overkill for our structured-seed PRG, because we do not exploit the fact that we can compute NC1 on the public part of the input. Could we use that, for example, to get the same result from a wider class of PRGs? Another line of questions is about this partially hiding FE (PHFE), which is a primitive crucial to our construction and also interesting in its own right. PHFE, if you think about it, is a kind of mix between an access structure on the public input, like ABE, and functional encryption on the secret input; it combines the two, which is interesting. So you can ask: can we improve the expressivity of the PHFE in any way? For example, instead of NC1, can we handle polynomial-size circuits? That is not known. Can we do degree three? That would essentially give IO, so it would be fantastic. But even something weaker: can we do, say, NC0 together with degree two on the private input, but from a post-quantum assumption? That is not known either. Here we do it from pairings, but if we could do it from LWE or a lattice-based assumption, the whole construction would be plausibly post-quantum, which would be very interesting. Another line of questions is about this LWE-with-leakage assumption. To be sure, the follow-up work of Jain, Lin, and Sahai removed the need for this assumption.
But still, there are some advantages to our construction in terms of efficiency. More fundamentally, LWE with leakage is a natural problem that would be worthwhile to study, because there are many other constructions that have this sort of circular flavor; in fact, if you think about it, LWE with leakage is itself a sort of circular variant. Studying this new assumption in more depth would give us insight, and it may allow us to better understand the constructions that also use circular assumptions, or those that can be thought of as using some sort of LWE-with-leakage assumption. More cryptanalysis is needed. That concludes my talk. Thank you for listening, and if you have questions, you can ask them during the Q&A session at the Eurocrypt conference. Thank you.