I don't need it. I have it already, yeah. So the second talk of the session is Indistinguishability Obfuscation from Constant-Degree Graded Encodings, by Huijia Lin.

Thank you for the introduction. Can you hear me well? Seems like I don't need to shout. All right. OK. So today we're going to talk about circuit obfuscation. This is a primitive that aims to compile a circuit into one that preserves the functionality, meaning that the compiled circuit produces the same output on every input, but becomes unintelligible. Intuitively, that means it resists any reverse engineering that tries to figure out valuable information embedded in the original circuit. We call such a compiler an obfuscator. The notions in the literature try to formalize what exactly "unintelligible" means, and they capture different levels of security. What we look at here today is really the weakest notion, called indistinguishability obfuscation, or simply IO. IO tries to hide just one bit of information: which one of two equivalent circuits, C1 and C2, is being obfuscated. By equivalent, we mean that the two circuits have identical size as well as the same truth table. For two such circuits, we require the obfuscated circuits to have computationally indistinguishable distributions. As you can observe, IO is trivial to construct if we are not concerned about efficiency. How? Given a circuit C, just find, using unbounded time, the canonical circuit that is equivalent to the circuit in question. So the real quest here is finding such a compiler that is actually efficient.

For a long time, there was no progress on constructions of IO. Then in 2013 a box parachuted down into our community. We opened the box, and we found the first IO candidate, by Garg et al. Since then, three years later in 2016, we have, first of all, many more IO constructions. Moreover, riding this boat of IO, we have managed to sail from what we know and love as the crypto-proper land to this crypto wonderland. In this land, many tasks that we didn't know how to achieve before, for example multiparty non-interactive key exchange, and primitives that we couldn't imagine before, for example garbling for Turing machine or RAM computations in time independent of the time complexity, suddenly become possible. The list of new feasibility results established in the past three years is actually far longer than what I could put on the slide. Additionally, IO also implies many primitives in the crypto-proper land. So it seems to be charting a much broader map of cryptography for us. Or does it actually?

It turns out that all IO constructions today are only candidates, roughly meaning that we don't have high confidence in their security. They are built upon a structure called graded encodings, and in the past three years, graded encodings have been going through a continual cycle of proposed constructions and attacks. Very recently, there is even an attack that directly attacks the very first IO construction by Garg et al. when instantiated with certain graded encodings. Most of the known IO constructions are not directly broken; I think it is fair to summarize that the state of affairs has been balancing on the border between security and insecurity. Therefore: today, does IO exist? The answer to this question is still uncertain.
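To make the trivial, inefficient construction concrete, here is a minimal Python sketch of my own (not from the talk): it canonicalizes a circuit by brute force, outputting the full truth table as the canonical equivalent object, and ignoring the same-size padding requirement. All names are illustrative.

```python
from itertools import product

def trivial_io(circuit, n_inputs):
    """Inefficient 'obfuscator': replace the circuit by a canonical equivalent
    object -- here, its full truth table.  Equivalent circuits yield identical
    tables, so which of C1/C2 was obfuscated is perfectly hidden.  The cost is
    2^n time and size: IO is trivial once the efficiency requirement is dropped."""
    table = {bits: circuit(*bits) for bits in product((0, 1), repeat=n_inputs)}
    return lambda *bits: table[bits]

# Two equivalent circuits computing XOR in different ways:
c1 = lambda x, y: (x | y) & (1 - (x & y))
c2 = lambda x, y: (x + y) % 2
o1, o2 = trivial_io(c1, 2), trivial_io(c2, 2)
assert all(o1(x, y) == o2(x, y) for x, y in product((0, 1), repeat=2))
```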
But if the answer is yes, this amazing land is the world that we live in, and we should just continue to expand the limits of crypto. And if IO does not exist, then the map reverts back to before, and all those new primitives we constructed fall back into limbo. But then, at the very least, we should go primitive by primitive to figure out the exact border of feasibility. So answering this question is important. Of course, there are many ways one can imagine making progress towards answering it. In particular, there will be continuing research on constructions of graded encodings and attacks on them. But that is not the focus of this talk. In this talk, we're going to look at the reduction from IO to graded encodings, because we love reductions: the stronger the reduction, the simpler the graded encodings we need, simpler to construct, and maybe, in the distant future, we can one day eliminate their use. In particular, all known IO candidates rely on polynomial-degree graded encodings. Here, a degree-D graded encoding can help us do two things: one is homomorphic evaluation of any degree-D polynomial, and the second is zero-testing its output. In this work, I show that, in fact, constant-degree graded encodings suffice for constructing IO. Therefore, we simplify the functionality of the graded encodings needed for constructing IO.

To really understand this simplification, let me now look more closely at what graded encodings are and what their degrees mean. Graded encodings generalize the primitive called a multilinear map. A multilinear map allows us to encode a ring element in some group indexed by a label L, by putting it in the exponent. For consistency of notation, I'm going to use the bracket notation shown on the slide to represent an encoding, with the index of the group in the lower right corner. Such an encoding naturally supports homomorphic addition within the same group, as well as testing whether an encoding encodes zero or not, if we have identity elements. The magic of a multilinear map really lies in the fact that it allows us to homomorphically multiply multiple encodings in different groups, producing an encoding of the product in some target group. The degree of such a map is simply the number of encodings that we can multiply together; when this degree is two, we get a bilinear map.

Graded encodings generalize multilinear maps in the following sense. First, we no longer think of the encodings as being associated with groups; we just think of them as generic ways of encoding some ring element under some label. In particular, the encodings can be noisy. Because the encodings can be noisy, we no longer get zero testing for free, and can only do so for encodings with a special label, called the zero-testing label. Moreover, multiplications can now be done in an incremental way, and the price is that the output encoding has a label that grows to the sum of the two input labels. These three capabilities together mean that graded encoding schemes, or GES, basically allow us to homomorphically evaluate a polynomial and then test whether the output is zero or not, as long as this polynomial satisfies all the label constraints; by that I mean the fact that addition only operates over encodings with the same label, and multiplication adds the labels up. The degree of a GES is simply the maximum degree over all polynomials that can be homomorphically evaluated and then zero-tested.
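To pin down this interface, here is a sketch in an ideal (completely insecure) model where values are kept in the clear; it only illustrates the three capabilities and the label constraints, not any real GES. The labels, modulus, and names are all assumptions of mine.

```python
from dataclasses import dataclass

ZT_LABEL = 5         # hypothetical zero-testing label (here: a plain integer level)
MODULUS = 2**61 - 1  # hypothetical plaintext ring Z_p

@dataclass(frozen=True)
class Encoding:
    value: int  # a real GES hides this value (noisily); here it is in the clear
    label: int

def encode(value, label=1):
    return Encoding(value % MODULUS, label)

def add(a, b):
    # Addition only operates over encodings with the same label.
    assert a.label == b.label, "label mismatch"
    return Encoding((a.value + b.value) % MODULUS, a.label)

def mul(a, b):
    # Multiplication is incremental; the output label is the sum of the inputs'.
    return Encoding((a.value * b.value) % MODULUS, a.label + b.label)

def zero_test(a):
    # Zero testing is only available at the special zero-testing label.
    assert a.label == ZT_LABEL, "can only zero-test at the zero-testing label"
    return a.value == 0

# Homomorphically evaluate the degree-5 monomial x1*...*x5, then zero-test:
xs = [encode(v) for v in (3, 0, 7, 1, 4)]
prod = xs[0]
for x in xs[1:]:
    prod = mul(prod, x)
assert zero_test(prod)  # the label has reached 5; the product is zero since x2 = 0
```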
It's easy to observe that the degree must be upper-bounded by the norm of this special zero-testing label, because with every multiplication the label grows. But it is really a separate parameter; in fact, it could be much smaller than the norm of the label, because, for example, the noise may grow so big that it overwhelms the signal and prevents zero-testing. In summary, compared with polynomial-degree GES, constant-degree GES really support a much simpler functionality, in the sense that they only allow us to evaluate very low-degree polynomials. In fact, in the ideal or generic model, their powers are strictly separated. Why? Because an ideal polynomial-degree GES is so powerful that it actually implies VBB obfuscation, which is known to be impossible in the plain model, whereas there has been work showing that there is no VBB obfuscation from ideal constant-degree GES. Furthermore, the same line of work essentially showed that constant-degree GES cannot help for black-box constructions of IO, in the sense that if such a construction exists, then we can generically remove the constant-degree GES and obtain an IO without it.

In this work, we show that constant-degree GES do help, however, with non-black-box constructions of IO. We show that such a construction exists assuming sub-exponential semantic security of the GES; semantic security is a computational assumption on GES formulated by Pass, Seth, and Telang. For the purposes of this talk, what exactly this assumption is will not be important. We also rely on two other assumptions: one is sub-exponentially secure LWE, and the other is a sub-exponentially secure PRG in NC0. Because all the underlying primitives are sub-exponentially secure, the resulting IO is also sub-exponentially secure. We do note that our assumption on the PRG is strong; in particular, we require it to have polynomial stretch. One candidate is Goldreich's proposal, which essentially evaluates a system of random k-CSPs with respect to some predicate Q. So far there have not been any successful attacks, as long as Q is chosen carefully in a non-degenerate way. Though the assumption is quite strong, our theorem can actually be generalized into a generic way to trade the complexity of the PRG for the complexity of the GES. Because all the primitives we consider are sub-exponentially secure, in the rest of the talk I'm going to omit mentioning it explicitly.

All right, so how can we go from constant-degree GES to IO? We'll do so by going through an intermediate step of building IO for a small subclass S of constant-degree computations. Here, by constant-degree computation, we mean the following: we say that a Boolean function F has degree D if it is computable by a degree-D polynomial, and the punchline is, over any ring, meaning that no matter in which ring you evaluate this polynomial, it will always agree with the Boolean function F on every possible input in the domain. You should think of a constant-degree function as being implemented using lots of additions, but only a few layers of multiplication. As you can imagine, such functions are very weak, complexity-wise. In particular, if the function is total, meaning that its domain contains all binary strings of a certain length, then it was shown by Nisan and Szegedy that such a function is contained in NC0. However, if the function is possibly partial, then it could be outside NC0, but is still contained in AC0.
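As a small illustration of "degree over any ring" (my own example, not from the talk): 3-input majority is computable by a single degree-3 polynomial with integer coefficients that agrees with the Boolean function in every ring Z_m.

```python
from itertools import product

def maj_poly(x, y, z):
    # Degree-3 polynomial with integer coefficients; on Boolean inputs its
    # value over the integers is exactly 0 or 1, so it survives reduction
    # into any ring Z_m unchanged.
    return x*y + y*z + x*z - 2*x*y*z

def maj_bool(x, y, z):
    return int(x + y + z >= 2)

# The same polynomial agrees with MAJ3 no matter which ring we evaluate it in:
for m in (2, 3, 7, 2**16):
    for x, y, z in product((0, 1), repeat=3):
        assert maj_poly(x, y, z) % m == maj_bool(x, y, z)
```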
In other words, our first step is a stronger bootstrapping theorem that goes from IO for a subclass of constant-degree computations all the way to IO for P. In contrast, previous bootstrapping theorems always start from IO for NC1, which is much stronger. Given this stronger bootstrapping theorem, it seems only natural to expect that constructing IO for a subclass of constant-degree computations should be easy: just apply known IO schemes to this subclass, and, by the fact that it has constant degree, it should follow that constant-degree GES suffice. It turns out that this expectation is just false. Even though the circuits have constant degree, known schemes will still require polynomial-degree GES; in particular, the degree will be proportional to the input length of the obfuscated circuits. Therefore, to actually enable the entire construction, we need to provide a new IO construction that relies not only on the fact that this class has constant degree, but also on the fact that it has a certain special structure, which we formalize as having constant type degree. For any such class of circuits, we show that IO exists using only constant-degree GES. For the rest of the talk, I won't have time to go into this new IO construction; I'll just try to give you some high-level idea of how the bootstrapping theorem works.

Okay. So the starting point of our bootstrapping theorem is the recent transformation from functional encryption, or FE, to IO. Unfortunately, so far we don't have a construction of FE from any standard assumption. The best we know how to do is a weaker version, called Boolean FE, from the learning with errors (LWE) assumption. Boolean FE, as the name suggests, handles only Boolean circuits; in contrast, full-fledged FE also handles multi-bit-output circuits. Therefore, the natural question is: can we upgrade? Can we go from the bottom to the top? We show that upgrading is possible if we have IO for a class of circuits that has only constant degree, assuming a PRG in NC0.

Before going into the details of the upgrading, let me first tell you quickly what functional encryption is. It is basically public-key encryption with partial decryption keys. Like a public-key encryption scheme, we can generate public-key and secret-key pairs, and we can use the public key to encrypt messages, producing a ciphertext. But there is a new capability: we can generate partial decryption keys associated with some circuit C, and such a partial decryption key itself can also be viewed as a circuit in general. With such a partial decryption key, at decryption time, we can evaluate this circuit on a ciphertext, which produces the output of the circuit C evaluated on the message M, in the clear. Efficiency of functional encryption in particular requires that the encryption time be polynomial in the length of the public key and the length of the message; in particular, it should be independent of any parameter of the circuit C for which you will later generate a partial decryption key. The security notion follows standard semantic security for public-key encryption: we require that encryptions of one message and another are indistinguishable, even with access to some partial decryption keys, as long as the circuits do not separate the two messages. When we consider the special case of Boolean functional encryption, we simply mean a scheme that handles only Boolean functions in key generation.
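Here is a minimal interface sketch of functional encryption in Python (hypothetical names; the talk fixes no API). The point it encodes is the efficiency requirement: encrypt depends only on the public key and the message, never on the circuit later given to keygen.

```python
class FunctionalEncryption:
    """Public-key encryption with partial decryption keys (interface sketch)."""

    def setup(self, security_param):
        """Return (pk, msk): a public key and a master secret key."""
        raise NotImplementedError

    def encrypt(self, pk, m, r=None):
        """Return a ciphertext.  Must run in time poly(|pk|, |m|), independent
        of any circuit C used later in keygen.  `r` optionally fixes the
        encryption randomness (used by the compression step sketched below)."""
        raise NotImplementedError

    def keygen(self, msk, C):
        """Return a partial decryption key sk_C for circuit C.  A Boolean FE
        scheme restricts C to a single output bit."""
        raise NotImplementedError

    def decrypt(self, sk_C, ct):
        """Return C(m) in the clear, and nothing else about m.  Security:
        Enc(pk, m0) ~ Enc(pk, m1), given keys only for circuits C with
        C(m0) == C(m1)."""
        raise NotImplementedError
```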
So we're going to denote a Boolean scheme always with a B in front of the algorithm name. Now comes the interesting part: how can we upgrade a Boolean functional encryption to FE for, say, L-bit-output circuits? Our first idea is very natural. Just generate the public key and secret key using the Boolean scheme. In key generation, when you're given a circuit C that has, say, L output bits, we first Booleanize this circuit by creating a circuit that outputs the output of C one bit at a time. How? Think of a circuit B that takes two inputs: one is the message, the other is the index of an output bit, ranging from 1 to L. This circuit simply outputs the i-th bit, y_i, of the output of C evaluated on M. Then use the Boolean scheme to create a secret key for it. How do we encrypt? Well, in order to eventually produce all output bits, what we want is to encrypt every possible pair (M, i), with i ranging from 1 to L, and just publish this whole list of L ciphertexts. With this list of ciphertexts, at decryption time we can evaluate the secret key for the Boolean circuit B on every ciphertext, producing every bit of the output. Great. It's easy to see that semantic security of this scheme follows from the semantic security of the Boolean scheme. The problem is that this scheme is not compact: in particular, the size of the ciphertext scales linearly with the number of output bits, but it really should be independent of any parameter of the circuit C.

The idea to get around this is to use IO to compress the list of ciphertexts. How? We view this list of ciphertexts as the output, or the truth table, of a certain circuit E. What does it look like? It has the public key and the message hardwired, and takes as input only the index i; it produces the i-th ciphertext by encrypting the pair (M, i) using the Boolean scheme. Additionally, it generates the randomness needed for the encryption by evaluating a PRF. Now, instead of publishing this whole list of ciphertexts, which is very long, let's just obfuscate this circuit E, which then allows us to reproduce every ciphertext we need. Compactness is restored immediately. Why? Because the circuit E has a compact size, and therefore so does its obfuscated version, and hence the ciphertext. Security is less obvious to see, but using known techniques we can show that it follows from the security of IO, the PRF, and the Boolean functional encryption.

Great. So far, we get that if we have IO for all circuits that look like E, then we can go from Boolean FE to FE, and eventually to IO. But what about the promised constant degree? Does this circuit E have constant degree? Absolutely not. In fact, it's a very deep circuit, in NC1. The first observation is that the encryption algorithm can be made into NC0 by using randomized encodings, so that's not a problem. The problem is that a PRF simply cannot have constant degree, because such functions are learnable. And there are other problems: the circuit E is actually more complicated in order for the proof to go through; in particular, it requires a puncturable PRF as opposed to a normal PRF. But let's ignore all the additional problems for now and just try to handle the problem of a constant-degree PRF. How can we get around that? Well, our idea is to construct a special-purpose PRF that has constant degree, just for E. So what is the property of E that we can leverage? The property is that it only applies the PRF on a polynomial-sized domain.
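A sketch of the upgrade just described, under the interface above; `bfe` (a Boolean FE scheme), `prf`, and `obfuscate` (an IO obfuscator) are hypothetical placeholders for the talk's building blocks.

```python
def booleanize(C):
    """Turn a multi-bit-output circuit C into a one-bit circuit B((m, i)) = C(m)[i]."""
    def B(m_and_i):
        m, i = m_and_i
        return C(m)[i]
    return B

def keygen_multibit(bfe, msk, C):
    # One Boolean key for B suffices to recover every output bit at decryption.
    return bfe.keygen(msk, booleanize(C))

def encrypt_compact(bfe, pk, m, prf, prf_key, obfuscate):
    def E(i):
        # pk, m, and prf_key are hardwired; the only input is the index i.
        r = prf(prf_key, i)                # derive the i-th encryption randomness
        return bfe.encrypt(pk, (m, i), r)  # the i-th ciphertext of the naive scheme
    # obfuscate(E) replaces the length-L ciphertext list: its size is poly(|E|),
    # independent of the output length L.  (The actual proof needs a puncturable
    # PRF and a more complicated E; this only shows the high-level shape.)
    return obfuscate(E)
```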
So the hope is that, since the domain is polynomial, if we have a constant-degree PRG, then we can build a constant-degree PRF for a polynomial-sized domain. How? By using the GGM PRF tree. Remember that the GGM PRF uses a length-doubling PRG to create a binary tree of depth exactly log L, where L is the size of the domain. The pseudorandom block at any leaf can be evaluated in log L iterations: in each iteration, we expand with the PRG once and then choose the right block to expand in the next iteration, using the corresponding bit of the input. But this function has high degree; its degree is at least the number of iterations. Even if the domain is polynomial, log L iterations is not good enough for us.

So our first idea is to squash the PRF using high fan-out. We use a PRG with quadratic stretch, and this naturally gives us a fan-out-n tree of constant depth. Great. This means we get a PRF that has only a constant number of iterations. However, it creates the problem that, in order to choose the right block for the next iteration, the choosing procedure, what we call the one-out-of-n multiplexer or MUX, has high degree, at least when the choosing index is represented in binary. So the next idea is: why don't we just represent the index as a unit vector, with exactly one 1 in the right position? Then choosing simply corresponds to computing an inner product, and can be done in degree one. This eventually gives us a PRF with a very special input representation, consisting of d unit vectors. By plugging this PRF into the circuit E we saw before, we get a class of circuits that has constant degree and that enables the stronger bootstrapping theorem.

In summary, in this work we simplify the GES needed for IO from polynomial degree to constant degree. I believe this is only the beginning of simplifying GES. The ideal and ultimate, but still very distant, goal is of course to simplify them so much that one day we don't need them anymore. That is very far from where we are now, but continuing further in this direction of simplifying GES is, I think, a meaningful and interesting question. Okay, thank you.

We have time for one question. So you mentioned you also had to modify the IO construction, so that if you want to do IO for constant-degree functions using constant-degree multilinear maps, can you say a little about that? You mean the general connection between the degree of the PRG and the degree of the GES? So the reason we use a PRG in NC0 is that it has constant degree, and then the two match up. If you have a PRG of any degree D, then you will need a GES of degree polynomial in D. So there is a polynomial relation between the degree of the PRG and the degree of the GES you need. Okay, so let's thank the speaker again.
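To make the squashed GGM construction concrete, here is a minimal Python sketch under stated assumptions: SHA-256 stands in for the quadratic-stretch PRG (the talk needs a PRG in NC0), and the domain point is given as d unit vectors so that each level's selection is a degree-1 inner product; all names are my own.

```python
import hashlib

def prg(seed: bytes, n: int) -> list:
    """Stand-in for a stretch-n PRG: expand one seed into n child blocks.
    (SHA-256 is only a placeholder; the construction needs a PRG in NC0.)"""
    return [hashlib.sha256(seed + j.to_bytes(4, "big")).digest() for j in range(n)]

def select(blocks, unit_vec):
    """Degree-1 selection: an 'inner product' of the child blocks with a unit
    vector, replacing the high-degree binary multiplexer."""
    return b"".join(blk for blk, bit in zip(blocks, unit_vec) if bit)

def squashed_ggm_prf(key: bytes, index_units, n: int) -> bytes:
    """Fan-out-n GGM tree of constant depth d = len(index_units), covering a
    domain of size n**d.  Each of the d iterations costs one PRG expansion
    plus one degree-1 selection."""
    seed = key
    for unit_vec in index_units:  # a constant number of iterations
        seed = select(prg(seed, n), unit_vec)
    return seed

def unit(i, n):
    return [1 if j == i else 0 for j in range(n)]

# Evaluate the PRF at the point (2, 1) of a 4**2 = 16-element domain:
out = squashed_ggm_prf(b"master-key", [unit(2, 4), unit(1, 4)], n=4)
```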