Good morning, everybody. Thanks for coming to the first talk. My name is Ayush, and I'm going to present this paper on robust transforming combiners from indistinguishability obfuscation to functional encryption. This is joint work with Prabhanjan and Amit. Since 2013, there have been a lot of magical results in cryptography, such as two-round MPC, replacing random oracles, non-interactive multi-party key exchange, and establishing hardness for PPAD. All these results have been made possible by indistinguishability obfuscation or functional encryption. So what is indistinguishability obfuscation? An obfuscator is a polynomial-time algorithm which takes in a circuit C and spits out another circuit C* with the following two guarantees. The first guarantee is that it should preserve functionality: for all inputs x, C*(x) should be equal to C(x). The second guarantee is the security guarantee, which states that if you have two circuits C0 and C1 with identical functionality and (almost) equal size, then the obfuscation of C0 should be computationally indistinguishable from the obfuscation of C1. A related notion is that of functional encryption, which was first coined by Sahai and Waters, and the first candidate construction was given by Garg et al. in 2013. It is a generalization of an encryption scheme: instead of having all-or-nothing access to the encrypted data, you now have more fine-grained access to the private data. As an example, think of a user who wants to send an encryption of some message x to a cloud, and this cloud may be adversarial and might want to compute some function of this input. What the cloud can do in functional encryption is query for some function f to the authority that runs the setup for the functional encryption scheme, and this authority can release a functional key for that functionality.
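The functional-encryption flow just described can be sketched in a few lines. This is a toy illustration of the data flow only (Setup, KeyGen, Enc, Dec); the keys and the "ciphertext" are placeholder values with no security whatsoever, and all names are invented for the sketch.

```python
# Toy sketch of the FE interface: Setup -> (mpk, msk); KeyGen(msk, f) -> sk_f;
# Enc(mpk, x) -> ct; Dec(sk_f, ct) -> f(x).  The "encryption" is a plain dict,
# so this shows only who holds what, not any security property.

def setup():
    mpk, msk = "mpk", "msk"          # placeholders for real public/master keys
    return mpk, msk

def keygen(msk, f):
    return {"key_for": f}            # the authority's "functional key" for f

def enc(mpk, x):
    return {"ct": x}                 # NOT secure: stands in for a real ciphertext

def dec(sk_f, ct):
    # The cloud learns f(x); in a real scheme it learns nothing more about x.
    return sk_f["key_for"](ct["ct"])

mpk, msk = setup()
sk = keygen(msk, lambda x: x % 7)    # a key for the function f(x) = x mod 7
print(dec(sk, enc(mpk, 40)))         # -> 5
```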
And then it turns out that using this key and the encrypted value of x, the cloud can learn the value f(x). For security, we roughly want that given this secret key, the adversary, the cloud, should not be able to compute anything more than the value f(x). So let's look at the known constructions of indistinguishability obfuscation. There has been a ton of work, starting with the initial construction of Garg et al. in 2013, up until recently the work by Huijia Lin and Stefano Tessaro in 2017; this is just a non-exhaustive list of constructions. Now the question that arises is: are all of these IO candidates broken? The answer is no. We believe that we have several unbroken IO candidates which have proofs of security in restricted models. So our goal becomes to find an IO candidate which is secure as long as just one of these candidates is secure. To state it more formally: given any arbitrary set of IO candidates, is it possible to find a candidate that is secure even if only one of the candidates is secure? This is what we study in this paper, and this concept is called an IO combiner. In fact, what we study in this work is a stronger form of IO combiner, called a robust IO combiner, which is just an IO combiner except with a more relaxed requirement: we only require that the secure candidate is correct. All other candidates can be arbitrary, and they can completely violate correctness. This was first studied by two concurrent works: the first was the AJNSY work, in which I was also involved; the other concurrent work was by Fischlin et al. in 2016. So let me now formally define what a robust IO combiner is. Let P = (P1, ..., Pn) be a set of n IO candidates, or just algorithms.
A robust IO combiner has two algorithms. The first is the obfuscation algorithm, which takes this list of candidates and the circuit C and outputs a circuit C*; C* is the obfuscation of the circuit C. Then there's an eval algorithm, which takes the list of candidates, the obfuscation C*, and an input x, and outputs y. We want two properties to be satisfied by this IO combiner. The first property is that if there exists one candidate Pi among P1 through Pn that is both correct and secure, then for any input x the output of the evaluation algorithm should be correct, that is, y = C(x). The second is the security guarantee, which says that for two equivalent circuits C0 and C1 of the same size, the combiner's obfuscation of C0 should be indistinguishable from the obfuscation of C1. OK, good. So why is this robust combiner useful? It turns out that it actually implies what is called universal IO, and this was also observed in our previous work. What is universal IO? You can think of it as a single fixed scheme such that, if you give me a guarantee that IO exists, then that scheme itself is a secure IO scheme. Let me now tell you more about what was previously known about these questions. In Crypto 2016, our work gave a candidate construction of a robust combiner from the assumptions of DDH and LWE, but in that construction we required the secure candidate to be sub-exponentially secure. There was another concurrent work in the same conference, by Fischlin et al., which considered the case of combining candidates unconditionally, but they were able to give results only for a constant number of candidates. So now we ask the following two logical questions. The first question is: say we were not interested in IO itself, but only in an application of IO.
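The syntax just defined can be written down as a minimal sketch. To be clear, this is interface only: the placeholder "combiner" below passes C to every candidate directly, which a real combiner must never do (a broken candidate would leak C); it only exercises the Obfuscate/Eval shape and the correctness demand.

```python
from typing import Callable, List

Circuit = Callable[[int], int]
Candidate = Callable[[Circuit], Circuit]   # an IO candidate maps circuits to circuits

def combiner_obfuscate(candidates: List[Candidate], C: Circuit) -> List[Circuit]:
    # Syntax only: a real combiner would hide C from each individual candidate
    # (e.g. via secret sharing); here each candidate sees C in the clear.
    return [P(C) for P in candidates]

def combiner_eval(C_star: List[Circuit], x: int) -> int:
    # Correctness requirement: if some Pi is correct (and secure), the
    # combiner must output C(x).  This placeholder evaluates one component.
    return C_star[0](x)

identity_candidate: Candidate = lambda C: C    # a trivially correct "candidate"
C = lambda x: 3 * x + 1
C_star = combiner_obfuscate([identity_candidate], C)
assert combiner_eval(C_star, 5) == C(5)
```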
Is it possible to combine these candidates with only polynomial loss to that application? And the second logical question we ask is: is it possible to weaken the assumptions from LWE and DDH to, say, only one-way functions? In this direction, we give two results. The first result says that any combiner can be converted to a robust combiner. More formally, if you have a combiner that works with only correct candidates, then using a one-way function F we can convert it to a robust combiner, and this transformation suffers only polynomial loss. A similar result was also observed in the RPS work, and implicitly in Bitansky and Vaikuntanathan's work on approximate IO, but those results required exponentially hard DDH or LWE and also required the underlying candidate to be sub-exponentially secure, whereas here we get this result from one-way functions and with only polynomial loss. The second result of this paper is the following: given possibly incorrect IO candidates and any one-way function F, we can construct a compact functional encryption scheme with only polynomial security loss. Then, using the previous theorem and the results by Ananth and Jain and by Bitansky and Vaikuntanathan, who proved that sub-exponentially secure compact functional encryption implies IO, we get the following corollary: there exists a sub-exponentially secure universal IO if there exist one-way functions which are sub-exponentially secure. This notion also gives rise to a combiner where you start with candidates of one primitive, namely IO, and you end up with another primitive, namely FE. We call this notion transforming combiners.
More formally: if you have n candidates of primitive A, such that only one of them is guaranteed to be both secure and correct, is it possible to construct a secure instantiation of primitive B with polynomial loss? This is the notion of transforming combiners. We in fact show that there exist transforming combiners which are robust, and we get transforming combiners from IO to functional encryption. Then, using the result of Garg et al. in 2017, we can show that this also yields primitives which are implied by functional encryption, such as non-interactive key exchange. Now let me give you a brief overview of our techniques. The first theorem is about compiling a combiner which works with correct candidates into a combiner which is robust. Intuitively, what we want to do is take all the candidates that are input to the combiner, use a one-way function appropriately, and correct each candidate in some way before feeding it to the combiner itself. So for each candidate P, we modify it so that it self-checks for correctness. Let me describe the modified candidate. It takes as input the circuit C, and it uses the underlying candidate P to obfuscate C; call that circuit C*. Then it samples random input points, in fact L of them where L is the square of the security parameter, and it checks whether at all these points the obfuscation's output matches the output of the circuit, that is, C*(x_i) = C(x_i) for all i. If this check fails, it just reveals the circuit C completely; otherwise, it outputs the circuit C*. We make the following observations about this transformation. The first observation is that it gives you some sort of average-case correctness: the probability, over a random input point and over the coins of the obfuscator, that C*(x) = C(x) is at least 1 − 1/λ, where λ is the security parameter.
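The self-checking modification just described is concrete enough to sketch directly. The candidates below are hypothetical stand-ins (one correct, one that always outputs 0), and the finite `domain` replaces sampling from the full input space.

```python
import random

def self_checking_obfuscate(P, C, domain, sec_param=16):
    """Self-check from the talk: obfuscate C with candidate P, test the
    result on L = sec_param**2 random inputs, and fall back to revealing
    C itself if any test fails -- so an incorrect candidate still yields
    a correct (though totally insecure) obfuscation."""
    C_star = P(C)
    L = sec_param ** 2
    for _ in range(L):
        x = random.choice(domain)
        if C_star(x) != C(x):
            return C            # check failed: output the circuit in the clear
    return C_star               # all checks passed: output the obfuscation

# Hypothetical candidates: a correct one and a broken one.
good = lambda C: (lambda x: C(x))
bad  = lambda C: (lambda x: 0)

C = lambda x: x * x + 1
dom = list(range(100))
assert self_checking_obfuscate(good, C, dom)(7) == C(7)
assert self_checking_obfuscate(bad,  C, dom)(7) == C(7)   # falls back to C
```

Note that the fallback is exactly why this only gives average-case correctness of the *output*, while a correct-and-secure candidate passes the check and is left unchanged.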
The second property is that the secure candidate, which we assume to be correct, is unchanged by this transformation, because the check in the third step always goes through. So now we have some sort of average-case correctness, and we want what is called worst-case correctness. To do that, we revisit an idea of Bitansky and Vaikuntanathan on approximate IO, called encrypting inputs. We follow a similar outline, but we construct a different object in order to achieve our results. The object is the following: we consider a special circuit garbling scheme with an additional property. The property is that for any two equivalent circuits C0 and C1, if you garble these circuits to [C0] and [C1], then the evaluation circuits of the garbling scheme hardwired with these garbled circuits are functionally equivalent. This might not be true for an arbitrary garbling scheme, but if we have this property, it allows us to argue what we want with just polynomial security loss. We show that such garbled circuits can be constructed from just one-way functions; I won't go into the details because of the time constraints, but I refer you to the paper for more about it. Then we combine the ideas I just discussed in the following way. Instead of obfuscating the circuit C itself with the modified obfuscator, we obfuscate the evaluation circuit of the garbling scheme hardwired with the garbled circuit [C], and we release the encoding key of the garbling scheme, the MSK, to the evaluator so that he can evaluate the obfuscation at any point. Using this transformation, it turns out that for any input x, the probability over the coins of the obfuscator that C*(x) = C(x) is at least 1 − 2/λ.
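A per-input error of 2/λ can be driven down by a majority vote over independent obfuscations, in the standard BPP style. This toy sketch uses a hypothetical noisy obfuscator (wrong on each call with some fixed probability) purely to illustrate the vote; the error rate and all names are invented for the sketch.

```python
import random
from collections import Counter

def amplified_eval(obfuscations, x):
    """Majority-vote amplification: given many independent obfuscations
    of the same circuit, each correct on a fixed input except with small
    probability, output the majority answer.  By a Chernoff bound the
    residual error drops exponentially in the number of copies."""
    votes = Counter(C_star(x) for C_star in obfuscations)
    return votes.most_common(1)[0][0]

def noisy_obfuscate(C, err=0.1):
    """Hypothetical obfuscator: with probability err its output circuit
    is wrong everywhere (off by one), otherwise it is exactly C."""
    wrong = random.random() < err
    return (lambda x: C(x) + 1) if wrong else (lambda x: C(x))

random.seed(7)
obfs = [noisy_obfuscate(lambda x: x * x) for _ in range(51)]
assert amplified_eval(obfs, 5) == 25   # majority of 51 copies is correct
```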
Then we can perform standard BPP-style amplification to get almost full correctness. Good, so now I'm going to tell you how we combine IO candidates to get a secure IO candidate. The idea here is really simple. If you have n IO candidates, none of them should obfuscate the circuit in the clear, because if one of them does and that candidate is broken, it will completely reveal the circuit. So every candidate should, in some sense, only get a secret share of the circuit, and on any input x, the candidates should be able to jointly compute C(x). The question that arises is: how do we do this? This was also the methodology in the previous works, and it is what we follow here: we rely on MPC techniques. OK, so let me recall the approach of the previous work. There, if C was the circuit to be obfuscated, we used a non-interactive MPC: we secret shared the circuit C into C1 through Cn, treated each Ci as the input of candidate-slash-party Pi in the protocol, and then obfuscated the circuit containing the share Ci and the pre-processed state of the MPC using the candidate Pi. But note that MPC protocols satisfying such properties are based on assumptions such as LWE and DDH: they were shown to exist from LWE by Mukherjee and Wichs in 2016, and then from DDH by Boyle, Gilboa and Ishai in 2017. The question that arises is: is it possible to weaken the assumptions by relying on interactive MPC instead? That is what we do in our work. We secret share the circuit into C1 through Cn using any additive secret sharing, we treat each candidate as a party in an interactive MPC protocol, and we run the MPC protocol for the universal circuit, on the reconstructed circuit C and the input x, to learn the value C(x). This is just the high-level approach; let me go slightly into the details. So how do we evaluate this MPC?
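The additive secret sharing step above is simple to write down; here is a minimal XOR-based sketch, with a byte string standing in for an encoded circuit description.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def additive_share(secret: bytes, n: int) -> list:
    """Split a circuit description C into shares C1..Cn with
    C = C1 xor ... xor Cn; any n-1 of the shares are uniformly random,
    so a subset of broken candidates learns nothing about C."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, secret)   # forces the XOR to equal C
    return shares + [last]

def reconstruct(shares: list) -> bytes:
    return reduce(xor_bytes, shares)

C_desc = b"AND(x1, OR(x2, x3))"     # stand-in for an encoded circuit
shares = additive_share(C_desc, 4)
assert reconstruct(shares) == C_desc
```

In the construction, reconstruction never happens in the clear: it is the universal circuit inside the MPC that recombines the shares and evaluates C on x.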
If we wanted to combine just two candidates, what we could do is have each candidate obfuscate its next-message function in the MPC: P1 obfuscates the next-message function with C1 hardwired, and P2 obfuscates the next-message function with C2 hardwired. The issue is that when you run this MPC, you need OTs, and in fact you need to run the MPC for every input x, so you would need exponentially many OTs. This is one problem that I'm going to discuss briefly, though I won't get into the details of the construction itself. Let me recall what a random OT is. In a random OT, party P1 wants to participate in an OT protocol with P2. Think of it as having access to a box with two random strings R0, R1 and a random bit b hardwired: P1 gets the two random strings R0 and R1, and P2 gets the bit b and the string Rb. For security, we want that P1 learns nothing about b, and P2 learns nothing about R_{1−b}. So how do we implement these OTs? There are the following alternatives. We can use an OT protocol, but in that case the assumptions are stronger. The second idea is to pre-process random OTs, but then we would require an exponential amount of pre-processing, because we want OTs for every input. The third approach is to use PRF keys to generate OTs on the fly, and this is the direction that we follow in our work. So let's look at this in more detail, again for the example of combining just two candidates: P1 will obfuscate its next-message function with a PRF key K12 hardwired, and P2 will obfuscate its next-message function with the same PRF key K12 hardwired. This key K12 is used to generate OTs between party P1 and party P2. But now the problem that arises is: what if candidate P1 is broken? Then it might completely reveal the key K12, and no security is provided at all by this construction. So let me tell you our fix.
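The "OTs on the fly from a PRF key" idea can be sketched as a derivation of random-OT correlations from a shared key. This is my illustrative rendering, not the paper's construction: the PRF is instantiated as HMAC-SHA256 for the sketch, and since both views are derived from the same key, the hiding only makes sense when that key lives inside the obfuscations, as in the talk.

```python
import hashlib, hmac

def prf(key: bytes, label: str) -> bytes:
    """PRF instantiated, for this sketch, as HMAC-SHA256."""
    return hmac.new(key, label.encode(), hashlib.sha256).digest()

def random_ot_from_key(k12: bytes, session: int):
    """Derive one random-OT correlation on the fly from the shared PRF
    key K12: the sender's view is (R0, R1), the receiver's view is
    (b, Rb).  A fresh session label gives a fresh correlation, so no
    per-input pre-processing is needed."""
    r0 = prf(k12, f"{session}-r0")
    r1 = prf(k12, f"{session}-r1")
    b  = prf(k12, f"{session}-b")[0] & 1
    sender_view   = (r0, r1)
    receiver_view = (b, r0 if b == 0 else r1)
    return sender_view, receiver_view

(r0, r1), (b, rb) = random_ot_from_key(b"\x00" * 32, session=7)
assert rb == (r0, r1)[b]        # receiver's string matches its selection bit
```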
The fix is the following, represented pictorially: between every pair of parties i and j, we do the following. In the example of two candidates P1 and P2, we obfuscate the next-message function with the PRF key K12 hardwired with the candidate P1 first, and then we obfuscate the result with candidate P2. In this case, even if P1 is broken but P2 is secure, the PRF key is not revealed. So this is the basic idea, but there are lots of other roadblocks, and due to time constraints I cannot go into discussing them. The main problems that arise are that we have to handle malicious parties, and then resetting attacks, and we also want to avoid stronger assumptions. We also observed that FE allows us to avoid input-by-input arguments, so we can use our ideas to construct FE from polynomial hardness itself. OK, with this, I would like to refer you to the paper, so do check our paper, and I'll leave you with some interesting open questions. The first open question is: is it possible to construct an IO combiner with just polynomial security loss? And the second open question is: if you have n functional encryption candidates, can you get one secure functional encryption candidate just from polynomial hardness and assumptions such as one-way functions or DDH? And that's it. Thanks a lot. Any questions?

So actually, I have a question. You were using garbled circuits in an interesting way, where you gave the master secret key outside. And it seems surprising that you could rely on security when the master secret key is known. Can you explain a little bit how that works?

Oh, so, OK. There are two questions that we have to worry about. The first question is the correctness question. For correctness, the garbled circuit is encoded in the obfuscation, but the master secret key is outside, so the obfuscator itself doesn't see the master secret key.
So for him, intuitively, it should hold that if you encode an input x1 and an input x2, the obfuscation should be almost equally correct on both inputs. The other question is that of security. For security, I make the observation that the evaluation circuit with the garbled circuit [C0] hardwired and the evaluation circuit with the garbled circuit [C1] hardwired are functionally equivalent, so we can then rely on IO security to make that hybrid step.

So you used the security of the garbled circuits only to argue the correctness of the obfuscation, not for the security of the obfuscation. Makes sense.