So, as she said, this talk is about a merge of two papers, the first one by Prabhanjan, Ayush, and Amit, and the second one by Rachel and me. I'll start the presentation and then hand over to Ayush for the second part. The goal of both papers was to construct indistinguishability obfuscation without relying on multilinear maps, and they both introduce a new type of pseudorandomness and use security amplification.

Obfuscation takes a program, say represented as a circuit that computes a function, and turns it into another circuit that computes the same function but is unintelligible. There are different ways this can be formalized, and the guarantee IO gives you is that if you have two circuits that compute the same function, then their obfuscations are indistinguishable. Intuitively, this means that the implementation differences between these circuits are hidden. As we know by now, this notion is extremely useful and has tremendous applications in cryptography. While we can already do many cool things without IO, if you are allowed to use IO plus just some minimal additional assumptions, then this opens up a whole new world of amazing things, many of which we don't know how to do without IO. So whether, and under which assumptions, IO exists is an extremely important question.

The first constructions of IO relied on multilinear maps, which are themselves very complex objects. This has led to many candidates for multilinear maps, attacks on these candidates, new candidates, new attacks, and so on, as well as direct attacks on the IO constructions, new constructions that try to resist these attacks, more attacks, and so on. This is an unsatisfactory situation. So what we would like to do is avoid multilinear maps and instead rely on assumptions that are easier to analyze and understand. There has been an impressive line of previous research that has managed to bring the degree needed for the multilinear maps down from polynomial to constant, and ultimately to just degree three. So what is known so far is that we can get IO from trilinear maps, a simple type of PRG, and LWE. Our goal was to go just one step further and reduce the degree to two, because for bilinear maps we have good candidates and they are well understood.

So these are our results. We essentially do get IO from bilinear maps, again together with local PRGs and LWE, but we also have to add a new assumption, which is essentially a new type of randomness generator that has a weak hiding property and a simple structure; we will discuss this in more detail. But first I want to give you a very brief intuition about how our construction works.

The starting point of our construction, as for previous IO constructions, is that IO is implied by compact functional encryption. Functional encryption itself is a very fascinating object. It allows you to encrypt some value x, and then, given a secret key for a function f, you can decrypt to obtain f(x), and the security guarantee is that you learn nothing about x beyond f(x). Compactness here means that the size of the ciphertext grows at most sublinearly in the size of the functions we want to compute. So we only have to construct such an FE scheme, and we need to do this for functions in NC0. So how can we construct such an FE scheme? The basic approach is to compare this to homomorphic encryption, which has a similar structure.
It also allows you to encrypt some value x, and then anyone can compute a function f on this ciphertext and obtain a ciphertext for f(x). The difference is that this ciphertext of course reveals nothing about f(x). So we need to find a way to decrypt this ciphertext such that the decryption reveals only f(x) but not x. And the idea for how to do this is to again use functional encryption to perform this decryption. Now, for this to make sense, since we want to construct FE in the first place, this inner FE should of course be for a much simpler class of functions; in that case we obtain a bootstrapping from a simple FE to full FE, and this is an approach that has also been used before in different FE constructions.

Now the crucial question is how simple this FE scheme can be, the simpler the better, and what it essentially needs to do is evaluate the decryption of a homomorphic encryption scheme. Luckily, there are LWE-based homomorphic encryption schemes that have a very simple decryption: they essentially just compute an inner product and then, if we encrypt bits, reduce modulo two. The inner product is very simple; the reduction modulo two is in principle also simple, but not if you want to express it as a polynomial of low degree. So what you can do is say, okay, let's not do the reduction modulo two, let's only compute the inner product. If we do that, we obtain f(x), which we want, plus some noise e. The issue is that this noise contains information about x, so we cannot just give it out in public; we need to find a way to hide this noise to make this secure. The idea for how to hide it is that we just add some random value to it, such that the sum hopefully hides this value e. This is an approach that has also been used before and is known as noise flooding or noise smudging. And this randomness that we add here is precisely where we use the new type of randomness generator I mentioned before. So the question of how simple this FE scheme can be directly relates to how simple it is to generate this randomness. We want it to be as simple as possible while still being able to hide this e. Our candidates essentially have degree 2.5, and I'll say later what this means.

So how can we construct such a simple FE scheme? The starting point is that it is known that bilinear maps imply degree-two functional encryption, and by massaging these schemes appropriately we managed to extend them so that they also work for degree 2.5. So that's great. The caveat here is that for these schemes the outputs must be small, where small means polynomial in the security parameter. The reason is that they do computations in the exponent of some group, and to decrypt you essentially have to extract this exponent by brute force, which you can only do if it comes from a small set. So if we want to use this approach, the randomness we generate must also be small.

Okay, so let's do it with small randomness. The problem is that small randomness cannot entirely hide this e. As an example, just look at the one-dimensional case where we have a uniform value in an interval from minus B to B. Now if we shift this by 1, then at the corner cases the value e is revealed: if the sum is minus B, we know that e must be 0; if the sum is B plus 1, we know that e must have been 1. And if B is only polynomial, then with non-negligible probability we end up in these bad locations.
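To make the corner-case leakage concrete, here is a toy Python experiment; it is not from the talk, and the bound B and the one-bit noise are illustrative choices only. It adds uniform smudging noise from [-B, B] to a noise bit e and counts how often the sum lands on a boundary value that fully determines e, which happens with probability about 1/(2B+1), non-negligible whenever B is only polynomial.

```python
import random

B = 1000            # polynomial-size smudging bound (illustrative choice)
trials = 200_000

leaked = 0
for _ in range(trials):
    e = random.randint(0, 1)        # one bit of decryption noise we want to hide
    r = random.randint(-B, B)       # bounded smudging randomness, uniform in [-B, B]
    s = r + e
    # The sum pins down e only at the two boundary values:
    # s == -B forces e == 0, and s == B + 1 forces e == 1.
    # At every other value of the sum, e = 0 and e = 1 are equally likely.
    if s == -B or s == B + 1:
        leaked += 1

print(f"e revealed in {leaked} of {trials} trials; expected rate about 1/{2 * B + 1}")
```

Away from the two boundary values, both values of e remain equally likely, which is exactly the "good news" picked up next.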
That is not only true for the uniform distribution but also for other distributions with polynomial support. So that is the bad news, but there is also some good news: if we are not in this red area, then e is actually hidden. For example, if the sum is 0, we have no clue what e was. So our goal is now to formalize what it means to hide this noise in a weak sense, and then obtain a functional encryption scheme with some weak security, which we then amplify. The two papers have different approaches to generate this randomness and also for the amplification.

I'll now briefly talk about what we call pseudo flawed-smudging generators (PFGs). They take a short seed, which consists of n elements of Z_p, and expand it to n^(1+epsilon) elements for some constant epsilon. This stretch is basically what we need for the compactness of the FE scheme. The outputs, interpreted as integers, must be of small magnitude to allow the brute-force decryption. And the security guarantee we want is that if we add a noise vector e to this output, then the sum should reveal e only at a few bad coordinates, and at all other coordinates e should be hidden. This is the basic intuition.

So what is this degree 2.5? Basically, we want this hiding property to still hold if we reveal part of the seed, and computations over this public input only count half. So this means we essentially have a degree-3 polynomial, but it is only degree 2 over the secret inputs. A bit more formally, the seed consists of a public and a private part, and we want that, given the public part of the seed, the PFG output is indistinguishable from some distribution phi that has this hiding property, which means that given e plus phi, you cannot distinguish e from a fresh sample e prime that is equal to e only on these few bad coordinates, and you are also told which coordinates are bad. This is essentially the PFG notion we have, and in fact it is enough if this property holds with some 1/poly probability.

Now, if we use this in our construction, what we obtain is an FE scheme that leaks the input x at some bad coordinates, but at the other coordinates x is hidden. And now we have to amplify this security to deal with this leakage. To do so, we introduce a new primitive, which is a special type of homomorphic secret sharing that we call bit-fixing homomorphic sharing. It allows us to share the inputs in a clever way that deals with this leakage, but I don't have time to discuss this in detail. I'll now hand over to Ayush, who will tell you more about their assumptions and construction. Thank you.

Alright, thank you Christian. So my co-author Christian talked about the overall approach that these two works follow, and he also described two aspects in which our works differ, namely the randomness generation aspect and the hardness amplification aspect. Towards the end he was talking about the notion of a pseudo flawed-smudging generator. Now let me talk about how these aspects are handled by our work, and in particular let me start with the notion of a perturbation-resilient generator (ΔRG). A perturbation-resilient generator is also a non-Boolean PRG, just like a PFG, and its syntax is almost identical to that of a PFG. In particular, the ΔRG takes n field elements as input and outputs n^(1+epsilon) integers for some positive epsilon.
These parameters are set such that this PRG is expanding. Then, as Christian was saying, the seed has a specific structure: it can be split into two parts, a public part and a private part. And the PRG is a degree-2 computation over the private part, whereas the total degree can be 3. Christian also mentioned that the output of the PRG is going to be polynomially bounded. However, the security notions of these two objects are slightly different. We define the notion of perturbation resilience, where you consider a set of perturbations delta that are bounded by, let us say, n. Then we ask that the following two distributions are mildly indistinguishable. In the first distribution you have the public part of the seed along with the PRG output. In the second distribution you have the public part of the seed again, but now the PRG output is perturbed by these values delta. And what we ask for is a very modest form of indistinguishability: for any computationally bounded adversary A, the probability with which the adversary can distinguish between these two distributions is bounded by a probability as high as 0.99.

Now let me move on to the assumptions on which you can build such an object. Our assumption builds on a variant of LWE where, let us say, you have an error distribution chi with a polynomially bounded standard deviation. You can think of suitable parameters for the prime as well as the dimension; I am not going to bore you with parameters. What LWE says is that this tuple, a set of vectors a_i along with the inner products of a_i with the secret plus some error e, looks pseudorandom. Unfortunately, we do not quite know how to build IO based on LWE alone, but what we can do is base it on an assumption where we give out these LWE samples along with a leakage on the error and two independently sampled vectors y and z. So we give out this LWE sample and this polynomial leakage, which is going to be a degree-3 polynomial in the error and these two variables y and z. In the next slide I will make formal what this polynomial looks like and what assumption we need.

So now I am going to talk about what those polynomials are. In fact there is a lot of slack in how we can instantiate these polynomials, and we have a couple of instantiations which you can look up in the paper. For this talk, let me be very concrete and give you a single candidate, and the candidate is just this. The sampler takes as input some parameter n and outputs n^1.4 degree-3 polynomials. Each polynomial has the following structure: each monomial has degree 1 in e, degree 1 in y, and degree 1 in z, so the polynomials are homogeneous of degree 3 and linear in each variable. The coefficients of these polynomials are chosen randomly from plus 1 and minus 1. Finally, the number of monomials inside each polynomial is exactly n^0.1. So let me repeat: we have n^1.4 degree-3 polynomials, where each polynomial is formed by selecting n^0.1 monomials at random like this, randomly assigning plus 1 or minus 1 signs to them, and adding them up. This candidate can be used to instantiate both the ΔRG and the PFG.
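As a rough illustration of this candidate, here is a sketch of the sampler just described in Python; this is my own rendering, and the parameter rounding and random index choices are illustrative rather than taken from the paper.

```python
import random

def sample_candidate(n):
    """Sketch of the candidate sampler: n**1.4 degree-3 polynomials, each a
    signed sum of n**0.1 monomials of the form sign * e[i] * y[j] * z[k]."""
    num_polys = round(n ** 1.4)
    monos_per_poly = max(1, round(n ** 0.1))
    polys = []
    for _ in range(num_polys):
        poly = []
        for _ in range(monos_per_poly):
            sign = random.choice([1, -1])                 # random +/-1 coefficient
            i, j, k = (random.randrange(n) for _ in range(3))
            poly.append((sign, i, j, k))                  # degree 1 in each of e, y, z
        polys.append(poly)
    return polys

def evaluate(polys, e, y, z):
    """Evaluate every sampled polynomial on concrete vectors e, y, z."""
    return [sum(s * e[i] * y[j] * z[k] for (s, i, j, k) in poly) for poly in polys]
```

Since the coefficients are plus/minus 1 and e, y, z are drawn from bounded distributions, each output value is polynomially bounded, which is what the small-output requirement from the first part of the talk asks for.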
But now let me focus on the assumption that we use to construct a ΔRG from this sampler. The assumption is really simple, and it actually fits in just one slide. It is the indistinguishability of the following two distributions. In the first distribution you have the LWE samples I was talking about, and along with them the polynomials output by the sampler, evaluated on the error e and on two independently sampled vectors y and z, also drawn from the error distribution. In the second distribution you have the LWE samples as before, but now the polynomial evaluations q_l(e, y, z) are perturbed by adversarially chosen values delta_l. By adversarial I mean that they are allowed to depend on the polynomials, but not on the error or on the y and z inputs of the seed. And what we ask is that these two distributions are very mildly indistinguishable. By very mildly I mean that for any efficiently bounded adversary, the probability with which the adversary can distinguish between these two distributions is bounded by 0.99. So in particular, our assumption can hold even if an adversary can distinguish with 99 percent advantage, but not beyond.

I would also like to mention why this 99 percent is necessary; Christian has already given a lot of intuition about it. The coefficients of the polynomials are plus 1 or minus 1 and the inputs e, y, and z are bounded, so the evaluation is a bounded polynomial. With a bounded polynomial it is actually unreasonable to assume full security. Although we could have still assumed a 1-over-security-parameter indistinguishability, we are very conservative and ask for just 99 percent indistinguishability.

All right, so I told you about the assumption. Now let me talk about how you can build these objects, ΔRG and PFG, from such polynomials and LWE samples. So this was our LWE sample, and this was our polynomial leakage. What we will do is instantiate the LWE part as the public part of the seed, and we will let the polynomial evaluation be the PRG output. Now observe that in order to go from the LWE sample to this polynomial leakage, all we need is a private part of the seed that looks like s tensor y together with y and z. Using simple algebra, you can see that you can go from the seed, that is, from the public part of the seed and this private part, to the PRG output using just degree-2 operations over F_p. I do not have time to talk about it, but it is really easy and you can look in the paper for the details; a small sketch of this step is included below.

Another aspect that I want to talk about is hardness amplification. As you saw, the assumptions that we have do not provide full security; there is always an advantage loss in both these assumptions. So we actually need to build machinery that lets you go from weak security to full security. In the AJS work, we build a general compiler which says that if you are willing to assume subexponential hardness of LWE, then you can take any functional encryption scheme with weak security, that is, adversarial advantage of 1 minus 1/poly(lambda), and convert it generically into a fully secure FE, and then, using that, we can go all the way to IO. The Lin and Matt work is slightly different in this respect: they use leakage-resilient cryptographic techniques to argue such an amplification for their scheme.
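To illustrate the "simple algebra" mentioned a moment ago, here is a minimal numpy sketch showing how one degree-3 monomial e_i * y_j * z_k can be recomputed as a degree-2 expression over a private seed containing y, z, and s tensor y, using only public LWE coefficients. This is my reading of that step with toy parameters and my own variable names, not the paper's actual instantiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only; real parameters come from the paper).
n, q = 8, 2**16 + 1

# LWE sample: the public part of the seed.
A = rng.integers(0, q, size=(n, n))
s = rng.integers(0, q, size=n)
e = rng.integers(-3, 4, size=n)              # small error
b = (A @ s + e) % q

# Private inputs of the polynomial, plus the tensor s (x) y kept in the private seed.
y = rng.integers(-3, 4, size=n)
z = rng.integers(-3, 4, size=n)
sy = np.outer(s, y)                          # s tensor y, private seed material

# One monomial e_i * y_j * z_k of a degree-3 candidate polynomial ...
i, j, k = 1, 2, 3
direct = e[i] * y[j] * z[k] % q

# ... rewritten as a degree-2 expression over the private seed (y, z, s tensor y),
# with only public coefficients b_i and A[i, :]:
#   e_i * y_j * z_k = b_i * (y_j * z_k) - sum_l A[i, l] * (s_l * y_j) * z_k   (mod q)
via_seed = (b[i] * y[j] * z[k] - (A[i, :] @ sy[:, j]) * z[k]) % q

assert direct == via_seed
print("degree-2 evaluation over the private seed matches:", direct)
```

Summing such monomials with the plus/minus 1 coefficients of the candidate polynomials gives the full PRG output, so the whole output is indeed computable by a degree-2 polynomial over the private seed given the public LWE part.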
So there are new and beautiful ideas in both these works, and I would encourage you to look at the papers for the details. With the remaining time, let me just spend a couple of minutes on the kinds of cryptanalysis that we have done on these assumptions. First, we can already show some sum-of-squares lower bounds against subexponential-time SDP adversaries, which apply to a mathematical problem that is closely related to these assumptions; I will talk about it on the next slide. We believe that such a lower bound is evidence of security against sum-of-squares adversaries, and also against algorithms such as spectral attacks and linear programming attacks, because these are known to be no stronger than sum-of-squares algorithms. We also ran extensive gradient descent experiments, and so far we were not able to observe anything that seemed to break our assumption. That said, for gradient descent we really need to come up with a theoretical framework to analyze these things, because it is hard to figure out how powerful gradient descent can be.

There are also algorithms about which we do not know how to reason at all. For example, since we are giving out leakage in the form of LWE samples and the polynomial evaluations, lattice attacks are very reasonable to consider; however, we just do not know how to analyze lattice attacks on this assumption, either positively or negatively. Finally, there are also hybrid attacks, where you can use two different algorithms and combine their results to derive some kind of attack. For example, one reasonable strategy could be to use a lattice algorithm in conjunction with a sum-of-squares algorithm and feed the output of one algorithm into the other. But again, we do not know how to argue about this. So the takeaway from this slide is that this is really an open area, and I would encourage all of you to think about the cryptanalysis of these assumptions.

Finally, with the one minute I have left, let me talk about the sum-of-squares lower bound that we can show. This is something that I also talked about at the crypto workshop just a couple of days ago. In a follow-up work with Sam, Prabhanjan, and Amit, what we can show, and this is highly informal, is roughly the following: any sum-of-squares algorithm running in subexponential time cannot take these leakages, the polynomial evaluations, and recover the input of the polynomials. Another observation here is that we have completely ignored the LWE leakage, and the reason is that we just do not know how to incorporate finite field arithmetic into the sum-of-squares framework. So this is also an interesting problem that you should all think about. And with this I would like to conclude my talk; please feel free to ask either me or Christian any questions you may have.

Any questions for the two speakers? Okay, let us thank all three speakers of the session.