Thanks, Nir, for the introduction. This is joint work with Dan Boneh, Yuval Ishai, and Amit Sahai. So in the last few years, indistinguishability obfuscation has really emerged as a central hub for cryptography. We have many amazing applications that you can build from indistinguishability obfuscation and some very simple primitives. At a very high level, and as you've seen from the previous talks, indistinguishability obfuscation allows you to take a program and scramble it so that you can hide secrets within the software itself. And this has emerged as a very powerful tool. But if we take a closer look at it, what we end up seeing is that there are very, very big constant factors. So when you want to actually use indistinguishability obfuscation to instantiate your favorite cryptographic primitive, you suddenly see that, yes, everything is polynomial time, and yet the constant factors are something like 2 to the 100. So we want this primitive, and yet it still seems very far away from being concretely realizable. And let me just preface here that when the constant factors are as big as 2 to the 100, deploying indistinguishability obfuscation is not an engineering problem. It's not just about finding more efficient ways of computing these things. There are actually fundamental theoretical challenges that we have to overcome in order to make these things practically realizable. So our goal in this project is quite ambitious. We wanted to find an obfuscation-complete primitive that suffices for obfuscating any arbitrary functionality with good concrete efficiency, and thereby bring iO closer to practice. And what does that entail? Well, our goal is to identify a functionality whose idealized obfuscation can be used to obfuscate general functionalities. And moreover, we want this to be efficient; namely, if you want to obfuscate an arbitrary program, this idealized functionality should only need to be invoked once.
So it's this obfuscation-complete primitive: what suffices to obfuscate an arbitrary program? And our solution is going to be based on the original bootstrapping constructions for obfuscation. Namely, our obfuscation-complete primitive consists of FHE decryption and SNARG verification. I'm going to get into the details of each of these primitives later in the talk. And as a concurrent goal, once we have defined our obfuscation-complete primitive, it really boils down to finding efficient representations of the underlying constructions, namely FHE decryption and SNARG verification. Fully homomorphic encryption seems to be pretty well understood at this point. But an independent goal of this work has been finding SNARGs whose verification procedures are amenable to the existing obfuscation constructions. So there are two goals of this project: one, improving the concrete efficiency of obfuscation, and two, sort of as a byproduct, finding more efficient SNARG constructions. So let me give you a sketch today of how we build obfuscation for general functionalities. There are three main types of approaches, and all of these constructions rely fundamentally on multilinear maps. The first way of building obfuscation for general circuits relies on a beautiful construction based on bootstrapping, introduced by Garg, Gentry, Halevi, Raykova, Sahai, and Waters in 2013. The starting point here is that we begin with a multilinear map, and using multilinear maps, we build a weak class of obfuscation, namely obfuscation for log-depth circuits — for NC1 circuits, or branching programs. And then, in a very nice transformation, we can actually bootstrap NC1 obfuscation to obfuscation for general circuits. I will describe this bootstrapping transformation in greater detail in subsequent slides.
While this looks like a very simple, natural, clean framework, when we actually want to instantiate it and use it to obfuscate even a simple functionality, like AES or a block cipher, what we immediately see is that we need a multilinear map capable of supporting over 2 to the 100 levels of multilinearity and publishing over 2 to the 100 encodings. This is an astronomical number, and very, very far from something that we can practically implement using modern computing resources. So a subsequent line of work by Zimmerman, as well as Applebaum and Brakerski, looked at directly obfuscating circuits, and this resolves the second problem, where we have to publish 2 to the 100 encodings. Unfortunately, due to the noise growth in existing multilinear map candidates, the number of levels of multilinearity required is still over 2 to the 100, and thus very, very far from concretely realizable. In a recent line of work initiated by Lin in 2016, we now have non-black-box constructions of obfuscation. These are constructions that rely only on constant-degree multilinear maps, and this constant in recent work has been reduced to something as low as three. So using just trilinear maps, we can actually construct obfuscation. However, there is one big caveat: they require non-black-box use of the underlying multilinear map, and if you look at the internal details of existing multilinear map constructions, they are really, really complicated. As a result, non-black-box constructions are going to present major hurdles in terms of concrete implementability. So the focus of this work will be on constructions of obfuscation that make only black-box use of the underlying multilinear map, and we're going to focus on the simplest, or the original, construction of obfuscation based on bootstrapping.
So this is a two-stage pipeline where we start with multilinear maps, build obfuscation for branching programs, and then leverage obfuscation for branching programs to obtain obfuscation for general circuits. Most of the prior work in this area has focused on improving the efficiency of the first step of this pipeline, namely the obfuscation for branching programs. In this work, we're going to look at the second stage of the pipeline, and this is where we're going to extract our obfuscation-complete primitive. Let me just give you a high-level summary of our main result on the obfuscation front. To obfuscate AES using our new construction, we require a multilinear map capable of supporting about 4,000 levels of multilinearity, and compared to the existing work, this is many, many orders of magnitude of improvement: we went from 2 to the 100 to roughly 2 to the 12. Okay, so let me first remind you how we bootstrap obfuscation. What is the obfuscation-complete primitive that we seek to build? To obfuscate a circuit, what we do is first encrypt it using a fully homomorphic encryption scheme. Now, using the homomorphic properties of the underlying encryption scheme, the evaluator can homomorphically compute the circuit on any input of its choosing, and in doing so, obtain an encryption of the circuit evaluation. Now, we need some way of taking the encryption of the circuit evaluation and extracting from it the actual output. So what we're going to do is rely on our obfuscation for branching programs and publish an obfuscation of a decryption circuit that implements the FHE decryption. Now, certainly we can't give the evaluator an arbitrary decryption oracle, because otherwise the evaluator can simply decrypt the circuit.
So what we're going to require, in addition, is that the evaluator provide a proof that the ciphertext it submits for decryption was actually derived by evaluating the program on an honestly generated input. So the evaluator includes a proof, and the bootstrapping program will first check the proof before decrypting. So let's look concretely at what the program that we need to obfuscate looks like. It will contain, embedded inside it, a decryption key for the underlying FHE scheme, and maybe a CRS or some other verification state used for the proof verification process. And the functionality that we need to obfuscate needs to do two things: it needs to verify a proof, and then it needs to decrypt the resulting ciphertext. Now, if we look at the kinds of obfuscation constructions we have today, the ones used in this bootstrapping construction operate over branching programs. So when we actually want to build this obfuscation-complete primitive, we want something that can be easily implemented by a branching program, and the complexity is generally captured by the length of the branching program. So we need FHE decryption and SNARG proof verification to be implementable by simple and short branching programs. Luckily for us, in the existing works on lattice-based FHE schemes — or at least in most of these existing constructions — decryption can be implemented by a rounded inner product, which is amenable to branching-program-based computation. So what we really need now is simply a way to verify these proofs efficiently. And that's where we get to the need for SNARGs. So first let me remind you what a SNARG is.
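Before turning to SNARGs, here is a minimal structural sketch, in Python, of the bootstrapping flow just described. Every primitive here (`fhe_encrypt`, `snark_verify`, and friends) is an insecure mock stand-in of my own invention; the sketch shows only the dataflow — FHE-encrypt the circuit, homomorphically evaluate, then run the verify-then-decrypt program — and not any real cryptography.

```python
import secrets

# Insecure mock primitives: these model only the DATAFLOW of the
# bootstrapping construction, with no security whatsoever.
SECRET_TAG = secrets.token_hex(8)   # placeholder "secret key"

def fhe_encrypt(value):
    return {"tag": SECRET_TAG, "payload": value}   # NOT real encryption

def fhe_eval(ct, x):
    # "Homomorphic" evaluation: run the encrypted circuit on input x.
    return {"tag": ct["tag"], "payload": ct["payload"](x)}

def fhe_decrypt(sk, ct):
    assert ct["tag"] == sk
    return ct["payload"]

def snark_prove(x):
    return ("evaluated-on", x)                     # stand-in proof

def snark_verify(proof, x):
    return proof == ("evaluated-on", x)            # stand-in verification

def obfuscate(circuit):
    """Obfuscator: FHE-encrypt the circuit; the only program needing
    real obfuscation is verify_then_decrypt below."""
    circuit_ct = fhe_encrypt(circuit)
    def verify_then_decrypt(result_ct, proof, x):
        if not snark_verify(proof, x):     # check the proof first ...
            return None
        return fhe_decrypt(SECRET_TAG, result_ct)  # ... then decrypt
    return circuit_ct, verify_then_decrypt

def evaluate(obf, x):
    """Honest evaluator: homomorphic evaluation plus a proof."""
    circuit_ct, verify_then_decrypt = obf
    result_ct = fhe_eval(circuit_ct, x)
    return verify_then_decrypt(result_ct, snark_prove(x), x)
```

The piece that would have to be obfuscated for real is exactly `verify_then_decrypt`: FHE decryption guarded by proof verification, which is the obfuscation-complete primitive of the talk.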
So our overarching goal in improving the concrete efficiency of obfuscation is building better SNARGs — or better, branching-program-friendly succinct non-interactive arguments. Okay, so let me remind you what a succinct non-interactive argument, also known as a SNARG, is. It consists of three algorithms. There's a setup algorithm that takes a security parameter and outputs a common reference string and a verification state. And then there are the familiar prove and verify algorithms: the prover algorithm takes in the CRS, a statement, and a witness, and outputs a proof; the verification algorithm takes in the proof, the statement, and possibly the verification state, and decides whether to accept or reject. A SNARG should satisfy the usual notions of completeness and computational soundness, but the important property is that it should be succinct. Recall that the obfuscated program needs to both take the proof as input and verify the proof, so succinctness is paramount to the success of this work. The succinctness property stipulates that the proof size and the runtime of the verifier should be polylogarithmic in the running time of the computation that's being verified — or, in the case of circuit satisfiability, polylogarithmic in the size of the circuit. More concretely, what it means is that the runtime of the verifier can be polynomial in the security parameter and the length of the statement, and polylogarithmic in the circuit size, and similarly for the proof size. So our main result in this work on constructing more branching-program-efficient SNARGs, especially tailored for bootstrapping obfuscation, is that we obtain new designated-verifier SNARGs with qualitatively better properties that I will summarize below.
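For reference, the syntax and the two basic requirements just recalled can be written as follows (standard notation, not taken verbatim from the slides):

```latex
\begin{align*}
\mathsf{Setup}(1^\lambda) &\to (\mathsf{crs}, \mathsf{vst})
  && \text{CRS and verification state} \\
\mathsf{Prove}(\mathsf{crs}, x, w) &\to \pi
  && \text{statement } x,\ \text{witness } w \\
\mathsf{Verify}(\mathsf{vst}, x, \pi) &\to \{0,1\}
\end{align*}
```

Completeness asks that honestly generated proofs for true statements always verify; computational soundness asks that for every efficient prover $\mathcal{P}^*$,
\[
  \Pr\bigl[(x,\pi) \leftarrow \mathcal{P}^*(\mathsf{crs}) \;:\;
    x \notin \mathcal{L} \,\wedge\, \mathsf{Verify}(\mathsf{vst}, x, \pi) = 1\bigr]
  \le \mathsf{negl}(\lambda).
\]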
So the designated-verifier part of this statement means that the verification state must be kept secret, and we're also in a preprocessing model, which allows us to have a more expensive setup procedure. So the setup procedure that generates the CRS need not be super efficient; it can run in time polynomial in the running time of the computation, or the circuit size. But in this model, what we achieve is the first SNARG that simultaneously satisfies what we call quasi-optimal succinctness and quasi-optimal prover complexity. Let me briefly describe what I mean by these terms. When I state the asymptotics for our SNARG construction, these are the complexities for achieving negligible soundness error against provers of bounded size — of size 2 to the lambda. When I say a proof system satisfies quasi-optimal succinctness, that means the length of the proof should be quasi-linear in the security parameter, so lambda times some polylogarithmic factors. When I say a proof system satisfies quasi-optimal prover complexity, I mean that the amount of work the prover has to do to generate a proof is only quasi-linear in the size of the circuit and does not grow, say, linearly with the security parameter. In most existing SNARG constructions today — and I will summarize these at the end of the talk — the prover overhead is linear or worse in the security parameter; a SNARG has quasi-optimal prover complexity if that overhead is only polylogarithmic. So we give the first SNARG from any assumption that simultaneously satisfies both of these properties. And as an additional bonus, because our new SNARG constructions can be instantiated based on lattice assumptions, they plausibly resist quantum attacks. So we give post-quantum secure SNARGs, and they also work over polynomial-size fields, which will be very important when we look at the concrete example of bootstrapping obfuscation.
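Stated asymptotically — for a circuit-SAT instance with circuit $C$ and statement $x$, targeting negligible soundness error against provers of size $2^\lambda$ — the definitions just given read (my rendering):

```latex
\begin{align*}
|\pi| &= \lambda \cdot \mathrm{polylog}(\lambda, |C|)
  && \text{(quasi-optimal succinctness)} \\
T_{\mathsf{Prove}} &= |C| \cdot \mathrm{polylog}(\lambda, |C|)
  && \text{(quasi-optimal prover complexity)} \\
T_{\mathsf{Verify}} &= \mathrm{poly}(\lambda, |x|) \cdot \mathrm{polylog}(|C|)
\end{align*}
```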
Let me now describe how our SNARG construction works. Our starting point in building these preprocessing SNARGs is a very beautiful work by Bitansky, Chiesa, Ishai, Ostrovsky, and Paneth. Their construction takes a two-step approach. First, they begin with an information-theoretic primitive called a linear PCP, and they compile that to a two-round linear interactive proof system. Then, on top of the two-round linear interactive proof, they apply a cryptographic compiler, based on linear-only encryption, that compiles it to a preprocessing SNARG. Let me briefly revisit some of the core building blocks of this construction. First, we have linear PCPs, first used in a work by Ishai, Kushilevitz, and Ostrovsky. A linear PCP is just a long proof vector. In the linear PCP model, we have a verifier, and the verifier is given access to a linear function: it can submit a query vector to the linear PCP oracle, and the linear PCP oracle simply computes the inner product between the query and the proof vector. This can repeat several times, and at the end, the verifier decides whether to accept or reject the proof. There are several concrete instantiations of linear PCPs, based either on the Hadamard code or on the quadratic span programs and quadratic arithmetic programs of Gennaro, Gentry, Parno, and Raykova. Oftentimes, and very importantly for our construction, the verifier is actually oblivious; namely, the queries that the verifier submits to the linear PCP oracle do not depend on the statement being proved, nor do they depend on the previous responses. What this means is that we can take an equivalent view where the verifier, instead of submitting many vectors, packs all of its queries together into a single query matrix whose columns are precisely the queries that it would have submitted to the linear PCP oracle.
And in this case, the linear PCP oracle computes the matrix-vector product. Now, in order to go from linear PCPs to preprocessing SNARGs, the key idea is that the oblivious verifier can commit to its queries up front: because its queries don't depend on the statement being proved or on the previous responses, it can just commit to these queries and publish them as part of the CRS. What the honest prover would then do is take a statement and its witness, construct from them a linear PCP, and then simulate the operation of the linear PCP oracle — namely, compute the matrix-vector product and send it to the verifier. Now, this is the basic blueprint, but it doesn't quite work, and there are several problems with this basic construction. First of all, a malicious prover can choose the proof based on knowledge of the queries. If we simply publish the queries in the clear in the CRS, this cannot possibly work, because the prover can now choose its proof based on what the verifier is going to check. Moreover, a malicious prover is not actually constrained to evaluating only linear functions; the prover can do arbitrary things when constructing its proof. So we need some way of addressing both of these problems. The first problem is actually very easy: instead of publishing the queries in the clear, we encrypt them using an additively homomorphic encryption scheme. The additive homomorphism allows the prover to still compute the responses over the encrypted query vectors, and the verifier at the end decrypts the encrypted proof that the prover constructs and then applies the underlying linear PCP verification procedure. It turns out that the second problem is the more severe one: the malicious prover can apply different linear functions, or different functions altogether, to the different components of the query matrix.
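Stepping back to the oracle model for a moment, here is a tiny worked example of the two equivalent views just described, over a small prime field. The specific proof vector and queries are made up for illustration; the check is simply that per-query inner products match one matrix-vector product with the packed query matrix.

```python
p = 97                      # small prime field (illustrative choice)

def inner(q, v):
    """Inner product over F_p -- the linear PCP oracle's one operation."""
    return sum(a * b for a, b in zip(q, v)) % p

pi = [3, 1, 4, 1, 5]        # the (long) linear PCP proof vector

# Oblivious verifier: queries are fixed in advance, independent of the
# statement and of earlier responses.
queries = [[1, 0, 2, 0, 1],
           [0, 5, 0, 1, 0],
           [7, 0, 0, 0, 3]]

# View 1: submit each query separately; the oracle returns <q_i, pi>.
responses = [inner(q, pi) for q in queries]

# View 2: pack the queries as the COLUMNS of one matrix Q; the oracle
# answers all of them at once via the matrix-vector product Q^T pi.
d, k = len(pi), len(queries)
Q = [[queries[i][j] for i in range(k)] for j in range(d)]    # d x k
QT_pi = [inner([Q[j][i] for j in range(d)], pi) for i in range(k)]
assert QT_pi == responses
print(responses)            # -> [16, 6, 36]
```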
The way we address this is by making a cryptographic conjecture. This is the second step of the compilation framework, where we use cryptography to complete the transformation. In our work, we rely on a new primitive we call linear-only vector encryption. Let me briefly describe what a linear-only vector encryption scheme is. First, a vector encryption scheme is just an additively homomorphic encryption scheme where the plaintext space is a vector space over some field or some ring. So instead of encrypting scalars, we encrypt entire vectors at a time. The encryption scheme should be semantically secure as well as additively homomorphic; namely, given encryptions of many vectors, it should be possible to compute an encryption of any linear combination of those vectors. The second part, which is the important property, is the linear-only property. Here, we make a fairly strong non-falsifiable assumption, which basically states the following: for any adversary that is given access to a collection of ciphertexts and then produces a ciphertext, there exists some extractor that can explain the adversary's behavior. Namely, the extractor takes as input this collection of ciphertexts and outputs a linear combination such that any ciphertext the adversary produces can also be derived by simply computing that linear combination on the underlying plaintext vectors. So the way this works is the following: we take our linear PCP, we take our query matrix, and we encrypt each row of the query matrix using our linear-only vector encryption scheme. Then, once the prover constructs a ciphertext, the linear-only property says that whatever the prover's strategy is, it can be explained by taking a linear function of the queries themselves.
And this means that by soundness of the underlying linear PCP against linearly bounded provers, we get soundness of the resulting SNARG construction. Let me just briefly compare with the framework of Bitansky et al. As I said earlier, their framework starts by taking a linear PCP and first applying an information-theoretic transformation, where they impose an additional consistency check to force the prover to apply consistent linear functions to the different query vectors. In our work, we skip this step and give a direct compilation from linear PCPs by making a stronger cryptographic assumption — namely, linear-only vector encryption — that binds the prover to applying a consistent linear function to the verifier's queries. So how do we actually instantiate this construction? The main conjecture in this work is that Regev-based encryption over standard lattices, and specifically the variant due to Peikert, Vaikuntanathan, and Waters, is a linear-only vector encryption scheme. Let me just show you how the decryption functionality works, because that's the only part that really matters for the application to obfuscation. Decryption in the underlying Regev-based encryption scheme is just a rounded inner product: the secret key is a matrix, the ciphertext is a single vector, and the decryption operation multiplies the secret key matrix with the ciphertext vector and then rounds each entry to a small field element. One way you can see this, if you're familiar with Regev-based encryption, is that each row of the secret key matrix is essentially an independent Regev secret key, and there's one ciphertext vector that encrypts all of the entries of the plaintext vector. So once we take our linear-only vector encryption scheme and combine it with existing linear PCP constructions, we actually get a preprocessing SNARG. And here I give you some concrete comparisons with other SNARG constructions today.
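As a toy numerical illustration of the scheme just described, here is a wildly insecure Regev-style vector encryption: decryption is a per-slot rounded inner product with the rows of a secret key matrix, and the additive homomorphism lets the prover collapse all query responses into a single ciphertext. All parameters (p = 97, n = 16, the noise range) are made-up toy values for illustration, not the ones from the paper.

```python
import random

random.seed(1)
p = 97                   # plaintext field F_p (polynomial size)
Delta = 2 ** 25          # scaling factor
q = p * Delta            # ciphertext modulus (exact multiple of p)
n = 16                   # toy lattice dimension -- NOT secure
k = 3                    # plaintext vector length = number of queries

S = [[random.randrange(q) for _ in range(n)] for _ in range(k)]  # secret key

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

def encrypt(m):
    """Encrypt a vector m in F_p^k. Each row of S is an independent
    Regev secret key; one vector `a` is shared across all k slots."""
    a = [random.randrange(q) for _ in range(n)]
    b = [(dot(S[i], a) + random.randrange(-4, 5) + Delta * m[i]) % q
         for i in range(k)]
    return (a, b)

def decrypt(ct):
    """Rounded inner product per slot: round((b_i - <S_i, a>)/Delta) mod p."""
    a, b = ct
    return [(((b[i] - dot(S[i], a)) % q + Delta // 2) // Delta) % p
            for i in range(k)]

def linear_combine(cts, coeffs):
    """Additive homomorphism: a ciphertext of sum_j coeffs[j] * m_j."""
    a = [sum(c * ct[0][t] for c, ct in zip(coeffs, cts)) % q for t in range(n)]
    b = [sum(c * ct[1][i] for c, ct in zip(coeffs, cts)) % q for i in range(k)]
    return (a, b)

# SNARG-style usage: the columns of the query matrix Q are the verifier's
# k queries (each of length d). Encrypt each of the d ROWS of Q; then a
# single homomorphic linear combination with coefficients pi yields ONE
# ciphertext encrypting all k query responses Q^T pi at once.
queries = [[1, 0, 2, 0, 1],
           [0, 5, 0, 1, 0],
           [7, 0, 0, 0, 3]]          # columns of Q = the queries
pi = [3, 1, 4, 1, 5]                 # linear PCP proof vector, length d
d = len(pi)
rows = [[queries[i][j] for i in range(k)] for j in range(d)]  # rows of Q
crs = [encrypt(row) for row in rows]            # published encrypted queries
response_ct = linear_combine(crs, pi)           # prover's single ciphertext
print(decrypt(response_ct))                     # -> [16, 6, 36]
```

Note that all k responses come back in one ciphertext; this is where the vector structure pays off for succinctness.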
So I think the takeaway here is that because our new compiler gives a direct compilation from linear PCPs to preprocessing SNARGs, we obtain the first quasi-optimal SNARG if we instantiate it under a ring-learning-with-errors-based assumption. Moreover, because our new constructions rely on lattice-based assumptions, whereas all of the existing ones rely primarily on pairing-based assumptions, these are the first SNARG constructions that are plausibly post-quantum resistant and at the same time provide qualitatively and asymptotically better performance in terms of prover complexity as well as proof size. So at the beginning of the talk, I presented the problem of improving the concrete efficiency of obfuscation; let me conclude with a few more remarks. Recall that to bootstrap obfuscation, it suffices to obfuscate a program that implements FHE decryption and SNARG verification. Using existing FHE techniques and our new lattice-based SNARG candidates, we obtain an obfuscation-complete primitive that requires a multilinear map supporting 2 to the 12 — roughly 4,000 — levels of multilinearity, and requires publishing about 2 to the 44 encodings. These numbers are fairly large, and likely still not feasible today, but they are much, much better than the 2 to the 100 needed in previous black-box constructions, and we hope that future work will continue to improve upon these numbers. Moreover, looking into this problem of bootstrapping obfuscation actually led us to better SNARG constructions. In particular, our work gives a new, more direct framework for building SNARGs from linear PCPs, and this yields both the first quasi-optimally succinct construction from standard lattices as well as the first quasi-optimal SNARG from any assumption — in our case, from the ring learning with errors problem plus this linear-only vector encryption assumption.
Let me conclude with several open problems. All of the SNARG constructions that I described here based on lattices are only secretly verifiable; namely, they're in a designated-verifier model, where soundness only holds if verifying proofs requires knowledge of the secret decryption key. One important question is whether we can get a publicly verifiable SNARG — an analog of the pairing-based SNARK constructions — from lattices. Another question is trying to achieve a stronger notion of quasi-optimality. At the beginning of the talk, when I introduced our metric for assessing the asymptotics of a SNARG construction, the goal was to achieve negligible soundness error against provers of size 2 to the lambda. You could instead stipulate that we should achieve the stronger notion of soundness error 2 to the minus lambda against the same class of provers. And finally, for our new lattice-based SNARG candidates, we should assess their concrete efficiency, because they seem to be lighter weight and they have asymptotically stronger properties compared to the existing pairing-based candidates. So we're in the process of developing implementations of these lattice-based SNARGs and comparing them against the existing pairing-based candidates. And with that, I'll take questions. Thank you very much. Thank you, David. So we have time for a question or two. Yeah, Elaine? So I have a question about what you mean by a quasi-optimal SNARK — can you explain that again? Sure. So quasi-optimality consists of two properties. The first is quasi-optimal succinctness, which says that to achieve soundness error that's negligible in the security parameter against provers of size 2 to the lambda, the proof size should be only quasi-linear in the security parameter.
And the second property we want is quasi-optimal prover complexity: the amount of work it takes to generate a proof should be quasi-linear in the circuit size, so the prover's overhead is only polylogarithmic in the security parameter. Yes, so that's what our new SNARG constructions achieve, and this is the first SNARG from any assumption that simultaneously achieves both of these properties. Okay, time for one more question. I have a small question. So regarding the proof of security — currently it relies on fairly strong assumptions, right? Like virtual black box, et cetera. Is there hope to base it on indistinguishability obfuscation, or on things that we believe to be more standard? Yeah, so right now I would view this as a heuristic construction of obfuscation. In terms of basing it on indistinguishability obfuscation, I think the basic bootstrapping construction is going to be rather difficult to make work in terms of a security reduction, because relaxing from statistical soundness to computational soundness will introduce problems there. So I think there are some theoretical challenges to even making the basic bootstrapping approach work, but maybe with newer techniques we can do something about that as well.