Hi, my name is Daniel Wichs, and I'm going to tell you about a new candidate construction of indistinguishability obfuscation via oblivious LWE sampling. This is joint work with Hoeteck Wee. Let me start by giving you our results in a nutshell. We give a new candidate construction of indistinguishability obfuscation, or IO. Our construction relies on learning-with-errors-style techniques, but we are unable to show security under the LWE assumption on its own. Instead, we formulate a new, simple, indistinguishability-based assumption that has a circular-security flavor, and we show security under this new assumption. The overall construction is plausibly post-quantum secure and conceptually simple. It relies on a new primitive that we call oblivious LWE sampling: we show that oblivious LWE sampling implies IO under a standard assumption, just plain LWE, and we then construct oblivious LWE sampling using our new assumption. So let me start by telling you what IO is. It gives you a way of taking a program P and obfuscating it to derive an obfuscated program P-tilde, which is functionally equivalent to P. So on every input x, P(x) and P-tilde(x) give you the exact same output, but the obfuscated program P-tilde should hide all of the internal implementation details of the original program P. The way this is formalized is that if you take two functionally equivalent programs, P1 and P2, and you obfuscate them, then you cannot distinguish which one you started with: the obfuscation of P1 is computationally indistinguishable from the obfuscation of P2. A long series of works has shown that if you have IO, you can use it to construct a large number of really magical cryptographic primitives that we don't know how to construct in any other way, things like non-interactive key agreement, functional encryption, succinct arguments, and so on and so forth. So the big question is, how do we construct IO? And let me give you a brief history of prior constructions.
So the initial constructions of IO rely on something called multilinear maps. The good news about these constructions is that they're plausibly post-quantum secure. Moreover, these constructions have received a lot of cryptanalysis. Some of them were broken, but the ones that survived have received a lot of attention, and the hope is that if they haven't been broken yet, then they're likely to actually be secure. But unfortunately, these constructions don't have any kind of provable security or reduction from any nice or falsifiable assumption. Instead, we just have a candidate construction, and the assumption is simply that the construction is secure. A more recent series of works constructs IO using functional encryption. This series culminated in a really beautiful recent work of Jain, Lin, and Sahai from last year, which showed how to construct IO from a number of well-studied assumptions: namely learning parity with noise, the SXDH assumption (an assumption on bilinear maps), the learning with errors or LWE assumption, and pseudorandom generators in NC0. So this is a really celebrated result. But on the downside, it's clear that this construction is not post-quantum secure, because it relies on bilinear maps. And moreover, the construction is complicated and relies on this combination of many different assumptions working in tandem. There are a few other miscellaneous approaches: for example, a tensor-product construction, and a construction based on affine determinant programs. And very recently, a construction by Brakerski et al. using a framework that they call split FHE, or split fully homomorphic encryption, which they instantiated by relying on an interplay between the LWE and decisional composite residuosity assumptions. So this last work was really the main inspiration and starting point for our result.
And here we give a new construction of IO that relies on an intermediate primitive we call functional encodings, which is really a small variant of the split FHE primitive from the work of Brakerski et al. It's relatively easy to show that once you have functional encodings, you can use them to construct XIO, which stands for exponentially-efficient IO, and you can then leverage that to build the full notion of IO. And all of these steps are provably secure under LWE. So the hard part is then how to instantiate functional encodings. And we give a new way to do this using LWE-style cryptosystems. In more detail, we actually start with a fully homomorphic encryption scheme that we call the dual Gentry-Sahai-Waters, or dual GSW, FHE. It's a small variant of the GSW FHE. And we combine it with a new primitive that we call oblivious LWE sampling to get functional encodings. So oblivious LWE sampling is a new primitive that we define and abstract out as a standalone primitive with its own definition, and we give a theorem that if you have this primitive of oblivious LWE sampling, then together with the LWE assumption it lets you build functional encodings and XIO and IO. So the only place where we need some new non-standard assumption is in the construction of oblivious LWE sampling itself. We give a new construction of this object under a new non-standard assumption. But one thing I want to point out is that this gives a new approach to constructing IO from a generic primitive, oblivious LWE sampling, where the generic primitive does not involve general computation. This should be seen in contrast to other generic ways of getting IO from primitives like functional encryption, which do involve some general notion of computation. Okay, so let me tell you what functional encodings are, and then I'll tell you how to construct them.
So in a functional encoding scheme, we have a secret value x, and you should think of this as a relatively small value, ell bits. We want to hide this value x, but we want to reveal the outputs of several functions on it: we want to reveal f1(x) up to fQ(x), where these functions fi are public functions that everyone knows. One way to do this would be to just write down the outputs of all of these functions, but we want to do it with a much smaller description than just writing down all of these outputs. Here you should think of each function output as being m bits, where m is relatively large, and the total number Q of function outputs that we're giving out is also large. The way functional encodings do this is by encoding the value x into some encoding. And for each function fi, we're also going to give out an opening for the function fi. This opening depends on the randomness of the encoding, the input x, and the function fi itself; it depends on everything, really. The only thing that makes this non-trivial, the reason we don't just give out fi(x) as the opening, is that we want the size of each of these openings to be small, much less than m bits: m^(1-epsilon) bits for some epsilon greater than zero. Now, if you have the encoding and you have all of these openings, then you can use them to decode each of the function outputs. And what this gives you, by combining the encoding and all of the openings, is a short description of all of these function outputs f1(x) up to fQ(x): at least as long as Q is large, then in an amortized sense, the total size of the encoding and all of the openings is much smaller than the total size of all of the function outputs. For security, we consider an adversary that sees the encoding and each of the Q openings.
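To make the amortization concrete, here is a quick back-of-the-envelope check. The specific numbers (m, Q, epsilon, the encoding size) are hypothetical, chosen only for illustration; they are not parameters from the actual scheme.

```python
# Hypothetical parameters, chosen only to make the amortization concrete.
m = 10**6                                  # bits per function output f_i(x)
Q = 10**4                                  # number of openings handed out
epsilon = 0.25
opening_bits = int(m ** (1 - epsilon))     # each opening: m^(1-eps) bits, far below m
encoding_bits = 10**7                      # one-time encoding cost, independent of Q

total_released = encoding_bits + Q * opening_bits
total_outputs = Q * m

assert opening_bits < m                    # each opening is much shorter than one output
assert total_released < total_outputs      # amortized over Q outputs, far below the trivial cost
```

The point is that the trivial solution (writing all Q outputs) costs Q times m bits, while the encoding plus openings costs far less once Q is large.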
And we want to say that the adversary does not learn the hidden value x. We can define two types of security. An indistinguishability-based notion of security says that as long as we have two values x and x' for which all of the functions have the same output, then you cannot distinguish x from x' given the values that the adversary sees. And then we have a simulation-based notion of security, which says that we can actually simulate everything that the adversary sees just given the function outputs. And this, simulation-based security, is the notion that we'll focus on in this work. It's fairly easy to see that simulation-based security requires a CRS, and we're going to consider this notion in the CRS, or common reference string, model, where the CRS can be arbitrarily long; we don't care about the length of the CRS. It's also relatively easy to see that once you have functional encodings, they give you XIO, where XIO stands for exponentially-efficient IO, but you can just think of it as IO for circuits with a polynomial-size domain. In other words, the inputs are values between 1 and N, where N is some large polynomial. And the reason for this is actually easy to see: it's really the same problem as functional encodings. We want to hide the circuit C while revealing the output of the circuit on all of the N inputs, C(1) up to C(N). So that's exactly the same problem as before, just instead of x we have the circuit C, and instead of f1(x) up to fQ(x) we have C(1) up to C(N). And so it's easy to see that if you have functional encodings, then they directly give you XIO, okay? So the goal now is to construct functional encodings. And we do so by relying on something called the dual GSW fully homomorphic encryption scheme, which we'll then combine with oblivious LWE sampling.
But first I want to show you that actually, just with this dual GSW scheme, you almost get something that looks like a functional encoding, except it's missing a crucial security property. So let me tell you what dual GSW is. In fact, I'm not going to tell you the scheme itself; I'll just tell you some nice properties that the scheme has. You can see our paper for the scheme itself. So you can think of dual GSW as a fully homomorphic encryption scheme where there's a public key, which is an LWE matrix A: a tall, thin, m-by-n matrix where m is much bigger than n. And there's a way to encrypt an input x under this public key and derive some ciphertext c. You can actually show that this encryption is semantically secure: the ciphertext c hides x under the LWE assumption. Moreover, there's a homomorphic computation that you can perform on the ciphertext c to evaluate some function f, and you get a new ciphertext c_f. And this new ciphertext has the following structure, which is the main property of this encryption: the value of the ciphertext c_f is just an LWE sample, A times some secret r_f plus some small error e_f, plus the output of the function, f(x), times q/2. Here the output of the function is m bits, where m is also the height of the matrix A. And lastly, if you have the function f, the input x, and the randomness r that was used to encrypt, then you can also figure out this LWE secret r_f contained in the ciphertext c_f. And if I give you the secret r_f, which I'll think of as an opening, you can break open the ciphertext c_f and recover the output f(x): you can subtract out A times r_f, you get f(x) times q/2 plus some small error, and you can correct for that error.
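As a sanity check of this opening-and-decoding property, here is a toy numpy sketch. The parameters are tiny and completely insecure, and the evaluated ciphertext is constructed directly with the stated structure rather than by an actual homomorphic evaluation; only the subtract-and-round step is the point.

```python
import numpy as np

# Toy parameters, far too small for security; chosen only to
# illustrate the opening/decoding property described above.
q, n, m = 2**16, 8, 32
rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(m, n))        # public LWE matrix (m x n, m >> n)
r_f = rng.integers(0, q, size=n)           # the LWE secret handed out as the "opening"
e_f = rng.integers(-4, 5, size=m)          # small noise from homomorphic evaluation
f_x = rng.integers(0, 2, size=m)           # stand-in m-bit function output f(x)

# Structure of the evaluated ciphertext: c_f = A*r_f + e_f + f(x)*(q/2)
c_f = (A @ r_f + e_f + f_x * (q // 2)) % q

# Decoding with the opening r_f: subtract A*r_f, then round each entry
# to the nearest multiple of q/2 to strip the small error.
d = (c_f - A @ r_f) % q
decoded = ((d > q // 4) & (d < 3 * q // 4)).astype(int)

assert np.array_equal(decoded, f_x)
```

The rounding works because the residual entry is either near 0 (bit 0) or near q/2 (bit 1), shifted only by the small error.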
So in particular, that means I can give you a small opening r_f to a big function output f(x), and that should really look a lot like functional encodings. So is this a good functional encoding scheme? Well, in terms of the opening sizes, yes, it has the right sizes: I can give you a small opening of size n log q to a big output of size m, where m can be arbitrarily larger than n log q. So I can have a much smaller opening than the output size, which is what we wanted. But what about security? So let's start with the good news: the ciphertext c hides the input x under LWE. So we have some security property, but that's not what we wanted for functional encodings. For functional encodings we wanted more: we wanted to say that even if I evaluate some function f and open up the resulting ciphertext c_f by giving you r_f, that doesn't reveal anything else other than the function output f(x). And unfortunately, that's not true for this scheme. If I give you r_f, it might reveal additional information about x beyond the output f(x). So we're going to try to fix this. And as a warmup, let's see how to fix it in the case where I'm only ever going to give you one opening. Remember, for functional encodings I need security even if I give you many openings, Q openings where Q can be very large, but let's start with just one-opening security. To get that, we can just augment our construction from before and add a random LWE sample to the encoding. Then, when we evaluate the ciphertext c_f, we can just add in this extra LWE sample, and what that does is re-randomize the evaluated ciphertext. And now we can give you the opening r_f + s, where we add in the secret s from this LWE sample. What that ensures is that because we're adding in a random s, we are essentially re-randomizing the opening, ensuring that the new opening does not reveal anything about x beyond the output f(x).
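Continuing the toy sketch from before, re-randomization can be modeled by adding one fresh LWE sample to the evaluated ciphertext and shifting the opening by its secret; decoding still works because the noises just add. Again, all parameters are illustrative only, and the ciphertext structure is built directly rather than by real homomorphic evaluation.

```python
import numpy as np

q, n, m = 2**16, 8, 32
rng = np.random.default_rng(1)
A = rng.integers(0, q, size=(m, n))
r_f = rng.integers(0, q, size=n)
e_f = rng.integers(-4, 5, size=m)
f_x = rng.integers(0, 2, size=m)
c_f = (A @ r_f + e_f + f_x * (q // 2)) % q   # evaluated ciphertext, as before

# Re-randomization: add one fresh LWE sample b = A*s + e to c_f and
# hand out the shifted opening r_f + s; the random s masks r_f.
s = rng.integers(0, q, size=n)
e = rng.integers(-4, 5, size=m)
b = (A @ s + e) % q
c_rand = (c_f + b) % q
opening = (r_f + s) % q

# Decoding with the shifted opening: residual is e_f + e + f(x)*(q/2),
# still small enough to round away.
d = (c_rand - A @ opening) % q
decoded = ((d > q // 4) & (d < 3 * q // 4)).astype(int)
assert np.array_equal(decoded, f_x)
```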
It's relatively easy to show that this construction is secure if we only give out one opening. It's simulation secure, and the simulator can essentially program this LWE sample to program in any output it wants. So how do we take this and generalize it beyond one-opening security, to security with Q openings? The first idea might be: let's add Q different LWE samples to the encoding instead of just one LWE sample b, to get Q-opening security. That would work, but unfortunately it would mean the encoding size grows with Q, and that's not what we wanted; we wanted encodings whose size is independent of Q. Another option would be to add this bunch of LWE samples not to the encoding but to the CRS. So far we haven't used the CRS, so let's put the LWE samples in there. But unfortunately, that doesn't work either, because when we give out the opening we need to know the LWE secrets s contained in these samples, and if they're just in some common reference string, nobody knows the secrets, including the honest algorithms that need to provide the openings. So instead, we're going to solve this by introducing a new primitive called oblivious LWE sampling. This is a primitive that essentially lets you obliviously create LWE samples, without knowing a secret, in a way where even if I do open them up, even if I do give you the secrets, these samples look like random LWE samples. In more detail, we're going to consider a setting where there's a long CRS, but the CRS should be completely independent of the matrix A. So in particular, it cannot just contain LWE samples itself. In addition, there's going to be a short value P that does depend on A, but its size should be independent of the number of openings Q that I'm going to give out. And together, the long CRS and the short value P will determine Q different LWE samples b_i = A s_i + e_i.
And the security says that these samples essentially look uniform: they look like random LWE samples even if I give you the openings to them. Unfortunately, I don't have time to give you the full definition, but it's a simulation-based definition of security. So now I'm going to show you how to construct oblivious LWE sampling. And actually, we're going to do this by relying on the same dual GSW FHE scheme that we used to construct functional encodings; we're going to use that same scheme to construct oblivious LWE sampling itself, by combining it with a pseudorandom function. Let me start with a simplified construction. This construction doesn't have any kind of CRS, and it doesn't achieve our simulation-based definition, but we're going to see how to augment it to do so. And the idea is: let me set this short value P to be a dual GSW encryption of a random PRF key K. Now, to generate the i-th LWE sample, we're going to homomorphically evaluate the following function G_i on K. This function essentially generates a pseudorandom LWE sample As + e, where s and e are sampled using the PRF with index i for the i-th sample. Okay, so we're just generating a pseudorandom LWE sample homomorphically. So to provide the sample, we homomorphically evaluate this function, and the output of that homomorphic evaluation is some value that looks like this: A times r_G plus e_G, which is the randomness that comes from the homomorphic evaluation, plus the output of the function G_i(K); but the output of G_i(K) is itself an LWE sample As + e. So in full, we get the LWE sample shown over here on the bottom. It's the sum of two LWE samples: the blue one comes from the homomorphic evaluation, and the red one comes from the output of the PRF. So unfortunately, this construction does not have a CRS.
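The sum-of-two-LWE-samples structure can be modeled in a toy numpy sketch. Here a seeded numpy generator stands in for the PRF, and the homomorphic-evaluation term (A*r_G + e_G) is sampled directly instead of being produced by an actual FHE evaluation; only the additive structure and the combined secret are the point.

```python
import numpy as np

q, n, m = 2**16, 8, 32
rng = np.random.default_rng(2)
A = rng.integers(0, q, size=(m, n))

def prf_sample(key: int, i: int):
    """Stand-in PRF: derive the i-th pseudorandom LWE pair (s_i, e_i)
    from the key. A real instantiation would use an actual PRF."""
    sub = np.random.default_rng((key, i))
    s_i = sub.integers(0, q, size=n)
    e_i = sub.integers(-4, 5, size=m)
    return s_i, e_i

key = 1234567                                  # the PRF key K (encrypted inside P)

def oblivious_sample(i: int):
    # Homomorphic evaluation of G_i on Enc(K) would yield A*r_g + e_g + G_i(K);
    # here both LWE samples are modeled directly.
    r_g = rng.integers(0, q, size=n)           # "blue" secret from the evaluation
    e_g = rng.integers(-4, 5, size=m)
    s_i, e_i = prf_sample(key, i)              # "red" pseudorandom LWE pair
    b_i = (A @ r_g + e_g + A @ s_i + e_i) % q  # sum of the two LWE samples
    return b_i, (r_g + s_i) % q                # the sample and its opened secret

b, s_open = oblivious_sample(0)
err = (b - A @ s_open) % q
err = np.where(err > q // 2, err - q, err)     # center the residual noise
assert np.all(np.abs(err) <= 8)                # residual e_g + e_i stays small
```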
There's no simulator for it, and it does not satisfy our definition of oblivious LWE sampling. Actually, it may be good enough when used in the full construction of functional encodings; as far as we know, we don't have any attack on it. But that's not what we want: we want to actually meet the definition we set out, and to do that, we're going to augment the construction from the previous slide with the stuff in purple here. So we're going to add a CRS, which consists of just a bunch of random vectors b_i-hat, and we're going to augment the short value P to be an encryption not just of a PRF key K but also of a flag bit beta, where beta is just set to zero. When we do the homomorphic evaluation to produce the i-th sample, we're going to evaluate this function G_i on K, which is the same as before: it computes a pseudorandom LWE sample As + e, but in addition, we're going to add in the flag bit beta times the i-th value in the CRS, b_i-hat. Now remember that beta is set to zero in real life, so in real life this purple term is just zero, and we get the same kind of LWE sample we got previously. But now we have an opportunity for a simulator to program the CRS, to cause the obliviously generated LWE samples to be ones it likes. It does so by changing the CRS not to be uniformly random but to consist of LWE samples b_i-hat, and then it sets the flag bit beta to one. That way, the LWE sample from the CRS is incorporated into the obliviously generated LWE sample that's produced by this process. So to prove the security of this construction we need to rely on a new assumption, and here it is; this is the entire assumption on the slide. The assumption is an indistinguishability-based assumption that says you cannot distinguish between the cases where the flag bit beta is zero and beta is one, given the following values. So let me read off what these values are. You're first given the LWE matrix A.
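The beta toggle can be sketched in the same toy style: with beta = 0 the CRS term vanishes, while with beta = 1 a programmed sample b-hat (and its secret) is folded in, and in both worlds the result opens as a valid LWE sample. As before, all parameters are illustrative, and the encryption of (K, beta) and the homomorphic evaluation are only modeled, not implemented.

```python
import numpy as np

q, n, m = 2**16, 8, 32
rng = np.random.default_rng(3)
A = rng.integers(0, q, size=(m, n))

# Simulated world: the CRS entry b_hat is a programmed LWE sample.
# (In the real world b_hat would be uniform and beta = 0, so it never appears.)
s_hat = rng.integers(0, q, size=n)
e_hat = rng.integers(-4, 5, size=m)
b_hat = (A @ s_hat + e_hat) % q

def sample(beta: int):
    r_g = rng.integers(0, q, size=n)           # from the homomorphic evaluation
    e_g = rng.integers(-4, 5, size=m)
    s_i = rng.integers(0, q, size=n)           # from the PRF output
    e_i = rng.integers(-4, 5, size=m)
    # Obliviously generated sample: two LWE samples plus beta times the CRS term.
    b = (A @ (r_g + s_i) + e_g + e_i + beta * b_hat) % q
    secret = (r_g + s_i + beta * s_hat) % q
    return b, secret

for beta in (0, 1):
    b, sec = sample(beta)
    err = (b - A @ sec) % q
    err = np.where(err > q // 2, err - q, err)
    assert np.all(np.abs(err) <= 12)           # both worlds open to a valid LWE sample
```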
You're given the bunch of LWE samples b_i-hat from the CRS. You're given this dual GSW encryption of the PRF key K and the bit beta, which is either zero or one. And using these values, you can already evaluate the oblivious LWE sampling procedure to create LWE samples that look like this value on the bottom. It's a sum of three LWE samples: a blue one, which comes from the homomorphic evaluation, a red one, which comes from the output of the PRF, and a purple one, which comes from the CRS. Okay, so actually the purple one is incorporated when beta is one, but not when beta is zero. And as the last part of the distribution, I'm actually going to open up this LWE sample that was generated for you. And the assumption says that you still cannot tell whether beta is zero or one, even if I generate these LWE samples and open them up and give you the underlying LWE secrets s_i. So I want to claim that this assumption has a circular-security flavor, and the reason is the following. We need to rely on the encryption here being secure, to hide the bit beta. But when we open up the LWE secret s_i, we give you a value that includes r_{G_i}, which depends on the homomorphic evaluation and on the encryption randomness. And we want to rely on the fact that we're adding in the pseudorandom secret s_i-star to argue that we're really blinding this value r_{G_i}, which might reveal something about the encryption randomness. So we're relying on the PRF output to hide information about the encryption randomness. On the other hand, the encryption encrypts the PRF key, so we need to rely on the security of the encryption, and of the encryption randomness, to protect the PRF key itself. And you see that this requires some circularity in the assumption. So that's all I wanted to say about the assumption and the construction. I want to just briefly mention some concurrent and follow-up works.
In particular, there are two works that were concurrent to ours: a work by Gay and Pass, and a work by Brakerski et al. They offer similar results, in the sense that they construct indistinguishability obfuscation from LWE-style assumptions that are plausibly post-quantum secure. They also require new types of assumptions, and these new assumptions likewise have some circular-security flavor, but the exact abstractions and assumptions are different in these works, so it's worth looking at all of them. I also wanted to mention a work from the last CRYPTO by Hopkins, Jain, and Lin. They actually showed that the assumptions in all three of these works, our work and the other two, can be broken in their full generality. So for example, for our work, that means the assumption can be broken if we instantiate it using a contrived PRF, or more concretely, a contrived implementation of the circuit that does the PRF evaluation. And I think the takeaway from this work is that these schemes are still plausibly secure; it doesn't really invalidate the whole approach, but we need to be more careful in how we instantiate it. And lastly, I wanted to mention a work that's going to appear at the upcoming TCC, in which we give a concrete instantiation of the approach from this talk. We give a concrete PRF and concrete algorithms for evaluating it, and it leads to a simplified assumption that's really amenable to cryptanalysis; we do some analysis of this assumption, showing that some simple attacks fail. So that's all I want to say. Thank you very much for listening.