This talk is on counterexamples to new circular security assumptions underlying indistinguishability obfuscation. It's joint work by me, Sam Hopkins, together with Aayush Jain and Rachel Lin. As a starting point, let's introduce indistinguishability obfuscation. All we need to know for this talk is that it's an extremely useful cryptographic primitive: provably secure constructions of indistinguishability obfuscation imply secure constructions of a tremendous wealth of exciting cryptographic primitives. Indistinguishability obfuscation, or IO, has been the object of intense study for the last 10 years or so, and only very recently has there been a construction of IO which can be proved secure based on relatively standard assumptions. For at least a decade, IO proved to be an incredibly elusive goal to construct in a provably secure fashion. Looking back on the history of IO constructions, a perhaps pessimistic point of view is that the field was engaged in a sort of cat-and-mouse game, in which heuristic constructions were proposed and then attacked, which led to further heuristic constructions, then more attacks, then more constructions, and so on. Although we now have the benefit of hindsight, at the time it may well have looked like this was going nowhere, that we were just going in circles. But a more charitable interpretation, and I think with the benefit of hindsight one that you can easily argue for, is that even though it might have looked like we were going in circles with this cat-and-mouse game, underneath it all there was a notion of progress.
The notion of progress was that the heuristic assumptions underlying these constructions got simpler and simpler over time, until eventually, a little over a year ago, this supposed cat-and-mouse game led to a provably secure construction of IO based on a few standard assumptions, by Jain, Lin, and Sahai. This construction and its story are not the main subject of the talk, but they set the background for what we're going to try to do and for the aims of our paper. Now that we have one provably secure construction of IO, there are a couple of natural goals. First, the Jain-Lin-Sahai construction uses assumptions which are known to be insecure against quantum attacks, so it would be good if we could now find constructions of IO that are secure in a post-quantum world. Furthermore, the Jain-Lin-Sahai construction is pretty complicated, and so it would be very nice to have simpler constructions of indistinguishability obfuscation. Towards both of these goals, a natural approach is to try to base IO only on lattice-based assumptions. Because lattice-based assumptions are typically thought to be secure against quantum attacks, this could lead to post-quantum secure IO. And if we can get rid of many of the assumptions underlying the Jain-Lin-Sahai scheme, we could hope that the resulting construction will be simpler. Towards this goal, we've seen several exciting recent works which describe new, simple IO constructions which, while not provably secure under well-studied assumptions, do manage to state very clean, attackable, verifiable assumptions under which they can prove security of their schemes. These new schemes are due to Brakerski, Döttling, Garg, and Malavolta; Gay and Pass; and Wee and Wichs. Especially the latter two state very nice, compact assumptions under which they can prove security of their schemes. And these assumptions are basically lattice-based.
So while they're not just standard learning with errors, they seem to be relatively small modifications of learning-with-errors-style assumptions, and both modify the learning with errors assumption in similarly flavored ways. I'm going to spend the next few minutes describing how those modifications go. Both have an assumption of the flavor of learning with errors, plus some kind of circular security, plus some kind of leakage of some randomness of encryption to the attacker. And both these papers, Gay-Pass and Wee-Wichs, show that under this kind of LWE++ assumption, they can get provably secure IO. So that's very exciting: it suggests that there is an avenue to simple and post-quantum secure IO based just on lattice problems. But it also means that it's imperative to analyze this type of assumption. We need to understand whether these assumptions are secure, and if they're not, how they can be modified to be secure, and how the constructions can be modified accordingly, towards the aim of post-quantum secure IO and simpler constructions. So our paper is cryptanalyzing assumptions of this flavor. Now I can state our results in a nutshell. As I described earlier in the talk, we are in this construction-attack-construction-attack type cycle; we're in the attack part of the cycle, and our results are a cryptanalytic attack on these assumptions, specifically the particular instantiations of this flavor of assumption in the Gay-Pass and Wee-Wichs works. We show that the assumptions in those works, as stated, are false: we give attacks on those assumptions. However, to be clear, the underlying strategies and constructions of these papers are not broken by our work, and in particular the strategy they give for constructing IO from lattices may well be feasible. It may well be possible to give a construction of IO based on this strategy.
It just means that the formulation of the assumptions underlying them has to be modified. Our hope, as we described before, is that these attacks can lead to refined and ideally secure versions of these assumptions, and eventually to post-quantum secure IO. For the rest of the talk, I'm going to focus on just one of these two constructions, the work by Gay and Pass; our attack on the assumption of Wee and Wichs is relatively similar. Okay, so let's fix in our minds a nice (whatever that means) fully homomorphic encryption scheme. For this talk, we can think just about the Gentry-Sahai-Waters (GSW) FHE scheme. As I said, the Gay-Pass assumption has this LWE++ flavor: LWE plus some other stuff. So let's go through the other stuff, which has two components. The first is a circular security type of assumption, so let's review circular security. Circular security is the notion that if so-called key cycles are released to an attacker, the attacker cannot benefit from this. On this slide we have a 2-circular security assumption: suppose that we set up two copies of an FHE scheme, with SecretKey1 and PublicKey1, and SecretKey2 and PublicKey2, and then we publish the following two ciphertexts: an encryption of SecretKey2 under PublicKey1, and an encryption of SecretKey1 under PublicKey2. It's believed that even given these ciphertexts, the underlying FHE schemes remain secure, at least for natural FHE schemes such as GSW. In fact, this belief underlies the security of unleveled fully homomorphic encryption. So whether this is a well-founded assumption is not completely clear, but it is at least widely enough believed that we are willing to rely on it for unleveled FHE. That's circular security. The second type of assumption, the second plus in the LWE++, is what we're going to call randomness leakage.
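Returning to the first component for a moment: as a concrete picture of the published key-cycle object just described, here is a minimal sketch in Python. The scheme below is a toy symmetric one-time-pad stand-in, purely illustrative and not an FHE scheme (the talk's setting uses public-key FHE such as GSW); the point is only the shape of the two published ciphertexts.

```python
import hashlib
import secrets

def keygen() -> bytes:
    # Toy secret key; in the talk this would be an FHE key pair.
    return secrets.token_bytes(16)

def enc(key: bytes, msg: bytes):
    # Toy encryption: one-time pad with a hash-derived pad.
    # A placeholder for FHE encryption, NOT GSW and NOT homomorphic.
    nonce = secrets.token_bytes(16)
    pad = hashlib.sha256(key + nonce).digest()[: len(msg)]
    return nonce, bytes(a ^ b for a, b in zip(pad, msg))

def dec(key: bytes, ct):
    nonce, body = ct
    pad = hashlib.sha256(key + nonce).digest()[: len(body)]
    return bytes(a ^ b for a, b in zip(pad, body))

sk1, sk2 = keygen(), keygen()

# The 2-key cycle published to the attacker:
# an encryption of key 2 under key 1, and of key 1 under key 2.
key_cycle = [enc(sk1, sk2), enc(sk2, sk1)]

# Circular security asserts (informally) that publishing key_cycle does
# not help the attacker, even though each ciphertext decrypts to the
# other scheme's secret key:
assert dec(sk1, key_cycle[0]) == sk2
assert dec(sk2, key_cycle[1]) == sk1
```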
This requires a little more setup to describe formally, but the idea is that the attacker gets to see some of the randomness of encryption that results when the FHE scheme does some evaluations of a circuit on a chosen plaintext. The way this is instantiated in the work of Gay and Pass, they call it shielded randomness leakage (SRL), and I'll go ahead and describe what that means. Here's the usual security game, or almost the usual security game, for CPA security, except with this modified line here. As in the usual security game, the adversary chooses two messages M0 and M1, then a bit b in {0, 1} is sampled uniformly at random, and the attacker gets to see the encryption of the message M_b, so either M0 or M1. The goal is for the attacker to guess which of the messages she has seen. But before doing so, she can call a certain oracle, which Gay and Pass call the SRL oracle, a polynomial number of times. So what is this oracle? This is the oracle that is going to give her access to some kind of information about the randomness of encryption after FHE evaluations. Here's the SRL oracle O. Whenever it is called, a fresh encryption of 0 under some randomness of encryption R* is generated; call that ciphertext CT*. Now, given CT*, which she gets to see, the adversary chooses some circuit F, which takes messages and outputs Boolean values. The idea is that the adversary is going to get to see some leakage on the randomness of encryption that results when FHE Eval is run using the circuit F, that is, when the circuit F is homomorphically evaluated. What the oracle does is homomorphically evaluate the circuit F on the message M_b, and then leak a certain block of randomness with the following property. The randomness that it leaks is composed of two parts. The first is R*, the randomness of encryption from the fresh encryption of 0.
R* is hiding the more interesting piece of randomness, R_F. So what is R_F? R_F is randomness with the property that if you used it as the randomness of encryption for the message F(M_b), that is, if you encrypted F(M_b) under this randomness R_F, you would get the same ciphertext as you do when you run Eval on F and the encryption of M_b. So you can think of R_F as the randomness sitting in the ciphertext that you get when you run Eval on F and M_b. If you leaked this randomness alone, it would be easy to break the scheme; so it's hidden, it's shielded, by the other randomness R*. Okay, so this is a little complicated, but the basic idea is that the adversary gets to see some leakage on the randomness of encryption that results from running FHE Eval on a chosen circuit F, where F depends on CT*, the ciphertext that itself depends on the shielding randomness R*. It turns out that this shielding actually works, at least if you consider it in isolation. Gay and Pass show that if you ignore circular security and just consider LWE plus this kind of randomness leakage, this SRL oracle, it is actually secure under the learning with errors assumption, if the FHE scheme you use is GSW. So where have we arrived? We have this LWE++ assumption, this whole thing. If you look at LWE plus one of the pluses, circular security, we reasonably think that it is secure; at least, we base unleveled FHE on that assumption. If you look at LWE plus the other one, randomness leakage, it's provably secure under LWE, at least for a natural scheme like GSW. And so it's kind of reasonable to think that you could add both of these assumptions at the same time and still get IO. Indeed, that is the conjecture that Gay and Pass put forward. They call this the SRL-Circ conjecture, and it goes like this.
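As a quick recap before the conjecture: one call to the SRL oracle, as described above, can be compressed into a single display. The notation here is my own hedged shorthand for the talk's description; the Gay-Pass paper's formalism differs in details.

```latex
\[
\begin{aligned}
&\mathsf{CT}^{*} \gets \mathsf{Enc}_{pk}\!\left(0;\, R^{*}\right)
  && \text{fresh encryption of } 0 \text{, shown to the adversary,}\\
&F \gets \mathcal{A}\!\left(\mathsf{CT}^{*}\right)
  && \text{adversary picks a Boolean circuit,}\\
&\text{oracle leaks } R_{F} - R^{*}, \text{ where }
  \mathsf{Enc}_{pk}\!\big(F(M_b);\, R_{F}\big)
  = \mathsf{Eval}\big(F,\, \mathsf{Enc}_{pk}(M_b)\big).
\end{aligned}
\]
```

Nothing in this game involves a key cycle; combining it with one is exactly what the conjecture proposes.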
It says that for natural FHE schemes S, such as GSW, if S is 2-circular secure and S is SRL secure, then it's secure with both kinds of leakage simultaneously: the encrypted key cycle and the SRL oracle leakage. We show that this conjecture is false, using GSW as a counterexample: we construct an attack when the FHE scheme is GSW. Some previously circulated versions of our work used, instead of vanilla GSW, a slightly modified GSW scheme, which we still counted as natural; but actually we're able to conduct this attack even when the scheme is just regular old GSW. No modifications are necessary. For the rest of the talk, I'll describe our attack. Here's how it goes. Remember what the attacker gets to see: for starters, the ciphertext of M_b, the encryption of the chosen plaintext, and the key cycle. And what they get to do, when they call the SRL oracle, is choose the circuit mapping messages to {0, 1}, and the choice of circuit gets to depend on two things. It gets to depend on CT*, the fresh encryption of 0 under randomness R* that's going to get used for shielding, and it gets to depend on the key cycle. Then our attack gets to observe the shielded randomness leakage, R_F minus R*, where R_F is going to contain some interesting information, but it's shielded by R*. So how are we going to choose this circuit? Our choice is going to use the following first observation. Just to introduce some notation, I'm going to write the private key of the main cryptosystem that M_b is encrypted under as U, and R is a matrix that is the randomness of encryption for the message M_b.
The first observation is that by choosing certain gates in F, we can make FHE Eval of F move some information about the message M_b into the randomness of encryption. Let's assume that M_b is just a single bit, 0 or 1. Notice that if our circuit just multiplies the bit by 0, and we run FHE Eval on that, we will move information about M_b into the randomness part of the encryption. Under GSW, here is what happens when you do that multiplication. Here is our ciphertext of M_b: it's public key times randomness, plus message times a gadget matrix. To multiply by 0, we multiply by this thing, which is just some encryption of 0. And if you do the multiplication, what you get is the public key times some big matrix; this is the randomness of encryption in the new resulting ciphertext. And notice that M_b is sitting in here. Some other stuff is sitting here too, but M_b is sitting here. Now, this seems nice, and notice that if we use this as our circuit F, then when we call the SRL oracle, we will get to observe this matrix. But the whole point of their shielding is that it's actually fine if we see this matrix; it's not going to break security, because we don't get to see this matrix directly. Instead, it's shielded by the additional randomness R*. The second key idea is that in the presence of the encrypted key cycle, we can use the key cycle to access R* inside our function F, because F is going to be homomorphically evaluated. F can take, as a sort of additional hard-coded input, the private key for the cryptosystem, because it has access to a ciphertext of that private key. So this is the key idea.
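The algebra of this multiply-by-zero trick can be checked numerically. Below is a noise-free toy of the GSW equation on the slide, a sketch under stated assumptions: a square stand-in public key P, no LWE error, and tiny parameters, none of which match real GSW. It verifies that multiplying an encryption of the bit b by an encryption of 0 yields a ciphertext whose randomness contains b times R0.

```python
import numpy as np

k = 8                 # bit-length of the modulus
q = 1 << k            # toy modulus q = 2^k
n = 4                 # toy dimension
m = n * k             # gadget width

rng = np.random.default_rng(0)

# Gadget matrix G = I_n kron (1, 2, 4, ..., 2^(k-1)).
g = (1 << np.arange(k)).astype(np.int64)
G = np.kron(np.eye(n, dtype=np.int64), g)          # n x m

def g_inv(X):
    """Bit-decompose each entry of X (n x c) into an (n*k) x c 0/1
    matrix satisfying G @ g_inv(X) == X (mod q)."""
    n_rows, c = X.shape
    bits = (X[:, None, :] >> np.arange(k)[None, :, None]) & 1
    return bits.reshape(n_rows * k, c).astype(np.int64)

P  = rng.integers(0, q, size=(n, n))   # stand-in "public key" (noise-free toy)
R  = rng.integers(0, q, size=(n, m))   # randomness of the encryption of b
R0 = rng.integers(0, q, size=(n, m))   # randomness of the encryption of 0
b  = 1                                 # the message bit M_b

C  = (P @ R + b * G) % q               # ciphertext of b:  P*R + b*G
C0 = (P @ R0) % q                      # encryption of 0:  P*R0

# Homomorphic multiply-by-zero: C times the bit decomposition of C0.
C_mult = (C @ g_inv(C0)) % q

# Expanding: C_mult = P @ (R @ G^{-1}(C0) + b * R0)  (mod q),
# so the bit b now sits inside the randomness of the new ciphertext.
R_new = (R @ g_inv(C0) + b * R0) % q
assert np.array_equal(C_mult, (P @ R_new) % q)
```

The final assertion is exactly the slide's point: the new ciphertext is the public key times a big randomness matrix, and M_b is sitting inside that matrix.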
Once you have this idea, there are probably several ways to implement the attack; I'm just going to describe one, because there should be several ways to do it once you decide to use the key cycle to access the shielding randomness. Here's the way we do it. Inside F, we use SecretKey1, which is encrypted in our key cycle under PublicKey2, so F can depend on SecretKey1. What we'll do is run the GSW decryption algorithm inside the circuit F (because F gets to depend on the secret key) and decrypt the ciphertext CT*, which gives us access to the shielding randomness R*. In fact, what we can get is not exactly this randomness itself, but the randomness times some short vector; this is the short vector that comes out of GSW decryption. And now if we take the inner product of that vector with some chosen vector V, we can get, after the SRL leakage, something that looks like the following. We're going to compose this trick with the trick I described earlier, where we multiply by 0 to move some information about the message part of an encryption over into the randomness part. What we'll get is some junk plus something useful. Before, when we used this trick, we had just the message bit times some randomness R. Now we're going to get the message bit plus this term involving R*, and then the shielding randomness R* will show up when the SRL leakage is released. Now what we need to do is get rid of this other kind of garbage. The useful thing is that we know this matrix here, which comes from applying the gadget matrix's inverse to a ciphertext that we know. And we had the freedom to choose the vector V. So what we can do is choose a vector V in the kernel of this matrix. It's a random Boolean matrix, so it has a non-trivial kernel with decent probability, like one half.
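To illustrate this last step, here is a sketch of finding a nontrivial kernel vector of a random Boolean matrix by Gaussian elimination over GF(2). The field and dimensions are my own illustrative choices, not the attack's actual setting; in particular I use a wide matrix so that a nontrivial kernel is guaranteed rather than merely likely.

```python
import numpy as np

def kernel_vector_mod2(M):
    """Return a nonzero 0/1 vector v with M @ v == 0 (mod 2),
    or None if M has full column rank. Gaussian elimination over GF(2)."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivot_row = {}          # column -> row holding its pivot
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue        # no pivot here: c is a free column
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivot_row[c] = r
        r += 1
    free = [c for c in range(cols) if c not in pivot_row]
    if not free:
        return None
    f = free[0]
    v = np.zeros(cols, dtype=np.int64)
    v[f] = 1                # set one free variable to 1
    for c, rr in pivot_row.items():
        v[c] = M[rr, f]     # each pivot variable is forced mod 2
    return v

rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(8, 12))   # random Boolean matrix, wide on purpose
v = kernel_vector_mod2(M)
assert v is not None and v.any()
assert not ((M @ v) % 2).any()         # v really is in the kernel mod 2
```

For a random square Boolean matrix, as in the talk, such a v exists only with constant probability (the talk says roughly one half), which is why the attack only needs the kernel to be nontrivial with decent probability.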
And that means this term will vanish, and we will be left only with this stuff. So what we can get after the SRL leakage is something like this. What we can then do, it turns out, without going into too much detail, is use things like this to build a linear system which is solved by the short vector E. So at the end of the day, the attack goes as follows. The attacker calls the SRL oracle with a carefully chosen circuit, which includes the GSW decryption algorithm and the multiply-by-zero trick that moves information about the message over into the randomness. The adversary gets the randomness leakage, which has been carefully chosen so that the shielding interacts in a known way with the message hiding in the shielded part of the leakage. Then the adversary uses the result to set up a linear system; the linear system will have a solution if the message bit is 0 and no solution if the message bit is 1, and that way the adversary can break the scheme. So, some brief conclusions. Where did we end up? We were studying candidate IO constructions based only on lattice-style assumptions. These assumptions have this LWE++ flavor: LWE plus two things, a circular security assumption and some randomness leakage on FHE evaluations, both of which on their own seem plausibly secure. But we show that when you put these two kinds of leakage together, at least the way it's done concretely in the papers by Gay-Pass and Wee-Wichs, the resulting assumption is attackable. In general, the conclusion we draw is that, at least as far as we can tell, there is no general principle that security of these leakages on their own implies security together.
Instead, the security of an assumption like this is going to have to be sensitive to many details of the structure of the leakages. Of course, a natural next question is whether one can give a more refined version of this assumption, that is, whether one can instantiate an assumption like this in a way that avoids the attacks we're describing. There is some work in this direction by this set of authors that I think will be publicly available shortly. And of course, beyond whatever construction is proposed there, it may be possible to give many other constructions which avoid our attack, and it's very interesting to try to do so.